AI & Humanness

Constant Companion: The Potential Impacts

By Fiona Reynolds | 11 May 2024

My dog, Scarlet, is my constant companion. I work from home, and when I am in a meeting at my standing desk in the sunroom, she is asleep on the couch beside me. As I’m writing this blog in the space where I sit (I may not be in a classroom, but I still have my workstations), she’s at my feet. When my husband and child come home from school, she growls at them, every day, as if to say, “She’s mine.” Scarlet is 12 years old and still has a running rivalry with my youngest daughter, who used to be close to the same size as Scarlet but is now quite a bit taller. Scarlet still worms her way between us when I’m sitting and chatting with my daughter and tries to get me to pet her.

Scarlet cannot talk, but she is good at getting our attention and telling us what she needs. She has different tones and gestures for when she wants attention, when she wants to go outside, when she’s happy (my favorite: she has the best grin you’ve ever seen), and when she’s angry, upset, or scared. I wonder sometimes, “Did she adapt to us and learn how to communicate in ways that we could understand, somewhere in her mind logging the actions and sounds that we responded to most reliably, or did we adapt to her? Did we learn how to notice when she needed to go outside, get a cuddle, or head back to the house?” Probably it’s a little bit of both.

I think of this also in the context of generative AI. Are we adjusting to it and the ways it communicates best, or will it adjust to us as it gains more knowledge (or information) about us? My favorite example of GenAI adjusting to us is that it provides better responses when we are polite to it, using “please” and “thank you.” My hunch is that, as it was fed reams of data, one of the patterns it picked up is that people respond with more interest and engagement when they are treated politely. Research from Microsoft shows that AI mirrors the tone you use with it, so if you are polite and detailed in your prompting, it will respond in kind. However, as you can imagine, this phenomenon of AI mirroring can also perpetuate biases and reinforce negative communication patterns. I think it’s fair to say that there will be a few bots with quite a rich vocabulary of profanity, ready to use it with users who want that style of communication. (No judgment.)

What about the other use cases where we might adapt to AI? For instance, will we begin to frame problems in terms of prompts and, in doing so, miss out on important lenses that we would have thought through if prompting wasn’t our mental model? Will we learn to go to AI first with our unformed ideas, work with it to develop the concepts, and only share them with people once we’re pretty sure we sound smart, reinforcing a culture of perfectionism and discouraging the free exchange of unrefined ideas? If students become overly reliant on AI for framing their ideas or refining their work, could it hinder their ability to think independently and develop their own problem-solving abilities? This could lead to a generation of learners who struggle with original thinking and creativity, which are essential for personal growth and societal progress.

Additionally, the adaptation of AI to human communication styles raises questions about the transparency and accountability of these systems. As AI becomes more adept at mirroring human behavior, it may become increasingly difficult to discern when AI is influencing or shaping our communication and decision-making. This lack of transparency could lead to unintended consequences and ethical dilemmas, particularly in educational settings, where developing critical thinking and ethical reasoning is increasingly the reason we exist.

We can, of course, take another path, especially in schools, by developing parameters for when we use AI as a tool or partner and when we don’t. We have the ability, especially at this point in GenAI’s development and its adoption in schools, to shape who or what is adapting to whom. Clear boundaries for when and how AI should be used can mitigate potential risks and ensure that AI enhances the educational experience and students’ personal growth. Perhaps establishing those boundaries should be our first priority as we prepare for AI integration in schools.
