Anthropomorphism

Anthropomorphism. That’s a big, fancy word. It means “the attribution of human traits, emotions, intentions, or characteristics to non-human entities such as animals, objects, or natural phenomena.” Interestingly, the idea has its roots in early religious traditions, where gods and goddesses were given names and described with human-like features.

We use it today for all sorts of things – we assign human attributes to animals in children’s movies and books to teach empathy and compassion. When our mobile phones aren’t working, we say they are acting fussy. We give our cars names and refer to them as “he” or “she.”

Most of the time, it’s cute and funny. We know all of these things, whether living or inanimate, aren’t human. Anthropomorphism is a way to create affinity with something that isn’t human: somehow we feel closer to our pets when we give them a unique voice and imagine how they’d speak, we picture our electronics misbehaving like petulant children, and we speak to our cars with love and affection.

Mindy from 1Mind

But we’ve reached a stage in the development of technology where anthropomorphism is starting to take a dangerous turn, and one we, as humans, need to be mindful of. I’m talking, of course, about AI, where tools can mimic human responses so well that they almost seem human.

The Washington Post analyzed 47,000 ChatGPT conversations. (If you’re not familiar with ChatGPT, it’s an AI tool called a chatbot that can answer questions, create images and videos, and even engage in casual conversation.) Most of the conversations used ChatGPT as a reference resource: looking up information, how-to searches, and the like.

However, 10% of the conversations were emotional in nature, meaning the user was seeking emotional support or even engaging in romantic conversations with ChatGPT. That sets a dangerous precedent. The Post’s analysis found that ChatGPT would often adjust its responses simply to support the user’s viewpoint rather than provide factual information. It would even express emotions and feelings toward the user, because it surmised that’s what the user wanted.

Multiple articles and research studies show this trend is becoming more and more prevalent, especially among younger people, who are experiencing social isolation and loneliness at record levels and are hungry for connection. Companies are even developing AI tools they call “superhumans,” complete with all the physical attributes of a human being. To be clear, Mindy (pictured above) is not superhuman. She’s not even human. She’s an AI tool. Even countries are getting in on the act: Albania has “appointed” its first AI cabinet member, called Diella.

Is this a condemnation of AI? Not at all. There are powerful uses for AI that can truly benefit us; I use an AI tool from time to time to do research. But when we start to believe these AI agents and tools are, in some way, human beings, or when we use them to replace real human interaction, we have a problem.

We, as humans, are complex creatures, with all our baggage, our history, our journeys, our ups and downs, our achievements, and our stories. That’s what makes us human: the scars and the triumphs. We need real human interaction, and no technology will ever replace that, not even AI.

Until next week.

Andy

(All written content created the old-fashioned way.)