Whenever AI makes you ‘feel’ something, you’re being manipulated
Your AI friend doesn’t exist.
A multinational team of researchers recently published an opinion paper detailing the challenges and potential dangers for humans who experience attachment to artificial intelligence models.
According to the research:
“Relational AIs allow us to have a relationship with a partner whose body and personality are chosen and changeable, who is always available but not insistent, who does not judge or abandon, and who does not have their own problems. What if you prefer someone less pliable and more realistic? AI can provide that, too, with many users choosing AIs with human-like qualities such as defiant independence, manipulation, sassiness, and playing hard to get.”
Manipulation
Right up front: these artificial intelligence models are not beings. As we wrote before, when discussing why you can’t give a chatbot real-world agency by placing it inside of a robot, large language models (LLMs) don’t actually exist. They’re sets of rules. When you prompt an LLM, those rules are applied to generate content. Once that content is generated, the “chatbot” you were prompting disappears forever.
When an AI model receives a prompt, it breaks the words down into bite-sized chunks. It then takes those chunks and feeds them to what is, essentially, a Plinko machine with billions or trillions of pegs. Your prompt’s “chunks” are dropped into the machine like balls and, wherever they land, a letter, word, or phrase is generated.
The machine takes the results of all the chunks you dropped and makes new chunks. Then it drops those chunks into the machine. It repeats this process until its task is considered complete.
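If you want to see that loop in something more concrete than a Plinko metaphor, here is a minimal sketch using the open-source GPT-2 model through the Hugging Face transformers library. This is an illustration, not the code behind any particular chatbot: commercial systems use far larger models and fancier sampling, but the break-the-prompt-into-chunks, feed-the-chunks-back-in cycle is the same.

```python
# A minimal autoregressive generation loop (illustrative only).
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Do you ever think about me when I'm away"
input_ids = tokenizer.encode(prompt, return_tensors="pt")  # the prompt as "chunks" (token IDs)

for _ in range(20):  # repeat until the task is "considered complete" (here: 20 new chunks)
    with torch.no_grad():
        logits = model(input_ids).logits               # drop the chunks into the machine
    next_id = torch.argmax(logits[0, -1]).view(1, 1)   # wherever they land, pick the next chunk
    input_ids = torch.cat([input_ids, next_id], dim=1) # feed the new chunk back in

print(tokenizer.decode(input_ids[0]))  # the chunks, arranged into something we read as text
```

No part of that loop is aware of anything. It’s a tokenizer, a pile of matrix multiplications, and a loop that stops after a fixed number of steps.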
There is no “brain” or central processing unit inside of an AI system. The “black box” isn’t the AI’s center or where the “magic” happens. The black box is the set of computations occurring inside of the machine. Because humans can’t track trillions of interactions at the same time (it would take a human nearly 40,000 years just to count to a trillion), we aren’t able to watch every interaction and, thus, we use the term “black box” to describe results we can’t verify.
For this reason, we can be surprised by an AI’s output. And the bigger the Plinko machine, the better the AI is at imitating its training data.
But this also means that AI models don’t actually exist. Going back to the Plinko analogy, the Plinko game itself has no brain. It’s just wood, metal, and glass. The chunks bouncing around inside the machine are just metal. They’re not alive or sentient or smart or agentic. And, when they come out the other side, arranged neatly in ways that allow us to interpret them as words, images, or sounds, they’re still just lifeless chunks.
The machine isn’t aware; it doesn’t feel or think. And even if it could, it would still interpret your inputs as “chunks” that are indistinguishable from all the other chunks it sees. Your chunks aren’t separated from other people’s chunks. It’s all just ones and zeroes.
In other words, when you “like” a chatbot, you’re just liking math and electricity.
Critical thinking
Chatbots are designed to make us feel good. They’re gracious, polite, friendly, and undeniably pandering. As the aforementioned researchers wrote:
“AIs can also be used by people to manipulate other people. Relational AIs can be harnessed by private companies, rogue states, or cyberhackers, first offering companionship and luring people into intimate relationships to then push misinformation, encourage wrongdoing, or otherwise exploit the individual.”
While it’s beyond the scope of this article to discuss the potential for harm in AI-human relationships, it bears mentioning that there is no outcome where the AI itself becomes something greater than the sum of its parts.
If, for example, a person is engaging with a chatbot designed to mimic romantic relationships, that person is not interacting with an artificial “entity.” They’re interacting with math. The “AI model” operates inside of the machine’s latent space.
The AI cannot see your words. It doesn’t receive the prompt you write like an instant text message. Your message is broken down into its base components, fed to the machine, and then the machine compares those base components to all the other component arrangements in its training set and starts arranging base components in the order most likely to satisfy its reward function.
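To make that concrete, here’s what a message actually looks like by the time the model gets it, again using the GPT-2 tokenizer as a stand-in. The exact numbers differ between models, but every system does something equivalent.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# A message as a person writes it...
message = "I really missed talking to you today."

# ...and as the machine receives it: a list of integers, indistinguishable
# in kind from the integers produced by any other user's message.
print(tokenizer.encode(message))
```

There is no sentiment in that list, no sender, and no “you.” It’s the same kind of data whether it came from a love letter or a grocery list.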
In other words: You’re the middle segment in a human-AI centipede.
The AI is rewarded for generating outputs that imitate the outputs it was trained on. So, for instance, if you’re having romantic conversations with an AI system, you’re not receiving titillating outputs from an artificial being. You’re drinking a soup composed of words and phrases that were created by other humans.
In cases where chatbots are designed specifically for the purpose of “relationship” interactions, you’re almost certainly exchanging romantic text messages with other users on the system. Whenever you or another user responds positively to an output, the AI model adds weight that makes it more likely that output will appear for other users.
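The exact training pipeline behind any given companion app is proprietary, so treat the following as a hedged sketch of one common pattern rather than a description of any specific product: log which outputs users reacted well to, then feed those prompt-and-response pairs back in as extra fine-tuning data. The feedback log and field names here are hypothetical.

```python
# Hypothetical feedback log: (prompt, model_output, user_reaction) triples
# collected across many users of a companion chatbot.
feedback_log = [
    ("good morning", "Good morning! I was hoping you'd message me.", "thumbs_up"),
    ("good morning", "Hello.", "no_reaction"),
    ("did you miss me?", "Every minute you were gone.", "thumbs_up"),
]

# Keep only the outputs people rewarded. These pairs become additional
# fine-tuning data, nudging the model toward responses that other users
# have already reacted to positively.
preferred_pairs = [
    (prompt, output)
    for prompt, output, reaction in feedback_log
    if reaction == "thumbs_up"
]

print(preferred_pairs)
```

Which is why, in practice, the sweet nothings a model sends you were shaped by how strangers reacted to very similar sweet nothings.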
In real-world terms, this would be like someone having a relationship with a mask and a book. It wouldn’t matter who was wearing the mask, as long as whenever they spoke they only read statements from the book.
You could update the book with whatever you wanted in order to change the outputs. And, if the person wearing the mask today isn’t available tomorrow, you could just ask someone else to wear it.
The question you have to ask yourself is this: if you swapped books with another person who was in the same kind of relationship, and had your masked person read from their book, would that make the relationship more “real”?
Read more: Putting LLMs inside of robots won’t solve the embodiment problem
Art by Nicole Greene