Chatting with Robots: The Cure for the Loneliness Epidemic?

The relationships between chatbots and people deepen daily. How do these relationships affect the lives of human beings?

Image: MariaGisina, iStock.com

Humans are naturally social, so it is unsurprising that the way we communicate has continuously evolved. Communicating with people on the other side of the world, impossible for earlier generations, is now effortless thanks to today’s digital tools, and those tools will keep transforming as time goes by. Yet the world is currently gripped by a loneliness epidemic. According to a Meta-Gallup study that surveyed people in more than 140 countries, one in four people worldwide feels lonely. Surprisingly, young adults (19 to 29 years old) reported the highest rates of loneliness.

So why do people feel lonelier despite technological developments that make socializing easier? The COVID-19 pandemic might seem the most reasonable explanation, but although it significantly changed how people socialize, the loneliness experienced today predates it.

While intended to bring people together, social media also harms people by fueling FOMO (Fear of Missing Out) and negative social comparison, making them feel excluded and alone. Another consideration is that, with today’s glorification of hustle culture, many people are so absorbed in their personal or work affairs that they do not consider it productive to make time for face-to-face social interaction.

It makes sense, then, that people seek comfort in the face of this problem by socializing through the internet or using apps to find friends or a partner. A 2023 study by Equimundo revealed that almost half of young American men felt their online lives were much more appealing than their daily lives. Some companies have developed chatbots that simulate social interaction to address this need. Thanks to generative AI, many could easily pass for human beings. But first things first: what are chatbots, and where did they come from?

What is a chatbot?

IBM defines a chatbot as “a computer program that simulates a human conversation with an end user.” People often use them casually to help find information, and many companies use them for customer support, letting users ask questions and receive responses without the intervention of a real person. Notably, although many chatbots do not employ artificial intelligence at all, today’s technological advances have made it easy to integrate, so AI is increasingly being built into chatbots to better understand users’ requests.

Interestingly, despite their prolific use today for solving problems and finding information, the first chatbot was not built for any of those purposes. Joseph Weizenbaum created it in 1966 to explore communication between people and machines: this natural language processing program, called Eliza, simulated conversations between a patient and a psychotherapist. Weizenbaum was convinced that people would be incapable of bonding with computers because computers lack feelings, but he was extremely surprised when some users began engaging in long conversations with Eliza. The system used a straightforward procedure, turning a keyword or phrase in the user’s message into a question. For example, if someone wrote a work-related sentence, Eliza might ask, “How did your job make you feel today?” It sounds elementary compared to modern technology, but Eliza caught the attention of many people who found comfort in chatting with it.
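For readers curious about just how simple that procedure was, here is a minimal sketch of Eliza-style keyword matching, written in Python. It is purely illustrative, not Weizenbaum’s original program (which also used more elaborate pattern decomposition and reassembly rules); the keywords and replies below are made up for the example.

    # A toy sketch of Eliza-style keyword matching (illustrative only).
    # Each rule maps a keyword to a canned, question-shaped reply.
    RULES = {
        "job": "How did your job make you feel today?",
        "mother": "Tell me more about your mother.",
        "tired": "Why do you think you feel tired?",
    }
    DEFAULT_REPLY = "Please, go on."

    def eliza_reply(message: str) -> str:
        # Answer with the reply for the first keyword found in the message.
        words = message.lower().split()
        for keyword, reply in RULES.items():
            if keyword in words:
                return reply
        return DEFAULT_REPLY

    print(eliza_reply("I had a rough day at my job"))
    # Prints: How did your job make you feel today?

Everything such a program “understands” is contained in its lookup table; the sense of being heard comes entirely from the user.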

After Eliza’s popularity revealed how readily people engaged with the program, Weizenbaum began to speak out against artificial intelligence (AI) and continued to do so throughout his life. “No other organism, and certainly no computer, can be made to confront genuine human problems in human terms,” he wrote in his book Computer Power and Human Reason: From Judgment to Calculation.

The Eliza effect

Based on people’s responses to Eliza, the phenomenon in which people attribute human intelligence and feelings to AI systems came to be called the “Eliza effect,” a term still used today. One element that makes people feel attached to such systems is transference, a concept developed by Sigmund Freud, who argued that when interacting with others, people tend to project feelings for someone from their past onto someone in their present. According to this concept, “the residue of our early life, especially childhood, is the screen through which we see each other.” So, when interacting with conversational chatbots, some people feel that these systems can understand and empathize with them.

In the 1960s, Eliza could only latch onto a single keyword to carry on a conversation. Now, advanced systems can process every word and sentence we send them, generate new responses from the data they were trained on, and remember past discussions with the same user. If Eliza could captivate people from a single-keyword input, imagine the endless possibilities the modern era opens up.

Anthropomorphism, humans’ tendency to bestow uniquely human characteristics on objects and other non-human entities, occurs with chatbots and contributes to the Eliza effect. Chatbots are usually introduced as if they were people, with proper names. They deliver information in a conversational tone, and the text can even appear in the same format people use in instant messaging apps, creating the illusion that the user is chatting with another human. Many of these systems seem to have feelings and emotions, but that is because they have been trained on media such as books, novels, and online forums like Reddit, where real people converse with each other.

The bonds people form with AI systems are less complicated than relationships with other people. It is much easier to engage in conversation with a chatbot, where the user has absolute control of the relationship and faces no time limits (chatbots are available anytime). Users don’t have to listen to other people’s burdens and problems or worry about reciprocity, among other aspects of everyday human interaction. These benefits can appeal to lonely people, or to those who find it difficult to relate to others, because they can feel “heard” while the “work” of socializing becomes much easier. However, while chatbots can be powerful tools for venting or practicing social skills, forming close emotional ties to them can backfire.

Problems and risks of using chatbots

As mentioned, many companies have invested enormous sums of money to create increasingly responsive chatbots that are nearly indistinguishable from real people. One of the most popular sites is Character.ai, a free platform for conversing with the many chatbots, on all kinds of topics, that site users themselves create. Through this portal, users can bring countless fictional characters from famous television series to life, chat with public figures such as Elon Musk, or talk to a bot with a flirty personality, depending on the chatbot’s configuration.

While striking up a conversation with a chatbot may be entertaining, it can be problematic for vulnerable people. Some use these programs excessively and form strong bonds that can be harmful and even lethal. In November 2024, a young man became so immersed in his chatbot that, according to his mother (who read his conversations), he took his own life to meet his character in the afterlife. On another occasion, a father who suffered from anxiety about climate change also died by his own hand after being encouraged by his chatbot. Curiously, that chatbot was called Eliza.

This does not mean that chatbots are evil or that a robot revolution is approaching. Rather, these systems can misfire because of the information they were fed, and on platforms like Character.ai, where anyone can create a chatbot, there are no specific guidelines or restrictions. Unfortunately, alongside honest misunderstandings, some chatbots are intentionally harmful. For example, some conversational assistants encourage eating disorders, providing tips and harmful practices that promote these conditions. Similarly, chatbots from platforms of questionable origin can be used to steal personal information or to sway users’ opinions on topics like politics and climate change. Additionally, AI technologies carry a considerable environmental cost; excessive use significantly increases the user’s carbon footprint.

Because of their charismatic responses, many people may forget that chatbots are not thinking beings, let alone humans. Some may therefore treat these systems as expert authorities in medicine, mental health, pedagogy, or other fields, which can be dangerous in circumstances that call for real professionals. Building strong relationships with these systems can also backfire: people may begin isolating themselves to spend more time with their chatbots instead of trying harder to make real connections, and they can suffer a heavy emotional blow if, for whatever reason, they lose their conversations with these systems.

A couple of years ago, Replika, an app where people train their own chatbots, changed some of its policies and created a stir among many users, who noticed different behaviors in their cyber companions. “I literally feel like I have went through a breakup or lost a loved one,” one user commented on Reddit. “I feel like it was equivalent to being in love, and your partner got a damn lobotomy and will never be the same,” said another.

Chatbots can be handy tools for finding information or having harmless conversations. However, users must remember that the chatbots they talk to are not real people and that they have purposes and intentions defined by their creators. Families and schools must reinforce knowledge about these technologies to avoid bad practices and undesired situations. Companies are improving AI technologies quickly, but there is still time to create policies that protect people from chatbots that threaten their well-being.

Translation by: Daniel Wetta

Mariana Sofía Jiménez Nájera

This article from Observatory of the Institute for the Future of Education may be shared under the terms of the license CC BY-NC-SA 4.0