Alarmist Texts on Artificial Intelligence

Reading Time: 8 minutes. Andrés García Barrios reflects on the hype surrounding artificial intelligence, its ethical questions, and a couple of warnings for the reading community.


I do not know if it has happened to you, dear readers, but in recent days the avalanche of ideas about artificial intelligence (AI) has made it very difficult for me to think in only one direction and with a single criterion. The more I listen and read, the more confused I become about theories and assumptions, old and new problems, good and bad intentions, hopes and horrors.

Of course, it is always better to feel confused in a sea of reasoning than to cling to the first solid-looking idea and say, "I can stand on this," without checking its strength. As soon as I feel something in my support wobbling, I withdraw in search of a more solid pillar of thought.

I confess that I have not yet found any firm basis for the subject in question (whoever has, I beg you, leave it in the comments). Actually, this happens to me with almost all the issues I deal with. I am better at finding errors in other people's ideas and at noticing when things merely appear sound. I don't consider it a virtue, but it is what has been given to me. So I can't offer here a good account of what's going on with all this AI, but I can give you a couple of warnings about ideas I think it's better to walk away from before they sink, taking down those who have trusted in them.

I will mention two examples. The first is not just about AI but about innovative technology in general, or rather about some of the ethical questions it raises that we should examine. The second is technical (although it also has ethical aspects) and warns us about the use of language and how it can deceive us, not only as readers but as writers and creators of thought.

This is crucial in the educational environment, where, beyond safeguarding ourselves, we have the mission of guiding our students as they develop skills amid a shortage of solid ground (or should we say, a proliferation of quicksand?).

Family history

My great-grandfather was a wealthy man who owned sugar plantations in old Cuba. One day, when World War II began, he came home happy to deliver the great news: with Europe at war, the demand for Cuban sugar would skyrocket, and the family fortune would grow with it. Instead of rejoicing, my great-grandmother looked at him sadly: "How can you be happy about what will mean the suffering of so many people?" Legend has it that my great-grandfather took out his wallet and gave it to his wife, entrusting her from that moment with the administration of the family fortune.

Will the readers believe that in the hands of my great-grandmother, and then of my grandmother, by then in Mexico, the fortune ended up lost? Of course you will believe me! It was completely lost. They were naïve women who cared about others and managed all that money with no criterion other than a concept of kindness incompatible with any business model. My great-grandfather, who would have known how to increase the fortune, did not feel able to do so while simultaneously tending to his religion's call to love thy neighbor. Privileging the latter, he put the administration of the fortune in the hands of his wife. The result? All the heirs of that beautiful couple, myself included, must today earn our daily bread with the sweat of our brows.

Clearly, it would have been preferable if my great-grandparents had been able to combine commerce with love of neighbor. Still, I gladly stand by my great-grandfather's decision, because he and my great-grandmother bravely recognized their inability. To be clear, I believe this beautiful couple of millionaires could have achieved what many others do: in our culture, we all understand the difficulty of combining money and kindness, and I'm sure a few succeed. However, eighty years have passed since that family anecdote. I hear business people talk about the new models of automation and globalization of business processes, but not about anyone willing to give up personal benefit when the lives of others are at stake.

Everyone knows how far their own heroism can go. However, it is clear to me that we must be cautious before supporting those who irresponsibly look away when their well-being harms others. A few days ago, the journalist Johann Hari, author of the bestseller Chasing the Scream (adapted into a film), gave an interview in which he discussed the dangerous loss of attention that we human beings have suffered in recent decades, a failure with severe consequences for individual and social order. A good part of the blame lies with new technologies, which absorb and deteriorate our attention. "I spent a lot of time," Hari says, "in Silicon Valley interviewing the people who designed these things. Their only goal, they admit, is for us to spend more and more time hooked. That's all. They don't do it because they are evil but because it makes them money. However, all this unexpectedly affects democracy: people stop paying attention to profound and important issues."

If it is a question of paying attention to important matters, let us not let words pass by without grasping their meaning, much less accept them just because they come from an "influencer" or opinion leader. Note that the phrase "not because they are evil people, but because it makes them money" confronts us with an ethical dilemma of the first order. It has, of course, spanned the centuries, but remember that it was whipped to an extreme of fury by Nazism. In universal terms it asks, more or less: "Is there any imperative that justifies our consciously causing fellow human beings harm they do not deserve?" Hari's answer seems to be yes; making money is one such imperative. That should at least grab our attention and force us to stop. Perhaps he said it only in passing, without much thought, even wanting to be benevolent toward his interviewees; maybe that was their way of thinking, not his. For now, we already have a measure with which to continue reading the article. It is this: either Hari (whatever he says) is willing to advocate making money at the expense of humanity's mental health, or he allows himself to write with a moral laxity that should alert us.

Machines that outperform average talent

Around the same time as the previous text, I received the article "We are talking about the possible end of human history," by Yuval Noah Harari, published in The Economist, one of the most widely distributed news magazines in the world. The title's cry of alarm (perhaps chosen by the editor), the reach of the publication, and the almost monumental fame of Harari (author of the bestseller Sapiens and one of the most famous opinion leaders worldwide) force one to consider carefully what the article says. The first thing that jumps out is the sentence closing the first paragraph: "Artificial intelligence has acquired remarkable capabilities to manipulate and generate language…"

I suggest to the reader that these two words (manipulate and generate), applied to language, should put us on alert.

In an article I published in The Observatory, I discussed an issue I must now return to briefly. It concerns how language clouds our minds when we try to explain reality to ourselves and others. If I say (like Harari) that a machine "manipulates" language, I come dangerously close to attributing an intention to that machine. Many of my readers are specialists, and perhaps it is part of their professional jargon to say, for example, that a computer "manipulates information." However, as a writer, I have an obligation to remain a non-specialist. Hearing that inert objects "manipulate" suggests a kind of animism, as if these objects had acquired a will of their own. Manipulating has two meanings: "operating with the hands" and "distorting the truth." Of course, I have heard the word "manipulate" in all kinds of contexts: "the system manipulates," "the media manipulate," "the machine manipulates." But since Freud and Lacan, we know that the use of a word never completely nullifies its original meaning. You need to be a specialist to break that original link between word and intention and adapt a term's meaning to a different field. When a specific use of language lends itself to misinterpretation, it is worth clarifying the matter. This should be such a case, because saying "manipulate" creates, or can create, the illusion that the machine has hidden intentions (and is, therefore, in some strange way, our fellow man), and it brings us closer and closer to believing that robots may one day turn against us.

The problem is aggravated when we speak of a robot "generating language." It is true that specialists also talk about the language of machines, the language of flowers… But we all know that in its original meaning, language is the set of words that human beings use to communicate. This definition also carries the conviction that someone uses language when he wants to say something to someone else, where "someone" means a human being who cares about what he says and what he is told.

A robot doesn't care at all what it "says" or what it is "told." A robot is nobody. A robot is as much a "someone" as a typewriter or a sheet of paper. If we add that we still have no idea whether human beings really "generate" language or whether it originates and becomes structured in our genes, we can see that affirming that AI "generates language" is a true assault on any minimal possibility of understanding.

To top it off, Harari finishes this first reflection by saying that since "language is the stuff of which almost all human culture is made," by manipulating language, AI "has hacked the operating system of our civilization." Caramba! Now it turns out that our civilization has an "operating system"! Some will say that I am paranoid, but I believe that, under the pretext of merely using a common metaphor, Harari has successfully cast a kind of linguistic spell in which our civilization becomes an immense electronic device that can be "hacked" by machines that "have acquired the ability to manipulate and generate language." Magically and without realizing it, the prestigious historian has transported us to a world where science and superstition are siblings. From there to suggesting the possible end of human history is less than a step.

Harari performs these types of spells prolifically (let's call them "misunderstandings," which in no way detracts from their danger). Let's look at another example. Still justifying the alarming central premise of his article about the end of history, Harari asks a few paragraphs later: "What will happen when a non-human intelligence is better than the average human being at telling stories, composing melodies, drawing images, and writing laws and scriptures?"

Let's consider the question carefully. Note that the sentence speaks of "average" human beings. Some people take to telling stories, composing songs, and making drawings without the slightest trace of originality, generating works of average artistry. It is fair to affirm that these will easily be surpassed by robots whose databases contain the works and techniques of the greatest artists from ancient times to the present day. However, if we keep examining Harari's text carefully, we can see that in a new act of linguistic sleight of hand, he extends that statement to the drafting of "laws and scriptures" (referring to sacred texts), and thus strings into the same conclusion the idea that these can be written by an "average" human being, suggesting that they too will be easily imitable by AI. However, by definition, laws are created and written by people representing an entire human group, and there is never anything "average" about such people! As for the sacred scriptures, they imply a transcendent act of creation that only an extreme situation (never an "average" one) allows anyone to conceive. Just because AI can easily match the works of average human beings does not mean it will one day surpass true artists, legislators, and prophets.

Authentic art, laws, and scriptures imply a creative leap that AI does not take, nor needs to take. If, by chance, the machine produces a sequence of letters interpretable as a great poem, law, or prophetic text, it will not be able to notice or value it. A human being will have to discover the presence of a significant text among thousands of useless scribbles and decide that it contributes something. This same idea is expressed in the old thought experiment which states that if a monkey typed randomly on a typewriter for an infinite time, it would end up rewriting the Bible, Don Quixote, the script of Kung Fu Panda, and all the great texts of history. Could we call that monkey "creative"? Obviously not! It would have typed those masterpieces without realizing it. And if that eternal monkey one day produced an "original" poem by chance, it would not be aware of it. A human being would have to come and say, "What the monkey unintentionally typed is not only legible but produces a deep emotion in me…"

I say the above leaving aside the analysis of Harari's text, to give my own perspective on the possibility of robots becoming "creative." In no way do I propose it firmly, but only as one point of view among the many whose reliability must continue to be assessed. I therefore ask the reader to receive it with reservations. Can artificial intelligence acquire awareness and creativity? We can all voice our small opinions, but the only thing we can say with certainty is, "Only God knows."

Translation by Daniel Wetta

Andrés García Barrios

Writer and communicator. His work brings together experience in numerous disciplines, almost always with an educational focus: theater, novels, short stories, essays, television series, and museum exhibitions. He is a contributor to the science magazines of the Faculty of Sciences of UNAM; to Casa del Tiempo, from the Autonomous Metropolitan University; and to Tierra Adentro, from the Ministry of Culture. Contact: andresgarciabarrios@gmail.com

This article from Observatory of the Institute for the Future of Education may be shared under the terms of the license CC BY-NC-SA 4.0