The Education We Want | Robots That Write Like Humans and Humans That Write Like Robots

Reading Time: 8 minutes

In this new installment of “The Education We Want,” Andrés García Barrios answers the question of how to deal with the risk of students using a chatbot to write what should be their own texts.


A few days ago, my article “To Read or Not to Read Books,” part of the series “The Education We Want,” was published here in The Observatory. I wrote it without realizing that the subject was closely related to an issue being widely discussed in this space: at least five texts published or cited here deal with it (and I do not doubt that all kinds of reactions and comments will soon follow, given its radical importance).

The first is the editorial note by Karina Fuerte, in which she explains the growing use of chatbots to write texts that seem to be written by humans. Karina concludes her note by opening a survey asking whether we believe the text was written by her or is the work of a chatbot. A week later, in her next editorial, she gives us the answer: the note had been written by the artificial intelligence program ChatGPT, and yet more than half of the respondents attributed it to Karina’s pen. Stunned, besides confessing that she will have to work through the shock with her therapist, the would-be author proposes that we engage in some serious self-criticism and ask ourselves whether, in our texts generally, we resort so heavily to a clearly impersonal style of writing that it cannot be distinguished from robotic writing. For example, she cites an article published in The Chronicle of Higher Education pointing out the long tradition of impersonality embedded in academic texts.

What characterizes robotic word processing? First, it works entirely statistically. It starts by selecting all the ideas on the subject in its database; then it compares and ranks them according to our requests and rewrites them into a grammatically correct synthesis. (Obviously, its database can be as vast as the entire content of the internet.) Its second characteristic is that the resulting text completely lacks personality, that is, a stylistic expression of its own.
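The idea that such a program “works entirely statistically” can be illustrated with a deliberately tiny sketch: a toy bigram model in Python that continues a text by choosing each next word according to how often it followed the previous one. The corpus here is invented for illustration, and real chatbots use neural networks trained on vastly more data, but the underlying principle of sampling likely continuations is similar:

```python
import random
from collections import defaultdict

# A toy corpus standing in for the bot's "database" of ideas.
corpus = "the robot writes the text and the human reads the text".split()

# Count, for each word, which words follow it (duplicates encode frequency).
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def continue_text(word, length=5):
    """Extend a text by repeatedly sampling a statistically likely next word."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # no known continuation; stop
        out.append(random.choice(options))  # frequent followers are picked more often
    return " ".join(out)

print(continue_text("the"))
```

The output is grammatical-looking word salad with no author behind it, which is precisely the point: statistical fluency, zero personality.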

I mentioned my article “To Read or Not to Read” because it speaks precisely about the value of writing in which the author learns to use the rules not to imitate pre-established models but, on the contrary, to reflect their personality and their human individuality. Personal style is something living that cannot and should not be avoided. Let us learn to fill our texts fully with ourselves, I say.

In her first editorial, Karina also alludes to the danger of students generating school essays and reports with these tools and passing them off as their own. In another text, Sofía García-Bullé picks up the thread and emphasizes the opinion of some experts that the solution to such fraudulent acts is not punishment but the application of a code of honor that instills in students values of dignity and respect for themselves (and their work, their peers, and teachers).

One of the ideals of every society is to achieve this balance between ethical tradition and technological progress. Achieving it is not easy. Generally, rapid technical development does not sit back and wait for tradition to catch up. Instead, it tends to walk away from it, insisting (somewhat glibly) that it will not fail to take it into account. Therefore, tradition must stop being a thing of the past, take a quick leap into the future, and surprise technology by taking the lead.

I will try to explain.

I begin by remembering how electronic calculators began to flood the market in my adolescence (already distant, as I am about to make evident). My father, a true devotee of mathematical reasoning, restricted our use of them. He thought exercising that kind of reasoning was fundamental to achieving autonomy as human beings. At the same time, some of our friends argued that calculators should be used freely so we could take advantage of what technological advances allow. They believed that calculators were part of a new culture that came to free us from the burden of basic mathematical operations and let us focus on new challenges.

Both positions were partly correct. The solution was perhaps to strike a balance between them. However, experience soon told us that society would gradually abandon its admiration for mathematical reasoning in favor of the index finger pressing keys and touching screens. (My octogenarian uncle Pepe says that, to remind himself he is not stupid just because he cannot use the computer the way his grandchildren do, he placed beside it an old cardboard slide rule, a far more complicated tool, which he mastered well in his youth.)

The balance between past and present, I insist, will be achieved by looking broadly to the future. But the challenge before us is immense, I think, much greater than our already enormous and legitimate concern about robotic plagiarism. To understand its size, we must know its positive features and consider the benefits that text-writing programs can bring us (which are, by the way, only a tiny sample of the advances of artificial intelligence). A very clear one for scientific and social research is that today we can obtain, in a matter of seconds, a report as brief or as extensive as we want on any topic that humanity has already registered electronically (Karina’s 600-word editorial note is one example). This information can reach us organized into subtopics in whatever order we choose. It has an additional benefit as well: the tool’s ability to group ideas according to how many times they are referenced in the database. It can add subjective aspects related to that information with unprecedented precision, for example, the most widespread idea about a particular subject at a specific time, which is fundamental for the study of history. (We can ask what people think about bots today, or change the question to learn what people in the fifteenth century thought about the invention of the printing press.)

Now imagine that human beings manage to program a processor to imitate all historical writing styles and produce texts that seem to be written by people. This means texts that are not as impersonal as those we get today but truly “humanized.” What do I mean by humanized? In my article “To Read or Not to Read Books,” I affirm that every “human” text is written in an emotional state of uncertainty and in a particular context (some texts even have to be written on one’s lap, on the bus, in a rush to meet the editor’s deadline); the result will reflect, however subtly, that emotional state and the context in which it was written. The theorist Wolfgang Iser further explains that an author never says everything; consciously or unconsciously, they leave open certain “spaces of indeterminacy” that the reader must fill (readers’ skill in filling them depends partly on their familiarity with the subject and their experience as readers). Each author’s very personal way of writing creates their style. Thus, the chatbot I imagine could analyze, compare, and perform who-knows-what other operations on texts to pin down these subtleties and reproduce the “style” of any author, even the most sophisticated, such as Homer or Shakespeare. (With those tools, my fantastical chatbot could go even further and discover in a matter of seconds the influence of these two authors on the writers of German Romanticism, for example.)

Clearly, my chatbot does not have to develop what we call “a soul” or acquire the ability to one day intentionally deceive us, impersonating our grandmother and sending us a text signed by her. Certainly, if we program it to impersonate our grandmother in certain circumstances, it will obey our order when those circumstances arrive. Such a bot will simply continue to respond to our programming, however sophisticated it may be, so that even in this case the ethical problem will remain the one pointed out by Karina Fuerte and Sofía García-Bullé: whether or not to commit fraud. However (and here I enter the core of this article), I am very interested in pointing out how the ethical complexity of the matter grows immeasurably if we compare the skills of our bot (imaginary, I insist) with those of human beings as considered by one of the most influential neuroscientists of the moment. In his book Cerebro y Libertad (The Brain and Freedom), Joaquín M. Fuster advances one of the most current theories about consciousness. In it, he states (I use my own words) that our brain, which functions as a unit, is capable of receiving an immense amount of information and associating it in practically infinite ways; then, in the prefrontal cortex, it reviews those associations, discriminates among them, and channels the most viable ones (the most relevant, let’s say) into the execution of an action. This is what Fuster describes as “making a decision.” If the amount of information (and therefore of associations) were minimal, the “decision” would be very limited. However, since it is “infinite,” the possibilities extend with such magnificent amplitude that we can say the cerebral cortex is endowed with freedom.

How do we participate in this whole affair? According to Fuster, the consciousness of “we,” “our,” and “I,” that part of us that has the experience of being “alive,” of being “someone,” is only a witness without significant interference in the process. We are mere observers of what is happening, as if we were looking from behind glass, unable to intervene in what happens on the other side. The term Fuster uses at one point is that we are an epiphenomenon (a kind of side effect) of those brain processes. The term “epiphenomenon” is difficult to explain, but an example clarifies it: it is said that our shadow is an epiphenomenon of the fact that light rays reach our body; that is, the shadow is a secondary phenomenon that has no impact on the main phenomenon (that of light touching our skin, stimulating its receptors, producing heat, etc.). Me? I am just a kind of shadow of the activity of a brain interacting with particular circumstances. As a secondary phenomenon, I have no decisive interference in that mental activity or interaction (even my language, my way of speaking, is nothing but a resource the prefrontal cortex uses to process information and transmit it to other brains). As Fuster says somewhere, “The prefrontal cortex is free; we are not.”

(Allow me to open a parenthesis to refer the reader to a text by the eminent Spanish philosopher Juan Arana, in which he reviews the Spanish edition of Fuster’s book with close attention and a rigorous critical approach.)

According to Fuster’s shocking philosophical perspective (it should be clarified that, although he is a renowned researcher, the conclusions described here are not a scientific theory), Karina Fuerte would not be the real author of either her first editorial note or her second, because in writing them she would only have had the impression of doing so, when in reality she was merely an observer of the “free” execution of her prefrontal cortex, the actual author of the text.

As we can see, the most radical dimension of the ethical problem is not the dilemma of whether or not to program a bot to impersonate us, with the consequent lack of respect for ourselves and our peers. The most transcendent dilemma is whether or not to subscribe to the proposal that we are akin to a super-bot whose consciousness is nothing more than a relatively useless epiphenomenon of brain processes. In other words, before the risk of losing our dignity comes the loss of our identity, justified by a philosophy of science that seems to surrender to depersonalization.

Faced with this, which for me is a dark fantasy, I prefer to look to the future and to propose, and fight to bring into it, the ethical tradition of considering ourselves beings capable of interacting with reality and modifying it (in a word, to fight for a future in which human beings also fit). Therefore, returning to the subject of the beginning, I would like to point out, in as much depth as possible, the ethical opportunity that opens up from the question of how to face the risk of students using chatbots to write what should be their own texts.

First, like those friends of my youth, many students may refuse to waste these advances, considering them elements of culture and not personal resources for fraud. If so, we can certainly bet on instilling in them an ethic that advocates their dignity and self-respect, so that together we discover a balance between traditional values and progress. In the search for this balance, we must decide whether to continue asking our students for academic essays in which they reproduce, one by one, the things they have learned (as if plugged into an electronic processor) or to ask them for “personal” texts in which, as authors, they endow that knowledge with their own perspective and pour into it, if necessary, their private selves. If the school favors what identifies us as human beings, it will inevitably be betting on a society in which information, and the decisions emerging from it, will be permanently impregnated with something we can call personality or human identity, that is, that part of ourselves where ethical values are ultimately safeguarded.

Translation by Daniel Wetta

Andrés García Barrios

Writer and communicator. His work brings together experience in numerous disciplines, almost always with an educational focus: theater, novels, short stories, essays, television series, and museum exhibitions. He is a contributor to the magazine Ciencias of the Faculty of Sciences of UNAM; Casa del Tiempo, of the Autonomous Metropolitan University; and Tierra Adentro, of the Ministry of Culture. Contact: andresgarciabarrios@gmail.com

This article from the Observatory of the Institute for the Future of Education may be shared under the terms of the CC BY-NC-SA 4.0 license.