Beyond ChatGPT: Toward Regulating Artificial Intelligence

Reading Time: 5 minutes

The risk profile that artificial intelligence may pose within the education sector has not yet been addressed in detail.


The debate about the role that artificial intelligence (AI) will play in the days and years ahead remains controversial: some voices argue that its development should continue uninterrupted to promote innovation, while others call for a pause so that the institutions responsible for its regulation can prepare. Meanwhile, artificial intelligence continues to materialize and position itself within our uncertain future.

Although the present trigger for this debate is the popular generative AI tool ChatGPT, the Chief Executives Board of the United Nations (UN) pointed out in its Session 34 Report the need to “address challenges arising from the development of ‘frontier’ technologies – artificial intelligence, cyberspace, biotechnology, and new weaponry” (UN, 2017, p. 1). Focusing on disaster risk reduction (DRR) and contemplating the future of work, the report recognizes that the actors introducing these technologies are not States, and that States have not yet demonstrated their ability to regulate them or to protect citizens from their negative impacts. This regulatory absence, together with the lack of ethical provisions by the companies developing these technologies, has led to the emergence of AI-enabled social engineering tactics such as voice imitation, and to more serious scenarios, such as realistic videos of events that never happened (deepfakes), which proliferate as a threat to social and global security.

In November 2021, UNESCO published its first Recommendation on the Ethics of Artificial Intelligence to align international efforts involving the development of artificial intelligence. Adopted by the organization’s 193 member countries, the recommendation seeks to facilitate the actions of public decision-makers by providing tools to evaluate their capacity to implement it in public policy (Readiness Assessment Methodology) and to identify and forecast the impacts that an artificial intelligence system may have (Ethical Impact Assessment).

Without going into detail about the organization of the recommendation, one section of particular interest to us is public policy area number 8, Education and Research, which suggests mechanisms to encourage the involvement of different actors to achieve eleven goals, among which are:

  1. Provide adequate education to teach AI literacy to the general public.
  2. Promote the acquisition of the prerequisite skills for education in artificial intelligence.
  3. Promote awareness programs on developments in artificial intelligence.
  4. Promote research initiatives on the responsible and ethical use of artificial intelligence in teaching, teacher training, and online teaching, as well as the mitigation of its challenges and risks.

However, while these guidelines direct us toward the ideals to achieve, they do not establish the next steps for educational systems and institutions to follow.

What about education?

Education has not yet addressed in detail the risk profile of artificial intelligence. In principle, the arrival of tools such as ChatGPT has triggered a reaction of rejection in higher education institutions due to the potential for students to plagiarize by having texts, reports, or essays generated for them. Considering this, Gift and Norman (2023) argue that academic authorities should not focus on the criminalization or prohibition of these tools but rather reevaluate what makes us human – our sense of dignity, virtue, and respect for our work, teachers, and peers.

However, this approach is far from the vision of disaster risk reduction initially proposed by the UN: it visualizes only the potential damage to academic performance and ignores other, more relevant risks, such as the multiple implications of a generation of children and adolescents growing up hand in hand with artificial intelligence. In this scenario, Rumsfeld (2002) points out that a panorama of “unknown unknowns” confronts us: risks that we do not know exist. Risk assessment and foresight can help us begin to identify the unknowns we face, with the ultimate goal of identifying threat, vulnerability, exposure, and risk factors.

Table 1. Rumsfeld’s matrix, exemplifying the implications of a generation growing up with artificial intelligence.

| Risks | …what we know | …what we don’t know |
| --- | --- | --- |
| **We know** | **Knowledge and facts.** Knowledge that we consciously recognize and know how to access. Ex.: publications on the ability of AI algorithms to mold and guide political opinions. | **Knowledge horizon.** Knowledge that we consciously recognize but for which we have no answer. Ex.: the long-term effects of using contemporary AI. |
| **What we don’t know** | **Intuition and unawareness.** Unconscious biases that we can recognize with reflection or support. Ex.: the inclination to reinforce AI-driven habits (e.g., binge-watching). | **Erroneous beliefs, ignorance.** Concepts we assume are true without questioning or being able to prove them. Ex.: “writing” the future of artificial intelligence, developing it without assuming its misuse. |
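For readers who think in code, the four quadrants of Table 1 can also be expressed as a simple lookup structure. The sketch below is purely illustrative: the category names, descriptions, and examples come from the table, while the dictionary layout and the `classify_risk` helper are our own assumptions about how one might encode it.

```python
# Rumsfeld's 2x2 risk matrix (Table 1), encoded as a dictionary keyed by
# (awareness, knowledge) pairs. Illustrative sketch only.
RUMSFELD_MATRIX = {
    ("we know", "what we know"): {
        "category": "Knowledge and facts",
        "description": "Knowledge we consciously recognize and know how to access.",
        "example": "Publications on AI algorithms molding political opinions.",
    },
    ("we know", "what we don't know"): {
        "category": "Knowledge horizon",
        "description": "Knowledge we consciously recognize but have no answer for.",
        "example": "Long-term effects of using contemporary AI.",
    },
    ("we don't know", "what we know"): {
        "category": "Intuition and unawareness",
        "description": "Unconscious biases recognizable with reflection or support.",
        "example": "Inclination to reinforce AI-driven habits (e.g., binge-watching).",
    },
    ("we don't know", "what we don't know"): {
        "category": "Erroneous beliefs, ignorance",
        "description": "Concepts assumed true without questioning or proof.",
        "example": "Developing AI without assuming its misuse.",
    },
}


def classify_risk(awareness: str, knowledge: str) -> str:
    """Return the Rumsfeld category for a given awareness/knowledge pair."""
    return RUMSFELD_MATRIX[(awareness, knowledge)]["category"]


print(classify_risk("we don't know", "what we don't know"))
# prints: Erroneous beliefs, ignorance
```

Encoding the matrix this way makes the point of the exercise explicit: foresight work is largely about moving items out of the bottom-right quadrant into ones where they can be assessed.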

Along this line, let’s focus on one example: mental health. An article from Bloomberg (2023) relates how ChatGPT (although not specifically designed for this purpose) has been used to provide mental health care. Although the number of users who might be looking to acquire mental health care from AI is unknown, the number of queries on the social network Reddit by users who lack resources or access to health care has increased considerably. These users seek recommendations for “prompts” that yield better interactions with AI, sometimes assessing their experiences with AI as “better” than those with actual therapists. As the article “Artificial Intelligence for Mental Health: A Review of AI Solutions and their Future” points out, this situation coincides with findings reporting a lack of geographic or financial access to trained health professionals worldwide. For example, Argentina and the United States have approximately 222 and 29 psychologists per 100,000 inhabitants, respectively, in contrast to Mexico, with barely three mental health professionals per 100,000 inhabitants (Castañeda-Garza, Ceballos, & Mejía Almada, 2023).

While artificial intelligence providers aimed at education, such as Duolingo or Khan Academy, maintain spaces appropriate to their mission, their use is low compared to other platforms and social media applications. Without guidance for adults and without adequate education for teachers and families, these AI tools have less impact than their entertainment alternatives.

AI Regulatory Initiatives

The purpose of regulating artificial intelligence is to facilitate the understanding between the State and society, establish the enforceability of people’s inherent rights, and define the obligations of AI providers. Thus, it is relevant to note some recent events:

March 22, 2023: Following the accelerated growth of artificial intelligence tools, several technology leaders and AI researchers, such as Elon Musk (CEO of SpaceX and Tesla) and Steve Wozniak (co-founder of Apple), published an open letter calling for a pause in AI development to ensure the design of adequate security protocols and to establish confidence that its effects will be positive and its risks manageable.

March 27, 2023: Deputy Ignacio Loyola Vera proposed an initiative in the Chamber of Deputies of Mexico to issue the Law for the Ethical Regulation of Artificial Intelligence and Robotics, arguing that it needs to be regulated now; to wait until its use is widespread “will be too late.”

May 11, 2023: The European Parliament adopted a first draft of the Artificial Intelligence Act, seeking a human-centered perspective and emphasizing transparency and the management of AI’s risks.

May 16, 2023: Sam Altman, CEO of OpenAI, testified on Capitol Hill before the US Senate that his worst fear is that artificial intelligence will not be regulated soon and something will go wrong, emphasizing, “and if it goes wrong, it will go very wrong.”

May 23, 2023: The U.S. White House published a press release announcing a series of efforts to advance the research, development, and deployment of artificial intelligence responsibly. These efforts include documents such as 1) the Blueprint for an AI Bill of Rights, 2) the AI Risk Management Framework, 3) a roadmap for a National Artificial Intelligence Research Resource (NAIRR), 4) the National Strategic Plan for Artificial Intelligence Research and Development, 5) a request for information (RFI) on critical issues associated with artificial intelligence, and finally 6) a report from the Office of Educational Technology of the United States Department of Education on the risks and opportunities of AI in education.

In our next article, we will continue discussing the particularities of these regulations and their implications for education.


Gerardo Castañeda Garza has a degree in Psychology and a Ph.D. in Educational Innovation from Tecnologico de Monterrey.

He serves as the Data Acquisition Coordinator of the Living Lab & Data Hub of the Institute for the Future of Education at Tecnologico de Monterrey. He has collaborated in a variety of interdisciplinary projects at the international level, including the Binational Laboratory for the Intelligent Management of Energy Sustainability and Technological Training (CONACYT-SENER), with energy literacy research and disaster risk reduction projects with Tohoku University and Hosei University in Japan.

Currently, he is part of the TRANSFORM international network of researchers based at the University of Waterloo, which promotes the transition and development of resilient business models in small and medium-sized enterprises (SMEs), and the Research Group on Educational Innovation (GIIE) at Tecnologico de Monterrey.

His line of research revolves around interdisciplinarity in education and sustainability issues, enjoying a generalist profile that addresses and combines multiple areas of knowledge with the desire to increase societal resilience. In his daily life, he enjoys green tea and working out regularly.


This article from Observatory of the Institute for the Future of Education may be shared under the terms of the license CC BY-NC-SA 4.0