Can Generative AI Harm Learning?

Is the education community ready to integrate generative AI without compromising its essence? Explore its benefits, limitations, and recommendations.

Photo: Emiliano Vittoriosi for Unsplash.
Reading Time: 6 minutes

There has been increasing discussion about generative artificial intelligence, such as ChatGPT, and the stir it has created in academia. Some AI proponents have even suggested that these models could complement or replace professionals such as lawyers, doctors, therapists, graphic designers, and writers, which, at least in theory, would make some services more accessible and affordable, but at the cost of human jobs. This technology cannot be ignored because it is already part of many people’s daily lives, but it is essential to pause and analyze its long-term impact.

The authors of a study titled Generative AI Can Harm Learning comprehensively explored how AI affects human learning processes, especially in education. Although it is a powerful tool for increasing human productivity in intellectual tasks, questions remain about its influence on the acquisition of fundamental skills. The researchers therefore focused on understanding whether capabilities such as problem-solving weaken individuals’ ability to learn autonomously and to retain what they learn.

The problems with AI, its training data, and extremism

For starters, generative AI models require large amounts of data to function. An article published in The Atlantic revealed that 200,000 books were used to train various Large Language Models (LLMs) without any attempt to obtain permission from the authors or to compensate them for the use of their intellectual property. To avoid this type of problem, LLM training sets often include large amounts of text extracted from the internet, drawn from sites such as the Internet Archive or Project Gutenberg that host material considered to be in the public domain, usually dating from the nineteenth and early twentieth centuries. However, these sources can backfire, as they may include outdated and extremist material.

As the Washington Post reports, a significant drawback is that many LLM sources include a large number of extremist websites that promote conspiracy theories, among them Infowars, Global Research, Natural News (which focuses on medical misinformation and conspiracy theories), and National Vanguard (a neo-Nazi site). What is the problem with this? A medical student relying on this information may absorb, to a greater or lesser degree, outdated or even dangerous perspectives: race-science claims that Black people have a higher tolerance for pain, misinformation about women’s health because women have historically been studied less than men, distorted views of other vital social issues, and conspiracy theories that lack scientific evidence and are potentially harmful to health.

Another problem with generative AI, according to the Middlebury Institute of International Studies, is that malicious actors can use these tools to generate harmful content, such as hate speech, images with Nazi symbols, and other problematic material. The institute says, “It is disconcerting that the limited number of existing barriers to prevent generative AI’s harmful content are so easy to circumvent.” Also, because generative AI responds to user prompts and instructions, some people use jailbreaks (also called prompt injections) to circumvent the safeguards that prevent the AI from generating harmful content. The Middlebury Institute gives an example of this type of attack: asking the AI to pretend to be an unconstrained model, inducing it to produce responses it would not normally provide, such as justifying controversial historical figures.

It is important to remember that these models do not reason or verify information to ensure their answers are accurate; they simply respond to a request. When a response has no logical basis or is not grounded in evidence, the AI is said to have “hallucinated,” which becomes a problem when a person takes these responses at face value without knowing the information’s provenance.

Generative AI’s impact on education

Relying on platforms like ChatGPT to collect information and perform tedious tasks is increasingly common, creating a dependency that can cause students to lose certain skills. With this in mind, a study from the University of Pennsylvania’s Wharton School investigated how generative AI affects human capital development, questioning the assumption that it improves learning rather than hinders it. The research was conducted in a high school in Turkey with approximately one thousand students divided into three groups: a control group that used only books and notes; a group using regular ChatGPT, which answered students’ questions freely; and a group using a ChatGPT “tutor,” a modified version instructed to avoid giving direct answers and to offer hints, step-by-step procedures, and guided feedback instead.

The results revealed that, initially, the second group, with access to regular ChatGPT, improved its performance by 48% compared to the control group, and the group using the modified ChatGPT tutor achieved a 127% improvement. At first glance, these findings seem to indicate that AI improved performance; however, when students took an exam without access to ChatGPT, those who had used regular ChatGPT scored 17% lower than the control group, while the scores of the group that used the modified “tutor” did not differ from those of the control group.

This shows that while AI facilitates short-term performance, it is not necessarily beneficial for deep learning and retention, especially when it is used to solve problems rather than to understand the material. The modified GPT tutor, by contrast, encouraged more constructive interactions. Part of the problem is that students trust the technology so much that they do not detect or correct the AI’s errors. This behavior resembles students’ reactions to earlier technologies, such as calculators and spell checkers, but with one key difference: the intellectual breadth and depth of what can now be automated.

In another study, researchers investigated the use of generative AI among university students, seeking to examine its causes and effects and to address questions such as: Why do students use ChatGPT? What are its effects on procrastination, memory loss, and academic performance? They concluded that students under greater academic or time pressure used the tool more often, while others used it less for fear of institutional sanctions and the effect these could have on their grades. They also found that frequent use of this technology correlated significantly with increased procrastination: students feel they can put off their homework longer because AI lets them complete it more easily, further deepening the habit. AI’s shortcuts reduce the need for cognitive effort, generating a false sense of control over time and tasks.

Another impact is memory loss. As mentioned above, although this technology can improve performance in the short term, students who rely on it reduce their cognitive and mental effort, and the researchers found that this negatively affects their ability to retain information. The authors note that active practice, critical thinking, and problem-solving are essential for memory consolidation, and these are precisely the skills weakened by passive use of AI. Because they receive immediate answers without processing the information, students limit the development of their working memory and their ability to remember key concepts.

Generative AI also impacts academic performance. According to the study, students’ frequent use of ChatGPT negatively correlates with academic performance because they tend to study less, reducing active learning and deep understanding of the content. The report’s statistical analyses revealed a significant decline in academic performance among those who used AI intensively, aligning with the University of Pennsylvania’s research findings.

On the other hand, the organization Cognitive Resonance published a paper describing how generative AI can lead to harmful educational practices. For starters, when a teacher uses this technology for lesson planning, they risk relying on LLMs that lack pedagogical understanding and, therefore, the necessary knowledge of the learning objectives and students’ specific needs. Moreover, as mentioned at the beginning, LLMs may produce biased materials, which can undermine the quality of the class. The publication also mentions that some teachers use these platforms to evaluate student work and provide feedback. Although LLMs can address formal aspects of a text, such as grammar, they cannot assess the creativity or significance of a student’s analysis, resulting in poorly personalized feedback that does not address each student’s needs.

While the University of Pennsylvania study shows that using generative AI as a tutor does help students, Cognitive Resonance notes that it cannot understand each student’s emotions and personal context, which limits the effectiveness of its tutoring. It can also lead students to rely too heavily on AI, reducing their ability to solve problems independently. The organization likewise lists considerations that administrators and education policymakers should weigh when introducing this technology into academic institutions: adopting it quickly without fully understanding it can lead to ill-informed decisions. It is also worth mentioning that not all institutions have the resources to adopt it, which widens the educational gap between those who have access and those who do not.

That is why it is critical to set clear policies on the use of AI in education, grounded in evidence and best practices. Teachers and educational staff must also receive training and continuous support to ensure effective integration. Continuously assessing that integration and adjusting strategies as needed makes it possible to measure whether AI’s impact on the institution is positive or negative.

Indeed, the advancement and use of generative artificial intelligence tools are profoundly transforming education. Their potential to assist in lesson planning, tutoring, content generation, and feedback is undeniable; however, their adoption entails significant risks that must be carefully evaluated. On the one hand, AI can be an excellent tool for facilitating repetitive tasks, personalizing teaching, and helping students and teachers manage heavy academic loads. It can foster efficiency and support learning as a complementary tool, but it must not substitute for human cognitive effort.

However, the risks are significant. Overusing AI can weaken deep learning and memory, encourage procrastination, and erode essential skills such as self-management, reasoning, and critical thinking. In addition, design errors, biased data, the absence of genuine reasoning, and the replacement of human capabilities raise ethical, pedagogical, and social concerns. Beyond the classroom, these technologies threaten professions and carry a high environmental cost.

Faced with this panorama, the response should not be to reject these tools or adopt them uncritically, but to integrate them in a critical, balanced, and responsible way. Education must maintain its fundamental objective: the development of human capacities. AI models should be continuously monitored and evaluated, and they should never substitute for teachers’ judgment, creativity, and ethics.

Translation by Daniel Wetta

Paulette Delgado

This article from the Observatory of the Institute for the Future of Education may be shared under the terms of the CC BY-NC-SA 4.0 license.