When discussing generative artificial intelligence in academia, the conversation often centers on the ways students can misuse it and the impact on their education: plagiarism, dependence, and a long-term erosion of critical thinking skills, among other effects. Where students are concerned, then, generative AI discussions usually focus on the negative, but what about teachers and institutions?
In contrast, discussions about teachers tend to assume they will use generative AI appropriately: to help plan their classes and to offload the tedious work that usually drains their valuable time.
However, a notorious case at Northeastern University caused a stir in academia. A student realized that her professor was using ChatGPT in his classes so heavily that it became obvious. The professor admitted that he was, indeed, using it. The student then asked the university to refund her tuition, arguing that the situation was unfair. When her request was denied and the professor faced no consequences, she left the university and enrolled in another.
She is not the first student to feel uncomfortable upon discovering that a teacher uses AI, nor will she be the last. The New York Times reports that many students turn to sites such as Rate My Professors to accuse their teachers of using these tools indiscriminately, reflecting their conviction that their education should be in the hands of humans, not machines. In the United States, the Time for Class survey found that 18% of instructors described themselves as frequent users of generative AI last year; that figure nearly doubled when the survey was repeated this year. Unsurprisingly, the percentages keep rising as generative AI tools become ubiquitous, adopted, and normalized by more people.
What began as a concern that students would use these systems indiscriminately is now viewed from the opposite perspective: students want to learn from legitimate human teachers, not machines, both because many pay large sums to study at particular institutions and because they wish to develop the skills needed to navigate an uncertain future.
Students often use these tools to complete their assignments, projects, and exams. Many teachers deploy countermeasures such as AI detectors, but these can be unreliable: a student's use of AI may seem apparent yet be difficult to verify, and detectors can falsely flag work as AI-generated when it is not. As a result, professors can grow fed up and unmotivated and, in extreme cases, quit teaching. Conversely, teachers' misuse of these platforms can send students the wrong message: that their educators do not want to engage with them and are uninterested in providing a high-quality education.
It should also be noted that fully trusting AI is risky because these systems can hallucinate, producing incorrect or outright false results. Moreover, when professors overuse these platforms, students may come to doubt their knowledge and experience. Actions, as the saying goes, speak louder than words. Consider an art teacher who shows their students an illustration made with AI. Such images often contain inconsistencies that make them easy to identify as AI-generated, and some students will realize the piece was not created by a human. It would be contradictory for an art teacher to present a work lacking human imagination, creativity, and feeling, especially when the art industry has suffered because of generative artificial intelligence. Leaving all the work to generative AI is a serious ethical violation: it has repercussions not only for the student body's future but also for educators' abilities to organize, instruct, and plan, and it damages their reputation and, by extension, that of the institution where they work. The following examples illustrate some poor teaching practices:
- Allowing AI to review a text based on the teacher’s rubric without the educator’s review or feedback.
- Uploading students’ work to AI platforms, which risks the theft of the work’s intellectual property and of students’ personal data.
- Presenting AI-generated texts to students without verifying the information’s veracity.
However, as multiple articles in The Observatory have argued, the point is not to demonize generative AI but to understand its proper use and its limitations, especially since these tools are here to stay. Generative AI cannot compete with real, human teacher-student interaction: educators perceive and connect with their groups daily, learning their behaviors and personalities, and can therefore give each student truly personalized feedback, something AI cannot derive from a couple of projects.
The following examples illustrate ways teachers can integrate these technologies into the classroom effectively:
- A math teacher trained a chatbot on his subject knowledge to answer his students’ frequent or simple questions, sparing him from repeating the same information and even encouraging introverted students to voice their doubts. This freed him to focus on the more intricate questions his students might have and gave them another resource for their studies.
- Using programs such as Grammarly to detect grammatical errors in a project, essay, or rubric, so that the teacher can dedicate time to an in-depth review and provide personalized feedback to each student.
- Relying on platforms such as Khan Academy to assess students’ knowledge of a topic, adapt the curriculum accordingly, or even create a personalized plan for each student.
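At its simplest, the chatbot in the first example could work like a FAQ responder that matches a student's question against answers the teacher prepared in advance. The sketch below is a hypothetical illustration of that idea (not the actual system the teacher used); the questions and answers are invented placeholders.

```python
# Hypothetical sketch: a tiny FAQ chatbot seeded with a teacher's
# prepared answers. It picks the known question with the greatest
# word overlap, or defers to the teacher when nothing matches.
FAQ = {
    "When is the exam?": "The midterm is in week 8; see the syllabus.",
    "What is the derivative of x squared?": "d/dx of x^2 is 2x; see the lecture 3 notes.",
}

def answer(question: str, faq: dict[str, str]) -> str:
    q_words = set(question.lower().split())
    best_reply, best_score = None, 0
    for known_q, reply in faq.items():
        # Score by how many words the two questions share.
        score = len(q_words & set(known_q.lower().split()))
        if score > best_score:
            best_reply, best_score = reply, score
    # Fall back to the human teacher for unknown questions.
    return best_reply or "I'll pass this question to your teacher."
```

A real deployment would use a language model or a course platform rather than word overlap, but the division of labor is the same: routine questions are answered automatically, while anything unrecognized is routed back to the human teacher.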
It is essential that educators state explicitly how generative AI tools will be used. Transparency in using these platforms will help avoid misunderstandings and help students understand how these tools benefit both them and their teachers.
Whether you are an educator or a student, it is essential to recognize that generative AI tools, already part of our daily lives, affect crucial skills such as critical thinking, problem-solving, and decision-making, and should be used only as a support for teaching and learning. Staying informed about the AI platforms we rely on isn't optional; it's a shared responsibility, as is being aware of their environmental impact on our planet.
Generative AI can help us with some tasks, saving time and making room for activities that require more attention. Still, we must be transparent and conscientious, distinguishing between the functions AI should perform and those requiring human attention, effort, and essence.
Translation by Daniel Wetta
This article from Observatory of the Institute for the Future of Education may be shared under the terms of the license CC BY-NC-SA 4.0 