The Communication Style with AI Influences Learning

The communication style and the emotional tone that students use with AI influence their learning. Learn about the findings of a study exploring how the emotional connection students establish with chatbots directly affects the depth of their thinking and the quality of their learning.


Imagine students in your class using AI to solve complex business problems. Some interact mechanically; others engage in dialogue. Does this matter? Absolutely. How students engage with AI shapes how deeply they learn. In this article, I present the findings of a study from the Complutense University of Madrid, where International Strategic Management students used a personalized chatbot to analyze a real company's strategy. The result: not all students benefit equally from AI; the difference lies in the emotional tone and style of their relationship with the machine. I share these findings, their benefits for learning, and a practical proposal for your class.

AI literacy isn’t just knowing how to use the tools; it also involves understanding our interactions with them and their influence on learning. Recognizing students’ usage patterns helps teachers design meaningful experiences. My study explored how students’ emotional relationships with intelligent tutors affect their thinking depth and which pedagogical strategies enhance that relationship.

AI as a thought partner and not as a dispenser of answers

According to UNESCO (2024) data, 78% of higher education institutions in Latin America and Europe incorporate AI tools into their training processes. As we know, adopting technology alone does not guarantee improved learning; therefore, it is necessary to investigate how students interact with the tools available to them. Nguyen et al. (2024) found that many students use chatbots passively; that is, they consume responses without critically processing them. Additionally, Kosmyna et al. (2025) warn that interaction with AI can reduce cognitive load but also encourage what they call “metacognitive laziness,” i.e., when students let AI “think for them” without reflecting on the content.

Thus, we face a paradox: the same tools that are designed to enhance learning can, if used passively, weaken critical thinking. My research stems from a central thesis arising from this paradox: close, bidirectional human-machine communication is associated with active use of the chatbot (intelligent tutor) and greater development of strategic thinking.

Recent research in AI pedagogy converges on one idea: AI is most effective when we treat it as a thought partner, not a dispenser of answers. In this regard, Kirk et al. (2025) argue for a “socio-affective alignment” between humans and AI. This means that education must transcend the merely functional to cultivate meaningful interactions with technology. An inspiring success story is reported by Rodríguez-Maya and Aylas-Flórez (2025). These researchers reported that students who engaged in iterative, reflective conversations with intelligent tutors showed greater academic engagement and conceptual mastery, especially when they received personalized feedback.

In conclusion, we are called to develop a new literacy: the ability to communicate with machines reflectively, collaboratively, strategically, and, yes, with emotional connection.

Pedagogical study experience

During the spring semester of 2024-2025, I implemented a pedagogical experience in my International Strategic Management subject. The goal was clear: to investigate whether the way students communicate with an intelligent tutor affects the depth of their strategic thinking. To do this, I designed a personalized chatbot that guided students in applying the theoretical frameworks covered in the course. In this case, Ghemawat's (2001) CAGE model was used to analyze the international strategy of a real company. This was not a traditional essay task but a research task, in which inquiry and critical reflection were the main activities the teacher evaluated.

The technology involved was modest but effective: a chatbot with personalized instructions, designed by the educator from the course's theoretical content. Fifteen students participated in the activity. Each interacted with the chatbot individually for 45 minutes. Most chose to use ChatGPT. Every conversation was recorded for later analysis.
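The article does not reproduce the chatbot's actual instructions. Purely as an illustration, a tutor of this kind can be configured with system instructions that steer the conversation toward inquiry rather than ready-made answers. The wording, the helper name `build_tutor_instructions`, and the company name below are hypothetical, not the configuration used in the study:

```python
# Hypothetical sketch of system instructions for a CAGE-model tutor chatbot.
# The wording is illustrative only; it is NOT the prompt used in the study.

CAGE_DIMENSIONS = ["Cultural", "Administrative", "Geographic", "Economic"]

def build_tutor_instructions(company: str) -> str:
    """Assemble system instructions that push the student toward inquiry
    and follow-up questions rather than finished analyses."""
    dims = ", ".join(CAGE_DIMENSIONS)
    return (
        f"You are a tutor for an International Strategic Management course. "
        f"Guide the student in applying Ghemawat's CAGE framework "
        f"({dims} distance) to the international strategy of {company}. "
        "Do not deliver finished analyses. Answer briefly, then ask a "
        "follow-up question that deepens the student's reasoning, and "
        "invite reflection on the future implications of each decision."
    )

# "ExampleCo" stands in for the (unnamed) real company analyzed in class.
print(build_tutor_instructions("ExampleCo"))
```

Instructions like these can be pasted into any chatbot platform that supports custom system prompts; the pedagogical point is that the tutor is told to ask back, not only to answer.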

The methodology and technologies incorporated

To analyze the conversations between the chatbot and each student, I employed a qualitative approach, using thematic coding and analysis of the co-occurrence of extracted codes. The data analysis was conducted using ATLAS.ti, a tool designed for qualitative research. I developed a codebook and established three key code families:

  1. Type of assistance requested from the chatbot. The codes in this family captured the types of questions students asked the chatbot: for example, were they simple analysis questions or basic company-data searches, or, conversely, did students apply critical thinking to elaborate their queries?
  2. Communicative style and tone. These codes distinguished among collaborative tone (a communication style that emulates human-human communication), neutral tone (with no evidence of a relationship with the chatbot), and passive tone (in which human-AI conversation occurs without sequential logic).
  3. Strategic orientation. This last family of codes captured how students applied the subject's theories when formulating questions, as well as whether they considered the future consequences of business decisions.

I segmented the conversations into two groups: those that showed a socio-affective communication style with the chatbot (RELATES) and those that did not (NOT-RELATES).

The main finding was that students who adopted a relational tone and asked follow-up questions demonstrated significantly greater critical thinking and strategic reflection. The co-occurrence analysis shed light on this conclusion. For example, the code “follow-up question” appeared 23 times in relational conversations versus only four times in non-relational conversations; the code “collaborative communication style” (where the student invited the chatbot to think with them) was almost non-existent in the non-relational group; and finally, questions about “future thinking” (future implications of strategic decisions) were five times more frequent in relational conversations.
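The study performed this analysis in ATLAS.ti. Purely to illustrate the underlying idea of code co-occurrence, here is a minimal Python sketch in which each conversation segment carries a set of assigned codes and pairs of codes appearing in the same segment are counted per group; all code names and counts below are invented, not the study's data:

```python
from collections import Counter
from itertools import combinations

# Toy coded data: each group maps to a list of conversation segments,
# each segment holding the set of codes assigned to it. Invented values.
conversations = {
    "RELATES": [
        {"follow-up question", "collaborative style"},
        {"follow-up question", "future thinking"},
    ],
    "NOT-RELATES": [
        {"basic data search"},
        {"simple analysis", "neutral tone"},
    ],
}

def cooccurrences(segments):
    """Count how often each pair of codes appears in the same segment."""
    pairs = Counter()
    for codes in segments:
        for a, b in combinations(sorted(codes), 2):
            pairs[(a, b)] += 1
    return pairs

for group, segments in conversations.items():
    print(group, dict(cooccurrences(segments)))
```

Comparing the resulting pair counts between the RELATES and NOT-RELATES groups is, in miniature, the kind of contrast the study drew from its coded transcripts.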

A student in the relational group asked the chatbot: “What do you think are the future risks or opportunities for (the company) as European Union (EU) laws evolve?” This question reflects collaboration, future thinking, and confidence in the process. It contrasts with the more mechanical responses from the non-relational group, in which students copied the instructions verbatim. Co-occurrence analysis revealed that the use of a neutral tone (the most prevalent) was not necessarily negative, but it tended to be accompanied by simple analyses.

Benefits for student learning and feedback

On a cognitive level, students with relational interactions demonstrated deeper, more strategic thinking, better application of theoretical frameworks to real problems, reflection on future scenarios, and integration of multiple perspectives (internal company perspective and external market perspective).

Although the study focused on content analysis, students reported in subsequent conversations that the chatbot felt like a “thought partner” when they initiated more thoughtful conversations. One student commented, “It was different than other times I’ve used AI. I felt like we were discovering together, not just looking for answers.” The students learned, without explicitly being told, that the quality of dialogue determines the quality of learning.

Areas for improvement and next steps

Once the thematic analysis of the students’ responses was finalized, I identified the following areas for improvement in this project:

  1. Student preparation. Some arrived without a clear idea of how to communicate with AI. A preparatory session on “how to ask strategic questions” could improve the results.
  2. Limited disciplinary scope. This study was limited to strategic business analysis. It would be valuable to explore whether the coding and co-occurrences obtained hold in other educational settings or disciplines.
  3. Lack of longitudinal follow-up. This was a 45-minute exercise; it would be necessary to determine whether these patterns persist in interactions sustained throughout the semester or whether the relational pattern evolves over time.
  4. Lack of evaluation of competencies. Although qualitative changes in thinking were observed, correlating them with formal assessments of higher-level competencies would be the next step.

A proposal for your teaching

As you prepare your next AI-focused class, consider this: What value do your students perceive from using AI tools? Don’t assume—ask them directly. Start a conversation or debate. Encourage your students to reflect on the difference between thinking with a machine and merely seeking answers from it.

Thus, I suggest three concrete actions:

  1. Teach your students how to build a relationship with AI. Take time to explain the difference between strategic questions and functional requests. Learn and teach students how to craft effective prompts and observe whether their iterative dialogue with technology fosters deep thinking. This is the new digital literacy.
  2. Create spaces for students to reflect on their interaction with AI. Ask them to document how their thinking evolves throughout the conversation. Where does curiosity arise? When does it go from passive consumption to co-creation? This metacognition is valuable in itself.
  3. Experiment with your own ways to integrate AI into your courses. It will not be the same for all disciplines. However, the principle of “relationship vs. transaction” is transferable. How can you design your instruction so that students feel comfortable reflecting, collaborating, and using AI tools strategically?

Reflection

My learning from this research project extends beyond the chatbot’s design. It’s not about whether AI is good or bad for education. It’s about how our students relate to the tools we provide them. AI, like any educational technology, is a mirror of our intentions. If we use it transactionally, we get transactional responses. If we use it as a partner for deep thinking, something truly transformative happens.

The experience I describe in this article confirmed to me something that many educators intuit, but rarely document: emotional engagement accelerates cognitive learning. It is not sentimentality. It is neurochemical. When we feel part of a collaborative process, our brains are activated differently. We generate more sophisticated questions, explore deeper connections, and ponder future implications.

Perhaps the biggest lesson is that the AI era does not require us to abandon the human side of education. On the contrary, it requires us to cultivate our most human quality—our ability to relate, to be curious, to question critically—and to use AI as a tool that amplifies that humanity.

Do you want to connect?

I firmly believe that innovation in education is collective. I am open to dialogue with educators who want to deepen these ideas, adapt this experience to their contexts, or collaborate on future research. If you have questions, ideas, or suggestions, or would like to explore collaborations, I encourage you to contact me. The future of education is built when we share what we learn.

About the Author

Maribel Labrado-Antolín (mlabra02@ucm.es) is a lecturer in the Department of Business Organization at the Complutense University of Madrid. Her research focuses on the impact of teleworking on well-being and productivity, with publications in JCR journals, and she currently integrates research with teaching and academic mentoring to train professionals prepared for the challenges of work in the digital age.

References

Ghemawat, P. (2001). Distance still matters: The hard reality of global expansion. Harvard Business Review, 79(8), 137-147.

Kirk, J. R., Stevenson, J., & Cann, C. (2025). Socioaffective alignment in human-AI learning partnerships: A framework for educational equity. Learning, Culture and Social Interaction, 45, 100789.

Kosmyna, N., Thiebaux, M., & Aubert, O. (2025). Cognitive load and metacognitive monitoring in AI-assisted learning: Preliminary findings. Frontiers in Education, 10, 1234567.

Labrado, M. (2026). Talking to machines: How communication style shapes student engagement with AI tutors. American Journal of STEM Education, 19, 37-58. https://doi.org/10.32674/t2qnzc90

Nguyen, B., Aamodt, T., Frommert, J., Gaskins, B., & Haider, R. (2024). Collaborative engagement with ChatGPT: Impact on academic writing quality. Computers & Education Quarterly, 52(1), 12-34.

Rodríguez-Maya, E., & Aylas-Flórez, J. (2025). Case study: AI tutoring impact on student engagement in Mexican higher education. Journal of Educational Innovation, 31(4), 267-289.

UNESCO. (2024). Artificial intelligence in education: A global perspective on opportunities and challenges. UNESCO Publishing.

Usher, M., & Amzalag, M. (2025). Graduate students’ communication styles with AI tutors: A qualitative analysis of academic writing support. Higher Education Research & Development, 44(2), 189-207.

Editing


Edited by Rubí Román (rubi.roman@tec.mx), Editor of the Edu bits articles and producer of The Observatory webinars, "Learning that inspires," Observatory of the Institute for the Future of Education at Tec de Monterrey.


Translation

Daniel Wetta


This article from Observatory of the Institute for the Future of Education may be shared under the terms of the license CC BY-NC-SA 4.0