“When machines learn to recognize and respond to emotion, technology becomes not more human, but more humane.” — Chenyu Zhang, Harvard researcher.
Emotion at the Frontier of Artificial Intelligence
Emotion-aware technologies are not new to AI, but 2026 marks a moment of renewed urgency and attention, as researchers push the field beyond logic alone and toward systems that can better recognize and respond to human emotion. At the center of this evolution is Chenyu Zhang, a research assistant at Harvard’s Berkman Klein Center who has also published with MIT. Zhang’s work unites emotion, learning, and machine intelligence, challenging the field to design AI that not only reasons but also relates.
“AI tutors today can explain concepts, but they rarely understand how a learner feels. True intelligence begins with empathy,” Zhang says. This captures a critical divide in AI: technical progress often outpaces attention to human emotion. Through pioneering research in affective computing and multimodal reasoning, Zhang works to narrow that gap.
Forecasts point to a multibillion-dollar market for emotion-aware EdTech by 2030, fueled by emotionally intelligent machines that can read both feelings and facts. But for Zhang, these numbers signal something deeper: a transformation in which empathy becomes an ingredient of intelligence.
Teaching Machines to Understand Human Emotion
When Rosalind Picard coined the term at the MIT Media Lab in the 1990s, affective computing sounded like science fiction. Today, it commands major attention. Zhang’s paper at the ACII 2025 conference, “Ensembling Large Language Models to Characterise Affective Dynamics in Student–AI Tutor Dialogues,” explores how advanced models can decode emotional cues during learning exchanges.
“Dialogue is a dance of affect and intent,” Zhang explains. “When AI joins that dance, we must teach it when to lead, when to listen.”
This blending of emotional and cognitive intelligence is as much art as science. Current multimodal AI struggles when one input, such as tone or facial expression, conflicts with another, leading to misread emotions. Zhang’s research embraces that ambiguity, echoing psychologists’ insights that emotion is rarely linear or absolute.
Across classrooms worldwide, AI tutors now sense when students feel anxious or motivated and respond accordingly to sustain engagement. “They are bridges, not replacements for human teachers,” Zhang stresses. “They’re built to augment empathy, not automate it.”
Zhang’s Path: Research and Mission
Zhang’s determination stems from a deeply personal journey. Raised in a modest city in China, then educated in Toronto and Cambridge, he learned firsthand how unspoken frustration can block learning. As a first-generation student, he saw how a small moment of recognition could transform a classroom. Those experiences now anchor his commitment to mentorship and emotionally attuned education.
Beyond research, Zhang’s teaching reaches across institutions and communities. He has taught Python through Stanford’s Code in Place, supported learners as a teaching assistant at the MIT Media Lab, lectured at Northeastern University, and served as an AI4ALL instructor, helping underrepresented students gain access to machine learning education and opportunities in AI. Each role reinforces his philosophy: learners thrive when they feel seen. “Every learner is a glowing star,” he says. “Our job is to help them shine.”
The Rise of Emotionally Intelligent AI Tutors
Advances in adaptive AI have already reshaped K–12 education, combining real-time feedback with emotional understanding. By 2030, emotionally intelligent systems could underpin over half of classroom technology, countering the one-size-fits-all model of standardized testing. “We need AI that listens to silence as much as to words,” Zhang says. “It should know when to pause, when to probe, and when to encourage reflection.”
AI now relieves teachers of repetitive tasks while providing more accessible, personalized support, especially for underserved learners. At MIT, Zhang’s team demonstrated how combining multiple large language models could enhance emotional understanding and foster holistic learning. Their findings highlight that progress in this field is not about precision alone, but fairness, transparency, and adaptability.
Studies from 2024–2025 showed that AI systems can recognize and respond to emotional states with accuracy approaching that of human raters in some settings, particularly through visual and text-based analysis using models like GPT-4. These advances suggest that responsibly designed AI can nurture motivation and well-being alongside cognitive growth.
Scaling Equity and Community Impact
A recurring theme in Zhang’s work is equity. His ensemble approach, tested on 16,000+ tutoring conversations across universities, revealed that negative emotions like confusion persist nearly half the time, a phenomenon he calls “emotional inertia.” Unless tutors, whether human or AI, can sense and adapt to this inertia, students disengage.
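The paper’s full protocol is more sophisticated, but its two core ideas, pooling per-turn emotion labels from several models and measuring how often a negative state carries over into the next turn, can be sketched in a few lines. The emotion taxonomy, the `NEGATIVE` set, and the sample votes below are illustrative assumptions for this sketch, not drawn from Zhang’s actual study.

```python
from collections import Counter

# Assumed negative-affect labels; the real taxonomy may differ.
NEGATIVE = {"confusion", "frustration", "boredom"}

def ensemble_label(votes):
    """Majority vote across the emotion labels several models assign one turn."""
    return Counter(votes).most_common(1)[0][0]

def emotional_inertia(turn_labels):
    """Fraction of negative turns whose next turn is also negative."""
    neg_idx = [i for i, lab in enumerate(turn_labels[:-1]) if lab in NEGATIVE]
    if not neg_idx:
        return 0.0
    persisted = sum(1 for i in neg_idx if turn_labels[i + 1] in NEGATIVE)
    return persisted / len(neg_idx)

# Hypothetical votes from three models over one five-turn dialogue.
votes_per_turn = [
    ["confusion", "confusion", "neutral"],
    ["confusion", "frustration", "confusion"],
    ["neutral", "engagement", "engagement"],
    ["frustration", "confusion", "frustration"],
    ["engagement", "engagement", "neutral"],
]
labels = [ensemble_label(v) for v in votes_per_turn]
inertia = emotional_inertia(labels)  # how often negativity persisted
```

A high `inertia` value flags the pattern Zhang describes: once a learner slips into confusion, they tend to stay there unless the tutor adapts.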
Zhang’s innovations have guided the industry toward multimodal systems that interpret text, facial cues, and speech together for better feedback and pacing. This is especially transformative in multilingual regions across Asia and Africa, where access and language often marginalize learners. “True personalization isn’t about custom quizzes,” Zhang emphasizes. “It’s about seeing who the learner is and encouraging resilience.”
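One simple way such multimodal systems combine text, facial, and speech signals is late fusion: each modality produces its own emotion probabilities, and a weighted average merges them into a single estimate. The modality weights and scores below are made-up values for illustration; production systems typically learn these weights from data.

```python
def fuse_modalities(scores, weights):
    """Late fusion: weighted average of per-modality emotion probabilities."""
    emotions = set().union(*scores.values())
    total = sum(weights.values())
    return {e: sum(weights[m] * scores[m].get(e, 0.0) for m in scores) / total
            for e in emotions}

# Hypothetical per-modality readings for one student utterance.
scores = {
    "text":   {"confusion": 0.7, "engagement": 0.3},
    "face":   {"confusion": 0.4, "engagement": 0.6},
    "speech": {"confusion": 0.5, "engagement": 0.5},
}
weights = {"text": 0.5, "face": 0.25, "speech": 0.25}  # assumed reliabilities

fused = fuse_modalities(scores, weights)
top = max(fused, key=fused.get)  # here "confusion": the text signal dominates
```

The design choice matters when cues conflict, as in the example above, where a neutral face would otherwise mask the confusion evident in the student’s words.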
Over his career, Zhang has taught and mentored more than 2,000 graduate and Ph.D. students, designed hands-on AI courses, and coordinated peer mentorship programs. His teaching philosophy asserts that inclusivity and reflection must be foundational, not aspirational, in the age of intelligent education.
Empathy, Algorithms, and Caution
Yet not all experts agree that machines can truly “feel.” Dr. Annette Bell, a psychologist speaking at the Conference on Human-Robot Interaction, warns: “AI can simulate emotion, but it can’t experience it. The danger is mistaking synthetic empathy for the real thing.” She and others argue that emotional AI risks replacing authentic human mentorship with algorithmic imitation.
This caution calls for strong ethical safeguards. Bell insists that AI tutors remain transparent about their artificial nature and never supplant human teachers—especially for vulnerable students. Zhang welcomes such critiques, advocating for explainable, adaptable systems that invite curiosity rather than obedience. His research promotes algorithmic accountability, in which every decision can be reviewed by educators, building trust and agency rather than dependence.
Data, Forecasts, and the Decade Ahead
Numbers underscore Zhang’s influence. Emotion-aware AI generated $3.7 billion in EdTech revenue in 2025 and is projected to quadruple by 2030, led by intelligent tutoring systems. Platforms that use Zhang’s ensemble protocols report a 16% increase in engagement, a 27% drop in dropout rates, and significant gains for multilingual learners.
By the end of the decade, education may shift from algorithmic grading to emotionally aware learning journeys. Zhang’s international collaborations are already shaping adaptive curricula, formative feedback systems, and diagnostic models that place affective computing at the center of human development.
Vision for the Next Generation
For Zhang, emotion-aware AI is about more than creating systems that react to human input. It is about designing technologies that support reflection, understanding, and more humane forms of human–AI interaction. “The true measure of a tutoring system is not only whether a student learns,” he says, “but whether they feel heard and have agency in their learning.”
This belief defines his vision: affective computing as the beginning, not the end, of empathic intelligence. When AI learns to listen, it learns to teach, not as a substitute for human connection but as an amplifier of it.
The next chapter in teaching machines to feel remains unwritten, but through Zhang’s work, the boundary between emotion and intelligence grows ever more open for exploration.