In an increasingly automated world, where human interactions are frequently mediated by machines, the ability of technology to understand not just what we say but how we feel is rapidly becoming the next frontier. The rise of Emotional Artificial Intelligence (AI that can recognise, interpret, and respond to human emotions) is not simply a technical milestone. It is a societal turning point. As this once-speculative field begins to underpin digital customer experiences, from online shopping and banking to education and healthcare, one voice is emerging as both a moral compass and a visionary architect for how this technology should evolve: Oluwatolani Vivian Akinrinoye.
Tolani, a Nigeria-born scholar and business professional now based in Pittsburgh, United States, has co-authored one of the most comprehensive and ethically grounded studies on Emotional AI to date. Her paper, titled “Frameworks for Emotional AI Deployment in Customer Engagement and Feedback Loops,” was published in the International Journal of Multidisciplinary Research and Growth Evaluation and is rapidly gaining traction among industry leaders, policy analysts, and academic researchers worldwide. For the United Kingdom, where questions around ethical AI, biometric surveillance, and digital privacy dominate public and parliamentary discourse, Tolani’s work offers both a warning and a way forward.
“The moment we teach machines to recognise emotion, we are no longer just building tools,” Tolani says in an exclusive comment. “We are designing participants in our emotional lives. That changes everything, and it must be done with extreme care.”
Her study is not merely a technical white paper. It is a philosophical framework with practical applications, laying out a five-tiered system for developing emotionally intelligent AI systems that are ethical, inclusive, and effective. These layers begin with how emotional signals are gathered, continue through how emotions are modelled and acted upon, and culminate in how organisations embed these capabilities into broader strategy and governance. Her vision is clear. Emotional AI, if built without safeguards, risks becoming a digital mirror that reflects our biases, invades our privacy, and manipulates our decisions. But with the right approach, it can also humanise machines, strengthen empathy in digital spaces, and redefine customer engagement for the better.
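For readers who think in code, the shape of that five-tier design can be summarised as a simple ordered enumeration. The sketch below is illustrative only: the layer names paraphrase the article’s description of the framework rather than quote the paper, and the readiness check is a hypothetical helper, not part of her model.

```python
from enum import Enum, auto

class FrameworkLayer(Enum):
    """The five tiers of the deployment framework, from raw signal to strategy."""
    SIGNAL_ACQUISITION = auto()     # Layer 1: gathering emotional cues, with consent
    EMOTION_MODELLING = auto()      # Layer 2: interpreting cues into emotional states
    ADAPTIVE_FEEDBACK = auto()      # Layer 3: acting on inferred states
    ETHICAL_GOVERNANCE = auto()     # Layer 4: transparency, audits, opt-outs
    STRATEGIC_INTEGRATION = auto()  # Layer 5: alignment with organisational strategy

def deployment_ready(satisfied: set[FrameworkLayer]) -> bool:
    """A deployment is sound only when every tier is addressed, not just the first three."""
    return satisfied == set(FrameworkLayer)
```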
In the UK, where regulation of AI systems is now at the forefront of public debate, her voice could not be more timely. From Ofcom’s concerns about algorithmic harms to the Centre for Data Ethics and Innovation’s policy consultations on biometric technologies, Britain is searching for a blueprint that allows innovation to thrive without sacrificing human dignity. Tolani’s framework, grounded in ethical governance, cultural adaptability, and technological precision, could serve as that blueprint.
“British companies are uniquely positioned to lead in ethical Emotional AI,” she notes. “There is a strong tradition here of balancing commercial advancement with public accountability. But what’s missing is a clear, sector-agnostic roadmap, and that’s what this framework aims to provide.”
At the foundation of her model is the belief that emotion is not just another datapoint. It is deeply human, highly contextual, and culturally variable. Systems that treat emotional cues like clickstreams or keystrokes will inevitably misunderstand, misclassify, or misuse them. This is especially true in diverse societies like the UK, where emotional expression varies across cultures, communities, and neurological profiles.
“Emotion is not universal,” Tolani explains. “A raised voice could mean anger, excitement, or urgency, depending on who you are and where you come from. Without cultural awareness, AI becomes not just inaccurate, it becomes unjust.”
Her framework’s first layer focuses on input: how emotional data is collected. Whether through facial recognition, voice tone, typed language, or physiological signals like heart rate or skin conductance, this data must be acquired with consent, transparency, and respect for privacy. The second layer involves how AI interprets these signals. Here, Tolani’s background in business and technology comes to the fore. She advocates for a blend of machine learning models and psychological theories, but stresses that no model is complete without continuous bias audits and inclusive datasets.
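A rough sketch of how such a consent-first input layer might be enforced in code follows. The EmotionalSignal structure and the collect_signal helper are hypothetical names invented for illustration, not taken from the paper; the point they demonstrate is that consent acts as a gate on collection, rather than an attribute recorded after the fact.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EmotionalSignal:
    """A single emotional cue, inseparable from the context it was gathered in."""
    channel: str   # e.g. "voice_tone", "typed_language", "skin_conductance"
    value: float
    locale: str    # cultural context, retained for culturally aware interpretation

def collect_signal(channel: str, value: float,
                   consent_given: bool, locale: str) -> Optional[EmotionalSignal]:
    """Input layer: consent is a hard gate, not a logging flag set afterwards."""
    if not consent_given:
        return None  # no silent collection of emotional data
    return EmotionalSignal(channel=channel, value=value, locale=locale)
```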
She is unflinching in her stance on algorithmic fairness. “We already know that facial recognition systems have higher error rates for people of colour,” she says. “Now imagine layering emotion detection on top of that. The risk of misinterpretation becomes not just technical, but moral. We cannot afford to get this wrong.”
Her paper’s third layer deals with adaptive feedback. This is where Emotional AI acts, modifying chatbot responses, tailoring user interfaces, adjusting content, or triggering escalations to human agents based on inferred emotional states. While this might seem like a natural extension of customer service automation, Tolani warns that it introduces a delicate boundary.
“It’s one thing for a machine to acknowledge that I sound upset,” she explains. “It’s another thing entirely for it to alter my experience based on that assumption, without my knowledge. We must always ask: Is this empathy, or is it manipulation?”
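The routing logic below sketches where that line might be drawn in practice, assuming an upstream classifier that emits an emotion label with a confidence score. The 0.7 threshold and the response labels are illustrative choices, not values from the study; what matters is that weak inferences are ignored rather than acted on, and that no emotion-based adaptation happens unless the user knows it may.

```python
def route_interaction(inferred_emotion: str, confidence: float,
                      user_informed: bool) -> str:
    """Adaptive-feedback layer: decide how to act on an inferred emotional state."""
    if not user_informed:
        # Adapting the experience without the user's knowledge is where
        # empathy tips into manipulation, so the system declines to adapt.
        return "default_response"
    if confidence < 0.7:  # illustrative threshold; a real system would calibrate this
        return "default_response"  # weak inferences are ignored, not acted on
    if inferred_emotion in {"anger", "distress"}:
        return "escalate_to_human_agent"
    return "tailor_response"
```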
This concern leads directly into the fourth layer of her framework: ethical and regulatory governance. Tolani is a strong advocate for emotional transparency, the principle that users should always be informed when their emotions are being analysed, have access to the data collected, and be able to correct or opt out of such analysis entirely.
“Transparency builds trust,” she says. “People are more willing to interact with emotionally intelligent systems when they feel they are in control, not when they feel they are being watched or profiled.”
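One way those guarantees might be represented in code is sketched below. The EmotionalDataRecord structure is a hypothetical illustration rather than anything specified in the paper, but it captures the three rights the principle names: to be informed of an inference, to inspect and correct it, and to opt out entirely.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EmotionalDataRecord:
    """Governance layer: every inference is disclosed, inspectable, and revocable."""
    user_id: str
    inferred_emotion: str
    raw_signals: List[str] = field(default_factory=list)
    user_correction: Optional[str] = None  # the user's own label overrides the model's

    def correct(self, label: str) -> None:
        """Let the user overwrite an inference they consider wrong."""
        self.user_correction = label

    def opt_out(self) -> None:
        """Opting out withdraws the inference and purges the collected cues."""
        self.inferred_emotion = "withheld"
        self.raw_signals.clear()
```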
Her final layer speaks directly to business leaders and policymakers. Strategic integration, she argues, is where Emotional AI either succeeds or fails. Too often, she observes, emotion-detection tools are bolted onto existing systems without alignment with brand values, legal teams, or customer experience strategies. What’s needed, she believes, is a cross-functional approach where AI developers, ethicists, marketers, lawyers, and users co-design systems together.
“When emotional AI goes wrong, it’s not because the algorithm is broken,” she reflects. “It’s because the organisation didn’t think about people first. Technology is never neutral. It reflects the values of those who build it.”
Her commitment to people-centred innovation extends beyond her research. Having worked on transnational collaborations that connect experts in the United States, Nigeria, and the United Kingdom, Tolani is acutely aware of the global implications of Emotional AI. In regions with weaker data protection laws, emotion-recognition tools could be used for surveillance, political control, or consumer exploitation. Even in advanced democracies, their use in areas such as education, employment, and mental health raises thorny ethical questions.
She is clear-eyed about the global stakes. “We are building a new kind of infrastructure, not just for data but for emotion,” she says. “And just like roads and bridges, this infrastructure will shape how people live, relate, and feel safe. We have a duty to get it right.”
The UK’s role in this future, she believes, is pivotal. With a strong research base in human-computer interaction, a robust civil society, and some of the world’s most forward-looking AI companies, Britain can lead not just in Emotional AI deployment but in shaping its global standards. The challenge, she suggests, is not capability but courage.
“The technology exists,” Tolani concludes. “What we need now is leadership. Not just to make Emotional AI smarter, but to make it more human.”
It is rare in today’s fragmented technological landscape to find a voice that bridges disciplines, geographies, and ethics with such clarity. Oluwatolani Vivian Akinrinoye is that voice: calm, informed, and urgently necessary. As governments and companies across the UK explore new ways to engage with citizens and consumers through AI, her framework is a reminder that empathy cannot be outsourced to code alone. It must be designed, governed, and upheld by people who understand that emotion is not a weakness of humanity, but its strength.
In spotlighting her work, we are reminded that behind every line of code is a choice. And behind every ethical breakthrough in technology, there is often a woman like Tolani building not just systems, but the future of how we feel in the digital world.
