Artificial intelligence is moving to the core of financial services, with 71% of organizations now using AI in their finance operations. Banks, insurers, and asset managers are increasingly using it to accelerate decision-making, manage risk, and personalize customer experiences. As the finance sector moves beyond experimentation, the focus will increasingly be on making AI both ethical and sustainable. “You need to think about things that could put you on the front page of a newspaper,” says Christopher Bannocks, Founder of Fractional Leadership Limited, Advisor for Artefact Consulting, and Fractional Chief Data and AI Officer. Bannocks has held senior leadership roles at QBE, ING, Barclays, and Danone, and contributes to some of the most advanced thinking on AI governance and ethics. With adoption expanding and AI now integrated across more functions, organizations cannot afford to treat ethics as an afterthought.
Ethics Can’t Wait
As AI becomes more deeply embedded into finance functions handling billions in transactions and highly sensitive data, the key challenge is ensuring that existing principles are applied even more rigorously to AI systems. Rushing into AI adoption without considering unintended consequences exposes financial institutions to reputational, regulatory, and societal risk. “You’ve got to consider whether automation or AI are actually the correct tools for the job,” says Bannocks. “A good understanding grounded in what ethics really means is an important step for the group of people associated with rolling out the AI strategy.” Crucially, the ethical considerations don’t end once a model goes live. The way models interact with live data often creates a feedback loop, where small errors or biases can compound over time if left unchecked. “You may bump into ethical issues post implementation,” he says. “So it’s not just a case of planning ahead, but having a radar for issues as they emerge.”
Recognizing and Managing Ethical Dilemmas
One of the central challenges financial institutions face is identifying where bias or misrepresentation may creep into systems. The danger here lies in training models on unrepresentative data or allowing hidden biases to influence outcomes. “One of the biggest ethical dilemmas that banks face as they integrate AI into decision making is simply what could go wrong?” he says. “What could an AI believe that might not be true? What opinions could it form? And that comes back to what you are training the model on and where you will see bias.” Bannocks offers an example: using postal codes as an input in lending decisions. On the surface this may appear to be a neutral factor. In practice, it risks systematically marginalizing communities, excluding them from access to credit and perpetuating existing inequalities. “You can be completely legal yet unethical, and you can be completely ethical yet break the law,” Bannocks stresses. “We shouldn’t correlate ethics with law. They are different.”
Moving Beyond Compliance
Bannocks points to the work of ethicist Professor Muel Kaptein, who developed the “ethicability” framework, a measure of an organization’s capacity to deal with ethical dilemmas, to demonstrate how organizations must think about structuring AI ethics in a way that is both effective and actionable. “What Muel says about ethics—and I’ve found this to be broadly true—is that good ethicability is not about compliance,” he shares. “Compliance produces rules to which people adhere. That doesn’t give you a good ethical process. Instead, you need a very strong element of safety to speak up.” Bannocks advocates for embedding structured conversations across the organization. Employees should have clear channels to raise ethical concerns, and leaders must be equipped to provide advice. “Rules can provide a framework, but they don’t create ethical organizations alone,” he argues. “It should be principles-based, grounded in organizational values.” By federating this capability, large institutions can ensure dilemmas are surfaced, escalated, and addressed consistently, without reducing ethics to a checklist exercise.
Preparing Principles for the Next Frontier: AGI
Looking ahead, the ethical stakes will only rise as AI systems grow more autonomous. The rapid development of artificial general intelligence (AGI), a more advanced form of intelligence that could perform a wide range of tasks at or above human capability, represents a potential inflection point. “Ethics is an element of intelligence. Unless we have that built into our AIs as we move towards AGI, we will not have the clarity, transparency, and control needed for their decisions.” From decisions as straightforward as credit approvals to more complex scenarios like autonomous aviation, the ability of AI to navigate ethical trade-offs will determine whether it serves society or undermines it. “The biggest ethical dilemma in this domain is if AI is doing more than jobs, what is the human race doing? How does it generate value, and how does that sustain the human population and indeed what it means to be human?”
The Responsibility of Leadership
As financial institutions continue their AI journeys, Bannocks insists that leaders cannot delegate ethics to compliance officers or technologists alone. It must be a board-level priority, woven into governance structures and organizational culture. By creating safe spaces for dialogue, aligning decisions with principles rather than rules, and preparing for the long-term implications of AGI, financial services firms can position themselves as responsible stewards of a technology that is reshaping society. “Ethics is not a constraint on innovation, it is the foundation that allows innovation to endure.”
For more insights from Christopher Bannocks on AI ethics and transformation, follow him on LinkedIn.
