Like many other tech-centric sectors of the economy, fintech has become an enthusiastic AI adopter. Indeed, research suggests that 76% of firms report active use of AI, while nearly eight in ten organisations are now applying it to at least one core business function. Unsurprisingly, this is supercharging the wider market, with AI in fintech projected to grow from around $18 billion in value this year to over $50 billion by 2030.
At the same time, regulators are moving quickly to ensure this surge in adoption does not come at the expense of accountability. The EU AI Act has set the tone with a risk-based approach that is already being echoed elsewhere, including in the United States and across Asia. What these frameworks have in common is an emphasis on transparency, data governance and human oversight. The underlying message is clear: AI adoption cannot come at the cost of compliance.
Balancing innovation with compliance
This is where open-source AI technologies, which make their source code publicly available for inspection and adaptation, are proving especially relevant. By going down this route, as opposed to relying solely on proprietary systems from the major AI vendors, firms are much better placed to audit model behaviour against jurisdiction-specific requirements. Open source enables fintechs to strike an effective balance between the need to keep innovating at speed and the need to build regulatory obligations into their AI strategies from the outset.
In addition, as adoption deepens, the question of infrastructure strategy becomes more important. For instance, fintechs handling sensitive data (ie, all of them) cannot afford to depend entirely on external vendors or allow unmonitored tools to proliferate across their IT estate. This kind of ‘shadow’ AI not only undermines security but also makes it much more difficult, if not impossible, to demonstrate compliance with the appropriate level of confidence.
Similarly, without observability, meaning clear insight into how systems are performing, what resources they consume and where risks lie, it is difficult to optimise costs or meet the associated regulatory obligations. By contrast, strong observability allows firms to track resource use, model behaviour and data flows in real time. AI workloads then become much easier to fine-tune, easier to justify to regulators and less prone to uncontrolled risk. Crucially, fintechs that exert greater control over their AI infrastructure are much better placed to experiment with new approaches while keeping usage within permitted boundaries.
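To make this concrete, the following is a minimal Python sketch of what request-level observability can look like in practice. The model object, its generate method and the logged field names are illustrative assumptions, not any particular product's API.

```python
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("ai-observability")

def observed_inference(model, prompt: str, user_id: str) -> str:
    """Wrap a model call so latency, usage and data flows are logged per request."""
    request_id = uuid.uuid4().hex
    start = time.perf_counter()
    response = model.generate(prompt)  # illustrative model interface
    elapsed_ms = (time.perf_counter() - start) * 1000
    # One structured record per request: who called which model, how long it
    # took and how much data moved, giving an auditable, real-time trail.
    log.info(
        "request_id=%s user=%s model=%s latency_ms=%.1f prompt_chars=%d response_chars=%d",
        request_id,
        user_id,
        getattr(model, "name", "unknown"),
        elapsed_ms,
        len(prompt),
        len(response),
    )
    return response
```

Emitting one structured line per request keeps the overhead negligible while giving both engineers and auditors a complete, timestamped trail to work from.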
Organisations that host AI models in secure, private environments can ensure that sensitive financial data never leaves organisational oversight. For firms operating under strict regulatory regimes, this reduces the exposure that comes with third-party processing. It also gives fintechs the flexibility to evolve systems as their requirements change, without being constrained by vendors' proprietary roadmaps. What this adds up to is that infrastructure ownership allows AI innovation to continue at pace, safe in the knowledge that resilience, security, compliance and integrity are being properly addressed.
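As a rough illustration of the pattern, the sketch below sends prompts only to a model served inside the firm's own network. The endpoint URL and request schema are hypothetical placeholders, not a real service.

```python
import json
import urllib.request

# Assumption: an open-source model served behind the firm's own gateway, so
# prompts containing financial data never cross the organisational boundary.
# The URL and request/response schema below are hypothetical placeholders.
INTERNAL_ENDPOINT = "https://llm.internal.example-fintech.com/v1/generate"

def private_generate(prompt: str) -> str:
    """Send a prompt to an internally hosted model rather than an external API."""
    payload = json.dumps({"prompt": prompt, "max_tokens": 256}).encode("utf-8")
    request = urllib.request.Request(
        INTERNAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        return json.loads(response.read())["text"]
```

Because the call never leaves the internal network, prompts containing customer data stay under organisational oversight by construction rather than by contract.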
Taking practical steps
Turning these principles into practice requires more than technology alone. Yes, observability and control provide essential foundations, but they must be supported by clear processes that demonstrate accountability at every stage of the AI lifecycle. This begins with how models are trained and the provenance of the data used to train them. Ideally, organisations will record these details, along with how and when human oversight is applied, creating an auditable trail that satisfies regulators and strengthens trust among stakeholders.
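One lightweight way to capture such a trail is an append-only log of structured records. The sketch below is a minimal Python illustration; the field names, example values and JSON Lines format are assumptions about what a firm might choose to record, not a prescribed standard.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class TrainingAuditRecord:
    """One auditable entry covering data provenance and human oversight."""
    model_name: str
    model_version: str
    dataset_uri: str
    dataset_sha256: str  # fingerprint of the exact training data used
    reviewer: str        # who applied human oversight
    review_notes: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def fingerprint(data: bytes) -> str:
    """Hash the training data so the trail pins down exactly what was used."""
    return hashlib.sha256(data).hexdigest()

def append_audit_record(record: TrainingAuditRecord, path: str = "audit.jsonl") -> None:
    # Append-only JSON Lines file: every training run leaves a permanent trace.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example usage (all values are invented for illustration):
append_audit_record(TrainingAuditRecord(
    model_name="credit-risk-model",
    model_version="1.4.2",
    dataset_uri="s3://internal-bucket/training/2025-q1.parquet",
    dataset_sha256=fingerprint(b"raw training data bytes"),
    reviewer="j.smith",
    review_notes="Sampled 500 records; no data outside permitted fields.",
))
```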
Consistency is equally important. Rather than tailoring deployments to each jurisdiction separately, many businesses now use the strictest applicable regulation as the default benchmark across their entire AI estate. Not only does this approach help elevate standards, but it also avoids the cost and complexity of managing fragmented compliance processes, ensuring that resilience is built into systems from the outset.
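As a toy example of that strictest-by-default principle, the following Python sketch merges per-jurisdiction limits by always keeping the most demanding value. The jurisdictions and figures are placeholders, not real regulatory values.

```python
# Hypothetical per-jurisdiction requirements; jurisdictions and figures are
# placeholders for illustration only.
JURISDICTION_POLICIES = {
    "EU": {"log_retention_days": 365, "human_review_required": True},
    "US": {"log_retention_days": 180, "human_review_required": False},
    "SG": {"log_retention_days": 270, "human_review_required": True},
}

def strictest_policy(policies: dict) -> dict:
    """Merge policies by always keeping the most demanding value."""
    return {
        "log_retention_days": max(p["log_retention_days"] for p in policies.values()),
        "human_review_required": any(p["human_review_required"] for p in policies.values()),
    }

print(strictest_policy(JURISDICTION_POLICIES))
# -> {'log_retention_days': 365, 'human_review_required': True}
```

Applying one merged policy everywhere means a deployment that satisfies the strictest regime automatically satisfies the rest.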
Clearly, AI-related scrutiny will continue to intensify, and having the ability to explain how systems work and why they produce certain outcomes has become a regulatory and operational priority. Open-source technologies play a valuable role here by giving organisations greater visibility into the code, architecture and performance of their models.
With source code open to inspection, organisations can audit models independently, validate how they behave and identify issues that may otherwise go unnoticed in closed, proprietary systems. This level of transparency also allows for greater customisation and control, enabling teams to adapt models to meet specific compliance or operational needs without undermining governance.
Given these advantages, it comes as little surprise to see that 84% of financial firms already see real business value from open-source AI, with nearly half planning to increase adoption this year.
By Mark Dando, General Manager for EMEA North at SUSE
