For decades, the financial industry operated like a private club. Work was done with pen and paper, archaic to some but common-sense security to others. Sophisticated data, complex models, and the talent to wield them belonged almost entirely to banks, hedge funds, and institutions with eight-figure technology budgets. That arrangement is coming apart, and the disruption is not arriving from regulators, market crashes, or even blockchain. It is coming from artificial intelligence.
This is not simply a story about automation. It is about who holds analytical power, who gets to build competitive products, and what it really means to run a financially viable firm when the most capable tools in history are available to nearly anyone.
Julius Franck, co-founder of Vertus.ai, has watched this transformation accelerate firsthand. He sees consequences that reach far beyond cost savings or headcount reductions. “AI in finance is going to change almost every aspect of the interaction with the financial markets,” Franck says. “One of the most interesting facets of that is the reduction of barriers across the board.”
The comparison he draws is worth sitting with. A generation ago, writing even basic software required years of dedicated training. The notion that a designer or a first-time entrepreneur without an engineering degree could build a functioning application seemed like wishful thinking. Then AI-assisted development quietly changed that. Finance is following a nearly identical path. Quantitative strategies that once demanded teams of PhD statisticians and proprietary infrastructure can now be prototyped by someone with good instincts, the right tools, and a clear problem to solve.
That cuts in two directions at once. Opening the door to smaller players also raises the stakes for everyone inside the room. “On one hand, it opens opportunities for smaller players but also forces large institutions to keep innovating and ultimately will result in more efficient and fairer markets for all,” Franck adds. When analytical capability is broadly distributed, competitive advantage stops being about access and starts being about what you do with it.
Nowhere is that pressure more visible than in fintech, where AI has become both a genuine business driver and a marketing phrase stretched well past its useful limits. Plenty of companies talk confidently about AI. A much smaller number have turned that confidence into sustainable profitability. Franck has a clear read on the structural problem underneath. “Most fintech firms in the race for AI have focused on the wrong moat,” he says. “Taking off-the-shelf LLMs, adding perhaps external data, and trying to undercut the market with low pricing by focusing on the cost savings of AI is building a very fragile moat.”
The reasoning holds up under scrutiny. When foundational models are effectively commodities, available to any startup willing to pay a monthly subscription, wrapping one in a simple interface is not a strategy. It is a placeholder. “With margins being extremely thin in the new era of service powered by AI, the moat has to come from quality of service and premium pricing,” Franck explains. “Those who only use a wrapper of ChatGPT and get lured in by the very low AI pricing around will have little chance to succeed in building a firm that brings real benefit to users beyond plugging in an LLM and is profitable.”
The firms gaining real ground are taking a harder road. They are embedding AI into the operational layers of their business in ways that build on themselves over time. Risk management, underwriting, and fraud prevention are where this is most legible. Those functions once depended on large analyst teams running statistical models that were solid by the standards of their day but constrained by human bandwidth. Now, pattern recognition systems process transaction data at a scale and speed that makes earlier approaches look approximate by comparison. The business case is tangible: lower fraud exposure, sharper credit decisions, faster underwriting cycles, and margins that reflect the difference.
Still, Franck pushes back on the idea that numbers alone carry the story. “Communicating the value of that has to be measurable, not just wrapped in buzzwords, but with actual data,” he says. “The number of companies using AI versus using AI correctly is large and growing.”
That distinction between using AI and using it well points toward something the industry has not fully resolved: when machines handle more of the analytical work, what remains distinctly human? The answer that keeps surfacing is that judgment, accountability, and the relational texture of leadership do not transfer. “Ultimately, AI enhances leadership when it serves as a tool to extend carefully considered human intent, and it dilutes leadership when used to avoid the cognitive and emotional labor that true guidance requires. Leadership that resonates remains irreducibly human,” as Heng puts it plainly.
The financial firms that define the coming decade will not be the ones with the most AI integrations on their pitch decks. They will be the ones that understood early what AI is actually for, and built something honest and measurable around that answer.