
AI Infrastructure Isn’t Too Expensive. We’re Just Running It Wrong – Lior Koriat, CEO of Quali


A growing number of executives are starting to ask whether the economics of AI infrastructure actually make sense. That question moved into the mainstream recently when IBM CEO Arvind Krishna laid out a stark piece of math: a single one-gigawatt AI data center can cost roughly $80 billion to fully outfit. Scale that to the hundreds of gigawatts implied by global AI ambitions, and you quickly arrive at trillions of dollars in capital investment that must somehow be paid back before the hardware is obsolete.

It sounds bleak. And if you assume we will operate AI infrastructure the same way we operated traditional cloud, it probably is.

But the biggest threat to AI’s return on investment is not the size of the check being written for GPUs. It is what happens after the hardware is installed. The real ROI killer is operational waste, and today’s operating models are not remotely prepared for what AI workloads demand.

Cloud waste already costs the industry more than $187 billion a year, roughly 30 percent of total cloud spend. That waste accumulated in a world dominated by relatively predictable, CPU-based workloads. Now we are introducing GPU-driven environments that behave very differently: they scale faster, cost more per hour, and are far less forgiving of inefficiency. If we continue to manage them with the same tools and assumptions, that waste will accelerate dramatically.

The uncomfortable truth is that much of today’s cost management discipline was designed for a previous era. FinOps, as it is practiced in many organizations, relies heavily on manual processes, lagging indicators, spreadsheet-driven analysis, and post-hoc attribution. It is an attempt to impose financial order after the fact, once resources are already running and money is already spent. That model was strained even for conventional cloud. It breaks down completely in an AI-driven environment.

GPU workloads do not behave like traditional infrastructure. They are often bursty, ephemeral, and tightly coupled to experiments that may run for hours or days and then disappear. Provisioning is slow, scheduling is fragile, and utilization is frequently poor. Many organizations discover, usually too late, that expensive GPU clusters sit idle for long stretches because a job finished early, a dependency failed, or a team over-allocated capacity to avoid delays. By the time finance teams see the numbers, the opportunity to correct course has already passed.

This is why so much of the current debate about AI economics misses the point. The problem is framed as a capital expenditure question, when it is actually an operating model failure. We are trying to govern AI infrastructure with tools that assume static environments, predictable lifecycles, and human-paced decision making. AI workloads violate all three assumptions.

What changes the equation is not spending less, but operating differently. AI infrastructure needs to be treated as a governed service, not a collection of loosely managed resources. Cost, security, and compliance cannot be inferred after the fact. They have to be embedded into how environments are defined, provisioned, and retired in real time.

This means moving away from guesstimates and context-free utilization metrics toward systems that understand intent. Why does this environment exist? Which project does it serve? What budget does it belong to? How long should it live? When those answers are encoded upfront and enforced automatically, budgets stop being aspirational and start being accurate. Cost optimization becomes continuous rather than reactive.
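
To make the idea concrete, here is a minimal sketch of what encoding that intent upfront could look like. The field names, budget figure, and hourly rate are illustrative assumptions rather than any particular platform's API; the point is that purpose, project, budget, and lifetime are declared when the environment is requested, so cost limits and retirement can be enforced from the moment of provisioning rather than reconstructed afterward.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative intent metadata attached to an environment at request time.
# Field names and limits are hypothetical, not a specific product's API.
@dataclass
class EnvironmentIntent:
    purpose: str             # why the environment exists
    project: str             # which project it serves
    budget_usd: float        # the budget it draws against
    max_lifetime: timedelta  # how long it should live

@dataclass
class Environment:
    intent: EnvironmentIntent
    created_at: datetime
    hourly_cost_usd: float

    def expires_at(self) -> datetime:
        return self.created_at + self.intent.max_lifetime

    def projected_cost(self) -> float:
        hours = self.intent.max_lifetime.total_seconds() / 3600
        return hours * self.hourly_cost_usd

def approve(env: Environment) -> bool:
    """Reject the request upfront if its full projected cost exceeds its budget."""
    return env.projected_cost() <= env.intent.budget_usd

def should_retire(env: Environment, now: datetime) -> bool:
    """Retire the environment automatically once its declared lifetime has elapsed."""
    return now >= env.expires_at()

if __name__ == "__main__":
    env = Environment(
        intent=EnvironmentIntent(
            purpose="fine-tuning experiment",
            project="recommendation-model-v3",
            budget_usd=5_000.0,
            max_lifetime=timedelta(hours=48),
        ),
        created_at=datetime.now(timezone.utc),
        hourly_cost_usd=98.32,  # illustrative rate for a multi-GPU node
    )
    print("approved:", approve(env))
    print("expires at:", env.expires_at().isoformat())
```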

It also means acknowledging that human-centric governance does not scale to machine-speed operations. As AI systems increasingly make decisions about when to spin up resources, how to scale workloads, and when to tear them down, governance has to operate at the same speed. Policies need to be enforced inline, not reviewed weeks later. Every action, whether triggered by a person or a system, must be observable and auditable as it happens.
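
One way to picture that kind of inline, machine-speed enforcement: every provisioning action, whether requested by a person or an automated agent, passes through a policy gate that records who asked, what was requested, and whether it was allowed before anything runs. The decorator, quota rule, and GPU limit below are hypothetical illustrations, not a specific product's implementation.

```python
import json
from datetime import datetime, timezone
from typing import Callable

# Hypothetical inline policy gate: actions are checked and logged as they happen.
AUDIT_LOG = []

def audited(policy: Callable[[dict], bool]):
    def decorator(action: Callable[[dict], None]):
        def wrapper(request: dict):
            allowed = policy(request)
            # Record the decision before the action executes, so denied and
            # approved requests alike are observable in real time.
            AUDIT_LOG.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "actor": request.get("actor", "unknown"),
                "action": action.__name__,
                "request": request,
                "allowed": allowed,
            })
            if not allowed:
                raise PermissionError(f"policy denied {action.__name__}")
            return action(request)
        return wrapper
    return decorator

def gpu_quota_policy(request: dict) -> bool:
    # Illustrative rule: cap any single request at 16 GPUs.
    return request.get("gpu_count", 0) <= 16

@audited(gpu_quota_policy)
def provision_cluster(request: dict):
    print(f"provisioning {request['gpu_count']} GPUs for {request['actor']}")

if __name__ == "__main__":
    provision_cluster({"actor": "training-scheduler", "gpu_count": 8})
    try:
        provision_cluster({"actor": "batch-agent", "gpu_count": 64})
    except PermissionError as err:
        print(err)
    print(json.dumps(AUDIT_LOG, indent=2))
```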

This is where the industry has an opportunity to reset expectations. The question is not whether AI infrastructure will be expensive. It will be. The question is whether organizations can build operating models that prevent waste from becoming the default. The companies that succeed will not be the ones that avoid investing in AI, but the ones that design for control, accountability, and lifecycle management from the beginning.

We have seen this pattern before. Cloud adoption outpaced cost governance and created years of financial sprawl. AI risks repeating the same mistake at a much higher price point. The difference is that this time, the warning signs are already visible.

The economics of AI are not doomed. But they will not work if we keep treating infrastructure as something to clean up after the innovation has already happened. In the AI era, governance is not a brake on progress, but the only way progress becomes sustainable.

Lior Koriat is the chief executive of Quali, a technology company specializing in AI-driven infrastructure orchestration and governance. He has more than two decades of experience leading and scaling technology ventures in the enterprise software industry.
