In AI, the story often starts with models and ends with magic. Yet every model runs on something very concrete: land that can host industrial-scale buildings, substations, and grid interconnects that can move power at scale, rows of specialized GPUs inside high-density racks, and cooling systems that keep those racks within safe operating limits. The real constraint is not an abstract “cloud” but whether dependable energy can be delivered to a specific site at a cost structure durable enough to sustain multi-year training and inference cycles.
Alexander Salamandra, a Senior Financial Analyst on a global cloud provider’s data center finance team and a judge for the 2025 Globee Awards for Business, approaches that reality through a finance lens. His operating principle is simple: treat AI infrastructure as a capital system first, then translate megawatts, GPUs, and grid risk into decisions executives, investors, and auditors can trust.
Finance At The Center Of The AI Infrastructure Arms Race
The AI infrastructure wave is not measured in product launches; it is measured in balance sheets, operating metrics, and the viability of multi-year capital plans. Recent analysis puts combined AI infrastructure and model spending by the largest technology companies at about $300 billion in 2025 alone, with total outlays expected to reach around $1 trillion between 2025 and 2027. Spread across a relatively small number of hyperscalers, that translates into tens of billions per firm per year, often front-loaded into power-hungry training clusters and long-dated power contracts. At this scale, the arms race is not a metaphor; it is a rolling capital program that rivals large national infrastructure projects. For perspective, the Apollo Program cost roughly $25.4 billion in 1970s dollars, about $150-$180 billion in today’s terms. Leading hyperscalers are now deploying nearly twice that amount every year on AI infrastructure alone.
Those figures make clear that AI infrastructure is now its own financing category. Each incremental training cluster, regional data center campus, or long-haul grid upgrade sits within a capital envelope where executives are balancing payback periods, equity narratives, and debt headroom. Capacity decisions are no longer just “Can we build it?” but “Will this cluster still make economic sense when tariffs, interconnection queues, and chip supply cycles move?” The work of translating megawatts, GPUs, and campus footprints into a language credit committees understand is no longer optional.
That is the work Salamandra has already done in prior roles, starting from the vantage point of investor scrutiny. At fintech firm Vesta, he built and managed a centralized due diligence data room that organized financial, operational, and legal material for potential equity and debt investors, cut ad-hoc information requests by 70 percent, and reduced due diligence turnaround times by more than half. The system enforced version control, audit-ready documentation, and tiered access, while the investor decks he prepared alongside the CFO helped lay the groundwork for the strategic investment Vesta later secured. The project’s success reinforced his position as a finance lead who treats infrastructure not as a black box, but as a capital story that needs to withstand investor-grade interrogation. “Capital is the scoreboard for AI infrastructure, and my job is to show exactly why one megawatt or cluster deserves funding over another,” notes Salamandra.
Translating Power, Silicon And Space Into Capacity Decisions
From that capital baseline, the next question is physical: how much power and space can the system sustain at a given price point and level of customer demand? In the United States, data center grid demand is projected to rise about 22% in 2025 to roughly 61.8 gigawatts, then more than double to approximately 134.4 gigawatts by 2030. Globally, data center electricity consumption is expected to more than double to around 945 terawatt-hours by 2030, a volume comparable to the entire power use of a G7 economy today. Those numbers describe a world where secure grid interconnections, substation buildouts, and power purchase agreements become as strategically important as GPU allocation or model architecture.
For hyperscalers, every megawatt has to resolve into a coherent view of capacity, silicon, and siting. Land near constrained urban hubs might offer latency advantages but face multi-year interconnection queues, while power-rich regions outside the United States may promise faster build timelines but introduce cross-border regulatory and currency risk. The finance toolkit behind capacity planning has to integrate grid constraints, chip roadmaps, and landlord terms into comparable unit economics, whether the build is a megacampus in a rural power hub or a smaller facility next to a major exchange.
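To make that comparison concrete, the sketch below shows one way two candidate sites might be reduced to an annualized cost per megawatt of IT load, trading off capex, contracted power price, efficiency, and interconnection delay. It is a minimal illustration only: the site names, figures, and the ten-year capex recovery assumption are hypothetical, not any provider’s actual model.

```python
from dataclasses import dataclass

@dataclass
class SiteOption:
    """One candidate site for an AI cluster; all figures are hypothetical."""
    name: str
    it_load_mw: float              # usable IT load once built
    capex_per_mw: float            # $ per MW of built capacity (shell, power, cooling)
    power_price_mwh: float         # contracted $/MWh under the assumed power agreement
    interconnect_delay_yrs: float  # years expected in the interconnection queue
    pue: float                     # power usage effectiveness of the facility

def annual_cost_per_it_mw(site: SiteOption, capex_recovery_yrs: float = 10.0) -> float:
    """Rough annualized cost of one MW of IT load: amortized capex plus energy."""
    amortized_capex = site.capex_per_mw / capex_recovery_yrs
    # Energy cost scales with PUE: every IT megawatt-hour drags cooling and
    # overhead load along with it (8,760 hours in a year).
    energy_cost = site.power_price_mwh * site.pue * 8760
    return amortized_capex + energy_cost

sites = [
    SiteOption("urban-edge", 60, 12_000_000, 85, 4.0, 1.35),
    SiteOption("rural-power-hub", 300, 9_000_000, 45, 1.5, 1.20),
]

for s in sites:
    cost = annual_cost_per_it_mw(s)
    print(f"{s.name}: {s.it_load_mw:.0f} MW IT, ~${cost:,.0f} per IT-MW-year, "
          f"online in ~{s.interconnect_delay_yrs:.1f} yrs")
```

Even in this stripped-down form, the trade-off the paragraph describes shows up directly: the power-rich site wins on unit cost and speed to power, while the urban site has to justify its premium through latency or customer proximity.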
Salamandra’s pricing work shows how that toolkit behaves when the subject is not a data center campus but the devices and hosting that sit on top of it. At Kaseya, he led the overhaul of the Datto BCDR pricing model for a product line generating roughly $350 million in annual revenue, building a fully loaded cost model that incorporated hosting, hardware fulfillment, logistics, labor, overhead, and warranty. The project delivered a 28% increase in average monthly hardware revenue in the two months following the new pricing, alongside a 9% improvement in gross margin, while also reducing average discounting and simplifying revenue forecasting. Product leaders gained SKU-level margin transparency for the first time. “When power, silicon, and space live in one model, capacity planning stops being a guess and starts being a decision,” observes Salamandra.
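A stripped-down version of a fully loaded, SKU-level margin view might look like the sketch below. The cost categories mirror those named above; the SKU names, prices, and cost figures are illustrative assumptions rather than Kaseya’s actual numbers.

```python
# Minimal sketch of a fully loaded, SKU-level cost and margin view.
COST_COMPONENTS = ["hosting", "hardware", "fulfillment", "logistics",
                   "labor", "overhead", "warranty_reserve"]

skus = {
    "appliance-small": {"price": 1_200, "hosting": 180, "hardware": 420,
                        "fulfillment": 35, "logistics": 60, "labor": 90,
                        "overhead": 110, "warranty_reserve": 45},
    "appliance-large": {"price": 4_800, "hosting": 720, "hardware": 1_900,
                        "fulfillment": 55, "logistics": 140, "labor": 210,
                        "overhead": 330, "warranty_reserve": 160},
}

for name, sku in skus.items():
    fully_loaded_cost = sum(sku[c] for c in COST_COMPONENTS)
    margin = sku["price"] - fully_loaded_cost
    margin_pct = margin / sku["price"]
    print(f"{name}: fully loaded cost ${fully_loaded_cost:,}, gross margin {margin_pct:.1%}")
```

The point of a model like this is less the arithmetic than the discipline: once every cost bucket is attached to a SKU, discounting decisions and price changes can be tested against margin rather than against revenue alone.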
Why AI Workloads Behave Differently From Traditional Cloud
As the arms race shifts from traditional cloud capacity to AI-optimized clusters, the underlying physics diverges further from normal enterprise workloads. Recent analysis indicates that data center power demand could reach about 1,400 terawatt-hours by 2030, roughly 4% of global electricity use, with the United States needing to more than triple its data center capacity from 25 gigawatts in 2024 to more than 80 gigawatts by 2030. At the rack level, cooling technologies define what kinds of AI workloads a facility can host: rear-door heat exchangers typically support densities between 40 and 60 kilowatts per rack, while immersion systems can handle up to 150 kilowatts per rack. Those densities are far above conventional enterprise workloads, and they shift the cost structure toward specialized cooling, chip generations, and grid-adjacent infrastructure.
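As a rough illustration of how those densities translate into capacity, the sketch below counts how many racks a hypothetical 40-megawatt IT envelope could host under each cooling approach. The per-rack densities follow the ranges cited above; the facility size and PUE values are assumptions for illustration only.

```python
# Minimal sketch: racks supportable within a fixed IT power envelope
# under different cooling technologies.
FACILITY_IT_BUDGET_KW = 40_000  # hypothetical 40 MW of IT load

cooling_options = {
    "rear-door heat exchanger": {"kw_per_rack": 50, "pue": 1.30},  # midpoint of 40-60 kW
    "immersion": {"kw_per_rack": 150, "pue": 1.10},
}

for name, opt in cooling_options.items():
    racks = FACILITY_IT_BUDGET_KW // opt["kw_per_rack"]
    total_draw_kw = FACILITY_IT_BUDGET_KW * opt["pue"]
    print(f"{name}: ~{racks} racks, ~{total_draw_kw / 1000:.1f} MW total grid draw")
```

The same IT budget yields very different rack counts and total grid draw depending on the cooling choice, which is why cooling technology sits inside the capital model rather than beside it.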
This is why AI data centers behave differently when grid planners and financiers look at long-term economics. Higher rack densities require more concentrated power per square foot and often justify relocating clusters to power-rich regions, including countries with abundant hydro, nuclear, or gas generation and friendlier interconnection timelines. The result is a pattern of displacement where incremental AI capacity migrates toward jurisdictions that can deliver firm power and regulatory clarity, even if that means serving some demand from outside the end users’ home market.
In that context, Salamandra’s obsolete inventory reserve work at Kaseya foreshadowed the kinds of risks AI infrastructure owners now face. He led the company’s first deep-dive reserve analysis across 592 hardware SKUs spanning three major business units, reconciling ERP data, warehouse reports, and sales forecasts into a unified model. The work identified a $12.2 million understatement in obsolete inventory reserves and showed that reserve policy had drifted toward operational convenience rather than accounting standards. His analysis prompted accounting leadership to adopt a new standardized reserve methodology across hardware units, aligning with GAAP, improving audit readiness, and preventing the full $12.2 million impact from hitting a single future period. “If your reserve logic does not match the speed of your hardware cycles, you are quietly betting the income statement on obsolescence,” states Salamandra.
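A simplified excess-and-obsolete reserve calculation of that general kind might be sketched as below: on-hand quantities are compared with forecast demand, and a policy rate is applied to the excess. The SKU names, quantities, and the 80% reserve rate are hypothetical assumptions, not Kaseya’s actual methodology.

```python
# Minimal sketch of an excess-and-obsolete (E&O) reserve calculation.
skus = [
    # (sku, on_hand_units, unit_cost, forecast_demand_12m)
    ("BCDR-GEN2-1TB", 3_400, 310.0, 1_100),
    ("BCDR-GEN3-4TB", 900, 540.0, 1_450),
]

RESERVE_RATE_ON_EXCESS = 0.80  # assumed policy: reserve 80% of the cost of excess units

total_reserve = 0.0
for sku, on_hand, unit_cost, demand in skus:
    excess_units = max(on_hand - demand, 0)   # units unlikely to sell in the window
    reserve = excess_units * unit_cost * RESERVE_RATE_ON_EXCESS
    total_reserve += reserve
    print(f"{sku}: {excess_units} excess units -> reserve ${reserve:,.0f}")

print(f"Total E&O reserve: ${total_reserve:,.0f}")
```

The discipline matters more than the specific rate: when hardware generations turn over quickly, a reserve policy that lags the sales forecast quietly accumulates exactly the kind of understatement described above.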
When Infrastructure Finance Meets Controls And Data Residency
As campuses, contracts, and chips scale up, so does scrutiny. Many public companies now allocate around $1 million to $2 million a year to SOX compliance programs, and internal audit teams frequently spend 5,000 to 10,000 hours annually on related work. At the same time, national data privacy regimes have proliferated: 144 countries now have data protection laws, covering about 82% of the world’s population. These numbers mean that every material AI infrastructure decision lives at the intersection of SOX controls, cross-border data residency, and board-level oversight.
In practice, that intersection looks like tying asset lives, reserve policies, and data location to the same set of controls auditors test under SOX 404. Infrastructure finance teams cannot treat a campus as just capex and depreciation; they must map which entities own which assets, where logs and training data physically reside, and how those facts line up with data residency rules. As AI clusters concentrate power and data into fewer campuses, the audit trail linking individual decisions to board-approved risk appetite has to get more explicit, not less.
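One minimal way to express that mapping as a testable control is sketched below: each asset carries an owning entity and a physical region, and a simple check flags anything located outside the jurisdictions that entity is approved to use. The entities, regions, and rules are illustrative only, not any provider’s actual control set.

```python
# Minimal sketch of an asset-to-entity-to-residency control check.
allowed_regions = {
    "EU-Entity": {"eu-west", "eu-central"},
    "US-Entity": {"us-east", "us-west", "eu-west"},
}

assets = [
    {"id": "training-logs-cluster-7", "owner": "EU-Entity", "region": "eu-central"},
    {"id": "inference-cache-3", "owner": "EU-Entity", "region": "us-east"},
]

for asset in assets:
    ok = asset["region"] in allowed_regions[asset["owner"]]
    status = "OK" if ok else "RESIDENCY EXCEPTION"
    print(f'{asset["id"]}: owner={asset["owner"]}, region={asset["region"]} -> {status}')
```

However the control is actually implemented, the underlying requirement is the same: the link between an asset, its owning entity, and where its data physically lives has to be explicit enough for an auditor to test.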
Salamandra has already operated at this intersection of finance, documentation, and trust through his work at Vesta. Between Q1 and Q4 2023, he built and managed a centralized due diligence data room that unified financial, operational, and legal materials into a single investor-ready repository governed by standardized file naming, version control, and tiered access permissions. The system cut investor due-diligence turnaround times by more than 50%, reduced ad-hoc information requests by 70%, and freed roughly 80 staff-hours per diligence cycle, while keeping sensitive information aligned with confidentiality and data disclosure standards. That data room was a prerequisite for Vesta’s readiness for both debt and equity financing and formed part of the operational and financial foundation for the strategic investment Schwarzwald Capital later made to enhance the company’s fraud-prevention capabilities. “At scale, every new campus is also a SOX control and data residency decision, whether teams acknowledge it or not,” says Salamandra.
Looking Ahead: Where Infrastructure Finance Sets The Tempo
From here, the trajectory only steepens. Global power demand from data centers is projected to rise by about 165% by 2030 compared with 2023, as AI clusters move from experiment to default compute fabric. One long-range scenario sees U.S. AI data center power demand increasing more than thirtyfold, from about 4 gigawatts in 2024 to roughly 123 gigawatts by 2035, with AI workloads potentially accounting for the majority of data center electricity use by then. Over a 2025 to 2035 horizon, that implies a sustained buildout counted in hundreds of billions of dollars, not just one-off cycles. In that environment, finance leaders become operators in their own right. They are the ones turning grid constraints, chip supply, land assemblies, reserve policies, and privacy regimes into a coherent capital plan, and deciding which campuses, workloads, and regions get funded first.
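As a quick arithmetic check on that scenario, growing from about 4 gigawatts in 2024 to roughly 123 gigawatts in 2035 implies a compound annual growth rate in the mid-30s percent range sustained for eleven years, as the short calculation below shows.

```python
# Implied compound annual growth rate for the cited scenario:
# roughly 4 GW in 2024 growing to roughly 123 GW by 2035.
start_gw, end_gw, years = 4.0, 123.0, 2035 - 2024
cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~36.5% per year
```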
Salamandra’s record reflects that role in practice, showing a consistent pattern of treating AI and infrastructure as a single capital system rather than separate technical and financial tracks. He also serves as a reviewer for the IEEE Internet of Things Journal, bringing the same evidence-based scrutiny to emerging technical work that he applies to AI infrastructure decisions, and grounding those decisions in documentation and economics that can withstand outside review. “The winners will be the ones whose finance teams treat megawatts, GPUs, and audits as parts of the same system, and can articulate that system to capital markets,” notes Salamandra.