Inside the sprawling data centres that line the corridors between Northern Virginia, Oregon and West Texas, a transformation is underway that can be measured not in abstractions but in dollars. In the fourth quarter of its fiscal year 2026, Nvidia’s data centre revenue reached $62.3 billion, a single-quarter figure that exceeds the annual revenue of most Fortune 500 technology companies. This number represents more than 91% of Nvidia’s total quarterly revenue of $68.1 billion, confirming that the company’s identity has fundamentally shifted from a diversified semiconductor manufacturer to the dominant infrastructure provider for the artificial intelligence era.
The magnitude of this figure demands context. $62.3 billion in data centre revenue in ninety days means that Nvidia was generating approximately $692 million per day from customers building AI infrastructure. It means that the world’s largest cloud providers, enterprise technology departments and AI startups were collectively purchasing Nvidia hardware at a pace that has no precedent in the semiconductor industry. For those tracking the future of marketing technology, this data centre spending represents the foundational layer upon which every AI-powered service and application is being built.
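The daily run rate is simple division, sketched here under the assumption of a 90-day fiscal quarter (actual fiscal quarters vary by a day or two):

```python
# Back-of-envelope check of the daily run rate implied by Nvidia's
# quarterly data centre revenue. Assumes a 90-day quarter.
quarterly_revenue = 62.3e9   # Q4 FY2026 data centre revenue, USD
days_in_quarter = 90         # approximation; fiscal quarters vary slightly

daily_run_rate = quarterly_revenue / days_in_quarter
print(f"${daily_run_rate / 1e6:.0f} million per day")  # -> $692 million per day
```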
Breaking Down the $62.3 Billion Quarter
Nvidia’s data centre revenue encompasses GPU sales for AI training and inference, networking equipment including InfiniBand and Ethernet switches, software licensing and support contracts, and the expanding DGX and HGX platform business that packages complete AI computing systems. The training workload segment continues to drive the largest individual orders, as frontier AI labs and hyperscale cloud providers purchase tens of thousands of GPUs for each new model generation.
| Revenue Segment | Q4 FY2026 | Share of Total |
|---|---|---|
| Data Centre | $62.3 billion | 91.5% |
| Gaming | ~$3.5 billion (est.) | ~5.1% |
| Professional Visualisation | ~$1.1 billion (est.) | ~1.6% |
| Automotive | ~$1.2 billion (est.) | ~1.8% |
| Total Q4 Revenue | $68.1 billion | 100% |
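The share column above can be recomputed directly from the dollar figures; note that the gaming, visualisation and automotive revenues are estimates, as the table indicates:

```python
# Recompute each segment's share of total Q4 revenue from the table's
# dollar figures (non-data-centre segments are estimates).
total = 68.1  # total Q4 revenue, $ billions
segments = {
    "Data Centre": 62.3,
    "Gaming": 3.5,
    "Professional Visualisation": 1.1,
    "Automotive": 1.2,
}
for name, revenue in segments.items():
    print(f"{name}: {revenue / total:.1%}")
```

Running this reproduces the table's percentages (91.5%, 5.1%, 1.6%, 1.8%), confirming the four segments sum to the $68.1 billion total.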
Who Is Buying: The Hyperscaler Demand Engine
The primary buyers driving Nvidia’s data centre revenue are the hyperscale cloud providers. Amazon Web Services, Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure are each building out massive GPU clusters to support their AI service offerings. Amazon alone guided approximately $200 billion in capital expenditure for 2026, up from $131 billion in 2025. Alphabet guided $175 billion to $185 billion. These figures represent the direct demand pipeline for Nvidia’s highest-end products.
Beyond the established hyperscalers, a new category of GPU cloud providers has emerged as a significant demand driver. CoreWeave plans $30 to $35 billion in capital expenditure in 2026 specifically for AI data centres. The company spent $14.9 billion in 2025 and reported $3.13 billion in available cash as it ramps its infrastructure buildout. Perplexity, the AI search company, signed a multi-year deal to use CoreWeave data centres for inference workloads, illustrating how the GPU cloud market is creating new distribution channels for Nvidia hardware.
The Infrastructure Behind the Numbers
A $62.3 billion data centre quarter implies an extraordinary physical footprint. Each high-end Nvidia GPU system requires substantial power, cooling and networking infrastructure. A single DGX H100 system draws approximately 10 kilowatts of power. At the scale implied by Nvidia’s revenue, hundreds of thousands of these systems are being deployed across data centres worldwide, collectively consuming gigawatts of electrical power and requiring billions of dollars in supporting infrastructure including power substations, cooling systems and fibre-optic networks.
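To make the "gigawatts" claim concrete, here is a rough sketch of fleet-level power draw. The fleet size below is a hypothetical figure chosen for illustration, not a number from Nvidia's reporting; the ~10 kW per-system draw is the approximation cited above:

```python
# Rough power-draw sketch for a hypothetical fleet of DGX-class systems.
# systems_deployed is an illustrative assumption, not a reported figure.
systems_deployed = 200_000   # hypothetical fleet size
kw_per_system = 10           # approximate draw of one DGX H100 system

total_gw = systems_deployed * kw_per_system / 1e6  # kW -> GW
print(f"~{total_gw:.1f} GW of compute load")
```

Even before counting cooling and networking overhead, a fleet of that size draws on the order of 2 GW, which is why substations and fibre routes appear in the same supply chain story.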
FiberLight’s commitment of $350 million to build approximately 1,400 route miles of fibre in West Texas specifically for AI data centre connectivity illustrates how Nvidia’s GPU sales are creating ripple effects throughout the infrastructure supply chain. The AI buildout is not just a semiconductor story; it is an energy story, a construction story and a telecommunications story, all driven by the insatiable demand for compute that Nvidia’s data centre revenue numbers quantify. The broader implications for generative AI applications in every industry are becoming clearer with each quarterly report.
Margins That Defy Hardware Economics
Nvidia posted a 75.0% GAAP gross margin in Q4 FY2026, a figure that challenges conventional understanding of hardware economics. Traditional semiconductor companies typically operate at gross margins between 40% and 60%. Nvidia’s ability to maintain 75% margins at $68.1 billion in quarterly revenue reflects several factors: the absence of competitive alternatives at comparable performance levels, the deep integration of Nvidia’s CUDA software ecosystem with AI development frameworks, and the urgency of customer demand that limits price negotiation.
| Margin Comparison | Gross Margin | Type |
|---|---|---|
| Nvidia (Q4 FY2026) | 75.0% | Hardware (GPU systems) |
| Typical semiconductor company | 45-60% | Hardware (chips) |
| Typical SaaS company | 70-80% | Software |
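The margin figure can be translated into implied dollar terms. Using the standard definition, gross margin = (revenue − cost of revenue) / revenue, a 75.0% margin on $68.1 billion implies:

```python
# Implied cost of revenue and gross profit at a 75.0% GAAP gross margin
# on $68.1 billion in quarterly revenue.
revenue = 68.1e9
gross_margin = 0.75

cost_of_revenue = revenue * (1 - gross_margin)
gross_profit = revenue - cost_of_revenue
print(f"cost of revenue: ${cost_of_revenue / 1e9:.1f}B")  # -> $17.0B
print(f"gross profit:    ${gross_profit / 1e9:.1f}B")     # -> $51.1B
```

In other words, roughly $51 billion of gross profit in a single quarter, a figure that a chip company operating at the typical 45–60% margin band could not approach at the same revenue level.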
The Sustainability Question
The central question facing investors and industry observers is whether Nvidia’s data centre revenue growth is sustainable or represents a cyclical peak. The bull case rests on the thesis that AI adoption is still in its early stages, that enterprise deployment has barely begun relative to the total addressable market, and that each new generation of AI models requires exponentially more compute for training and inference. The bear case suggests that hyperscaler capex cycles historically peak and then contract, that competition from AMD, Intel, Google TPUs and custom silicon will eventually erode margins, and that the current rate of spending may not generate adequate returns for the companies making these investments.
What the Q4 FY2026 data shows is that, at present, demand continues to outstrip supply. Nvidia’s order backlog extends months into the future, and the company’s Blackwell generation products are experiencing allocation constraints similar to those that characterised the H100 cycle. For the global advertising technology market and every technology vertical, Nvidia’s data centre revenue is the single most important indicator of the scale at which the AI economy is being built.