Finance News

TensorWave Secures $100M in Series A


TensorWave has secured $100 million in Series A to build the world’s largest liquid-cooled AMD GPU deployment.

Takeaway Points

  • TensorWave secures $100M in Series A.
  • The round was co-led by Magnetar and AMD Ventures, with additional participation from Prosperity7, Maverick Silicon, and Nexus Venture Partners.

TensorWave Series A

TensorWave said on Wednesday that it has raised $100M in Series A funding to accelerate the deployment of the world’s largest liquid-cooled AMD GPU cluster, consisting of 8,192 MI325X GPUs.

According to the company, the round was co-led by Magnetar and AMD Ventures, with additional participation from Prosperity7, Maverick Silicon, and Nexus Venture Partners.

Darrick Horton, CEO of TensorWave, commenting about the Series A, said, “Our belief is simple: specialization wins. We’ve been AMD-native from day one. That depth of focus has let us unlock performance gains across training, fine-tuning, and inference by optimizing every layer of the stack around MI325X.”

Horton added, “We’re scaling fast because our customers are scaling faster. We’re not here to offer another cloud—we’re here to build the one that AI actually needs.” 

Piotr Tomasik, President & COO, TensorWave, said, “When you deploy thousands of high-bandwidth GPUs, thermals aren’t a footnote, they’re a first-principles problem. We engineered our system from the ground up to make high-density, high-performance clusters viable; and liquid cooling is the unlock.” 

Additional Comments

Jeff Tatarchuk, Chief Growth Officer, TensorWave, said, “Open-source models are moving faster than anyone expected. If you’re building with them, you need a stack that doesn’t slow you down. AMD’s powering that shift and we’re making it real… And at scale.”

Cooling Systems

TensorWave said that its direct liquid cooling systems allow it to pack more GPUs per rack without thermal throttling, maintain consistently high throughput for long-running training jobs, improve energy efficiency while extending hardware longevity, and deliver sustained performance for high-intensity inference workloads.

About the Series A

The $100M Series A will allow the company to accelerate its MI325X cluster rollout, expand its liquid-cooled architecture, grow its team, and strengthen its ability to support the world’s most ambitious AI teams with infrastructure that doesn’t slow them down, TensorWave said.

GPU Clusters Explained

On May 12, 2025, TensorWave said that a GPU cluster is a group of specialized chips designed to work in sync. While a standard PC tackles one job at a time, GPU clusters split the workload across thousands of processors. It’s like replacing a single painter with an army of artists, each handling a stroke to finish the mural faster.

What is a GPU cluster?

The company said that a GPU cluster is a group of graphics processing units (GPUs) working together as one system to tackle large problems quickly. While central processing units (CPUs) are great at doing a few things quickly in a row (known as sequential processing), GPUs are specialized chips built to handle thousands of tasks simultaneously (known as parallel processing).
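The sequential-versus-parallel distinction the company describes can be sketched in a few lines of Python. This is a simplified illustration, not TensorWave code: the worker pool stands in for the GPUs in a cluster, and the function and variable names are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Each worker handles its own slice of the workload,
    # analogous to one GPU in a cluster handling one shard.
    return sum(x * x for x in chunk)

def sequential(data):
    # CPU-style sequential processing: one worker walks
    # the entire list, one item at a time.
    return sum(x * x for x in data)

def parallel(data, workers=4):
    # Cluster-style parallel processing: split the workload
    # into chunks, hand each chunk to a worker simultaneously,
    # then combine the partial results.
    size = len(data) // workers
    chunks = [data[i * size:(i + 1) * size] for i in range(workers - 1)]
    chunks.append(data[(workers - 1) * size:])
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Both approaches produce the same answer; the parallel
    # version simply divides the work across workers.
    assert sequential(data) == parallel(data)
```

Both functions return the same result; the point is the shape of the work. A real GPU cluster applies the same divide-and-combine idea across thousands of processors rather than a handful of threads.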

About TensorWave

TensorWave is the AI and HPC cloud purpose-built for performance. Powered exclusively by AMD Instinct™ Series GPUs, we deliver high-bandwidth, memory-optimized infrastructure that scales with your most demanding models—training or inference.
