It is an open secret that creating cutting-edge processors has historically been prohibitively expensive and time-consuming, with modern chip development costs running into the hundreds of millions of dollars. At the same time, tech leaders like Google, Meta and Tesla have been racing to put AI into every device, from cars and cameras to phones and drones, resulting in skyrocketing demand for smarter, more efficient edge-AI chips.
In the wake of these growing pains, ChipForge, the world’s first decentralized chip design project, powered by the TATSU ecosystem, has emerged as a potential game-changer. By combining open-source hardware development with blockchain-style incentives, it effectively transforms chipmaking from a closed, capital-intensive process into a global design contest, one where hundreds of engineers worldwide can submit designs to various on-chain challenges.
The result is a decentralized innovation race where design excellence, not corporate budgets, determines who wins.
A novel framework in action! Here’s what’s on offer with ChipForge
At its core, ChipForge is a “digital design subnet” (SN84) running on the Bittensor network that splits chip development into a series of on-chain design challenges, with each round posing a specific problem. One round might, for example, call for a RISC-V processing block with cryptographic support, requiring teams to submit register-transfer-level (RTL) designs as solutions.
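ChipForge’s actual challenge format isn’t published in detail, but as a purely illustrative sketch, a round like the one above could be captured in a small specification along the following lines (the class, field names and numbers here are all hypothetical, not the subnet’s real schema):

```python
from dataclasses import dataclass, field

@dataclass
class DesignChallenge:
    """Hypothetical sketch of a single ChipForge design round."""
    name: str                  # short identifier for the round
    target: str                # what is being designed
    deliverable: str           # what contributors must submit
    constraints: dict = field(default_factory=dict)  # budgets used for scoring

# Example round, loosely matching the RISC-V crypto block described above
challenge = DesignChallenge(
    name="riscv-crypto-block",
    target="RISC-V processing block with AES/SHA support",
    deliverable="synthesizable Verilog RTL plus testbench",
    constraints={"max_cells": 250_000, "max_power_mw": 50, "min_freq_mhz": 100},
)
```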
Automated validators then run each submission through industry-standard EDA tools (Verilator, Yosys, OpenLane, etc.) to verify it and score it on power, performance and area (PPA). The result is objective, reproducible validation for every design.
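ChipForge’s validator code isn’t reproduced here, but the general idea of a reproducible, tool-driven check can be sketched in a few lines of Python. The snippet below is a simplified assumption rather than the subnet’s real pipeline: it lints a submission with Verilator and uses Yosys’ stat report as a crude area proxy (the real flow presumably also measures power and timing via OpenLane), and the function names, file name and cell budget are made up for illustration.

```python
import re
import subprocess

def lint_ok(rtl_file: str) -> bool:
    """Check whether the RTL passes Verilator's linter."""
    result = subprocess.run(["verilator", "--lint-only", rtl_file],
                            capture_output=True, text=True)
    return result.returncode == 0

def cell_count(rtl_file: str) -> int:
    """Synthesize with Yosys and read the reported cell count as a rough area proxy."""
    result = subprocess.run(
        ["yosys", "-p", f"read_verilog {rtl_file}; synth; stat"],
        capture_output=True, text=True)
    match = re.search(r"Number of cells:\s+(\d+)", result.stdout)
    return int(match.group(1)) if match else -1

def score(rtl_file: str, cell_budget: int = 250_000) -> float:
    """Toy score: 0 if lint or synthesis fails, otherwise reward smaller designs."""
    if not lint_ok(rtl_file):
        return 0.0
    cells = cell_count(rtl_file)
    if cells <= 0:
        return 0.0
    return max(0.0, 1.0 - cells / cell_budget)

if __name__ == "__main__":
    # Hypothetical submission file name, for illustration only.
    print(score("submission.v"))
```

In practice, a real validator would also gate the score on functional verification against a shared testbench, so that only correct designs compete on PPA.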
In fact, ChipForge’s community recently delivered an entire 32-bit RISC-V CPU (RV32IMCK) with AES and SHA crypto extensions; the output is real, synthesizable RTL code ready for FPGA or chip fabrication, complete with measured PPA metrics.
Innovation at a reasonable cost that’s fueling the future of Edge-AI
Because every project is run as a competition, innovation happens extremely fast: contributors refine and resubmit designs in rapid cycles, racing to outdo each other for the prize. In effect, ChipForge replaces fixed R&D budgets with a model where sponsors pay only for successful outcomes, not for years of trial and error.
In the near future, ChipForge is looking to optimize its compilers, runtimes and AI kernels, ensuring that new processors and the software that drives them evolve together. To that end, the project is prioritizing compact neural processing units (NPUs) designed for ultra-low energy and latency, two properties that on-device AI demands.
The plan also includes moving from FPGA prototypes to real silicon, so that winning designs can be sent to multi-project wafer shuttles (like Google’s OpenMPW) for actual fabrication. And because security is paramount, existing RISC-V cores have already incorporated post-quantum cryptographic extensions to future-proof all designs.
Shaping the on-device AI ecosystem, one step at a time
The implications of ChipForge’s tech proposition for on-device AI stand to be profound, especially as tailored hardware for edge applications has become increasingly critical. Industry signals support this direction as well: at the recent 2025 RISC-V Summit in China, NVIDIA announced that its CUDA platform will support RISC-V CPUs.
Google has signaled similar support, with Android now treating RISC-V as a first-class ISA, potentially making it the norm across 16 billion devices by 2030. And with tens of billions of IoT and AI devices looming on the horizon, these macro trends seem to be aligning with ChipForge’s ethos of turning chip design into an open, competitive process.
To put it simply, platforms that democratize chip innovation could well become the home of the next generation of edge-AI processors, with entities like ChipForge leading the charge.