
TPUs

Google's custom AI chips, and why they give Google a unique strategic advantage

What it is

TPUs (Tensor Processing Units) are custom ASICs that Google designed specifically for the matrix mathematics at the core of neural networks. Unlike GPUs, which are general-purpose parallel processors adapted for AI, TPUs are purpose-built for lower-precision tensor operations such as bfloat16 matrix multiplies.
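The "purpose-built for matrix math" point can be made concrete: the heart of a TPU is its Matrix Multiply Unit, a systolic array of multiply-accumulate cells that operands stream through, so each value loaded from memory is reused many times. The Python below is a toy simplification of that dataflow, not Google's actual design; the function name and loop structure are illustrative only.

```python
def systolic_matmul(A, B):
    """Multiply A (m x k) by B (k x n) in the style of a systolic array:
    each output cell (i, j) holds a running accumulator, and at each
    time step every cell consumes one streamed A value and one streamed
    B value and adds their product to its accumulator."""
    m, k = len(A), len(A[0])
    n = len(B[0])
    C = [[0.0] * n for _ in range(m)]
    # Time step t: the t-th column of A and t-th row of B flow past
    # the grid of multiply-accumulate cells simultaneously.
    for t in range(k):
        for i in range(m):
            for j in range(n):
                C[i][j] += A[i][t] * B[t][j]
    return C
```

The design point this illustrates is data reuse: in hardware, the inner two loops happen in parallel across the whole grid in one clock cycle, which is why a chip specialized this way outperforms a general-purpose processor on dense matrix workloads.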

Google's Gemini models are trained and served on TPUs, making Google the only major AI lab not dependent on NVIDIA's supply chain. Google also sells TPU capacity through Google Cloud, and Anthropic has entered a significant partnership for TPU compute.

Why it matters

Google's TPU independence is a significant strategic moat. While OpenAI and Anthropic pay NVIDIA's premium, Google controls its own compute cost structure. Understanding TPUs explains why Anthropic's Google partnership is so significant, why hardware independence matters strategically, and why chip geopolitics affects AI lab strategy.
