The "how it works in the real world" layer: the knowledge that separates a mere participant from a valuable team member.
The physical infrastructure that makes AI possible
GPUs · TPUs · Why Accelerators Matter · Training Costs · Training vs. Inference Compute
The parallel processing chips that made modern AI possible
Google's custom AI chips, and why they give Google a unique strategic advantage
Why specialized hardware is essential (not optional) for AI at scale
Why building frontier models costs hundreds of millions, and what that means
The very different hardware demands of building vs. running a model
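The training-vs.-inference distinction above can be made concrete with the standard rule-of-thumb FLOP estimates for transformer models (training ≈ 6 × parameters × tokens; inference ≈ 2 × parameters per generated token). This is a rough sketch that ignores attention overhead and hardware utilization; the 70B/2T figures are illustrative, not tied to any specific model.

```python
# Rule-of-thumb FLOP estimates for transformer training vs. inference.
# Training is a one-off cost over the whole dataset; inference is a
# recurring cost paid on every generated token.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per param per token."""
    return 6 * n_params * n_tokens

def inference_flops_per_token(n_params: float) -> float:
    """Approximate forward-pass compute: ~2 FLOPs per param per token."""
    return 2 * n_params

# Illustrative 70B-parameter model trained on 2T tokens:
train = training_flops(7e10, 2e12)        # huge, but paid once
serve = inference_flops_per_token(7e10)   # small per token, paid forever
```

At scale, the recurring inference bill can dwarf the one-time training bill, which is why the two workloads drive very different hardware choices.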
How models improve with more compute and data
Scaling Laws · Synthetic Data · Fine-tuning
The mathematical relationships that predict how AI models improve with scale
Using AI to generate training data for AI, and why it's becoming essential
Adapting a pre-trained model to a specific task or style with targeted training
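The scaling-law idea above can be sketched as a small function: predicted loss falls as a power law in parameter count N and training tokens D. The coefficients below approximate the published Chinchilla fit and are used here purely for illustration; treat them as placeholder assumptions, not values to build on.

```python
# Illustrative Chinchilla-style scaling law: L(N, D) = E + A/N^alpha + B/D^beta.
# Coefficients approximate the published Chinchilla fit (placeholder values
# for illustration only).

def scaling_loss(n_params: float, n_tokens: float,
                 e: float = 1.69, a: float = 406.4, b: float = 410.7,
                 alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted pre-training loss given model size and data size."""
    return e + a / n_params**alpha + b / n_tokens**beta

# Scaling both axes 10x lowers predicted loss, with diminishing returns:
small = scaling_loss(1e9, 2e10)   # ~1B params on ~20B tokens
large = scaling_loss(1e10, 2e11)  # ~10B params on ~200B tokens
```

The constant E is the irreducible loss: no amount of compute or data drives the predicted loss below it, which is one reason scaling alone is not a complete strategy.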
Measuring AI capabilities and ensuring they serve human values
Benchmarking LLMs · AI Alignment · AGI — Definitions and Strategy
How we measure AI capability, and why benchmarks are tricky
Ensuring AI systems do what we actually want, now and as capabilities grow
What AGI actually means, why it matters, and how it shapes the AI industry
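A minimal sketch of what benchmarking a model looks like under the hood: score answers against a gold set. `model_answer` is a hypothetical stand-in for a real model call, and exact-match scoring is a deliberate simplification.

```python
# Minimal benchmark harness: exact-match accuracy against gold answers.
# `model_answer` is a hypothetical callable standing in for a real model.

def benchmark_accuracy(questions, gold_answers, model_answer) -> float:
    """Fraction of questions the model answers with an exact match."""
    correct = sum(
        model_answer(q).strip().lower() == a.strip().lower()
        for q, a in zip(questions, gold_answers)
    )
    return correct / len(questions)
```

Exact-match scoring is itself one reason benchmarks are tricky: a correct answer phrased differently scores zero, and a model trained on leaked test questions scores perfectly.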
The global political and regulatory forces shaping AI development
US-China AI Race · Export Controls and Hardware Policy · AI Regulation and Investment
The geopolitical competition that's accelerating AI investment and shaping policy
How chip export restrictions shape global AI development
How government policy shapes where and how fast AI develops
The societal implications of AI that every practitioner must grapple with
Bias in Training Data · Copyright and IP Concerns · Privacy Implications
How historical inequities get baked into AI models, and what we can do about it
The unresolved legal questions about training data and AI-generated content
Data privacy risks in AI systems, from training to deployment
Frameworks for making real AI product and architecture decisions
Evaluating LLM Solutions · Cost and Deployment Tradeoffs · Model Selection Frameworks
How to assess whether an AI solution actually solves the client's problem
API vs. self-hosted deployment, choosing a model tier, and keeping AI costs under control
When to fine-tune vs. prompt, self-host vs. API, and which model family to use
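The API-vs.-self-hosted tradeoff above often comes down to a back-of-the-envelope cost comparison like the one below. Every price in it is a hypothetical placeholder; substitute real provider and cloud quotes before drawing conclusions.

```python
# Back-of-the-envelope cost comparison for the API vs. self-hosted
# decision. All prices are hypothetical placeholders.

def api_monthly_cost(requests_per_month: int, tokens_per_request: int,
                     price_per_million_tokens: float) -> float:
    """Pay-per-token API billing."""
    total_tokens = requests_per_month * tokens_per_request
    return total_tokens / 1_000_000 * price_per_million_tokens

def self_hosted_monthly_cost(gpu_hourly_rate: float, n_gpus: int,
                             hours_per_month: float = 730) -> float:
    """Always-on GPU rental (ignores ops and engineering overhead)."""
    return gpu_hourly_rate * n_gpus * hours_per_month

# Hypothetical numbers: 1M requests/month, 2k tokens each, $5/M tokens,
# vs. four GPUs rented at $2.50/hour.
api = api_monthly_cost(1_000_000, 2_000, 5.0)
hosted = self_hosted_monthly_cost(2.50, 4)
```

The crossover depends on sustained volume: at low traffic the API almost always wins, while at high, steady traffic self-hosting can pull ahead, provided the team can absorb the operational overhead the sketch ignores.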