Explore the curriculum at your own pace. Click any section to see what's inside.
The foundational mechanics of modern AI models
Transformers · Attention Mechanisms · Tokens · Embeddings
The dominant architecture behind modern AI, parallelizable, scalable, attention-powered
How models decide what to focus on, the core of what makes transformers powerful
The atomic units of language that AI models actually process
How models represent meaning internally, dense vectors that capture semantic relationships
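The embedding idea above can be sketched with a toy example: words mapped to dense vectors, with cosine similarity measuring how related they are. The vectors here are made-up illustrative values, not output from any real model.

```python
import math

# Hand-made toy embeddings (illustrative values, not from a real model).
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Semantically related words end up closer together in embedding space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # near 1.0
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower
```

Real models learn such vectors with hundreds or thousands of dimensions, but the distance intuition is the same.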
How LLMs are built, from raw data to capable, safe assistants
Pre-training · RLHF / Post-training · Reasoning Training / RLVR
The massive first stage that teaches a model to predict language
Turning a raw autocomplete engine into a useful, safe assistant
How modern models learn to think step-by-step using verifiable rewards
The three families of LLMs and what distinguishes them
Base Models · Instruction-tuned Models · Reasoning Models
Pure autocomplete, powerful but raw and hard to direct
The chatbots you know, RLHF-trained to follow instructions reliably
The current frontier, models that think before they answer
The vocabulary every AI practitioner needs to operate confidently
System Prompts · Context Windows · Parameters · Training vs. Inference · Hallucinations · Jailbreaking
The instructions that shape how a model behaves before a user says anything
The maximum amount of text a model can process at once, and why it's so hard to extend
The learned weights that define a model, and why size matters
Building a model vs. running one, fundamentally different compute profiles
When LLMs confidently state things that aren't true, and why it's a fundamental problem
Bypassing a model's safety training through adversarial prompting
What modern LLMs can actually do beyond generating text
Tool Use · Agentic Capabilities · Multimodality
How LLMs interact with external systems to extend their capabilities
LLMs running in loops with tools to complete multi-step tasks autonomously
AI that can see, hear, and reason across text, images, audio, and more
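The agentic loop described above (model proposes an action, tool runs, result feeds back in) can be sketched with a stub in place of the LLM. Everything here, the `fake_model`, the message format, and the `calculator` tool, is a hypothetical stand-in to show the loop's shape, not any real API.

```python
# A toy agent loop. A real system would plug an LLM into the same loop;
# here a stub "model" emits a tool call, then a final answer.
def calculator(expression):
    """A tool the agent can call (restricted eval for arithmetic only)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_model(history):
    """Stub standing in for an LLM: request a tool once, then answer."""
    if not any(msg.startswith("TOOL RESULT") for msg in history):
        return "CALL calculator 6*7"
    return "FINAL The answer is " + history[-1].split(": ")[1]

def agent_loop(task):
    history = [f"TASK: {task}"]
    while True:
        action = fake_model(history)
        if action.startswith("FINAL"):
            return action.removeprefix("FINAL ").strip()
        _, tool_name, arg = action.split(" ", 2)
        result = TOOLS[tool_name](arg)          # run the requested tool
        history.append(f"TOOL RESULT: {result}")  # feed the result back in

print(agent_loop("What is 6 times 7?"))  # The answer is 42
```

The key pattern is the while-loop: the model keeps acting and observing results until it decides the task is done.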
The competitive landscape, business models, and vocabulary of the AI industry
Major AI Players · Open Source vs. Open Weights · What a Wrapper Is
Who's building frontier AI and what differentiates each lab
The important distinction between truly open AI and 'open enough'
Products that are just a thin layer over someone else's model, and why that matters for business strategy
Hands-on techniques for working with AI systems
Prompt Engineering · API Basics · RAG (Retrieval-Augmented Generation)
The craft of writing inputs that reliably get the outputs you want
How to actually call an LLM from code, the mechanics every builder needs
Grounding AI responses in your data, the go-to pattern for custom knowledge bases
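The retrieve-then-generate shape of RAG can be sketched in a few lines. This toy version scores documents by simple word overlap with the query; production systems use embedding-based vector search, but the pipeline (retrieve relevant text, inject it into the prompt) is the same. The documents and query are invented for illustration.

```python
# Toy RAG retrieval: pick the document sharing the most words with the query,
# then ground the prompt in it. Real systems swap in vector similarity search.
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available by email on weekdays.",
    "Shipping is free on orders over 50 dollars.",
]

def tokenize(text):
    """Lowercased words with trailing punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query, documents):
    """Return the document with the largest word overlap with the query."""
    q = tokenize(query)
    return max(documents, key=lambda d: len(q & tokenize(d)))

query = "What is the refund policy?"
context = retrieve(query, docs)

# The retrieved text goes into the prompt, so the model answers from
# your data instead of relying on whatever it memorized in training.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

Swapping `retrieve` for an embedding-similarity search is what turns this sketch into a real custom-knowledge-base pattern.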
The physical infrastructure that makes AI possible
GPUs · TPUs · Why Accelerators Matter · Training Costs · Training vs. Inference Compute
The parallel processing chips that made modern AI possible
Google's custom AI chips, and why they give Google a unique strategic advantage
Why specialized hardware is essential (not optional) for AI at scale
Why building frontier models costs hundreds of millions, and what that means
The very different hardware demands of building vs. running a model
How models improve with more compute and data
Scaling Laws · Synthetic Data · Fine-tuning
The mathematical relationships that predict how AI models improve with scale
Using AI to generate training data for AI, and why it's becoming essential
Adapting a pre-trained model to a specific task or style with targeted training
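The scaling-law idea above can be made concrete with a Chinchilla-style loss formula, L(N, D) = E + A/N^alpha + B/D^beta, where N is parameter count and D is training tokens. The constants below are the approximate fitted values reported by Hoffmann et al. (2022); treat the exact numbers as illustrative.

```python
# Chinchilla-style scaling law: predicted loss as a function of model size N
# (parameters) and dataset size D (tokens). Constants are approximate fits
# from Hoffmann et al. (2022).
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def predicted_loss(n_params, n_tokens):
    """L(N, D) = E + A / N**alpha + B / D**beta."""
    return E + A / n_params**alpha + B / n_tokens**beta

# More parameters and more data predictably lower the loss,
# approaching (but never reaching) the irreducible floor E.
small = predicted_loss(1e9, 20e9)     # ~1B params on ~20B tokens
large = predicted_loss(70e9, 1.4e12)  # ~70B params on ~1.4T tokens
print(small, large)
```

The point of such formulas is that they let labs forecast a model's loss, and budget compute, before spending anything on the training run.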
Measuring AI capabilities and ensuring they serve human values
Benchmarking LLMs · AI Alignment · AGI — Definitions and Strategy
How we measure AI capability, and why benchmarks are tricky
Ensuring AI systems do what we actually want, now and as capabilities grow
What AGI actually means, why it matters, and how it shapes the AI industry
The global political and regulatory forces shaping AI development
US-China AI Race · Export Controls and Hardware Policy · AI Regulation and Investment
The geopolitical competition that's accelerating AI investment and shaping policy
How chip export restrictions shape global AI development
How government policy shapes where and how fast AI develops
The societal implications of AI that every practitioner must grapple with
Bias in Training Data · Copyright and IP Concerns · Privacy Implications
How historical inequities get baked into AI models, and what we can do about it
The unresolved legal questions about training data and AI-generated content
Data privacy risks in AI systems, from training to deployment
Frameworks for making real AI product and architecture decisions
Evaluating LLM Solutions · Cost and Deployment Tradeoffs · Model Selection Frameworks
How to assess whether an AI solution actually solves the client's problem
API vs. self-hosted, which model tier, and how to control AI costs
When to fine-tune vs. prompt, self-host vs. API, and which model family to use
The AI landscape beyond language models
Image and Video Generation · World Models · Autonomous Driving · AlphaFold and Biomedical AI · Robotics and Embodied AI
How diffusion models and generative AI create visual content
AI systems that learn how the physical world works, the foundation for robotics and simulation
The hard problem of getting AI to navigate the physical world reliably
How AI is transforming biology and drug discovery
The unique challenges of teaching AI to act in the physical world
Deeper mechanics of how models learn and what emerges from scale
Emergent Abilities · In-Context Learning · Continual Learning · Fine-tuning Specifics
Capabilities that appear suddenly at scale, and why they surprise researchers
How models learn from examples in their prompt without weight updates
The unsolved problem of teaching models without forgetting what they know
The technical details of how fine-tuning actually works
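In-context learning, mentioned above, is easy to see in prompt form: the "training" is just example pairs placed in the prompt, and the model's weights never change. The helper and the translation task here are illustrative, not from any specific lesson.

```python
# Few-shot (in-context) prompt builder: the model infers the task from
# example pairs in the prompt itself, with no weight updates.
def few_shot_prompt(instruction, examples, query):
    """Format (input, output) pairs plus a new query into one prompt string."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {query}", "Output:"]  # model completes from here
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("bread", "pain")],
    "water",
)
print(prompt)
```

Sent to an instruction-tuned model, a prompt like this typically elicits the pattern's continuation ("eau") even though the model was never fine-tuned on this task.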
Skills for staying current and thinking critically about AI
Reading Research Papers · Filtering AI Hype · Interpretability
How to extract value from AI papers without getting lost in the math
Critical frameworks for separating genuine capability from marketing and media distortion
The research frontier of understanding what's happening inside AI models