Research & Meta-Skills

Interpretability

The research frontier of understanding what's happening inside AI models

What it is

Interpretability (also called mechanistic interpretability or explainability) is the research field focused on understanding what computations neural networks actually perform internally. Current LLMs are largely black boxes: we can observe inputs and outputs, but we have limited insight into the internal representations and algorithms that produce those outputs.

Anthropic's interpretability research has identified "features" in model activations that correspond to human-interpretable concepts: individual neurons or small circuits that activate for specific concepts. Circuit analysis then traces how specific behaviors are implemented through attention heads and MLP layers.
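To make the "features in activations" idea concrete, here is a minimal, illustrative sparse autoencoder sketch in PyTorch. Sparse autoencoders (discussed in the Chapter 7 video below) are the general dictionary-learning technique used to decompose activations into interpretable features; the dimensions, names, and loss weights here are assumptions for illustration, not Anthropic's actual setup.

```python
# Toy sparse autoencoder (SAE) over model activations.
# Idea: learn an overcomplete set of "features" such that each activation
# vector is reconstructed from only a few active features.
# All sizes and hyperparameters below are illustrative.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 512, d_features: int = 4096):
        super().__init__()
        # Overcomplete dictionary: many more features than activation dims.
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        # ReLU keeps feature activations non-negative; the L1 penalty in the
        # training loss pushes most of them to exactly zero (sparsity).
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return reconstruction, features

# One toy training step on random stand-in activations
# (in practice these would come from a model's residual stream).
sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(64, 512)

opt.zero_grad()
recon, feats = sae(acts)
# Reconstruction error + sparsity penalty on feature activations.
loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
loss.backward()
opt.step()
```

After training on real activations, each learned feature direction can be inspected (e.g. by finding the inputs that activate it most strongly) to see whether it corresponds to a human-interpretable concept.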

Interpretability matters for: debugging model failures, providing audit trails for high-stakes decisions, detecting deceptive alignment in future powerful models, and understanding emergent capabilities.

Why it matters

Interpretability is one of the core research bets at Anthropic and, increasingly, at other labs. As AI systems are deployed in consequential domains (healthcare, law, finance), "the model said so" is insufficient justification; auditors and regulators will require explanations. Understanding the current state and limitations of interpretability research helps you engage with the AI safety community credibly.

Resources

How Might LLMs Store Facts | Deep Learning Chapter 7
3blue1brown.com· 3Blue1Brown's Chapter 7 covers MLP internals and how transformers might store facts, and directly discusses superposition and sparse autoencoders (linking to Anthropic's interpretability work). Beautiful visualizations. This is the primary video for this topic.
25 min
Transformers, the Tech Behind LLMs | Deep Learning Chapter 5
3blue1brown.com· Chapter 5 covers transformer architecture and embeddings. Good companion to Chapter 7 for recruits who want the full picture. **OPTIONAL, only if recruits need the prerequisite.**
27 min
Zoom In: An Introduction to Circuits
distill.pub· Chris Olah's foundational Distill.pub article on neural network interpretability through circuits. Precursor to Anthropic's mechanistic interpretability work. Beautiful interactive visualizations. Adds non-Anthropic diversity.
20 min