The definitive collection of the 20 most influential research papers on making AI systems explainable and interpretable. Carefully curated to help engineers, researchers, and practitioners build transparent, trustworthy AI systems.
Coverage: Traditional ML, Deep Learning, Large Language Models, Computer Vision, Mechanistic Interpretability
- Why This Matters
- How to Use This Repository
- The Top 20 Papers
- Quick Reference Table
- Contributing
- License
AI systems make critical decisions affecting billions of lives—from medical diagnoses to loan approvals, from hiring to criminal justice. Without explainability, we cannot:
- ✅ Debug failures when models hallucinate or make errors
- ✅ Build trust with users and stakeholders
- ✅ Meet regulations (EU AI Act, FDA guidelines, GDPR)
- ✅ Detect bias and ensure fairness
- ✅ Improve systems through understanding
This repository provides the research foundation—from foundational attribution methods (SHAP, LIME, Integrated Gradients) to cutting-edge mechanistic interpretability (sparse autoencoders in GPT-4/Claude)—to make your AI systems transparent.
Building production systems?
→ Start with Foundational Methods (Papers #1-4)
→ Then Transformer & LLM Methods (Papers #10-13)
Researching model internals?
→ Deep dive Mechanistic Interpretability (Papers #5-9)
→ Check Comprehensive Surveys (Papers #15-16)
Need compliance/regulatory guidance?
→ Read Applications & Safety (Papers #17-19)
→ Review Strategic Vision (Paper #20)
New to explainability?
→ Start with Survey (#15)
→ Then Foundational (#1-4)
→ Then pick domain-specific papers
- 15 minutes: Read abstracts of #1, #5, #15, #20
- 1 hour: Deep read #15 (comprehensive survey)
- 1 day: Study #1, #5, #10, #15 in depth
- 1 week: Work through all 20 papers
These techniques work across all AI systems—from XGBoost to GPT-4. Essential for any engineer working on explainability.
Title: "A Unified Approach to Interpreting Model Predictions"
Authors: Scott M. Lundberg, Su-In Lee
Institution: University of Washington
Why This Paper Matters:
SHAP is the most widely deployed explainability method in production. It unifies 6 previous attribution methods under one framework based on Shapley values from game theory. Works on ANY model—linear regression, random forests, XGBoost, deep neural networks. If you only learn one explainability method, make it SHAP.
Key Contribution:
Identifies a new class of "additive feature importance measures" and proves there's a unique solution satisfying desirable properties (local accuracy, missingness, consistency). Provides fast approximations (KernelSHAP, TreeSHAP) that make Shapley values computationally tractable.
Practical Impact:
Used by Microsoft, Amazon, Google, and thousands of companies. Built into Python libraries (shap, scikit-learn, XGBoost). The default choice for explaining financial models, healthcare predictions, and business ML systems.
Links:
📄 Paper: https://arxiv.org/abs/1705.07874
📄 NeurIPS Proceedings: https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions
💻 Code: https://github.com/slundberg/shap
📚 Documentation: https://shap.readthedocs.io/
Title: "Why Should I Trust You?": Explaining the Predictions of Any Classifier
Authors: Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Institution: University of Washington
Why This Paper Matters:
LIME pioneered model-agnostic local explanations. Unlike global methods that explain the entire model, LIME explains individual predictions by learning an interpretable linear model around that specific prediction. Fast, simple, and works on black-box models where you only have input-output access.
Key Contribution:
Proposes local interpretable model-agnostic explanations by perturbing inputs and fitting sparse linear models to approximate model behavior locally. Introduces SP-LIME for selecting representative examples to explain overall model behavior through diverse individual predictions.
Practical Impact:
Standard tool for explaining individual predictions in production (fraud detection, content moderation, medical diagnosis). Faster than SHAP for quick explanations. Particularly valuable for debugging specific model failures.
Links:
📄 Paper: https://arxiv.org/abs/1602.04938
📄 KDD Proceedings: https://dl.acm.org/doi/10.1145/2939672.2939778
💻 Code: https://github.com/marcotcr/lime
📚 Tutorial: https://www.oreilly.com/content/introduction-to-local-interpretable-model-agnostic-explanations-lime/
Title: "Axiomatic Attribution for Deep Networks"
Authors: Mukund Sundararajan, Ankur Taly, Qiqi Yan
Institution: Google Research
Why This Paper Matters:
The foundational gradient-based attribution method. Integrated Gradients satisfies two critical axioms that other methods violate: Sensitivity (if changing a feature changes output, it should get non-zero attribution) and Implementation Invariance (functionally equivalent models get same attributions). Used in production at Google, Anthropic, and across the industry for explaining deep neural networks.
Key Contribution:
Computes attribution by integrating gradients along a straight path from a baseline (e.g., black image) to the actual input. This solves the "saturation problem" where gradients are zero in flat regions. Provides completeness: sum of attributions equals output difference from baseline.
Practical Impact:
Built into TensorFlow, PyTorch (Captum), and every major deep learning framework. The default method for explaining neural network predictions. Works on CNNs, RNNs, Transformers—any differentiable model. Fast (just requires gradient computation).
Links:
📄 Paper: https://arxiv.org/abs/1703.01365
💻 Original Code: https://github.com/ankurtaly/Integrated-Gradients
📚 Captum Implementation: https://captum.ai/docs/extension/integrated_gradients
🎥 Tutorial: Multiple available on Captum website
Title: "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization"
Authors: Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
Institution: Georgia Tech, Virginia Tech
Why This Paper Matters:
The standard tool for explaining computer vision models. Grad-CAM produces visual heatmaps showing which image regions were important for predictions. Unlike pixel-level gradients (noisy), Grad-CAM operates on convolutional feature maps, producing cleaner, more interpretable visualizations. Works on any CNN architecture without modification.
Key Contribution:
Uses gradients of the target class flowing into the final convolutional layer to weight the activation maps. Produces class-discriminative localization maps showing where the model "looked" to make its decision. Combines with Guided Backpropagation to create high-resolution class-specific visualizations.
Practical Impact:
Standard in medical imaging (highlight suspicious regions in X-rays/MRIs), autonomous vehicles (what did the car "see"), and any vision application needing trust/debugging. Integrated into major computer vision frameworks. Simple to implement and computationally efficient.
Links:
📄 Paper: https://arxiv.org/abs/1610.02391
📄 ICCV Proceedings: https://openaccess.thecvf.com/content_iccv_2017/html/Selvaraju_Grad-CAM_Visual_Explanations_ICCV_2017_paper.html
💻 Code: https://github.com/ramprs/grad-cam/
🎮 Demo: http://gradcam.cloudcv.org
📄 Journal Version (IJCV 2019): https://link.springer.com/article/10.1007/s11263-019-01228-7
Understanding what models learn internally and how information flows through networks. The 2024-2025 breakthrough that enables interpreting production LLMs.
Title: "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet"
Authors: Adly Templeton et al.
Institution: Anthropic
Why This Paper Matters:
The breakthrough that proved mechanistic interpretability works on production-scale LLMs. First demonstration of extracting millions of interpretable features from a real deployed model (Claude 3 Sonnet). Found safety-critical features including deception, bias, and security vulnerabilities. Enabled "feature steering" to control model behavior.
Key Contribution:
Scaled sparse dictionary learning via sparse autoencoders (SAEs) from toy models to 200B+ parameter production LLMs. Extracted 34 million monosemantic features (each representing one interpretable concept). Discovered features organize into semantic neighborhoods—concepts cluster spatially like word embeddings.
Practical Impact:
Anthropic uses this for production safety monitoring. Influenced every major AI lab to adopt SAE methods. Demonstrates path to understanding and controlling AI systems at the feature level. The "Golden Gate Claude" demo showed you can amplify/suppress specific concepts.
Links:
📄 Full Article: https://transformer-circuits.pub/2024/scaling-monosemanticity/
🎮 Interactive Explorer: https://transformer-circuits.pub/2024/scaling-monosemanticity/vis/a1.html
📊 Feature Browser: https://transformer-circuits.pub/2024/scaling-monosemanticity/features.html
Title: "Scaling and Evaluating Sparse Autoencoders"
Authors: Leo Gao, Tom Dupré la Tour
Institution: OpenAI
Why This Paper Matters:
OpenAI's parallel SAE breakthrough on GPT-4. Provides the mathematical foundation for why sparse autoencoders work. Derives clean scaling laws showing how SAE quality improves with capacity. Introduces rigorous evaluation metrics beyond cherry-picked examples.
Key Contribution:
Introduces k-sparse autoencoders using TopK activation (only k largest activations kept). Proves learned features are more interpretable than PCA or random projections via automated scoring. Establishes scaling laws: doubling SAE size improves feature quality predictably. Open-sourced code and trained SAEs.
Practical Impact:
OpenAI uses this for GPT-4 safety monitoring. The scaling laws guide practitioners on how to size SAEs for any model. Evaluation metrics (L0 sparsity, reconstruction loss, downstream task performance) are now standard in SAE research.
Links:
📄 Paper: https://arxiv.org/abs/2406.04093
📊 PDF: https://cdn.openai.com/papers/sparse-autoencoders.pdf
💻 Code: https://github.com/openai/sparse_autoencoder
Title: "Gemma Scope: Helping the Safety Community Shed Light on Language Models"
Authors: Language Model Interpretability Team
Institution: Google DeepMind
Why This Paper Matters:
First comprehensive open-source SAE toolkit. Released hundreds of pre-trained sparse autoencoders for Gemma 2 (9B and 2B models) covering all layers and sublayers. Made mechanistic interpretability accessible to any researcher—no need to train SAEs from scratch.
Key Contribution:
Not just a paper but a complete open research artifact. Includes SAEs for attention outputs, MLP layers, residual streams—comprehensive coverage. Integrated with Neuronpedia for interactive feature exploration. Sets the standard for open science in interpretability.
Practical Impact:
Democratized mechanistic interpretability. Small research groups can now study LLM internals without massive compute budgets. Enabled 50+ follow-up papers analyzing Gemma features. Google's commitment to transparency in AI research.
Links:
📄 Blog Post: https://deepmind.google/discover/blog/gemma-scope/
🎮 Neuronpedia Demo: https://neuronpedia.org/gemma-scope
💻 HuggingFace Models: https://huggingface.co/google/gemma-scope
Title: "Language Models Can Explain Neurons in Language Models"
Authors: Steven Bills et al.
Institution: OpenAI
Why This Paper Matters:
Pioneered automated interpretability using GPT-4 to explain GPT-4's neurons. Demonstrated that AI can explain AI—the path to scalable interpretability. Manual neuron analysis doesn't scale to billions of parameters; automation does.
Key Contribution:
Three-step process: (1) GPT-4 generates natural language explanation for a neuron by examining activating examples, (2) Simulation tests if explanation predicts neuron behavior on new inputs, (3) Scoring evaluates explanation quality. Achieved meaningful explanations for thousands of neurons.
Practical Impact:
Showed automated interpretability is viable. OpenAI uses this internally for safety research. Inspired the "automated interpretability" subfield. The simulation-based scoring became standard for validating neural explanations.
Links:
📄 Blog Post: https://openai.com/research/language-models-can-explain-neurons-in-language-models
💻 Code: https://github.com/openai/automated-interpretability
🎮 Interactive Neuron Viewer: https://openaipublic.blob.core.windows.net/neuron-explainer/neuron-viewer/index.html
Title: "Sparse Autoencoders Find Highly Interpretable Features in Language Models"
Authors: Hoagy Cunningham, Aidan Ewart, Logan Riggs, Robert Huben, Lee Sharkey
Institution: EleutherAI, FAR AI, UCL
Why This Paper Matters:
The foundational SAE work that established the methodology. First to rigorously demonstrate that SAE features are genuinely interpretable via automated scoring. Showed SAEs outperform raw neurons by 2-3x on interpretability metrics. This paper convinced the field that SAEs were worth scaling.
Key Contribution:
Developed automated interpretability scoring using GPT-4 to evaluate if features correspond to human-understandable concepts. Trained SAEs on Pythia models and systematically compared to neurons, PCA, and other baselines. Open-sourced training code and learned features.
Practical Impact:
Established baseline results that Anthropic/OpenAI/DeepMind built upon. The interpretability scoring methodology is now standard. Proved SAEs work with quantitative metrics, not cherry-picked examples. The ICLR Spotlight validated SAE research as serious science.
Links:
📄 Paper: https://openreview.net/forum?id=F76bwRSLeK
💻 Code: https://github.com/ai-safety-foundation/sparse_autoencoder
Methods specifically designed for modern transformer-based models and Large Language Models.
Title: "Transformer Interpretability Beyond Attention Visualization"
Authors: Hila Chefer, Shir Gur, Lior Wolf
Institution: Tel Aviv University
Why This Paper Matters:
Definitively proved that raw attention weights are insufficient for interpretability. Introduced the first method that properly handles skip connections, normalization layers, and non-positive activations in Transformers. Produces class-specific visualizations unlike attention which is class-agnostic.
Key Contribution:
Combines Layer-wise Relevance Propagation (LRP) with gradients and attention to propagate relevance through all transformer components. Shows how to maintain conservation properties (total relevance = output) through multi-head attention while incorporating gradient information for class-specificity.
Practical Impact:
Ended the "attention as explanation" myth. Established that proper transformer interpretability requires multiple signals. Widely used in computer vision (ViTs) and increasingly applied to language transformers. Outperforms attention rollout on all benchmarks.
Links:
📄 Paper: https://openaccess.thecvf.com/content/CVPR2021/papers/Chefer_Transformer_Interpretability_Beyond_Attention_Visualization_CVPR_2021_paper.pdf
Title: "AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers"
Authors: Reduan Achtibat et al.
Institution: TU Berlin, Fraunhofer Institute
Why This Paper Matters:
State-of-the-art attribution for transformers as of 2024. Solves the "gradient shattering" problem where gradients become noisy in very deep networks (20+ layers in modern LLMs). Produces cleaner, more interpretable attributions than pure gradient methods.
Key Contribution:
Adapts LRP specifically for transformer architectures by incorporating attention weights into propagation rules. Handles temperature scaling in softmax and bias terms properly. Eliminates "checkerboard artifacts" seen in simpler attention-based methods.
Practical Impact:
Current best practice for high-quality transformer explanations when accuracy matters more than speed. Particularly strong for vision transformers and multimodal models. Becoming the standard for research requiring rigorous attribution.
Links:
📄 Paper: https://arxiv.org/abs/2402.05602
🌐 HTML: https://arxiv.org/html/2402.05602v2
Title: "TokenSHAP: Interpreting Large Language Models with Monte Carlo Shapley Value Estimation"
Authors: Multiple authors
Why This Paper Matters:
First rigorous adaptation of Shapley values to variable-length text in LLMs. Shapley is the theoretically optimal attribution method (satisfies all fairness axioms), but exact computation is exponential in sequence length. TokenSHAP makes it tractable via Monte Carlo sampling.
Key Contribution:
Uses Monte Carlo sampling to estimate each token's marginal contribution averaged across all possible token subsets. Measures contribution via cosine similarity between TF-IDF vectors of model outputs. Provides confidence bounds on Shapley estimates.
Practical Impact:
Most theoretically principled attribution for LLMs. Better than gradients at handling token interactions and non-linear effects. The method of choice when attribution accuracy matters (legal, medical, financial applications where explanations must be defensible).
Links:
📄 Paper: https://arxiv.org/abs/2407.10114
🌐 HTML: https://arxiv.org/html/2407.10114v1
Title: "Source Attribution in Retrieval-Augmented Generation"
Authors: Multiple authors
Why This Paper Matters:
First rigorous treatment of explainability for RAG systems—the dominant production LLM architecture. Every enterprise AI system (ChatGPT, Claude, custom RAG apps) needs to answer "Which documents support this response?" for compliance, debugging, and trust. This paper provides the methodology.
Key Contribution:
Applies Shapley values to document-level attribution in RAG. Compares exact Shapley (expensive, 2^n document subsets) with approximations (practical). Provides empirical guidance on cost-quality tradeoffs: when cheap approximations suffice vs. when rigorous computation is needed.
Practical Impact:
Makes RAG systems explainable for regulatory compliance (EU AI Act, FDA). Enables debugging why RAG systems hallucinate or cite wrong documents. Already being adopted by RAG infrastructure companies and enterprise AI teams.
Links:
📄 Paper: https://arxiv.org/abs/2507.04480
🌐 HTML: https://arxiv.org/html/2507.04480
Title: "DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning"
Authors: DeepSeek-AI (100+ authors)
Institution: DeepSeek
Why This Paper Matters:
Proved that pure reinforcement learning (without human reasoning examples) produces models with transparent Chain-of-Thought reasoning. Matches OpenAI o1 performance (86.7% on AIME 2024) while being fully open-source. Shows reasoning transparency and capability go hand-in-hand.
Key Contribution:
Demonstrated that reasoning transparency emerges naturally from RL optimization—models learn to "think out loud" without explicit training. Released complete model weights, training methodology, and inference code. Provides evidence that interpretable reasoning aids performance, not just explainability.
Practical Impact:
Sparked global adoption of transparent reasoning models (100,000+ downloads in week 1). Proved you don't need proprietary data to build reasoning LLMs. Shifted industry toward open reasoning models where thought processes are visible and auditable.
Links:
📄 Paper: https://arxiv.org/abs/2501.12948
💻 Model: https://huggingface.co/deepseek-ai/DeepSeek-R1
🎮 Demo: https://chat.deepseek.com/
Essential overviews for understanding the entire field and finding relevant methods.
Title: "Explainability for Large Language Models: A Survey"
Authors: Haiyan Zhao et al.
Institution: Multi-institutional collaboration
Why This Paper Matters:
THE comprehensive survey of LLM explainability. 100+ page systematic review covering attribution, attention analysis, probing, natural language explanations. The canonical reference—most cited survey in the field. Required reading for anyone entering interpretability research.
Key Contribution:
Creates unified taxonomy organized by training paradigm (fine-tuning vs. prompting) and explanation scope (local vs. global). Categorizes methods into: (1) Feature attribution (gradients, attention, perturbation), (2) Model internals (probing, SAEs), (3) Data attribution (training influence), (4) Natural language (self-explanation). Covers evaluation metrics for each category.
Practical Impact:
The go-to reference for understanding what methods exist and when to use each. Maintains online appendix with continuously updated paper list. PhD students use this to find dissertation topics. Industry teams use it to select appropriate methods.
Links:
📄 Paper: https://arxiv.org/abs/2309.01029
📊 ACM Digital Library: https://dl.acm.org/doi/10.1145/3639372
Title: "Mechanistic Interpretability for AI Safety — A Review"
Authors: Leonard F. Bereska, Efstratios Gavves
Institution: University of Amsterdam
Why This Paper Matters:
Definitive taxonomy of mechanistic interpretability organized by methodology and safety applications. Connects interpretability research directly to AI safety goals (deception detection, capability assessment, alignment). The roadmap for safety-focused interpretability work.
Key Contribution:
Two-axis framework: (1) Observational methods (what features exist) vs. Interventional methods (causal analysis), (2) Safety applications for each method (detecting deception, monitoring reasoning, finding vulnerabilities). 200+ paper bibliography organized by method and application. Continuously updated website.
Practical Impact:
Used by AI safety teams at major labs to prioritize research directions. Clarifies which methods are mature vs. speculative. The living website (leonardbereska.github.io) tracks field progress—papers get added as published, problems get removed when solved.
Links:
📄 Paper: https://arxiv.org/abs/2404.14082
🌐 Website: https://leonardbereska.github.io/blog/2024/mechinterpreview/
📚 Bibliography: Comprehensive, continuously updated
Real-world deployment and safety monitoring of AI systems.
Title: "Chain of Thought Monitorability: A New Opportunity for AI Safety"
Authors: Anthropic Team
Institution: Anthropic
Why This Paper Matters:
Establishes framework for monitoring reasoning traces to detect misbehavior, deception, or capability misuse. As AI systems gain autonomy (agents, long-running tasks), CoT monitoring becomes the primary safety layer. Critical for safe deployment of reasoning models.
Key Contribution:
Shows CoT monitoring can detect: (1) Models planning harmful actions, (2) Deceptive reasoning that looks benign on surface, (3) Capability overhang (model can do more than it reveals). Provides empirical evidence across multiple threat models. Defines what makes reasoning traces monitorable vs. obfuscated.
Practical Impact:
Defines safety research agenda for reasoning models. As AI systems become more capable and autonomous, CoT monitoring may be the only practical way to catch problems before deployment. Already being adopted by labs building agentic systems.
Links:
📄 Paper: https://arxiv.org/html/2507.11473v1
Title: "Explainability in the age of large language models for healthcare"
Authors: Munib Mesinovic, Peter Watkinson, Tingting Zhu
Institution: University of Oxford
Why This Paper Matters:
Addresses explainability in the highest-stakes LLM application: medical diagnosis and treatment. Discusses FDA regulatory requirements, clinical workflow integration, and phased deployment from low-risk (administrative) to high-risk (autonomous triage). The blueprint for healthcare AI deployment.
Key Contribution:
Argues explainability cannot sacrifice performance in healthcare—both are requirements. Proposes multi-level framework: (1) Model-level interpretability (SAEs, circuits), (2) Prediction-level explanations (attribution, counterfactuals), (3) Workflow integration (when/how clinicians see explanations), (4) Regulatory compliance (documentation, validation).
Practical Impact:
Guides healthcare AI companies on explainability requirements. Influences FDA guidance for AI medical devices. Shows how academic research translates to regulatory requirements and clinical practice. The template for responsible healthcare AI deployment.
Links:
📄 Paper: https://www.nature.com/articles/s44172-025-00453-y
Title: "Rethinking Interpretability in the Era of LLMs"
Authors: Chandan Singh et al.
Institution: Meta AI, UC Berkeley
Why This Paper Matters:
Argues that LLMs themselves are powerful interpretability tools. Shows GPT-4 can generate natural language explanations for datasets, model behaviors, and scientific phenomena at scale. Demonstrates the paradigm shift: using AI to explain AI, making interpretability accessible to non-experts.
Key Contribution:
Framework for "interpretability via generation": (1) LLM-augmented data analysis (explain dataset patterns), (2) Interactive explanations (users query models via dialogue), (3) Scientific discovery (LLMs propose hypotheses). Discusses when LLM-based explanation is appropriate vs. risky (hallucination concerns).
Practical Impact:
Influenced development of "AI explainer" products. Democratizes interpretability—anyone can query models for explanations without ML expertise. Caution: LLMs can hallucinate explanations, so verification is critical. Sets research agenda for LLM-based interpretability tools.
Links:
📄 Paper: https://arxiv.org/abs/2402.01761
💻 Code: https://github.com/csinva/imodelsX
Why explainability matters at the industry and societal level.
Title: "The Urgency of Interpretability"
Author: Dario Amodei (CEO, Anthropic)
Institution: Anthropic
Venue: Blog Post, 2024
Impact: 500,000+ reads, cited in policy documents
Why This Paper Matters:
Highest-profile argument for interpretability as existential necessity. Amodei (former OpenAI VP Research, Anthropic co-founder) argues interpretability is the most promising path to ensuring AI safety and alignment. Sets ambitious goal: "Interpretability can reliably detect most model problems by 2027."
Key Contribution:
Three core arguments: (1) Interpretability enables early warning systems for dangerous capabilities (detect before deployment), (2) Feature-level understanding allows surgical interventions (e.g., remove "deception" features without retraining), (3) Mechanistic interpretability is scientifically tractable (unlike alignment via fine-tuning, which is empirical guesswork).
Strategic Impact:
Shaped how major AI labs prioritize safety research. Influenced billions in safety funding allocation. Anthropic committed 30% of research budget to interpretability based on this framing. Cited in UK AI Safety Institute's research agenda. Legitimized interpretability as core to AI safety, not peripheral curiosity.
Why This Matters for Engineers:
When the CEO of a leading AI lab says "this is our top priority," it signals industry-wide importance. Interpretability is becoming a requirement, not a nice-to-have. Understanding these methods is increasingly essential for AI careers.
Links:
📄 Blog Post: https://www.darioamodei.com/post/the-urgency-of-interpretability
| # | Paper | Year | Category | Best For | Citations |
|---|---|---|---|---|---|
| 1 | SHAP | 2017 | Foundational | Any model, production | 15,000+ |
| 2 | LIME | 2016 | Foundational | Quick explanations | 12,000+ |
| 3 | Integrated Gradients | 2017 | Foundational | Deep learning | 8,000+ |
| 4 | Grad-CAM | 2017 | Foundational | Computer vision | 8,000+ |
| 5 | Scaling Monosemanticity | 2024 | Mechanistic | Production LLMs | 400+ |
| 6 | Scaling SAEs | 2024 | Mechanistic | GPT-4 internals | 200+ |
| 7 | Gemma Scope | 2024 | Mechanistic | Open research | 100+ |
| 8 | LMs Explain Neurons | 2023 | Mechanistic | Automated interp | 500+ |
| 9 | SAEs Find Features | 2024 | Mechanistic | SAE foundations | 250+ |
| 10 | Transformer Interp | 2021 | Transformer | ViTs, Transformers | 1,200+ |
| 11 | AttnLRP | 2024 | Transformer | High-quality attr | 50+ |
| 12 | TokenSHAP | 2024 | LLM | LLM attribution | 30+ |
| 13 | RAG Attribution | 2025 | LLM | RAG systems | 20+ |
| 14 | DeepSeek-R1 | 2025 | LLM | Reasoning | 500+ |
| 15 | LLM XAI Survey | 2024 | Survey | Comprehensive | 600+ |
| 16 | Mech Interp Review | 2024 | Survey | Safety focus | 300+ |
| 17 | CoT Monitoring | 2025 | Safety | AI safety | 80+ |
| 18 | Healthcare LLM XAI | 2025 | Application | Medical AI | 30+ |
| 19 | Rethinking Interp | 2024 | Application | LLM-based XAI | 400+ |
| 20 | Urgency of Interp | 2024 | Strategic | Leadership | Essay |
We welcome contributions! To maintain quality:
Paper Submission Criteria:
- ✅ Directly addresses AI explainability/interpretability
- ✅ Published in peer-reviewed venues or from established institutions
- ✅ Significant citations or novel contribution
- ✅ Freely accessible (arXiv, open access, institutional sites)
How to Contribute:
- Fork the repository
- Add paper with: Full citation, contribution summary, impact statement, links
- Update Quick Reference Table
- Submit PR with clear description
See CONTRIBUTING.md for detailed guidelines.
This collection is licensed under MIT License. All papers remain copyright of their respective authors and publishers.
This collection builds on groundbreaking work from researchers at:
- Anthropic, OpenAI, Google DeepMind, Meta AI, DeepSeek
- University of Washington, Tel Aviv University, UC Berkeley, Georgia Tech, TU Berlin, University of Oxford
- EleutherAI, FAR AI, and the open-source ML community
Maintainer: Community-maintained
Last Updated: December 2025
⭐ Star this repo if it helps your work!