Language Bottleneck Models: A Framework for Interpretable Knowledge Tracing and Beyond
Best AI papers explained - A podcast by Enoch H. Kang
This paper introduces Language Bottleneck Models (LBMs), a novel framework designed to enhance the interpretability and accuracy of Knowledge Tracing (KT) in education. Unlike traditional KT methods that rely on opaque latent embeddings, LBMs leverage Large Language Models (LLMs) to create natural-language summaries of student knowledge states. These summaries act as a "bottleneck," ensuring that all predictive information is concise yet human-understandable, thereby bridging the gap between predictive power and actionable insights for educators. The paper details how LBMs reframe KT as an inverse problem and demonstrates their effectiveness against state-of-the-art methods on both synthetic and real-world datasets, even with significantly less training data. Moreover, it explores the steerability of LBMs, allowing for human intervention in shaping the generated knowledge summaries.
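To make the bottleneck idea concrete, here is a minimal, hypothetical sketch of the information flow an LBM enforces. The paper's actual encoder and decoder are LLMs; the toy rule-based stand-ins below (the function names `encoder` and `decoder` and the skill labels are illustrative, not from the paper) only demonstrate the constraint that every bit of predictive signal must pass through a human-readable text summary.

```python
# Toy illustration of a Language Bottleneck Model (LBM).
# In the paper, both stages are LLMs; here simple rules stand in so the
# example is self-contained. The key property is preserved: the decoder
# sees ONLY the natural-language summary, never the raw history.

def encoder(history):
    """Compress a student's (skill, correct?) interactions into text."""
    mastered = sorted({skill for skill, ok in history if ok})
    struggling = sorted({skill for skill, ok in history if not ok}
                        - set(mastered))
    return (f"Student answers {', '.join(mastered) or 'nothing'} correctly "
            f"but struggles with {', '.join(struggling) or 'nothing'}.")

def decoder(summary, next_skill):
    """Predict the next answer's correctness from the summary alone."""
    # A skill mentioned before the word "correctly" is treated as mastered.
    return next_skill in summary.split("correctly")[0]

history = [("addition", True), ("subtraction", True), ("fractions", False)]
summary = encoder(history)          # the interpretable bottleneck
print(summary)
print(decoder(summary, "addition"))   # predicted correct: True
print(decoder(summary, "fractions"))  # predicted correct: False
```

Because the summary is plain text, an educator can read it, and (as in the paper's steerability experiments) edit it before it is passed to the decoder.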