Best AI papers explained
A podcast by Enoch H. Kang
442 Episodes
Decoding Claude Code: Terminal Agent for Developers
Published: 5/7/2025
Emergent Strategic AI Equilibrium from Pre-trained Reasoning
Published: 5/7/2025
Benefiting from Proprietary Data with Siloed Training
Published: 5/6/2025
Advantage Alignment Algorithms
Published: 5/6/2025
Asymptotic Safety Guarantees Based On Scalable Oversight
Published: 5/6/2025
What Makes a Reward Model a Good Teacher? An Optimization Perspective
Published: 5/6/2025
Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
Published: 5/6/2025
Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts
Published: 5/6/2025
You Are What You Eat - AI Alignment Requires Understanding How Data Shapes Structure and Generalisation
Published: 5/6/2025
Interplay of LLMs in Information Retrieval Evaluation
Published: 5/3/2025
Trade-Offs Between Tasks Induced by Capacity Constraints Bound the Scope of Intelligence
Published: 5/3/2025
Toward Efficient Exploration by Large Language Model Agents
Published: 5/3/2025
Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT
Published: 5/2/2025
Self-Consuming Generative Models with Curated Data
Published: 5/2/2025
Bootstrapping Language Models with DPO Implicit Rewards
Published: 5/2/2025
DeepSeek-Prover-V2: Advancing Formal Reasoning
Published: 5/1/2025
THINKPRM: Data-Efficient Process Reward Models
Published: 5/1/2025
Societal Frameworks and LLM Alignment
Published: 4/29/2025
Risks from Multi-Agent Advanced AI
Published: 4/29/2025
Causality-Aware Alignment for Large Language Model Debiasing
Published: 4/29/2025
Cut through the noise: we curate and break down the most important AI papers so you don't have to.