Best AI papers explained
A podcast by Enoch H. Kang
440 Episodes
Training a Generally Curious Agent
Published: 6/12/2025
Estimation of Treatment Effects Under Nonstationarity via Truncated Difference-in-Q’s
Published: 6/12/2025
Strategy Coopetition Explains the Emergence and Transience of In-Context Learning
Published: 6/12/2025
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
Published: 6/11/2025
Agentic Supernet for Multi-agent Architecture Search
Published: 6/11/2025
Sample Complexity and Representation Ability of Test-time Scaling Paradigms
Published: 6/11/2025
Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators
Published: 6/10/2025
LLMs Get Lost In Multi-Turn Conversation
Published: 6/9/2025
PromptPex: Automatic Test Generation for Prompts
Published: 6/8/2025
General Agents Need World Models
Published: 6/8/2025
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models
Published: 6/7/2025
Decisions With Algorithms
Published: 6/7/2025
Adapting, fast and slow: Causal Approach to Few-Shot Sequence Learning
Published: 6/6/2025
Conformal Arbitrage for LLM Objective Balancing
Published: 6/6/2025
Simulation-Based Inference for Adaptive Experiments
Published: 6/6/2025
Agents as Tool-Use Decision-Makers
Published: 6/6/2025
Quantitative Judges for Large Language Models
Published: 6/6/2025
Self-Challenging Language Model Agents
Published: 6/6/2025
Learning to Explore: An In-Context Learning Approach for Pure Exploration
Published: 6/6/2025
How Bidirectionality Helps Language Models Learn Better via Dynamic Bottleneck Estimation
Published: 6/6/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.