Best AI papers explained
A podcast by Enoch H. Kang
442 Episodes
Leaked Claude Sonnet 3.7 System Instruction tuning
Published: 5/12/2025
Converging Predictions with Shared Information
Published: 5/11/2025
Test-Time Alignment Via Hypothesis Reweighting
Published: 5/11/2025
Rethinking Diverse Human Preference Learning through Principal Component Analysis
Published: 5/11/2025
Active Statistical Inference
Published: 5/10/2025
Data Mixture Optimization: A Multi-fidelity Multi-scale Bayesian Framework
Published: 5/10/2025
AI-Powered Bayesian Inference
Published: 5/10/2025
Can Unconfident LLM Annotations Be Used for Confident Conclusions?
Published: 5/9/2025
Predictions as Surrogates: Revisiting Surrogate Outcomes in the Age of AI
Published: 5/9/2025
Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control
Published: 5/9/2025
How to Evaluate Reward Models for RLHF
Published: 5/9/2025
LLMs as Judges: Survey of Evaluation Methods
Published: 5/9/2025
The Alternative Annotator Test for LLM-as-a-Judge: How to Statistically Justify Replacing Human Annotators with LLMs
Published: 5/9/2025
Limits to scalable evaluation at the frontier: LLM as Judge won’t beat twice the data
Published: 5/9/2025
Stratified Prediction-Powered Inference for Hybrid Language Model Evaluation
Published: 5/9/2025
Accelerating Unbiased LLM Evaluation via Synthetic Feedback
Published: 5/9/2025
Prediction-Powered Statistical Inference Framework
Published: 5/9/2025
Optimizing Chain-of-Thought Reasoners via Gradient Variance Minimization in Rejection Sampling and RL
Published: 5/9/2025
RM-R1: Reward Modeling as Reasoning
Published: 5/9/2025
Reexamining the Aleatoric and Epistemic Uncertainty Dichotomy
Published: 5/8/2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.