441 Episodes

  1. How Bidirectionality Helps Language Models Learn Better via Dynamic Bottleneck Estimation (Published: 6/6/2025)
  2. A Closer Look at Bias and Chain-of-Thought Faithfulness of Large (Vision) Language Models (Published: 6/5/2025)
  3. Simplifying Bayesian Optimization Via In-Context Direct Optimum Sampling (Published: 6/5/2025)
  4. Bayesian Teaching Enables Probabilistic Reasoning in Large Language Models (Published: 6/5/2025)
  5. IPO: Interpretable Prompt Optimization for Vision-Language Models (Published: 6/5/2025)
  6. Evolutionary Prompt Optimization discovers emergent multimodal reasoning strategies (Published: 6/5/2025)
  7. Evaluating the Unseen Capabilities: How Many Theorems Do LLMs Know? (Published: 6/4/2025)
  8. Diffusion Guidance Is a Controllable Policy Improvement Operator (Published: 6/2/2025)
  9. Alita: Generalist Agent With Self-Evolution (Published: 6/2/2025)
  10. A Snapshot of Influence: A Local Data Attribution Framework for Online Reinforcement Learning (Published: 6/2/2025)
  11. Learning Compositional Functions with Transformers from Easy-to-Hard Data (Published: 6/2/2025)
  12. Preference Learning with Response Time (Published: 6/2/2025)
  13. Accelerating RL for LLM Reasoning with Optimal Advantage Regression (Published: 5/31/2025)
  14. Algorithms for reliable decision-making need causal reasoning (Published: 5/31/2025)
  15. Belief Attribution as Mental Explanation: The Role of Accuracy, Informativity, and Causality (Published: 5/31/2025)
  16. Distances for Markov chains from sample streams (Published: 5/31/2025)
  17. When and Why LLMs Fail to Reason Globally (Published: 5/31/2025)
  18. IDA-Bench: Evaluating LLMs on Interactive Guided Data Analysis (Published: 5/31/2025)
  19. No Free Lunch: Non-Asymptotic Analysis of Prediction-Powered Inference (Published: 5/31/2025)

Page 7 of 23

Cut through the noise. We curate and break down the most important AI papers so you don’t have to.