442 Episodes

  1. Test-Time Reinforcement Learning (TTRL)

    Published: 5/27/2025
  2. Interpreting Emergent Planning in Model-Free Reinforcement Learning

    Published: 5/26/2025
  3. Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems

    Published: 5/26/2025
  4. Beyond Reward Hacking: Causal Rewards for Large Language Model Alignment

    Published: 5/26/2025
  5. Learning How Hard to Think: Input-Adaptive Allocation of LM Computation

    Published: 5/26/2025
  6. Highlighting What Matters: Promptable Embeddings for Attribute-Focused Image Retrieval

    Published: 5/26/2025
  7. UFT: Unifying Supervised and Reinforcement Fine-Tuning

    Published: 5/26/2025
  8. Understanding High-Dimensional Bayesian Optimization

    Published: 5/26/2025
  9. Inference-Time Alignment in Continuous Space

    Published: 5/25/2025
  10. Efficient Test-Time Scaling via Self-Calibration

    Published: 5/25/2025
  11. Conformal Prediction via Bayesian Quadrature

    Published: 5/25/2025
  12. Predicting from Strings: Language Model Embeddings for Bayesian Optimization

    Published: 5/25/2025
  13. Self-Evolving Curriculum for LLM Reasoning

    Published: 5/25/2025
  14. Online Decision-Focused Learning in Dynamic Environments

    Published: 5/25/2025
  15. FisherSFT: Data-Efficient Supervised Fine-Tuning of Language Models Using Information Gain

    Published: 5/25/2025
  16. Reward Shaping from Confounded Offline Data

    Published: 5/25/2025
  17. Trajectory Bellman Residual Minimization: A Simple Value-Based Method for LLM Reasoning

    Published: 5/25/2025
  18. Understanding Best-of-N Language Model Alignment

    Published: 5/25/2025
  19. Maximizing Acquisition Functions for Bayesian Optimization - and Its Relation to Gradient Descent

    Published: 5/24/2025
  20. Bayesian Prompt Ensembles: Model Uncertainty Estimation for Black-Box Large Language Models

    Published: 5/24/2025


Cut through the noise. We curate and break down the most important AI papers so you don’t have to.