The Invisible Leash: Why RLVR May Not Escape Its Origin

Best AI papers explained - A podcast by Enoch H. Kang

This paper explores the limitations of Reinforcement Learning with Verifiable Rewards (RLVR) in expanding the reasoning capabilities of large language models (LLMs). It argues that RLVR primarily functions as a conservative reweighting mechanism, sharpening the precision of solutions the base model can already produce rather than discovering entirely new ones. The paper introduces a theoretical perspective, validated empirically, that RLVR remains tethered to the support of the base model's probability distribution: it cannot sample solutions to which the base model assigns zero probability. Furthermore, a crucial entropy-reward trade-off is identified: while RLVR improves accuracy by concentrating probability mass on high-reward outputs, it simultaneously reduces the diversity of sampled solutions, potentially crowding out correct but underrepresented answers that the base model could still reach. The authors conclude that overcoming these limitations requires explicit exploration mechanisms or hybrid strategies that introduce genuinely new solution pathways.
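A minimal numerical sketch, not taken from the paper or the episode, of the two claims above: if RLVR-style optimization is approximated as an exponential tilt of the base model's distribution toward verifiably correct outputs, then outputs with zero base probability never gain mass (the "invisible leash"), and stronger tilting lowers entropy (the entropy-reward trade-off). The distribution, rewards, and tilt strength `beta` below are illustrative assumptions.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats, ignoring zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Hypothetical base-model distribution over five candidate solutions.
# Solution 5 (index 4) is correct but has ZERO probability under the base model.
base = np.array([0.40, 0.30, 0.20, 0.10, 0.00])
reward = np.array([0.0, 1.0, 0.0, 1.0, 1.0])  # 1 = verifiably correct

for beta in [0.0, 2.0, 8.0]:  # strength of the reward tilt
    tilted = base * np.exp(beta * reward)   # exponential reweighting
    tilted /= tilted.sum()                  # renormalize to a distribution
    print(f"beta={beta:>4}: probs={np.round(tilted, 3)}, entropy={entropy(tilted):.3f}")

# Observed behavior: the zero-probability correct solution never gains mass,
# no matter how large beta becomes, while entropy shrinks as probability
# concentrates on the already-likely correct outputs.
```

Under this simplified view, accuracy on reachable solutions improves as `beta` grows, but the model becomes less diverse and can never recover the correct answer it assigned zero probability to, which is the motivation the summary gives for explicit exploration or hybrid strategies.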