Reinforcement learning (RL) has improved the reasoning abilities of large language models (LLMs), yet state-of-the-art methods still fail to learn on many training problems. On hard problems, on-policy RL rarely explores even a single correct rollout, yielding zero reward and no learning signal for driving improvement. We find that natural solutions to remedy this exploration problem from classical RL, such as entropy bonuses, more permissive clipping of the importance ratio, or direct optimization of pass@k objectives, do not resolve this issue and often destabilize optimization without improving solvability. A natural alternative is to leverage transfer from easier problems. However, we show that mixing easy and hard problems during RL training is counterproductive due to ray interference, where optimization focuses on already-solvable problems in a way that actively inhibits progress on harder ones. To address this challenge, we introduce Privileged On-Policy Exploration (POPE), an approach that leverages human- or other oracle solutions as privileged information to guide exploration on hard problems, unlike methods that use oracle solutions as training targets (e.g., off-policy RL methods or warmstarting from SFT). POPE augments hard problems with prefixes of oracle solutions, enabling RL to obtain non-zero rewards during guided rollouts. Crucially, the resulting behaviors transfer back to the original, unguided problems through a synergy between instruction-following and reasoning. Empirically, POPE expands the set of solvable problems and substantially improves performance on challenging reasoning benchmarks.
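The core mechanism described above — augmenting a hard problem with a prefix of an oracle solution so that on-policy rollouts can reach non-zero reward — can be sketched roughly as follows. This is an illustrative sketch only; the function and prompt format here are assumptions, not the repository's actual API:

```python
def make_guided_variants(problem: str, oracle_solution: str,
                         prefix_fractions=(0.25, 0.5, 0.75)):
    """Build guided copies of a hard problem by prepending truncated
    oracle-solution prefixes (illustrative sketch, not the repo API)."""
    tokens = oracle_solution.split()
    variants = []
    for frac in prefix_fractions:
        cut = max(1, int(len(tokens) * frac))
        hint = " ".join(tokens[:cut])
        # The policy continues from the hint during guided rollouts;
        # reward is still computed on the final answer.
        variants.append(
            f"{problem}\n\nPartial solution to continue from:\n{hint}"
        )
    return variants

guided = make_guided_variants(
    "Prove that the sum of two odd integers is even.",
    "Let a = 2m + 1 and b = 2n + 1. Then a + b = 2(m + n + 1), "
    "which is even.",
)
print(len(guided))  # one guided variant per prefix fraction
```

Longer prefixes make guided rollouts more likely to succeed, while shorter ones keep the problem closer to its original, unguided form.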
Install the required dependencies from the root directory:

```bash
# Dependencies for training
./install.sh
```

Authenticate with the Hugging Face Hub (required for loading datasets and pushing checkpoints):

```bash
huggingface-cli login
```

To verify that your environment is set up correctly, run:

```bash
scripts/run.sh
```

See the scripts in `scripts/` and configuration files in `conf/` for:
- Model and tokenizer setup,
- RL hyperparameters (e.g., rollout count, token budgets),
- Dataset definitions and mixing ratios.
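As a rough illustration of what dataset mixing ratios control, a weighted sampler might look like the following. The dataset names and ratios here are hypothetical, not taken from the repository's configs:

```python
import random

def sample_mixture(datasets: dict, ratios: dict, n: int, seed: int = 0):
    """Draw n problems from several datasets according to mixing ratios
    (hypothetical sketch of the behavior configured under conf/)."""
    rng = random.Random(seed)
    names = list(datasets)
    weights = [ratios[name] for name in names]
    batch = []
    for _ in range(n):
        name = rng.choices(names, weights=weights, k=1)[0]
        batch.append(rng.choice(datasets[name]))
    return batch

batch = sample_mixture(
    {"unguided": ["u1", "u2"], "guided": ["g1", "g2"]},
    {"unguided": 0.3, "guided": 0.7},
    n=8,
)
print(len(batch))  # 8
```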
Datasets used in this project—including hard problem sets and POPE-style guided variants—are available on Hugging Face:
👉 https://huggingface.co/collections/CMU-AIRe/pope
If you use this code or the POPE method in your work, please cite:
```bibtex
@misc{qu2026popelearningreasonhard,
  title={POPE: Learning to Reason on Hard Problems via Privileged On-Policy Exploration},
  author={Yuxiao Qu and Amrith Setlur and Virginia Smith and Ruslan Salakhutdinov and Aviral Kumar},
  year={2026},
  eprint={2601.18779},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2601.18779},
}
```