update rsl_rl to 3.1.2 to support state-dependent std dev #3867
Conversation
Greptile Overview
Greptile Summary
This PR updates the RSL-RL library dependency from version 3.0.1 to 3.1.2 across two files to enable state-dependent standard deviation support for RL policies. The changes include updating the pinned version in source/isaaclab_rl/setup.py (line 49) and updating the minimum version constant in scripts/reinforcement_learning/rsl_rl/train.py (line 59). This upgrade is necessary for sim-to-real deployment of manipulation policies in IsaacLab, as referenced in the PR description. The existing version validation infrastructure in the training script will automatically enforce this requirement and guide users to upgrade if needed. The changes are minimal and focused entirely on version numbers, leveraging the codebase's existing RSL-RL integration infrastructure including policy exporters, wrappers, and configuration classes.
PR Description Notes:
- Minor typo: "support use these two changes" should likely be "support these two changes" or "use these two changes"
Important Files Changed
| Filename | Score | Overview |
|---|---|---|
| scripts/reinforcement_learning/rsl_rl/train.py | 5/5 | Updated RSL_RL_VERSION constant from "3.0.1" to "3.1.2" to enforce minimum version requirement |
| source/isaaclab_rl/setup.py | 4/5 | Updated rsl-rl-lib dependency from version 3.0.1 to 3.1.2 in the package requirements |
Confidence score: 4/5
- This PR is safe to merge with low risk as it contains only version string updates
- Score reflects that while the changes are minimal and well-isolated, the PR checklist indicates the changelog and extension.toml version were not updated (which may be required by contribution guidelines), and there is no explicit verification that rsl-rl 3.1.2 maintains backward compatibility with existing IsaacLab RSL-RL integration code
- Pay close attention to source/isaaclab_rl/setup.py to ensure the version bump does not introduce breaking changes in RSL-RL wrappers, exporters, or configuration classes
2 files reviewed, no comments
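The minimum-version gate described in the summary above can be sketched as follows. This is an illustrative sketch only, not the actual train.py implementation; the function name and error wording are assumptions.

```python
# Illustrative sketch of a minimum-version guard like the RSL_RL_VERSION
# check in train.py; the real script's wording and imports may differ.
from importlib.metadata import PackageNotFoundError, version

RSL_RL_VERSION = "3.1.2"  # minimum supported rsl-rl-lib release


def parse_version(v: str) -> tuple:
    """Turn '3.1.2' into (3, 1, 2) for tuple-wise comparison."""
    return tuple(int(part) for part in v.split("."))


def check_rsl_rl_version(minimum: str = RSL_RL_VERSION) -> None:
    try:
        installed = version("rsl-rl-lib")
    except PackageNotFoundError:
        raise RuntimeError("rsl-rl-lib is not installed; run `pip install rsl-rl-lib==3.1.2`.")
    if parse_version(installed) < parse_version(minimum):
        raise RuntimeError(
            f"Found rsl-rl-lib {installed}, but at least {minimum} is required. Please upgrade."
        )
```

This matches the behavior the summary describes: users on an older release get an actionable error telling them to upgrade rather than a cryptic failure later in training.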
Signed-off-by: Ashwin Varghese Kuruttukulam <123109010+ashwinvkNV@users.noreply.github.com>
Greptile Overview
Greptile Summary
Updates the rsl-rl-lib dependency from version 3.0.1 to 3.1.2 to support state-dependent standard deviation and better handling of std_cond_in parameters for sim-to-real deployment policies.
Changes:
- Updates dependency version in `source/isaaclab_rl/setup.py` from `rsl-rl-lib==3.0.1` to `rsl-rl-lib==3.1.2`
- Updates minimum version check in `scripts/reinforcement_learning/rsl_rl/train.py` from `3.0.1` to `3.1.2`
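For orientation, a pinned dependency of this kind typically appears in a setup.py like the fragment below. This is an illustrative fragment under assumed structure, not the contents of the real `source/isaaclab_rl/setup.py`, which declares many more dependencies and metadata fields.

```python
# Illustrative fragment only -- the real source/isaaclab_rl/setup.py
# declares many more dependencies and metadata fields.
from setuptools import setup

setup(
    name="isaaclab_rl",
    install_requires=[
        "rsl-rl-lib==3.1.2",  # bumped from 3.0.1 for state-dependent std support
    ],
)
```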
Key Benefits:
- Adds support for state-dependent standard deviation in actor networks
- Enables better handling of conditional input parameters for policy deployment
- Aligns with upstream rsl_rl improvements needed for IsaacLab sim-to-real policies
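To make the benefit concrete, here is a minimal, framework-free sketch contrasting a single learned log-std with a state-dependent std head. This is a toy illustration with arbitrary weights, not the rsl_rl ActorCritic API.

```python
import math


class ToyGaussianPolicy:
    """Toy contrast between a fixed learned std and a state-dependent std.

    In the fixed case the log-std is one learnable scalar shared across all
    states; in the state-dependent case an extra head maps the observation
    to a per-state log-std. Weights below are arbitrary illustrative values.
    """

    def __init__(self, state_dependent_std: bool):
        self.state_dependent_std = state_dependent_std
        self.fixed_log_std = -0.5       # classic setup: one scalar parameter
        self.w_mean = [0.3, -0.2]       # toy weights for the mean head
        self.w_log_std = [0.10, 0.05]   # toy weights for the std head

    def forward(self, obs):
        mean = sum(w * o for w, o in zip(self.w_mean, obs))
        if self.state_dependent_std:
            # exploration noise varies with the state
            log_std = sum(w * o for w, o in zip(self.w_log_std, obs))
        else:
            # exploration noise is the same everywhere
            log_std = self.fixed_log_std
        return mean, math.exp(log_std)  # (mean, std) of the action Gaussian
```

With a state-dependent std, the policy can be confident (low noise) in well-trained regions of the state space while staying exploratory elsewhere, which is useful when exporting policies for real-robot deployment.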
Confidence Score: 5/5
- This PR is safe to merge with minimal risk - it's a straightforward dependency version bump
- The changes are minimal and focused: only updating version numbers in two locations (setup.py dependency and train.py version check). The update enables new features (state-dependent std deviation) that are backward-compatible with existing code. The exporter and wrapper code already handle the actor network structure generically, so no breaking changes are expected.
- No files require special attention
Important Files Changed
| Filename | Score | Overview |
|---|---|---|
| source/isaaclab_rl/setup.py | 5/5 | Updates rsl-rl-lib dependency from 3.0.1 to 3.1.2 |
| scripts/reinforcement_learning/rsl_rl/train.py | 5/5 | Updates minimum required rsl-rl version constant from 3.0.1 to 3.1.2 |
Sequence Diagram
sequenceDiagram
participant User
participant TrainScript as train.py
participant Setup as setup.py
participant RslRl as rsl-rl-lib
participant ActorNet as ActorCritic
User->>TrainScript: Execute training
TrainScript->>TrainScript: Check version >= 3.1.2
alt Version check fails
TrainScript-->>User: Error: Upgrade required
else Version check passes
TrainScript->>Setup: Load dependencies
Setup->>RslRl: Import rsl-rl-lib 3.1.2
RslRl-->>TrainScript: Library loaded
TrainScript->>RslRl: Create OnPolicyRunner
RslRl->>ActorNet: Initialize with state-dependent std
ActorNet-->>RslRl: Policy ready
RslRl-->>TrainScript: Runner initialized
TrainScript->>RslRl: Start training
RslRl->>ActorNet: Train with std_cond_in support
ActorNet-->>RslRl: Updated weights
RslRl-->>TrainScript: Training complete
TrainScript-->>User: Export for deployment
end
2 files reviewed, no comments
Description
Updating rsl_rl to the 3.1.2 release to support these two changes in IsaacLab sim-to-real deployed policies.
Main commits of interest:
leggedrobotics/rsl_rl@530f71a
leggedrobotics/rsl_rl@a4d108a
Type of change
New feature (non-breaking change which adds functionality)
I have read and understood the contribution guidelines
I have run the
pre-commitchecks with./isaaclab.sh --formatI have made corresponding changes to the documentation
My changes generate no new warnings
I have updated the changelog and the corresponding version in the extension's config/extension.toml file
I have added my name to the CONTRIBUTORS.md or my name already exists there