@sudhakarsingh27 (Collaborator) commented Sep 19, 2025

Description

Allows applying RoPE embeddings with "offsets" to sequences in packed (thd) or batched (bshd/sbhd) formats. The per-sequence offset can be supplied via a start_positions tensor like so:

```python
# in case one wants to offset each sequence's RoPE by its start position in the
# packed buffer (note: per-sequence offsets, so cu_seqlens[:-1], not the scalar cu_seqlens[-1])
start_positions = cu_seqlens[:-1]

x_with_rope = apply_rotary_pos_emb(x, rope_embedding, start_positions=start_positions, ...)
```
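
For concreteness, a minimal end-to-end sketch of the thd usage above (shapes and values are hypothetical; assumes a CUDA device and the start_positions argument added in this PR):

```python
import torch
from transformer_engine.pytorch.attention import (
    RotaryPositionEmbedding,
    apply_rotary_pos_emb,
)

# Two sequences of lengths 5 and 3 packed into one "thd" buffer.
total_tokens, heads, dim = 8, 4, 64
x = torch.randn(total_tokens, heads, dim, device="cuda")
cu_seqlens = torch.tensor([0, 5, 8], dtype=torch.int32, device="cuda")

# RoPE frequencies of shape [max_seq_len, 1, 1, dim].
rope_embedding = RotaryPositionEmbedding(dim)(max_seq_len=32)

# One RoPE start offset per sequence; here, each sequence's start
# position in the packed buffer.
start_positions = cu_seqlens[:-1]

x_with_rope = apply_rotary_pos_emb(
    x,
    rope_embedding,
    tensor_format="thd",
    fused=True,
    cu_seqlens=cu_seqlens,
    start_positions=start_positions,
)
```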

Fixes NVIDIA-NeMo/NeMo#14611

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

  • Enable start_positions to work in the backward pass (i.e., allow applying RoPE with a per-sequence offset) for the fused kernels (particularly for the thd format).
  • Make the "bshd"/"sbhd" formats also work with the start_positions argument, for completeness (see the sketch after this list).
  • Add tests covering all of these formats.
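
A minimal sketch exercising a batched format together with the backward pass (hypothetical shapes; assumes a CUDA device and the signature introduced here):

```python
import torch
from transformer_engine.pytorch.attention import (
    RotaryPositionEmbedding,
    apply_rotary_pos_emb,
)

# "bshd" batch of 2 sequences; e.g., decoding after prefills of lengths 3 and 7.
batch, seqlen, heads, dim = 2, 4, 4, 64
x = torch.randn(batch, seqlen, heads, dim, device="cuda", requires_grad=True)
freqs = RotaryPositionEmbedding(dim)(max_seq_len=32)

# One RoPE start offset per batch element.
start_positions = torch.tensor([3, 7], dtype=torch.int32, device="cuda")

out = apply_rotary_pos_emb(
    x, freqs, tensor_format="bshd", fused=True, start_positions=start_positions
)

# Per this PR, start_positions is now also honored in the backward pass.
out.backward(torch.ones_like(out))
```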

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Signed-off-by: Sudhakar Singh <sudhakars@nvidia.com>
@sudhakarsingh27 (Collaborator Author) commented:

/te-ci pytorch L0

@sudhakarsingh27 (Collaborator Author) commented:

/te-ci pytorch L0

@sudhakarsingh27 (Collaborator Author) commented:

/te-ci pytorch L0
