
Bump gsply from 0.2.10 to 0.2.11 #6

Closed
dependabot[bot] wants to merge 1 commit into master from
dependabot/pip/gsply-0.2.11

Conversation


dependabot[bot] commented on behalf of GitHub on Dec 1, 2025

Bumps gsply from 0.2.10 to 0.2.11.

Release notes

Sourced from gsply's releases.

v0.2.11 (GPU Compression Optimization)

Performance Improvements

  • torch.compile() Auto-Optimization: GPU compression now automatically uses torch.compile() when available
    • ~25% faster GPU compression (5.0ms → 4.0ms for 365K Gaussians)
    • Automatic fallback to eager mode if compilation fails (e.g., no Triton/MSVC)
    • Zero configuration required - works transparently
    • Triton backend provides JIT-compiled optimized kernels
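The "compile if possible, otherwise run eagerly" behavior described above can be sketched in a few lines. This is a minimal illustration of the pattern, not gsply's actual code; `maybe_compile`, `pack_positions`, and `failing_compiler` are hypothetical names.

```python
def maybe_compile(fn, compiler=None):
    """Return compiler(fn) if possible; otherwise fall back to the eager fn."""
    if compiler is None:
        try:
            import torch  # torch may be absent entirely
            compiler = torch.compile
        except ImportError:
            return fn
    try:
        # Note: torch.compile itself compiles lazily, so in practice some
        # failures (e.g. missing Triton/MSVC) only surface at the first call.
        return compiler(fn)
    except Exception:
        return fn  # eager fallback: zero configuration, always works

def pack_positions(xs):
    # stand-in for a real packing kernel (hypothetical)
    return [x * 2 for x in xs]

def failing_compiler(fn):
    raise RuntimeError("Triton not available")

packed = maybe_compile(pack_positions, compiler=failing_compiler)
print(packed([1, 2, 3]))  # falls back to eager mode: [2, 4, 6]
```

The same call site is used whether or not compilation succeeded, which is what makes the optimization transparent to callers.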

Bug Fixes

  • GPU Compression Rounding Fix: Fixed quaternion quantization producing 1-bit differences vs CPU
    • Changed from torch.round() (round-half-to-even, i.e. banker's rounding) to adding 0.5 before the integer conversion
    • Now matches CPU Numba behavior exactly
    • Ensures consistent round-trip compression/decompression across CPU and GPU
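The 1-bit mismatch comes from the rounding mode: torch.round() rounds half to even ("banker's rounding"), just like Python's built-in round(), while adding 0.5 before an integer cast rounds half up for non-negative values. A minimal illustration in plain Python (the quantizer names and the single-level setup are hypothetical, chosen only to expose the half-way case):

```python
def quantize_bankers(x, levels=1023):
    # old GPU behavior: round-half-to-even, as torch.round() does
    return round(x * levels)

def quantize_half_up(x, levels=1023):
    # fixed behavior: add 0.5, then truncate to int (non-negative x assumed)
    return int(x * levels + 0.5)

# Exact halves are where the two modes disagree by one code (one bit):
print(quantize_bankers(0.5, levels=1))   # 0  (half rounds toward even)
print(quantize_half_up(0.5, levels=1))   # 1  (half rounds up)
print(quantize_bankers(1.5, levels=1))   # 2  (both agree here)
print(quantize_half_up(1.5, levels=1))   # 2
```

Any quaternion component that lands exactly on a quantization boundary would thus differ by one code between CPU and GPU until both paths used the same rule.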

Platform Support

  • Triton Requirements:
    • Linux: Standard triton package (installed with PyTorch)
    • Windows: Requires triton-windows package for torch.compile optimization
    • Falls back to eager mode automatically when Triton unavailable

Implementation Details

  • Lazy compilation with caching (_COMPILED_FUNCTIONS dict)
  • Runtime error handling catches compilation failures gracefully
  • Individual packing functions compiled separately for better error isolation
  • Position packing: 7.8x speedup with torch.compile
  • Color/opacity packing: 6.2x speedup with torch.compile
  • Quaternion packing: 2.3x speedup with torch.compile
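The lazy, per-function caching described above (`_COMPILED_FUNCTIONS` is named in the notes; everything else here is a hypothetical sketch, not gsply's implementation) can look like this. Compiling each packing function separately means a failure in one does not disable the others:

```python
_COMPILED_FUNCTIONS = {}

def get_compiled(fn, compiler):
    """Compile fn on first use and cache the result; cache the eager fn on failure."""
    if fn not in _COMPILED_FUNCTIONS:
        try:
            _COMPILED_FUNCTIONS[fn] = compiler(fn)
        except Exception:
            # failure is isolated to this one function; others still compile
            _COMPILED_FUNCTIONS[fn] = fn
    return _COMPILED_FUNCTIONS[fn]

# Demo with a counting compiler to show the compile-once behavior:
compile_calls = []

def counting_compiler(fn):
    compile_calls.append(fn.__name__)
    return fn

def pack_quats(q):
    return q  # stand-in for the real quaternion packing kernel

get_compiled(pack_quats, counting_compiler)
get_compiled(pack_quats, counting_compiler)
print(compile_calls)  # compiled exactly once: ['pack_quats']
```

Subsequent calls hit the cache, so the (potentially slow) compilation cost is paid once per function per process.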
Changelog

Sourced from gsply's changelog.

(Identical to the release notes above, with the following addition.)

Testing

  • All 406 tests passing
  • GPU compression roundtrip verified with both torch.compile enabled and disabled


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [gsply](https://github.com/OpsiClear/gsply) from 0.2.10 to 0.2.11.
- [Release notes](https://github.com/OpsiClear/gsply/releases)
- [Changelog](https://github.com/OpsiClear/gsply/blob/master/docs/CHANGELOG.md)
- [Commits](OpsiClear/gsply@v0.2.10...v0.2.11)

---
updated-dependencies:
- dependency-name: gsply
  dependency-version: 0.2.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] commented on behalf of GitHub on Dec 1, 2025

Labels

The following labels could not be found: dependencies, python. Please create them before Dependabot can add them to a pull request.

Please fix the above issues or remove invalid values from dependabot.yml.
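The error above means the labels named in the repository's dependabot.yml do not exist yet. The relevant part of that config looks roughly like the following; the exact file contents are an assumption, not shown in this PR:

```yaml
# .github/dependabot.yml (sketch; only the labels key matters for this error)
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
    labels:
      - "dependencies"   # must already exist as a label in the repo
      - "python"         # ...or be removed from this list
```

Either create the `dependencies` and `python` labels in the repository, or delete the `labels` key from dependabot.yml.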


dependabot[bot] commented on behalf of GitHub on Dec 3, 2025

Looks like gsply is up-to-date now, so this is no longer needed.

dependabot[bot] closed this on Dec 3, 2025
dependabot[bot] deleted the dependabot/pip/gsply-0.2.11 branch on December 3, 2025 at 18:40
