
Fix packetbeat cache janitor goroutine leak #48836

Merged
stanek-michal merged 25 commits into main from packetbeat-cache-janitor-goroutine-leak
Mar 31, 2026

Conversation

@stanek-michal
Contributor

@stanek-michal stanek-michal commented Feb 13, 2026

Proposed commit message

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works. Where relevant, I have used the stresstest.sh script to run them under stress conditions and race detector to verify their stability.
  • I have added an entry in ./changelog/fragments using the changelog tool.

Disruptive User Impact

Author's Checklist

  • [ ]

How to test this PR locally

Related issues

Use cases

Screenshots

Logs

Protocol plugins (dns, tcp, mysql, pgsql, mongodb, thrift, amqp,
nfs/rpc, icmp) start cache janitor goroutines via StartJanitor() but
never call StopJanitor() when the plugin is destroyed during a
configuration reload. Each reload cycle leaks two goroutines (one from
the DNS/protocol cache, one from the TCP stream cache) and their
associated map allocations (~3 MB per cache with the default 64k-slot
hash size). Under Fleet management, where policy revisions trigger
frequent runner restarts, this causes unbounded memory growth.

Add a PluginCloser interface and Close() methods to all protocol
plugins that use caches, and call them from the sniffer cleanup path
so janitor goroutines are stopped when the sniffer is torn down.

The TransactionPublisher.worker goroutine exits when p.done is closed
during Stop(), but never calls client.Close() on the beat.Client it
holds. Each configuration reload creates a new client via
CreateReporter() that is never released, leaking pipeline client
resources.

Add defer client.Close() to the worker so clients are properly
released when the publisher stops.

StopJanitor closed the janitorQuit channel but never nilled it out,
so a second call would panic on closing an already-closed channel.
Nil the channel after close so repeated calls are safe.
@stanek-michal stanek-michal requested review from a team as code owners February 13, 2026 03:45
@botelastic botelastic bot added the needs_team Indicates that the issue/PR needs a Team:* label label Feb 13, 2026
@github-actions
Contributor

🤖 GitHub comments

Just comment with:

  • run docs-build : Re-trigger the docs validation. (use unformatted text in the comment!)

@mergify
Contributor

mergify bot commented Feb 13, 2026

This pull request does not have a backport label.
If this is a bug or security fix, could you label this PR @stanek-michal? 🙏.
For such, you'll need to label your PR with:

  • The upcoming major version of the Elastic Stack
  • The upcoming minor version of the Elastic Stack (if you're not pushing a breaking change)

To fixup this pull request, you need to add the backport labels for the needed
branches, such as:

  • backport-8./d is the label to automatically backport to the 8./d branch. /d is the digit
  • backport-active-all is the label that automatically backports to all active branches.
  • backport-active-8 is the label that automatically backports to all active minor branches for the 8 major.
  • backport-active-9 is the label that automatically backports to all active minor branches for the 9 major.

@pierrehilbert pierrehilbert added the Team:Elastic-Agent-Data-Plane Label for the Agent Data Plane team label Feb 13, 2026
@elasticmachine
Contributor

Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)

@botelastic botelastic bot removed the needs_team Indicates that the issue/PR needs a Team:* label label Feb 13, 2026
@pierrehilbert pierrehilbert added needs_team Indicates that the issue/PR needs a Team:* label Team:Security-Linux Platform Linux Platform Team in Security Solution labels Feb 13, 2026
@botelastic botelastic bot removed the needs_team Indicates that the issue/PR needs a Team:* label label Feb 13, 2026
@elasticmachine
Contributor

Pinging @elastic/sec-linux-platform (Team:Security-Linux Platform)

Contributor

@leehinman leehinman left a comment

in general adding the Closer LGTM

any chance you could add a test that uses go.uber.org/goleak ? That has helped us catch and track down leaks with beatreceiver startup and shutdown.

func VerifyNoLeaks(t *testing.T) {
	skipped := []goleak.Option{
		// See https://github.com/microsoft/go-winio/issues/272
		goleak.IgnoreAnyFunction("github.com/Microsoft/go-winio.getQueuedCompletionStatus"),
		// False positive, from init in cloud.google.com/go/pubsub and filebeat/input/gcppubsub.
		// See https://github.com/googleapis/google-cloud-go/issues/10948
		// and https://github.com/census-instrumentation/opencensus-go/issues/1191
		goleak.IgnoreAnyFunction("go.opencensus.io/stats/view.(*worker).start"),
		// On Linux, mainly arm64, some HTTP transport goroutines are leaked while still dialing.
		goleak.IgnoreAnyFunction("net.(*netFD).connect"),
		goleak.IgnoreAnyFunction("net.(*netFD).connect.func2"),
		goleak.IgnoreAnyFunction("net/http.(*Transport).startDialConnForLocked"),
	}
	goleak.VerifyNone(t, skipped...)
}

@stanek-michal
Contributor Author

any chance you could add a test that uses go.uber.org/goleak ? That has helped us catch and track down leaks with beatreceiver startup and shutdown.

done 209a469

@stanek-michal stanek-michal added backport-active-8 Automated backport with mergify to all the active 8.[0-9]+ branches backport-active-9 Automated backport with mergify to all the active 9.[0-9]+ branches bugfix labels Feb 17, 2026
@stanek-michal stanek-michal requested a review from a team as a code owner February 19, 2026 15:20
andrewkroh

This comment was marked as duplicate.

@stanek-michal stanek-michal force-pushed the packetbeat-cache-janitor-goroutine-leak branch from d8e78e2 to 25bb5a8 Compare February 19, 2026 16:54
Member

@andrewkroh andrewkroh left a comment

Synchronization fix LGTM.

@coderabbitai

coderabbitai bot commented Mar 2, 2026

No actionable comments were generated in the recent review. 🎉


📥 Commits

Reviewing files that changed from the base of the PR and between 4a47937 and ffef2c5.

📒 Files selected for processing (1)
  • packetbeat/protos/thrift/thrift.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • packetbeat/protos/thrift/thrift.go

📝 Walkthrough

Walkthrough

Refactors janitor lifecycle management: libbeat cache uses synchronized quit channels to start/stop janitors. Adds a PluginCloser interface and ProtocolsStruct.Close. Many packetbeat protocol plugins (AMQP, DNS, ICMP, MongoDB, MySQL, NFS, PostgreSQL, TCP, Thrift) gain Close methods to stop their janitors. Sniffer and decoder code now propagate and invoke cleanup functions when decoders change. Multiple handlers hardened with safe type assertions and payload/size guards. New unit tests verify decoder and janitor cleanup behavior.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packetbeat/protos/thrift/thrift.go`:
- Around line 234-236: The Close path currently only calls
thrift.transactions.StopJanitor() but doesn't stop the goroutine started in
thrift.init() that runs thrift.publishTransactions() and ranges over
thrift.publishQueue; modify thriftPlugin to signal and wait for that publisher
to terminate: add a shutdown mechanism (e.g., a done channel or a sync.WaitGroup
plus a closed/closing flag) to the thriftPlugin struct, have Close()
signal/close that channel (or set the flag and close publishQueue in a
coordinated way) and then wait for the publishTransactions goroutine to exit;
ensure producers check the closing flag or observe the closed channel before
sending to thrift.publishQueue so sends cannot occur after Close() begins, and
update thrift.init(), publishTransactions(), and Close() to use this
coordination.


📥 Commits

Reviewing files that changed from the base of the PR and between e14527c and 4a47937.

📒 Files selected for processing (18)
  • changelog/fragments/1771343134-packetbeat-libbeat-janitor-cleanup-lifecycle-fix.yaml
  • libbeat/common/cache.go
  • packetbeat/protos/amqp/amqp.go
  • packetbeat/protos/dns/dns.go
  • packetbeat/protos/icmp/icmp.go
  • packetbeat/protos/icmp/icmp_test.go
  • packetbeat/protos/mongodb/mongodb.go
  • packetbeat/protos/mysql/mysql.go
  • packetbeat/protos/nfs/rpc.go
  • packetbeat/protos/pgsql/pgsql.go
  • packetbeat/protos/protos.go
  • packetbeat/protos/tcp/tcp.go
  • packetbeat/protos/thrift/thrift.go
  • packetbeat/publish/publish.go
  • packetbeat/sniffer/decoders.go
  • packetbeat/sniffer/decoders_test.go
  • packetbeat/sniffer/sniffer.go
  • packetbeat/sniffer/sniffer_test.go
💤 Files with no reviewable changes (1)
  • packetbeat/protos/icmp/icmp_test.go

Close the thrift publish queue in Close so publishTransactions can exit cleanly and avoid a lingering goroutine after shutdown.

Made-with: Cursor
@stanek-michal
Contributor Author

Whoops, I gave it a final look and realized it needs to be reworked; I'll fix it on Monday. The protocol janitors were being closed prematurely (so we'd leak memory), and the thrift part has a panic risk.

stanek-michal and others added 2 commits March 31, 2026 02:29
protocols.Close() was being called from per-decoder cleanup, but the
protocols instance outlives the decoder — it is created once per
interface in setupSniffer and reused across decoder rebuilds on
link-type changes. Calling Close() on decoder replacement could stop
protocol janitors while the analyzers were still in use, and in the
case of Thrift could panic on double channel close.

Move protocols.Close() to run once after Sniffer.Run() exits, matching
the actual lifetime of the protocols object. Decoder-level cleanup
(ICMP, TCP, UDP) remains per-decoder as before.
@stanek-michal
Contributor Author

fixed the lifetimes - now protocol janitors are cleaned up along with the sniffers.

@stanek-michal stanek-michal merged commit 70b37f2 into main Mar 31, 2026
204 checks passed
@stanek-michal stanek-michal deleted the packetbeat-cache-janitor-goroutine-leak branch March 31, 2026 10:30
@github-actions
Contributor

@Mergifyio backport 8.19 9.2 9.3

@mergify
Contributor

mergify bot commented Mar 31, 2026

backport 8.19 9.2 9.3

✅ Backports have been created

Details

Cherry-pick of 70b37f2 has failed:

On branch mergify/bp/8.19/pr-48836
Your branch is up to date with 'origin/8.19'.

You are currently cherry-picking commit 70b37f21a.
  (fix conflicts and run "git cherry-pick --continue")
  (use "git cherry-pick --skip" to skip this patch)
  (use "git cherry-pick --abort" to cancel the cherry-pick operation)

Changes to be committed:
	new file:   changelog/fragments/1771343134-packetbeat-libbeat-janitor-cleanup-lifecycle-fix.yaml
	modified:   libbeat/common/cache.go
	modified:   packetbeat/protos/amqp/amqp.go
	modified:   packetbeat/protos/dns/dns.go
	modified:   packetbeat/protos/icmp/icmp.go
	modified:   packetbeat/protos/icmp/icmp_test.go
	modified:   packetbeat/protos/mongodb/mongodb.go
	modified:   packetbeat/protos/mysql/mysql.go
	modified:   packetbeat/protos/nfs/rpc.go
	modified:   packetbeat/protos/pgsql/pgsql.go
	modified:   packetbeat/protos/protos.go
	modified:   packetbeat/protos/tcp/tcp.go
	modified:   packetbeat/protos/thrift/thrift.go
	modified:   packetbeat/publish/publish.go
	modified:   packetbeat/sniffer/decoders.go
	new file:   packetbeat/sniffer/decoders_test.go
	modified:   packetbeat/sniffer/sniffer_test.go

Unmerged paths:
  (use "git add <file>..." to mark resolution)
	both modified:   packetbeat/beater/processor.go
	both modified:   packetbeat/sniffer/sniffer.go

To fix up this pull request, you can check it out locally. See documentation: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/checking-out-pull-requests-locally

Cherry-pick of 70b37f2 has failed:

On branch mergify/bp/9.2/pr-48836
Your branch is up to date with 'origin/9.2'.

You are currently cherry-picking commit 70b37f21a.
  (fix conflicts and run "git cherry-pick --continue")
  (use "git cherry-pick --skip" to skip this patch)
  (use "git cherry-pick --abort" to cancel the cherry-pick operation)

Changes to be committed:
	new file:   changelog/fragments/1771343134-packetbeat-libbeat-janitor-cleanup-lifecycle-fix.yaml
	modified:   libbeat/common/cache.go
	modified:   packetbeat/protos/amqp/amqp.go
	modified:   packetbeat/protos/dns/dns.go
	modified:   packetbeat/protos/icmp/icmp.go
	modified:   packetbeat/protos/icmp/icmp_test.go
	modified:   packetbeat/protos/mongodb/mongodb.go
	modified:   packetbeat/protos/mysql/mysql.go
	modified:   packetbeat/protos/nfs/rpc.go
	modified:   packetbeat/protos/pgsql/pgsql.go
	modified:   packetbeat/protos/protos.go
	modified:   packetbeat/protos/tcp/tcp.go
	modified:   packetbeat/protos/thrift/thrift.go
	modified:   packetbeat/publish/publish.go
	modified:   packetbeat/sniffer/decoders.go
	new file:   packetbeat/sniffer/decoders_test.go
	modified:   packetbeat/sniffer/sniffer_test.go

Unmerged paths:
  (use "git add <file>..." to mark resolution)
	both modified:   packetbeat/beater/processor.go
	both modified:   packetbeat/sniffer/sniffer.go

To fix up this pull request, you can check it out locally. See documentation: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/checking-out-pull-requests-locally

mergify bot pushed a commit that referenced this pull request Mar 31, 2026
* packetbeat: stop cache janitor goroutines on protocol teardown

Protocol plugins (dns, tcp, mysql, pgsql, mongodb, thrift, amqp,
nfs/rpc, icmp) start cache janitor goroutines via StartJanitor() but
never call StopJanitor() when the plugin is destroyed during a
configuration reload. Each reload cycle leaks two goroutines (one from
the DNS/protocol cache, one from the TCP stream cache) and their
associated map allocations (~3 MB per cache with the default 64k-slot
hash size). Under Fleet management, where policy revisions trigger
frequent runner restarts, this causes unbounded memory growth.

Add a PluginCloser interface and Close() methods to all protocol
plugins that use caches, and call them from the sniffer cleanup path
so janitor goroutines are stopped when the sniffer is torn down.

* packetbeat: close pipeline clients when publisher worker exits

The TransactionPublisher.worker goroutine exits when p.done is closed
during Stop(), but never calls client.Close() on the beat.Client it
holds. Each configuration reload creates a new client via
CreateReporter() that is never released, leaking pipeline client
resources.

Add defer client.Close() to the worker so clients are properly
released when the publisher stops.

* libbeat: make Cache.StopJanitor idempotent

StopJanitor closed the janitorQuit channel but never nilled it out,
so a second call would panic on closing an already-closed channel.
Nil the channel after close so repeated calls are safe.

* packetbeat: add goroutine leak regression test for decoder cleanup

* packetbeat: fix janitor leaks and decoder lifecycle on dynamic interface changes

* changelog: add fragment for janitor and decoder cleanup fixes

* changelog: fixup fragment

* packetbeat: fix golangci-lint findings in touched files

* packetbeat: more linter fixes

* filebeat: fix AD memberOf filter test expectations

* packetbeat: Add robust synchronization for Start/StopJanitor

* packetbeat/thrift: stop publisher goroutine on shutdown

Close the thrift publish queue in Close so publishTransactions can exit cleanly and avoid a lingering goroutine after shutdown.

Made-with: Cursor

* packetbeat: move protocols.Close() to sniffer shutdown

protocols.Close() was being called from per-decoder cleanup, but the
protocols instance outlives the decoder — it is created once per
interface in setupSniffer and reused across decoder rebuilds on
link-type changes. Calling Close() on decoder replacement could stop
protocol janitors while the analyzers were still in use, and in the
case of Thrift could panic on double channel close.

Move protocols.Close() to run once after Sniffer.Run() exits, matching
the actual lifetime of the protocols object. Decoder-level cleanup
(ICMP, TCP, UDP) remains per-decoder as before.

(cherry picked from commit 70b37f2)

# Conflicts:
#	packetbeat/beater/processor.go
#	packetbeat/sniffer/sniffer.go
mergify bot pushed a commit that referenced this pull request Mar 31, 2026
(same commit message as the backport above; cherry picked from commit 70b37f2, with conflicts in packetbeat/beater/processor.go and packetbeat/sniffer/sniffer.go)
mergify bot pushed a commit that referenced this pull request Mar 31, 2026
(same commit message as the backport above; cherry picked from commit 70b37f2)

Labels

backport-active-8 Automated backport with mergify to all the active 8.[0-9]+ branches
backport-active-9 Automated backport with mergify to all the active 9.[0-9]+ branches
bugfix
Team:Elastic-Agent-Data-Plane Label for the Agent Data Plane team
Team:Security-Linux Platform Linux Platform Team in Security Solution

Projects

None yet

Development

Successfully merging this pull request may close these issues.

7 participants