
Conversation

@ycombinator (Contributor)

What is the problem this PR solves?


This PR prevents request bodies of arbitrarily large size from being sent to the POST /api/fleet/uploads API.

How does this PR solve the problem?


This PR checks the size of the request body sent to the POST /api/fleet/uploads API and, if it exceeds the configured limit, rejects the request with an HTTP 413 Request Entity Too Large status code. The limit defaults to 5 MiB but can be overridden in the Fleet Server input configuration via the server.limits.upload_start_limit.max_body_byte_size setting.

How to test this PR locally

Design Checklist

  • I have ensured my design is stateless and will work when multiple fleet-server instances are behind a load balancer.
  • I have or intend to scale test my changes, ensuring it will work reliably with 100K+ agents connected.
  • I have included fail safe mechanisms to limit the load on fleet-server: rate limiting, circuit breakers, caching, load shedding, etc.

Checklist

  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in ./changelog/fragments using the changelog tool

Related issues

@ycombinator ycombinator requested a review from a team as a code owner January 8, 2026 08:51
@mergify

mergify bot commented Jan 8, 2026

This pull request does not have a backport label. Could you fix it @ycombinator? 🙏
To fix this pull request, add the backport labels for the needed branches, such as:

  • backport-./d./d is the label to automatically backport to the 8./d branch (/d is a digit).
  • backport-active-all is the label that automatically backports to all active branches.
  • backport-active-8 is the label that automatically backports to all active minor branches for the 8 major.
  • backport-active-9 is the label that automatically backports to all active minor branches for the 9 major.

@ycombinator ycombinator added the Team:Elastic-Agent-Control-Plane (label for the Agent Control Plane team) and backport-active-all (automated backport with mergify to all the active branches) labels Jan 8, 2026
@cmacknz cmacknz requested a review from pzl January 8, 2026 15:03
@pzl pzl (Member) left a comment


looks good

pointed out a few places where names or messages were unclear about whether they describe large files — i.e., a file itself being something like 3 GB with the config set lower, and either rejecting file body bytes above that limit, or reading the JSON file-description payload, looking at the file.size property, and rejecting based on that. Whereas this PR's limit is placed on the total JSON payload body of the file metadata / header.

The diff in isolation makes it clear which is being addressed, but down the road I can see this being an easy mix-up over which thing is "too large": the file contents itself, or the start / header payload.

ycombinator and others added 2 commits January 22, 2026 07:02
Co-authored-by: Michel Laterman <82832767+michel-laterman@users.noreply.github.com>
pzl previously approved these changes Jan 22, 2026
@michel-laterman michel-laterman (Contributor) left a comment


Do we also want to use the config vars for upload finalize and chunks?

Co-authored-by: Michel Laterman <82832767+michel-laterman@users.noreply.github.com>
@ycombinator (Contributor, Author)

Do we also want to use the config vars for upload finalize and chunks?

Sure, I think this makes sense. Will update PR.

@ycombinator (Contributor, Author)

ycombinator commented Jan 23, 2026

Do we also want to use the config vars for upload finalize and chunks?

Implemented request body size limit for upload finalize API in 3fbd7c7.

For the upload chunk API, I'm seeing this check already:

// prevent over-sized chunks
data := http.MaxBytesReader(w, r.Body, file.MaxChunkSize)

Do we need/want to do more?

@michel-laterman (Contributor)

@pzl, should we allow for users to specify a custom value for file chunk size?

@pzl (Member)

pzl commented Jan 23, 2026

@pzl, should we allow for users to specify a custom value for file chunk size?

Probably not a good idea without a specific reason. This size was chosen quite deliberately. A non-exhaustive list of concerns:

  1. 4 MiB must remain a hard maximum. There are interconnects with Kibana that depend on this.
  2. Setting the value low can drastically change the network traffic. Even halving it to 2 MiB doubles the number of chunks; going lower than that can really balloon the number of requests.
  3. Setting the value low also changes how many documents are used in Elasticsearch. Spreading a file out across many, many documents is less efficient and may cause other issues.
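The request-count concern above is simple ceiling-division arithmetic. A quick sketch for a hypothetical 1 GiB file (the file size is an assumption chosen only to make the numbers round):

```go
package main

import "fmt"

func main() {
	// Hypothetical 1 GiB upload, purely for illustration.
	const fileSize int64 = 1 << 30

	// Chunk sizes: the current 4 MiB maximum, then halved twice.
	for _, chunk := range []int64{4 << 20, 2 << 20, 1 << 20} {
		// Ceiling division: number of chunk requests needed.
		n := (fileSize + chunk - 1) / chunk
		fmt.Printf("%d MiB chunks -> %d requests\n", chunk>>20, n)
	}
}
```

This prints 256, 512, and 1024 requests respectively, matching the point that each halving of the chunk size doubles both the request count and the Elasticsearch document count for the same file.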

@ycombinator ycombinator enabled auto-merge (squash) January 24, 2026 12:25
