3 changes: 3 additions & 0 deletions registry/registry.go
@@ -164,6 +164,9 @@ func NewRegistry(ctx context.Context, config *configuration.Configuration) (*Reg
server := &http.Server{
Handler: handler,
ReadHeaderTimeout: readHeaderTimeout,
ReadTimeout: 60 * time.Minute,
@nmittal-do Aug 21, 2025
Supporting 20 GB uploads via a connection timeout seems like a short-term workaround.
What if the user has a bigger file and a good internet connection: will they be able to download/upload a >20 GB file, and vice versa?
Also, is it possible to create a DDoS attack by opening multiple 1-hour-long connections? Do we know how we can thwart it?
Have we explored options for supporting multipart upload while limiting the file size to maybe 1 GB? 5 GB?

Author:

> What if the user has a bigger file and a good internet connection: will they be able to download/upload a >20 GB file, and vice versa?
> Have we explored options for supporting multipart upload while limiting the file size to maybe 1 GB? 5 GB?

Docker clients already chunk uploads to around 5 GB per HTTP POST/PATCH/PUT. Earlier, customers were able to push even 60 GB images this way. But recently Docker clients have been chunking to a little more than that, and the registry.digitalocean.com Cloudflare zone had a 5 GB request-body limit, so customers' pushes have been failing. We will raise a Cloudflare request to increase this limit, and we are setting a 1-hour connection timeout to prevent slow-connection attacks like R.U.D.Y. We are going to document only 20 GB in our product docs because we want to try 20 GB uploads first and maybe increase the limit after some months of customer testing.
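As a back-of-envelope sketch of what that chunking means for the server (the `chunkRanges` helper is illustrative, not the Docker client's actual logic; only the ~5 GB and 20 GB figures come from this thread):

```go
package main

import "fmt"

// chunkRanges splits a blob of total bytes into [start, end) ranges of at
// most chunkSize bytes, mirroring how a registry client issues one HTTP
// PATCH per chunk rather than a single request for the whole blob.
func chunkRanges(total, chunkSize int64) [][2]int64 {
	var ranges [][2]int64
	for start := int64(0); start < total; start += chunkSize {
		end := start + chunkSize
		if end > total {
			end = total
		}
		ranges = append(ranges, [2]int64{start, end})
	}
	return ranges
}

func main() {
	const gb = int64(1) << 30
	// A 20 GiB blob with ~5 GiB chunks arrives as four separate requests,
	// so each request body stays at or under the per-request limit.
	for _, r := range chunkRanges(20*gb, 5*gb) {
		fmt.Printf("PATCH bytes %d-%d\n", r[0], r[1]-1)
	}
}
```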

> Also, is it possible to create a DDoS attack by opening multiple 1-hour-long connections? Do we know how we can thwart it?

Cloudflare has shielded our origins from such attacks. After we lift this 5 GB limit, Cloudflare will not shield us anymore, and we will be open to such attacks. So yes, multiple 1-hour-long connections could be used to slow our servers down. We do not have a mechanism to thwart this at the moment. However, we can add Grafana dashboards and alerting for this kind of long-running connection.

@nmittal-do Aug 21, 2025

> But recently Docker clients have been chunking to a little more than that

What will that number be? I am assuming "a little bit" is not 15 GB.

What was the timeout value before?

We have updated the timeout to 60 min, but what was this value before? I am trying to understand: if the chunk size has not increased a lot, are we increasing the timeout proportionally, or how did we land on 60 min?

> Document only 20 GB in our product docs

I would be worried about slow connections here: users who are unable to upload 20 GB in 1 hour will not like us failing to hold our promise. Also, once one customer figures out the limit is not really 20 GB, word can spread easily, so we should think about how we can put technical guardrails around it.

> After we lift this 5 GB limit, Cloudflare will not shield us anymore

Is this limit applicable when we are doing chunked uploads of <= 5 GB?

> we can add Grafana dashboards and alerting for this kind of long-running connection

Please add those and include them in this PR description.

Author:

> What will that number be? I am assuming "a little bit" is not 15 GB.

I will find out and post it here.

> we should think about how we can put technical guardrails around it

That's a valid point. I will explore how we can do this, either via nginx's configuration or in our own code.

> Is this limit applicable when we are doing chunked uploads of <= 5 GB?

I am not sure; we need to confirm with Cloudflare, and we will only be able to do that when we raise the request.

> Please add those and include them in this PR description

Will do.

WriteTimeout: 60 * time.Minute,
IdleTimeout: 5 * time.Minute,
}

return &Registry{
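For context on whether the 60-minute timeouts above are generous enough for slow connections, a quick back-of-envelope sketch, assuming (as the thread discusses) that Go's per-request ReadTimeout applies to each ~5 GiB chunk rather than the whole 20 GB image:

```go
package main

import "fmt"

func main() {
	const (
		gib            = float64(1 << 30)
		chunkBytes     = 5 * gib   // ~5 GiB per Docker POST/PATCH/PUT
		timeoutSeconds = 60 * 60.0 // ReadTimeout / WriteTimeout: 60 minutes
	)
	// Because the timeout resets per request, a client only needs to move
	// one chunk, not the whole image, within the hour.
	minBytesPerSec := chunkBytes / timeoutSeconds
	fmt.Printf("minimum sustained upload: %.1f MiB/s (%.0f Mbit/s)\n",
		minBytesPerSec/(1<<20), minBytesPerSec*8/1e6)
}
```

At roughly 12 Mbit/s sustained, most broadband connections would clear a 5 GiB chunk within the hour; connections slower than that are exactly the case the reviewer flags.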