forked from distribution/distribution
Set HTTP Server Timeouts #54
Open

nayanjd-do wants to merge 4 commits into master from ndas/DOCR-1663/connection-timeout
Supporting 20 GB uploads via a connection timeout seems like a short-term workaround.
What if a user has a bigger file and a good internet connection: will they be able to upload/download a >20 GB file, and vice versa?
Also, is it possible to mount a DDoS attack by opening multiple 1-hour-long connections? Do we know how we can thwart that?
Have we explored options for supporting multipart upload while limiting the file size to maybe 1 GB? 5 GB?
Docker clients already chunk pushes to around 5 GB per HTTP POST/PATCH/PUT. Earlier, customers were able to push even 60 GB images this way. But recently Docker clients have been producing chunks slightly larger than that, and the registry.digitalocean.com Cloudflare zone had a 5 GB request-body limit, so customers' pushes have been failing. We will raise a Cloudflare request to increase this limit, and we are setting a 1-hour connection timeout to prevent slow-connection attacks like R.U.D.Y. We are going to document only 20 GB in our product docs because we want to try 20 GB uploads first and maybe increase the limit after some months of customer testing.

Cloudflare shielded our origins from such attacks. After we lift this 5 GB limit, Cloudflare will not shield us anymore, and we will be open to those attacks. So yes, multiple 1-hour-long connections could be used to slow our servers down. We do not have any mechanism to thwart this at the moment. However, we can add Grafana dashboards and alerting for such long-running connections.
What will that number be? I am assuming "a little bit more" is not 15 GB.
We have updated the timeout to 60 min, but what was this value before? I am trying to understand: if the chunk size has not increased a lot, are we increasing the timeout proportionally, or how did we land on 60 min?
I am worried about slow connections here: customers who are unable to upload 20 GB in 1 hour will not like us failing to hold our promise. Also, once one customer figures out the limit is not exactly 20 GB, that can spread easily; we should think about how we can put technical guardrails around it.
Is this limit applicable when we are doing chunked uploads of <= 5 GB?
Please add those details to this PR description.
I will find out and post it here.
That's a valid point. I will explore how we can do this, either via nginx's conf or from our own code.
I am not sure; I need to confirm with Cloudflare. We would only be able to do that when we raise the request.
Will do.