cli: rely on the existing use_s3_caching option to enable caching #223
Conversation
Force-pushed from 92cf0e3 to 80dabb6
    # Not sure why we need the !Omnibus::S3Cache.fetch_missing.empty? check here
    Omnibus::S3Cache.populate if @options[:populate_s3_cache] && !Omnibus::S3Cache.fetch_missing.empty?
    Omnibus::S3LicenseCache.populate if @options[:populate_s3_cache]
    Omnibus::S3Cache.populate if Omnibus::Config.use_s3_caching && Omnibus::Config.s3_authenticated_download && !Omnibus::S3Cache.fetch_missing.empty?
Why Omnibus::Config.s3_authenticated_download?
Good question, I'll add a comment
Done. It turns out that S3_OMNIBUS_CACHE_BUCKET being defined is not enough to enable filling the cache.
Our macOS builds read anonymously from the bucket but can't write to it, which is why there was a separate option.
I'm not sure it's worth keeping a dedicated flag for this since we already have enough information in the environment, but relying on that feels a bit brittle.
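To make the intent concrete, here is a sketch of how the guarded call reads with an explanatory comment; the comment wording is illustrative and may differ from what was actually committed in this PR:

    # Populating the cache requires write access to the bucket. Builders that only
    # read anonymously (s3_authenticated_download disabled) must not try to upload.
    if Omnibus::Config.use_s3_caching &&
       Omnibus::Config.s3_authenticated_download &&
       !Omnibus::S3Cache.fetch_missing.empty?
      Omnibus::S3Cache.populate
    end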
I see, that's interesting; I hadn't realized we might be using the cache like that. That does feel a bit brittle, by the way. Does that mean we have a hidden dependency from the macOS build on the rest of the builds?
Yes, I guess if there were a macOS-specific dependency we would have the same issue we had on Windows yesterday.
Hopefully we'll move our macOS builds to GitLab soon enough and we can just remove this workaround.
I'll create a card for it.
* Support an explicit target install location for macOS packages (#215)
* configure: use the expected default value for windows prefix (#216)
* use the expected default value for windows prefix
* allow --build to be omitted from configure args
* replace unless with if not
* Always ship source/source offers (#217)
* software: always ship the source or source offer, even when restoring the build from cache
* attempt clean instead of deployed; deploy isn't available for all fetchers
* [BARX-775] Clone directly to the right revision (#218)
* Replace git clone by git fetch to the right version
* add log
* feat(security): Pin github actions (#219)
* add a disable_version_manifest accessor (#220)
* add a disable_version_manifest accessor
* allow version manifests to be disabled
* remove an extra blank line to keep rubocop happy
* fix syntax
* project: fix typo preventing overriding version manifest file (#222)
* project: provide a default value for disable_version_manifest (#221)
* cli: rely on the existing use_s3_caching option to enable caching (#223)
* cli: rely on the existing use_s3_caching option to enable caching
* cli: remove now unused populate_s3_cache option
* don't attempt to populate cache when we only have read access
* add a comment to clarify the logic
* feat(bump): Remove homemade workflow to use renovate instead (#224)
* [PathFetcher] Improve speed by removing unnecessary digest operations (#225)

  Improves the PathFetcher class' speed by removing some digest operations that are not necessary, since they are quite costly (10-15s for a large directory like the datadog-agent repository). Specifically:

  - if the target directory (in the omnibus project directories) is empty, force a fetch without computing the digests of the source and the target: in this case, we know a fetch is required.
  - after a fetch, do not pre-compute the digest of the target directory. In practice, it is a useless operation because this digest is never reused in the same omnibus run.

  I expect these improvements to reduce the duration of all Agent builds by about 30s (a sketch of this idea follows below).

---------
Co-authored-by: Alex Lopez <alex.lopez.zorzano@gmail.com>
Co-authored-by: Hugo Beauzée-Luyssen <hugo.beauzee@datadoghq.com>
Co-authored-by: AliDatadog <125997632+AliDatadog@users.noreply.github.com>
Co-authored-by: Nicolas Schweitzer <nicolas.schweitzer@datadoghq.com>
Co-authored-by: Kylian Serrania <kylian.serrania@datadoghq.com>
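As referenced in the #225 item above, here is a minimal sketch of that optimization's idea. The class and helper names (PathFetcherSketch, directory_digest) are made up for illustration and are not the real Omnibus::PathFetcher API: when the target directory is empty or missing, a fetch is known to be required, so the two expensive directory digests are skipped.

    require "digest"
    require "find"

    # Illustrative only: not the real Omnibus::PathFetcher implementation.
    class PathFetcherSketch
      def initialize(source_dir, target_dir)
        @source_dir = source_dir
        @target_dir = target_dir
      end

      def fetch_required?
        # Empty or missing target directory: skip both digests, a fetch is always needed.
        return true if !Dir.exist?(@target_dir) || Dir.empty?(@target_dir)
        directory_digest(@source_dir) != directory_digest(@target_dir)
      end

      private

      # Hashing every file is the costly part (10-15s for a large tree like datadog-agent).
      def directory_digest(dir)
        digest = Digest::SHA256.new
        Find.find(dir).sort.each do |path|
          next unless File.file?(path)
          digest.update(path)
          digest.file(path)
        end
        digest.hexdigest
      end
    end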
There's already a config value to enable source caching; we can use it directly.
This is associated with DataDog/datadog-agent#34835
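For context, a minimal sketch of how S3 source caching is typically enabled in an omnibus.rb configuration; the bucket value, region, and the exact placement of s3_authenticated_download are assumptions for illustration, not taken from this repository:

    # omnibus.rb (sketch; placeholder values)
    use_s3_caching true
    s3_bucket  ENV["S3_OMNIBUS_CACHE_BUCKET"]
    s3_region  "us-east-1"
    # Builders that only read anonymously from the bucket keep this disabled,
    # so the CLI never tries to populate (upload to) the cache.
    s3_authenticated_download false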