CLOUDP-295785 - staging support for image building #336
MCK 1.5.0 Release Notes

New Features

Bug Fixes
Force-pushed from 1801c66 to dcd030d (Compare)
…344)

# Summary

⚠️ **Important notice**: This PR contains some changes from #336, but they are not used yet and don't impact the PRs or patches. They are included because this PR was previously stacked on the staging PR and it is much easier to include them. The included changes:

- `latest_tag` support - this is needed for staging builds, but as mentioned earlier, staging builds are not yet used
- replace `268558157000.dkr.ecr.us-east-1.amazonaws.com/dev` with `BASE_REPO_URL`. This will be used to distinguish the different repo URLs: dev, staging and release. Currently hardcoded to `268558157000.dkr.ecr.us-east-1.amazonaws.com/dev`

---

**This change is made to unblock the release of MCK 1.3.0. It is not the final state of the release mechanism and most of it will be replaced by the image promotion process.**

Created a new `.evergreen-release.yml` file that contains all release tasks, including integration with the `kubectl-mongodb` plugin release task. All of the variants are triggered only when a `github_tag` is added.

Additional changes:

- each released image will also be released with an additional `olm_tag` that has a dynamic timestamp part. It will prevent accidentally overriding the tags used by OLM. The tag syntax is `{version}-olm-{timestamp_suffix}`, where the timestamp suffix is in `%Y%m%d%H%M%S` format
- created a separate `release_operator_pipeline` evergreen function that uses the `release` build scenario and the version provided by `git_tag`
- fixed and bumped the preflight script

## Proof of Work

List of tasks that are triggered when doing a manual patch:

<img width="2036" height="1017" alt="Screenshot 2025-09-03 at 11 00 16" src="https://github.com/user-attachments/assets/b3e7e707-3929-4f88-bc4f-2f998a16482a" />

⚠️ This PR was tested by running the evergreen command locally:

```
sudo evergreen patch -p mongodb-kubernetes -a release -d "Release test" -f -y -u --browse --path .evergreen.yml --param RELEASE_OPERATOR_VERSION=1.3.0-rc
```

Link to evg job -> https://spruce.mongodb.com/version/68b81b45285a950007bc8398

## Checklist

- [x] Have you linked a jira ticket and/or is the ticket in the title?
- [x] Have you checked whether your jira ticket required DOCSP changes?
- [x] Have you added a changelog file?
  - use `skip-changelog` label if not needed
  - refer to the [Changelog files and Release Notes](https://github.com/mongodb/mongodb-kubernetes/blob/master/CONTRIBUTING.md#changelog-files-and-release-notes) section in CONTRIBUTING.md for more details
Force-pushed from 439da21 to a68d85e (Compare)
Force-pushed from ea0e23a to f61dba9 (Compare)
Force-pushed from f593df1 to 39840fc (Compare)
awesome work! Just some questions.
I wonder whether we can make things more straightforward than the following:

1. evg-private-context -> bash script and context which decides which env vars to use
2. .evergreen.yml -> calls pipeline.sh
3. pipeline.sh -> calls pipeline.py

I wonder whether we can simplify/unify things a bit more. Maybe join 2 and 3, or extract the decision logic from evg-private-context into 2.
WDYT?
@@ -0,0 +1,43 @@
# Evergreen CI/CD Configuration Guide
I believe we will have all the documentation on our CI/CD in a Google Doc (?). If yes, we should link to that one instead. Otherwise we will scatter things around a lot of places and things will go even more stale.
I thought about having all CI/CD documentation in the codebase, tbh. That is why I started this doc.
The problem is that CI/CD (evg) is only viewable and available internally, so there is not much external contributors can rely on. Plus, we will have release-related things in Google Docs (right?) and this here as well - I think this will make things more difficult. I would rather have that md file really short and just link to the Google Doc you are writing/have prepared.
The release guide already exists, but it has less to do with Evergreen CI/CD documentation. This EVERGREEN.md file describes the structure, aliases and usage of Evergreen in our project. I really think it is beneficial to keep it close to the code, so important changes in the Evergreen configuration/usage can be easily documented.
I think there was a discussion with @viveksinghggits on where to put things. The content is great, but I want us to be consistent with what everyone decided
I think that we agreed to link the EVERGREEN.md in the Wiki ToC and in related documents.
if [[ -n "${digest_arm64}" ]]; then
    docker pull "${source_image}@${digest_arm64}"
    docker tag "${source_image}@${digest_arm64}" "${target_image}-arm64"
Why not `skopeo copy`?
Good question. I didn't try it, but Łukasz did, and he experienced issues trying skopeo and crane as well.
I've tested now that using `skopeo copy --preserve-digests --all` works. I will try to use it in the promotion pipeline later. After that, this script will not be needed anymore.
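For reference, the digest-preserving copy mentioned above could be assembled like this. This is a sketch only: the `skopeo_copy_cmd` helper is hypothetical, and only the flags tested in this thread are used.

```python
def skopeo_copy_cmd(source_image: str, target_image: str) -> list[str]:
    # Hypothetical helper building the skopeo invocation discussed above:
    # --all copies every manifest of a multi-arch image index, and
    # --preserve-digests refuses to modify digests during the copy.
    return [
        "skopeo", "copy", "--all", "--preserve-digests",
        f"docker://{source_image}",
        f"docker://{target_image}",
    ]
```

Running it (e.g. via `subprocess.run(skopeo_copy_cmd(src, dst), check=True)`) would replace the per-architecture `docker pull`/`docker tag` steps in the script under review with a single digest-preserving copy.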
Got it, this should be mandatory for the promotion process (or an equivalent that preserves digests).
Do we still need this script as an intermediary step to the promotion pipeline? I assume yes, just making sure.
This script is not used for promoting the images, just for the initial supply of the OM images in the staging repo. I've added a comment at the top of the script:
# Utility used to retag and push container images from one repo to another
# Useful for migrating images between different repositories (e.g. dev -> staging)
LGTM, just left a small comment.
Hi @MaciejKaras ,
Are you going to add the section that we talked about, on how to load build info for a particular scenario? For example, if I want to do something similar to
build_scenario = BuildScenario.infer_scenario_from_environment()
kubectl_plugin_build_info = load_build_info(build_scenario).binaries[KUBECTL_PLUGIN_BINARY_NAME]
How would I do that now?
You should expose the `build_scenario` from your script and the user should provide it. In `evg-private-context` there is a `BUILD_SCENARIO` env variable set that you can use when calling the script in an evg task. For local runs you can make `Development` the default value.
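A minimal sketch of that suggestion, assuming the scenario is exposed via the `BUILD_SCENARIO` environment variable with a development default for local runs (the variable name is from this thread; the function itself is hypothetical):

```python
import os

def resolve_build_scenario(default: str = "development") -> str:
    # Use BUILD_SCENARIO when set (as in evg-private-context);
    # otherwise fall back to the local development default.
    return os.environ.get("BUILD_SCENARIO", default)
```

A script using this runs unchanged in evg tasks (where the context sets `BUILD_SCENARIO`) and locally (where it falls back to the default).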
Ok, makes sense. So we will have two evg variants: one will be run in PR patches and one will be run in staging. If I am using the same script in these variants, I can make my script accept the build_scenario, which can even be hardcoded in the variants. Please correct me if I am wrong.
Also, are we going to document this? Or do you think documenting it doesn't make much sense?
We could consider using a script that just returns the env vars for build scenarios. `evg-private-context` can just call it, and any other user can do the same. It follows my suggestion in the call to extract that logic out to be re-usable.
> So we will have two evg variants: one will be run in PR patches and one will be run in staging. If I am using the same script in these variants, I can make my script accept the build_scenario, which can even be hardcoded in the variants. Please correct me if I am wrong.
@viveksinghggits You don't need separate variants for PR patches and staging builds. Just call the `switch_context` evg function and it will create `.generated` context files with the `BUILD_SCENARIO` environment variable available for you. This is what we do in all other tasks, including e2e tests and image building tasks. I believe the `clone` task already sets up context files, so you just need to read them in your script.
> We could consider using a script that just returns the env vars for build scenarios. `evg-private-context` can just call it, and any other user can do the same. It follows my suggestion in the call to extract that logic out to be re-usable.

@nammn We could extract that logic, I agree. But we still need `BUILD_SCENARIO` to be provided, and it's currently calculated in `evg-private-context`; it only makes sense to calculate it there. The evg variables that are used to calculate `BUILD_SCENARIO` are not present locally.
# Conflicts:
#   scripts/dev/contexts/public_kind_code_snippets

# Conflicts:
#   scripts/dev/contexts/e2e_mdb_kind_ubi_cloudqa
#   scripts/dev/contexts/e2e_smoke_ibm_power
#   scripts/dev/contexts/e2e_smoke_ibm_z
#   scripts/dev/contexts/e2e_static_mdb_kind_ubi_cloudqa
#   scripts/dev/contexts/e2e_static_smoke_arm
#   scripts/dev/contexts/e2e_static_smoke_ibm_power
#   scripts/dev/contexts/e2e_static_smoke_ibm_z
#   scripts/dev/contexts/variables/om60_image
#   scripts/dev/contexts/variables/om70_image
#   scripts/dev/contexts/variables/om80_image
LGTM
# Conflicts:
#   scripts/dev/contexts/migrate_all_agents
#   scripts/dev/contexts/preflight_release_images
#   scripts/dev/contexts/preflight_release_images_check_only
#   scripts/dev/contexts/release_agent
…71-1` used in tests
…ging

# Conflicts:
#   .evergreen-functions.yml

# Conflicts:
#   .evergreen-functions.yml
Summary

PR integrates the `staging` build scenario with `atomic_pipeline.py` and the e2e tests. `staging` repositories are created in AWS ECR under the `/staging` dir. List of new `staging/` repositories:

Example AWS page with details:

Previously all of the evergreen patch builds were targeting the `268558157000.dkr.ecr.us-east-1.amazonaws.com/dev` repository. Now, depending on the `BUILD_SCENARIO` environment variable, images will be pushed to and pulled from a different registry:

Additional changes:

- `latest` tag for staging builds. This is used for local testing
- `root-context`. Previously they were duplicated in both `scripts/dev/contexts/evg-private-context` and `scripts/dev/contexts/local-defaults-context`
- `OPERATOR_VERSION` instead of `VERSION_ID`. `VERSION_ID` was only related to patch builds, which did not apply to staging and release builds
- `268558157000.dkr.ecr.us-east-1.amazonaws.com/staging` because all latest tags are pushed on master builds
- `build_scenario` now needs to be passed explicitly to `pipeline.py`, as well as `version` (apart from the `agent` image)

Proof of Work
Building all images is passing for staging scenario -> https://spruce.mongodb.com/version/68c7ce53672666000716401a/tasks?page=0&sorts=STATUS%3AASC%3BBASE_STATUS%3ADESC
Building all images is passing for release scenario (apart from kubectl which requires tag) -> https://spruce.mongodb.com/version/68c911ef8d38f100074fa2db/tasks?sorts=STATUS%3AASC%3BBASE_STATUS%3ADESC
Example image stored together with signature -> https://us-east-1.console.aws.amazon.com/ecr/repositories/private/268558157000/staging/mongodb-kubernetes-init-database?region=us-east-1
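The scenario-to-registry split described in the summary could be modeled as below. This is a hedged sketch: the scenario names and the handling of anything other than the dev and staging registries named in this PR are assumptions, not the PR's actual code.

```python
ECR_BASE = "268558157000.dkr.ecr.us-east-1.amazonaws.com"

def base_repo_url(build_scenario: str) -> str:
    # dev registry for development/patch builds, staging registry for
    # staging (master) builds; other scenarios are rejected here because
    # their registries are not specified in this summary.
    suffix = {
        "development": "dev",
        "patch": "dev",
        "staging": "staging",
    }.get(build_scenario)
    if suffix is None:
        raise ValueError(f"unknown build scenario: {build_scenario!r}")
    return f"{ECR_BASE}/{suffix}"
```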
Remaining work

- `development` scenario for building the images using evergreen and directly `pipeline.py`
- `dummy` version in the `release_info.json` - this is already planned in here

Checklist

- `skip-changelog` label if not needed