Testing support for kubevirt #1213
Merged
ShyamsundarR approved these changes Mar 13, 2024
https://github.com/kubevirt/containerized-data-importer/releases/tag/v1.58.1 Signed-off-by: Nir Soffer <nsoffer@redhat.com>
https://github.com/kubevirt/kubevirt/releases/tag/v1.2.0 Signed-off-by: Nir Soffer <nsoffer@redhat.com>
There is no point in using two versions of the same image. Using this image in the CDI test can save time in the kubevirt tests later, since the cached image is reused. Signed-off-by: Nir Soffer <nsoffer@redhat.com>
CDI may become available before it is ready to use. If we try to use it
while it is progressing, we may fail with errors about missing CRDs. Wait
until the progressing condition becomes false.
Example run showing the issue:
2024-01-10 21:42:24,080 DEBUG [kubevirt/1] Deploying cdi cr
2024-01-10 21:42:25,674 DEBUG [kubevirt/1] Waiting until cdi cr is available
2024-01-10 21:42:26,005 DEBUG [kubevirt/1] cdi.cdi.kubevirt.io/cdi condition met
We stopped waiting here...
2024-01-10 21:42:26,007 DEBUG [kubevirt/1] Waiting until cdi cr finished progressing
2024-01-10 21:42:39,472 DEBUG [kubevirt/1] cdi.cdi.kubevirt.io/cdi condition met
But CDI finished progressing 13 seconds later.
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
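A minimal sketch of the two-stage wait described above, using plain
kubectl against the cdi cr (the addon may use drenv's own wait helpers
instead):

$ kubectl wait cdi.cdi.kubevirt.io/cdi --for=condition=Available --timeout=300s
$ kubectl wait cdi.cdi.kubevirt.io/cdi --for=condition=Progressing=False --timeout=300s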
We cannot use volsync with ramen yet, and the kubevirt environment is
already too big. Without volsync we can remove the volumesnapshot addon
and submariner, which does not handle suspending the machine running the
minikube VMs well.

With this change we should be able to start an environment, suspend the
laptop, and resume it in an environment with an unreliable network or no
network access. This will be useful for live demos at conferences.

Keep volsync enabled in `regional-dr` and `regional-dr-hubless` to keep
the submariner and volsync addons functional.

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
This is useful for quickly starting a stopped, previously working
environment without redeploying everything. The main motivation is using
a pre-created environment in a location with a weak network, like a
conference. Other use cases are working around bugs in addons that do
not work well when starting a stopped cluster, for example clusteradm.

With `--skip-addons` we skip the `start` and `stop` hooks, but we still
run the `test` hooks. This is useful for starting a stopped environment
faster while verifying that the environment works. To skip all hooks,
run with both `--skip-addons` and `--skip-tests`.
Example run:
$ drenv start --skip-addons --skip-tests $env
2023-11-20 00:59:25,341 INFO [rdr-kubevirt] Starting environment
2023-11-20 00:59:25,464 INFO [dr1] Starting minikube cluster
2023-11-20 00:59:29,566 INFO [hub] Starting minikube cluster
2023-11-20 00:59:29,578 INFO [dr2] Starting minikube cluster
2023-11-20 01:00:23,402 INFO [dr1] Cluster started in 57.94 seconds
2023-11-20 01:00:23,402 INFO [dr1] Configuring containerd
2023-11-20 01:00:24,936 INFO [dr1] Waiting until all deployments are available
2023-11-20 01:00:28,749 INFO [hub] Cluster started in 59.18 seconds
2023-11-20 01:00:28,750 INFO [hub] Waiting until all deployments are available
2023-11-20 01:00:53,834 INFO [dr2] Cluster started in 84.26 seconds
2023-11-20 01:00:53,834 INFO [dr2] Configuring containerd
2023-11-20 01:00:55,042 INFO [dr2] Waiting until all deployments are available
2023-11-20 01:01:01,063 INFO [hub] Deployments are available in 32.31 seconds
2023-11-20 01:01:09,482 INFO [dr1] Deployments are available in 44.55 seconds
2023-11-20 01:01:34,661 INFO [dr2] Deployments are available in 39.62 seconds
2023-11-20 01:01:34,661 INFO [rdr-kubevirt] Dumping ramen e2e config to '/home/nsoffer/.config/drenv/rdr-kubevirt'
2023-11-20 01:01:34,827 INFO [rdr-kubevirt] Environment started in 129.49 seconds
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Suspend or resume the underlying virtual machines. For now we assume the kvm2 driver to keep things simple; this needs to be implemented better later so it also works with the qemu2 driver. The use case is building the environment with a good network, suspending it, and resuming it in an environment with a flaky network for a demo. Signed-off-by: Nir Soffer <nsoffer@redhat.com>
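A rough sketch of what suspend and resume could map to with the kvm2
driver, where each minikube node is a libvirt domain (the domain name
`dr1` is illustrative):

$ virsh -c qemu:///system suspend dr1
$ virsh -c qemu:///system resume dr1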
Configure CDI to allow pulling from a local insecure registry. This is useful for demos in an environment with an unreliable network, or for a CI environment where we want to avoid random failures due to a flaky network. The image must be pushed to the local registry first, which is easy using the standard podman push command. Signed-off-by: Nir Soffer <nsoffer@redhat.com>
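A hedged sketch of the CDI setting this refers to, patching the CDI cr
to trust a local registry (the registry address is an example, not
necessarily the one used by the addon):

spec:
  config:
    insecureRegistries:
      - host.minikube.internal:5000

Pushing a locally tagged image to such a registry with podman is a
one-liner, for example:

$ podman push --tls-verify=false localhost:5000/example/image:latest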
To avoid certificate renewals during testing.
Without this I experienced this error:
drenv.commands.Error: Command failed:
command: ('kubectl', 'apply', '--context', 'dr1', '--kustomize=cr')
exitcode: 1
error:
Error from server (InternalError): error when applying patch:
{"spec":{"configuration":{"developerConfiguration":{"featureGates":[]}}}}
to:
Resource: "kubevirt.io/v1, Resource=kubevirts", GroupVersionKind: "kubevirt.io/v1, Kind=KubeVirt"
Name: "kubevirt", Namespace: "kubevirt"
for: "cr": error when patching "cr": Internal error occurred: failed calling webhook
"kubevirt-update-validator.kubevirt.io": failed to call webhook: Post
"https://kubevirt-operator-webhook.kubevirt.svc:443/kubevirt-validate-update?timeout=10s":
tls: failed to verify certificate: x509: certificate has expired or is not yet valid:
current time 2024-01-26T19:05:52Z is after 2024-01-26T16:24:46Z
Thanks: Michael Henriksen <mhenriks@redhat.com>
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
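The change likely amounts to extending the self-signed certificate
durations in the KubeVirt cr; a hedged sketch using the upstream
certificateRotateStrategy fields (the durations are illustrative, not
necessarily the values used in this PR):

spec:
  certificateRotateStrategy:
    selfSigned:
      ca:
        duration: 87600h
        renewBefore: 720h
      server:
        duration: 87600h
        renewBefore: 720h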
To avoid certificate renewals during testing.
Without this I experienced this error when starting a stopped
environment after a day:
drenv.commands.Error: Command failed:
command: ('kubectl', 'apply', '--context', 'dr2', '--kustomize=disk')
exitcode: 1
error:
Error from server (InternalError): error when creating "disk": Internal
error occurred: failed calling webhook "populator-validate.cdi.kubevirt.io":
failed to call webhook: Post "https://cdi-api.cdi.svc:443/populator-validate?timeout=30s":
tls: failed to verify certificate: x509: certificate has expired or is not yet valid:
current time 2024-01-28T14:08:01Z is after 2024-01-27T19:15:20Z
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
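Similarly for CDI, a hedged sketch of extending the certificate
durations via the CDI cr certConfig (the values are illustrative):

spec:
  certConfig:
    ca:
      duration: 87600h
      renewBefore: 720h
    server:
      duration: 87600h
      renewBefore: 720h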
Using a local git server, we can deploy OCM applications without network access to GitHub. This is useful for demos when the network is unreliable, for example at a conference. Signed-off-by: Nir Soffer <nsoffer@redhat.com>
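An illustrative OCM Channel pointing at a local git server instead of
GitHub; the server address, repository path, and namespace are
assumptions:

apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: local-channel
  namespace: samples-gitops
spec:
  type: Git
  pathname: http://git.local:3000/ocm-ramen-samples.git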
Using a local registry is useful for demos when the network is unreliable, for example at a conference. It can also be used to avoid random failures when the network is flaky, by caching remote images locally. Signed-off-by: Nir Soffer <nsoffer@redhat.com>
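A minimal sketch of caching a remote image in a local registry (the
addresses and image names are examples):

$ podman run -d --name registry -p 5000:5000 docker.io/library/registry:2
$ podman pull quay.io/example/image:latest
$ podman tag quay.io/example/image:latest localhost:5000/example/image:latest
$ podman push --tls-verify=false localhost:5000/example/image:latest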
With this you can run the local registry as a systemd service that starts at boot, instead of starting the registry manually when you want to use it. Signed-off-by: Nir Soffer <nsoffer@redhat.com>
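One way to get a boot-time service is to let podman generate the unit; a
sketch assuming the container from the previous commit is named
`registry` (newer podman versions would use a quadlet instead):

$ podman generate systemd --new --files --name registry
$ mv container-registry.service ~/.config/systemd/user/
$ systemctl --user enable --now container-registry.service
$ loginctl enable-linger $USER   # start the user service at boot without login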
Since we plan multiple configurations for kubevirt, we use the same
layout as the ocm-ramen-samples subscription/ directory:
configs/
├── deployment-k8s-regional-rbd.yaml
└── kubevirt
└── vm-pvc-k8s-regional.yaml
To run basic tests using a VM, use:
basic-test/run -c configs/kubevirt/vm-pvc-k8s-regional.yaml $env
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Additional changes required for kubevirt support.
Status: