Design for virt qcow2 datamover #2037
base: oadp-dev
# Kubevirt incremental datamover

In a kubevirt environment, one way to improve volume backup performance and allow more frequent backups is to take incremental qcow2 backups using libvirt tools. Integrating this into OADP/Velero backups will require new OADP components (plugins, a controller), as well as some modifications to the Velero codebase.

## Background

Taking a VolumeSnapshot and then using kopia to process the entire volume and copy the required incremental changes into the Backup Storage Location (BSL) is a heavyweight process. Creating an incremental qcow2 backup of the same volume is generally a much more lightweight action. We want to make use of the existing Velero backup/restore process, with the actual kubevirt incremental backup/restore happening via a new controller, referred to here as the Kubevirt Datamover Controller. This action will be coordinated with Velero via existing infrastructure -- BackupItemActions (BIAs), RestoreItemActions (RIAs), and the DataUpload/DataDownload CRs. The initial implementation should require minimal changes to Velero, since Velero currently ignores DataUploads with `Spec.DataMover` set to something other than `velero`.
## Goals

- Back up and restore VM volumes using kubevirt incremental backup instead of Velero's built-in CSI datamover
- Use existing Velero infrastructure to integrate this feature into regular Velero backup and restore
- Base the implementation on the kubevirt enhancement defined at <https://github.com/kubevirt/enhancements/blob/main/veps/sig-storage/incremental-backup.md>
  - There is a follow-on design PR at <https://github.com/kubevirt/enhancements/pull/126/changes>, although it mainly discusses push mode, which is out of scope for the current design.
## Non goals

- Deep integration with Velero datamover pluggability (this could be considered in the long term, though, which would minimize duplication of effort and enhance maintainability)
- Using push mode with kubevirt

## Use cases

- As a user, I want to use OADP to trigger backups that back up volume data using kubevirt tooling rather than CSI snapshots
- Volume backups will be incremental when possible (the first backup of a given volume will be full; subsequent backups of that volume will be incremental)
- Assuming kubevirt incremental volume backups are much faster than CSI snapshots followed by an incremental kopia snapshot copy, the expectation is that users might run kubevirt datamover backups more frequently than CSI-based backups.
- Users backing up more frequently with this method should ensure that they are not also using CSI snapshots or fs-backup for the same volumes, via the resource/volume policy configuration in the backup.
```
Key Differences between CSI and KubeVirt incremental snapshots
┌────────────────────┬──────────────────────────────────────────┬────────────────────────────────────────────┐
│ Aspect             │ CSI Approach                             │ KubeVirt qcow2 Approach                    │
├────────────────────┼──────────────────────────────────────────┼────────────────────────────────────────────┤
│ Layer              │ Storage (CSI driver)                     │ Hypervisor (QEMU/libvirt)                  │
├────────────────────┼──────────────────────────────────────────┼────────────────────────────────────────────┤
│ Snapshot mechanism │ CSI VolumeSnapshot                       │ VirtualMachineBackup CR                    │
├────────────────────┼──────────────────────────────────────────┼────────────────────────────────────────────┤
│ Incremental        │ Kopia deduplication (scans whole volume) │ True block-level CBT (only changed blocks) │
├────────────────────┼──────────────────────────────────────────┼────────────────────────────────────────────┤
│ Data mover         │ Velero's node-agent + kopia              │ New OADP controller + qemu-img             │
├────────────────────┼──────────────────────────────────────────┼────────────────────────────────────────────┤
│ VM awareness       │ None (just sees PVC)                     │ Full (knows it's a VM disk)                │
└────────────────────┴──────────────────────────────────────────┴────────────────────────────────────────────┘
```
> **Member:** The design might benefit from something like a comparison table for CSI DM vs QCOW2 DM (just a thought).
## High-level design

### Upstream Velero changes

- Update the Velero volume policy model to allow unrecognized volume policy actions. Velero would treat unrecognized actions as "skip" (i.e. no snapshot or fs-backup), but the kubevirt datamover would only act if the policy action is "kubevirt".
> **Member:** Have we created an issue for this in Velero? Do you think this will get pushed back? i.e. we only allow recognizable policy actions.
>
> **Contributor (author):** No issue created upstream yet. The issue and PR for this is one of the tasks I'll be working on this week. And yes, pushback wouldn't completely surprise me. Hopefully we can convince the other maintainers that this is a good thing to allow for 3rd-party plugins, etc., but we may have to carry the commit in our fork if we are unsuccessful there.
>
> **Member:** We have a strong argument here. Our goal is to make Velero more portable and usable for third-party datamovers, and we deliberately want to avoid introducing custom, vendor-specific logic like the vSphere-specific code that exists in some parts of Velero today: https://github.com/search?q=repo%3Avmware-tanzu%2Fvelero%20vsphere&type=code
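To illustrate the proposed fallback semantics, here is a minimal, hedged sketch; the `Action` type, constants, and function below are hypothetical stand-ins, not Velero's actual volume-policy code:

```go
package policysketch

// Action mirrors the string actions used in Velero volume policies.
type Action string

const (
    ActionSnapshot Action = "snapshot"
    ActionFSBackup Action = "fs-backup"
    ActionSkip     Action = "skip"
)

// builtinActions are the actions core Velero knows how to execute itself.
var builtinActions = map[Action]bool{
    ActionSnapshot: true,
    ActionFSBackup: true,
    ActionSkip:     true,
}

// effectiveVeleroAction returns what core Velero itself should do for a
// matched policy action: recognized actions are executed as-is, while
// unrecognized actions (e.g. "kubevirt") fall back to "skip" so that a
// third-party datamover plugin can act on them instead.
func effectiveVeleroAction(policy Action) Action {
    if builtinActions[policy] {
        return policy
    }
    return ActionSkip
}
```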
- In `pkg/restore/actions/dataupload_retrieve_action.go` and in `DataDownload`, we need to add SnapshotType.

### BackupItemAction/RestoreItemAction plugins

- VirtualMachine BIA plugin
  - The plugin will check whether the VirtualMachine's `status.ChangedBlockTracking` is `Enabled`
  - The plugin must also determine whether the VM is running, since offline backup is not supported in the initial release.
  - If QEMU backup is enabled, the next question is whether the user wants to use the kubevirt datamover for this VM's volumes. We will use volume policies for this, although it's a bit more complicated since a VM could have multiple PVCs. If at least one PVC for the VM has the "kubevirt" policy action specified, and no PVCs in this VM have other non-skip policies (i.e. "snapshot", etc.), then we'll use the kubevirt datamover.
  - In addition, SnapshotMoveData must be true.
  - Iterate over all PVCs for the VM (see the decision sketch after this list):
    - If any PVC has an action other than "kubevirt" or "skip", exit without action
    - If at least one PVC has an action of "kubevirt", then use the kubevirt datamover
  - This plugin will create a DataUpload with `Spec.SnapshotType` set to "kubevirt" and `Spec.DataMover` set to "kubevirt"
    - Note that unlike the built-in datamover, this DataUpload is tied to a VM (which could include multiple PVCs) rather than to a single PVC.
    - An annotation will be added to the DataUpload identifying the VirtualMachine we're backing up.
    - Add `velerov1api.DataUploadNameAnnotation` to the VirtualMachine
  - An OperationID will be created and returned, similar to what's done with the CSI PVC plugin, and the async operation Progress method will report progress based on the DU status (similar to the CSI PVC plugin)
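The decision sketch referenced above, shown as a hedged illustration; the `pvcPolicy` type and function name are hypothetical and do not correspond to real Velero or kubevirt API types:

```go
package biasketch

// pvcPolicy is the volume-policy action matched for one of the VM's PVCs
// ("kubevirt", "skip", "snapshot", "fs-backup", or "" when no policy matched).
type pvcPolicy struct {
    PVCName string
    Action  string
}

// shouldUseKubevirtDatamover applies the rules above: CBT must be enabled,
// the VM must be running, SnapshotMoveData must be true, no PVC may carry a
// non-skip/non-kubevirt action, and at least one PVC must be marked "kubevirt".
func shouldUseKubevirtDatamover(cbtEnabled, vmRunning, snapshotMoveData bool, policies []pvcPolicy) bool {
    if !cbtEnabled || !vmRunning || !snapshotMoveData {
        return false
    }
    sawKubevirt := false
    for _, p := range policies {
        switch p.Action {
        case "kubevirt":
            sawKubevirt = true
        case "skip", "":
            // Skipped or unmatched PVCs do not block the kubevirt datamover.
        default:
            // Another backup method (snapshot, fs-backup, ...) claims this PVC,
            // so the BIA exits without action.
            return false
        }
    }
    return sawKubevirt
}
```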
> **Contributor:** Just to improve my understanding, this would be:
>
> **Contributor (author):** Yes. In both cases, we have a plugin which creates a DataUpload and then, in its async Progress method, monitors the DU state until it's done.
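As a rough sketch, the DataUpload created by the BIA plugin might look like the following. This assumes Velero's `v2alpha1` DataUpload API; the exact field names, import path, and the VM-identifying annotation key are assumptions that should be verified against the Velero/OADP versions actually targeted:

```go
package biasketch

import (
    velerov2alpha1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v2alpha1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newKubevirtDataUpload builds the DataUpload described above. Unlike the
// built-in datamover, it represents a whole VM (possibly multiple PVCs), so
// the source PVC here is the temporary PVC that receives the qcow2 files.
func newKubevirtDataUpload(veleroNamespace, backupName, bslName, vmNamespace, vmName, tempPVC string) *velerov2alpha1.DataUpload {
    return &velerov2alpha1.DataUpload{
        ObjectMeta: metav1.ObjectMeta{
            Namespace:    veleroNamespace,
            GenerateName: backupName + "-",
            Labels: map[string]string{
                "velero.io/backup-name": backupName,
            },
            Annotations: map[string]string{
                // Hypothetical annotation key identifying the VM being backed up.
                "oadp.openshift.io/kubevirt-datamover-vm": vmNamespace + "/" + vmName,
            },
        },
        Spec: velerov2alpha1.DataUploadSpec{
            SnapshotType:          "kubevirt", // instead of the usual "CSI"
            DataMover:             "kubevirt", // core Velero ignores non-"velero" datamovers
            SourceNamespace:       vmNamespace,
            SourcePVC:             tempPVC,
            BackupStorageLocation: bslName,
        },
    }
}
```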
- VirtualMachine RIA plugin
  - (Further investigation still needed on exactly what's needed here.)
  - Similar in functionality to the CSI PVC restore action
  - Create temporary PVC (needed, or is this handled by the controller?)
  - Create DD based on DU annotation and DU ConfigMap (this may be handled already by the existing DU RIA)
- VirtualMachineBackup/VirtualMachineBackupTracker RIA plugin
  - Simple RIA that discards VMB/VMBT resources on restore
  - We don't want to restore these because they would kick off another VMBackup action.
### Kubevirt Datamover Controller

- Responsible for reconciling DataUploads/DataDownloads where `Spec.DataMover` is "kubevirt"
- Configurable concurrency limits: concurrent-vm-backups and concurrent-vm-datauploads
- DataUpload reconciler (backup):
  - Create the (temporary) PVC.
  - Identify the VirtualMachine from the PVC metadata.
  - From the BSL, identify the latest checkpoint for this VM and confirm that all of the related qcow2 files are available.
  - Create/update the VirtualMachineBackupTracker CR with `source` set to the VirtualMachine and `status.latestCheckpoint.name` set to match the latest checkpoint we have for this VM. Note that it might be better if we could avoid hacking the VMBT status and instead have a field on the VM -- something like `spec.overrideCheckpoint` -- to use a specific checkpoint if the one in the VMBT doesn't match the latest we have in the BSL.
> **Contributor:** On the first ever backup, we create the VirtualMachineBackupTracker (VMBT) with `source` set to a VirtualMachine resource, and on backup completion [...]. Generally, for a VMB backup, before we create a VMBT: [...]. Remaining ops proceed.
>
> **Contributor (author):** Yes, that's right, with one open question in my mind: "On backup completion, we clean up the VMB resource."
>
> **Contributor:** Right, so it depends on what information the user would access.
  - Create the VirtualMachineBackup CR with `source` set to the VirtualMachineBackupTracker, `pvcName` set to the temporary PVC, and (optionally) `forceFullBackup` set to `true` to force a full backup (sketched at the end of this reconciler section)
  - Wait for the VMBackup to complete (monitoring status)
  - Launch the kubevirt datamover pod, mounting the temporary PVC with the qcow2 file(s) from the backup.
    - This pod needs to run a command that performs the datamover operation from the PVC to object storage
    - The datamover pod functionality should be built into the same image as the kubevirt-datamover-controller pod image.
  - Copy the new file to object storage (see [Where to store qcow2 files](#wherehow-to-store-qcow2-files-and-metadata) below)
> **Member:** How will the pod receive BSL configuration and credentials? For example, will the controller inject the BSL details via environment variables and mount the credential secret?
>
> **Contributor (author):** Should be similar to what datamover pods do, I'd imagine. I haven't looked at that recently, but presumably the DU links to the Backup, which then lists the BSL?
>
> **Contributor (author):** And yes, mounting/accessing the credentials is something we can probably handle in the same way we do for datamover and fs-backup pods upstream.
  - Save any required metadata to identify the stored data (collection of qcow2 pathnames/checkpoints, etc.), along with the backup and VirtualMachine they're associated with. Save this metadata file as well (see [Where to store qcow2 files](#wherehow-to-store-qcow2-files-and-metadata) below)
  - We need to properly handle cases where we attempt an incremental backup but a full backup is taken instead (checkpoint lost, CSI snapshot restore since the last checkpoint, VM restart, etc.)
  - Aborted backups also need to be handled (resulting in a failed PVC backup on the Velero side)
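To make the VirtualMachineBackup creation step above more concrete, here is a hedged sketch using an unstructured object. The API group/version and spec fields are taken from the kubevirt incremental-backup enhancement as described in this document and are assumptions that may change as that API lands:

```go
package controllersketch

import (
    "context"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// createVMBackup creates a VirtualMachineBackup whose source is the
// VirtualMachineBackupTracker and whose qcow2 output goes to the temporary PVC.
func createVMBackup(ctx context.Context, c client.Client, namespace, name, trackerName, tempPVC string, forceFull bool) error {
    vmb := &unstructured.Unstructured{}
    // Assumed GVK; verify against the final kubevirt API before relying on it.
    vmb.SetAPIVersion("backup.kubevirt.io/v1alpha1")
    vmb.SetKind("VirtualMachineBackup")
    vmb.SetNamespace(namespace)
    vmb.SetName(name)
    vmb.Object["spec"] = map[string]interface{}{
        "source": map[string]interface{}{
            "kind": "VirtualMachineBackupTracker",
            "name": trackerName,
        },
        "pvcName":         tempPVC,
        "forceFullBackup": forceFull, // optional: force a full rather than incremental backup
    }
    return c.Create(ctx, vmb)
}
```

The reconciler would then watch (or poll) the VMB status until it reports completion, launch the datamover pod, and finally update the DataUpload status accordingly.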
- DataDownload reconciler (restore):
  - (This area is less well defined so far, since the kubevirt enhancement doesn't go into as much detail on restore.)
  - We will need a temporary PVC for pulling qcow2 images from the object store (if we're restoring the empty temp PVC from backup, that might work here)
  - We also need PVCs created for each VM disk we're restoring from qcow2 images.
  - We'll need to create another datamover pod here, which will do the following:
    - The pod will have the temp PVC mounted, as well as PVCs mounted for each VM disk we're creating.
    - The pod's command/image will first get the list of qcow2 files to pull from object storage
    - Once we have the qcow2 full backup and incremental files from the object store, repeatedly call `qemu-img rebase -b fullbackup.qcow2 -f qcow2 -u incrementalN.qcow2` for each incremental backup, in order
    - Then convert the qcow2 image to a raw disk image: `qemu-img convert -f qcow2 -O raw fullbackup.qcow2 restored-raw.img`
    - Finally, write this image to the PVC which contains the VM disk
    - (Repeat the process for each disk if the VM has multiple disks to restore.)
  - Note that the various `qemu-img` actions might eventually be combined into a single kubevirt API call, but for the moment this would need to be done manually (see the sketch below).
  - Once the datamover pod has restored the VM disks, it will exit and the VirtualMachine can launch with these disks (following the Velero datamover model, where the temporary PVC is deleted while retaining the PV, which then binds to the VM's PVCs, may work here). The temporary PVC (containing the qcow2 images, not the restored VM disk image) should be completely removed at this point, including PV content.
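A hedged sketch of the qemu-img sequence the restore pod might run, under the assumption that each incremental is rebased onto its local predecessor and the top of the chain is then converted, which flattens the full backup plus all incrementals into a single raw image (file names and the output path are illustrative):

```go
package restoresketch

import (
    "fmt"
    "os/exec"
)

// rebuildDisk chains the downloaded qcow2 files and converts the result into
// a raw image. files[0] is the full backup; files[1:] are the incrementals in
// backup order. rawOut may be a file or the block device backing the VM disk PVC.
func rebuildDisk(files []string, rawOut string) error {
    if len(files) == 0 {
        return fmt.Errorf("no qcow2 files to restore")
    }
    // Point each incremental at its local parent. With -u, rebase only rewrites
    // the backing-file reference; it does not copy any data.
    for i := 1; i < len(files); i++ {
        cmd := exec.Command("qemu-img", "rebase", "-u", "-f", "qcow2", "-b", files[i-1], files[i])
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("rebase %s: %v: %s", files[i], err, out)
        }
    }
    // Converting the top of the chain walks the backing files and writes a
    // flattened raw image containing the full backup plus all incremental changes.
    top := files[len(files)-1]
    cmd := exec.Command("qemu-img", "convert", "-f", "qcow2", "-O", "raw", top, rawOut)
    if out, err := cmd.CombinedOutput(); err != nil {
        return fmt.Errorf("convert %s: %v: %s", top, err, out)
    }
    return nil
}
```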
> **Contributor:** The restore flow requires significantly more design detail before implementation. It is acknowledged above as "less well defined" and contains several uncertainties. This level of design ambiguity in a critical recovery path presents significant operational risk. Recommend completing the restore flow design with an error handling strategy, integrity verification steps, rollback procedures, and concrete success/failure criteria before implementation begins.
### Kubevirt datamover backup data/metadata

The kubevirt datamover data will be stored in the BSL using a prefix derived from `BSLPrefix+"-kubevirt-datamover"`.
The directory structure will be as follows:
```
<bsl-prefix>-kubevirt-datamover/
├── manifests/
│   └── <velero-backup-name>/
│       ├── <vm-name>.json    # Per-backup-per-VM manifest
│       └── index.json        # Per-backup manifest
└── checkpoints/
    └── <namespace>/
        └── <vm-name>/
            ├── <checkpoint-id>/                    # Checkpoint dir
            │   └── <vmb-name>-<disk-name>.qcow2    # Actual qcow2 files
            └── index.json                          # Per-VM index file
```
Example of a per-VM index file:
```
Per-VM Index (checkpoints/<ns>/<vm-name>/index.json):

{
  "vmName": "my-vm",
  "namespace": "default",
  "checkpoints": [
    {
      "id": "cp-001",
      "type": "full",
      "timestamp": "2025-01-10T10:00:00Z",
      "vmBackup": "vmb-001",
      "files": ["vmb-001-disk0.qcow2", "vmb-001-disk1.qcow2"],
      "pvcs": ["my-vm-pvc-0", "my-vm-pvc-1"],
      "referencedBy": ["backup-2025-01-10", "backup-2025-01-11", "backup-2025-01-12"]
    },
    {
      "id": "cp-002",
      "type": "incremental",
      "parent": "cp-001",
      "timestamp": "2025-01-11T10:00:00Z",
      "vmBackup": "vmb-002",
      "files": ["vmb-002-disk0.qcow2", "vmb-002-disk1.qcow2"],
      "pvcs": ["my-vm-pvc-0", "my-vm-pvc-1"],
      "referencedBy": ["backup-2025-01-11", "backup-2025-01-12"]
    },
    {
      "id": "cp-003",
      "type": "incremental",
      "parent": "cp-002",
      "timestamp": "2025-01-12T10:00:00Z",
      "vmBackup": "vmb-003",
      "files": ["vmb-003-disk0.qcow2", "vmb-003-disk1.qcow2"],
      "pvcs": ["my-vm-pvc-0", "my-vm-pvc-1"],
      "referencedBy": ["backup-2025-01-12"]
    }
  ]
}
```
Example of a per-backup manifest:
```
Per-Backup Manifest (manifests/<backup-name>/index.json):

{
  "backupName": "backup-2025-01-12",
  "timestamp": "2025-01-12T10:00:00Z"
}
```
Example of a per-backup-per-VM manifest:
```
Per-Backup-per-VM Manifest (manifests/<backup-name>/<vm-name>.json):

{
  "namespace": "default",
  "name": "my-vm",
  "checkpointChain": ["cp-001", "cp-002", "cp-003"]
}
```
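A sketch of the Go types the controller might use to serialize these files; the field names simply mirror the JSON examples above:

```go
package metadatasketch

import "time"

// Checkpoint is one entry in the per-VM index file.
type Checkpoint struct {
    ID           string    `json:"id"`
    Type         string    `json:"type"`             // "full" or "incremental"
    Parent       string    `json:"parent,omitempty"` // empty for full backups
    Timestamp    time.Time `json:"timestamp"`
    VMBackup     string    `json:"vmBackup"`
    Files        []string  `json:"files"`
    PVCs         []string  `json:"pvcs"`
    ReferencedBy []string  `json:"referencedBy"`
}

// VMIndex is checkpoints/<ns>/<vm-name>/index.json.
type VMIndex struct {
    VMName      string       `json:"vmName"`
    Namespace   string       `json:"namespace"`
    Checkpoints []Checkpoint `json:"checkpoints"`
}

// BackupManifest is manifests/<backup-name>/index.json.
type BackupManifest struct {
    BackupName string    `json:"backupName"`
    Timestamp  time.Time `json:"timestamp"`
}

// BackupVMManifest is manifests/<backup-name>/<vm-name>.json.
type BackupVMManifest struct {
    Namespace       string   `json:"namespace"`
    Name            string   `json:"name"`
    CheckpointChain []string `json:"checkpointChain"`
}
```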
### Where/how to store qcow2 files and metadata

- Current approach:
  - Use the Velero object store plugin API, but not the Velero-specific logic in `persistence/object-store.go`
  - Create a top-level dir in the BSL (under the BSL prefix, parallel to backups/restores/kopia) for the kubevirt datamover.
    - Actually, this may have to be outside the prefix (i.e. if the prefix is "data", then we may need to create a parallel dir "data-kubevirt" or something similar, since I think Velero allows only its own files under the prefix)
  - Copy individual qcow2 files and metadata files identifying the required qcow2 checkpoints. We may want a subdir per VirtualMachine for qcow2 files. Metadata files should probably be organized by Velero backup.
  - We need to manage storage usage on backup deletion -- when deleting a backup, we should delete any qcow2 files no longer referenced by any still-existing backup (see the garbage-collection sketch after this list).
- Other approaches:
  - On a volume in the cluster
    - Likely the simplest approach
    - The volume must be mounted by the controller pod
    - Will require its own periodic Velero backups (less frequent than the VM incremental backups) for disaster recovery purposes
  - Using kopia
    - We could use kopia on top of the object storage API, but it is not clear that this would provide any real benefit, since we're already working with files that represent just the data diff we need. We can simply manage them as individual objects.
    - This would also require additional overhead around kopia maintenance, and we may still need to manage qcow2 file deletion manually.
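The garbage-collection sketch referenced in the current approach above, reusing the per-VM index types from the metadata section. It assumes the `referencedBy` list is kept transitively complete (every backup whose checkpoint chain includes a checkpoint appears in that checkpoint's list, as in the example index), so chain parents stay protected while any descendant is still referenced:

```go
package metadatasketch

// pruneIndexForDeletedBackup removes a deleted Velero backup from every
// checkpoint's referencedBy list and returns the checkpoints whose qcow2
// files are no longer referenced by any remaining backup and can therefore
// be deleted from object storage.
func pruneIndexForDeletedBackup(idx *VMIndex, deletedBackup string) []Checkpoint {
    var orphaned, kept []Checkpoint
    for _, cp := range idx.Checkpoints {
        var refs []string
        for _, b := range cp.ReferencedBy {
            if b != deletedBackup {
                refs = append(refs, b)
            }
        }
        cp.ReferencedBy = refs
        if len(refs) == 0 {
            orphaned = append(orphaned, cp)
        } else {
            kept = append(kept, cp)
        }
    }
    idx.Checkpoints = kept
    return orphaned
}
```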
> **Member:** The backup and restore flows involve multiple components (BIA/RIA plugins, DataUpload/DataDownload, controller, VirtualMachineBackup CR, datamover pods). Visual diagrams would help readers follow the sequence. Here are suggested mermaid diagrams (please let me know if they are accurate, or change them as needed to add into the design doc):

```mermaid
sequenceDiagram
    participant Velero
    participant VMI_BIA as VMI BIA Plugin
    participant DU as DataUpload CR
    participant Controller as Kubevirt Datamover Controller
    participant VMBackup as VirtualMachineBackup CR
    participant KubeVirt
    participant TempPVC as Temporary PVC
    participant Pod as Datamover Pod
    participant BSL as Object Storage (BSL)
    Velero->>VMI_BIA: Backup VirtualMachineInstance
    VMI_BIA->>VMI_BIA: Check CBT enabled & volume policies
    VMI_BIA->>TempPVC: Create temporary PVC
    VMI_BIA->>DU: Create DataUpload (DataMover: "kubevirt")
    Controller->>DU: Reconcile DataUpload
    Controller->>VMBackup: Create VirtualMachineBackup CR
    VMBackup->>KubeVirt: Trigger backup
    KubeVirt->>KubeVirt: Extract changed blocks via libvirt/QEMU
    KubeVirt->>TempPVC: Write qcow2 file(s)
    KubeVirt->>VMBackup: Update status: Complete
    Controller->>Pod: Launch datamover pod
    Pod->>TempPVC: Mount & read qcow2 files
    Pod->>BSL: Upload qcow2 + metadata
    Pod->>Controller: Exit success
    Controller->>DU: Update status: Complete
```

```mermaid
sequenceDiagram
    participant Velero
    participant VMI_RIA as VMI RIA Plugin
    participant DD as DataDownload CR
    participant Controller as Kubevirt Datamover Controller
    participant TempPVC as Temporary PVC
    participant TargetPVC as Target PVC(s)
    participant Pod as Datamover Pod
    participant BSL as Object Storage (BSL)
    Velero->>VMI_RIA: Restore VirtualMachineInstance
    VMI_RIA->>TempPVC: Create temporary PVC
    VMI_RIA->>DD: Create DataDownload (DataMover: "kubevirt")
    Controller->>DD: Reconcile DataDownload
    Controller->>TargetPVC: Create target PVC(s) for VM disks
    Controller->>Pod: Launch datamover pod
    Pod->>BSL: Download qcow2 files (full + incrementals)
    Pod->>TempPVC: Write qcow2 files
    Pod->>Pod: qemu-img rebase (chain incrementals)
    Pod->>Pod: qemu-img convert -f qcow2 -O raw
    Pod->>TargetPVC: Write raw disk image
    Pod->>Controller: Exit success
    Controller->>DD: Update status: Complete
    Controller->>TempPVC: Delete temporary PVC
    Note over TargetPVC: VM can now boot with restored disk
```

> **Contributor (author):** @weshayutin has created a backup flow diagram on a wiki page. We can add a similar restore one -- comparing with these to make sure we've gotten it all right.
### Open questions

- How to determine PVC size?
  - User-configurable? ConfigMap or annotation?
  - From the kubevirt enhancement: "Before the process begins, an estimation of the required backup size will be performed. If the provided PVC size is insufficient, an error will be returned"
  - If the PVC is too small, we need a clear error on the backup indicating that it failed due to insufficient PVC space.
  - Since the controller (rather than the plugin) is responsible for PVC creation, the controller may be able to respond to PVC-too-small errors by retrying with a larger PVC (see the sketch below).
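One possible shape for that retry, shown as a hedged sketch with an explicit growth factor and cap so the loop is bounded; the actual limits, and how the insufficient-size error is detected, would need to be settled during implementation:

```go
package controllersketch

import (
    "fmt"

    "k8s.io/apimachinery/pkg/api/resource"
)

// nextPVCSize grows the temporary PVC request after an "insufficient size"
// failure, doubling up to a configured cap. It returns an error once the cap
// is reached so the DataUpload can be failed with a clear message.
func nextPVCSize(current, max resource.Quantity) (resource.Quantity, error) {
    if current.Cmp(max) >= 0 {
        return current, fmt.Errorf("temporary PVC already at maximum size %s", max.String())
    }
    next := current.DeepCopy()
    next.Add(current) // double the current request
    if next.Cmp(max) > 0 {
        next = max.DeepCopy()
    }
    return next, nil
}
```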
> **Contributor:** The PVC resize retry mechanism needs bounds and a backoff strategy. The design suggests the controller "may be able to respond to PVC too small errors by retrying with a larger PVC" but doesn't specify them. Without these constraints, the retry loop could consume excessive storage or cause rapid failure loops. Recommend specifying concrete bounds and exponential backoff before implementation.
- The kubevirt datamover controller will be responsible for deleting the `VirtualMachineBackup` resource once it's no longer needed. When should this happen? Upon Velero backup deletion? That would enable debugging in the case of failed operations; if we delete it immediately, troubleshooting becomes more difficult. If we delete on backup deletion, we'll need to write a `DeleteItemAction` plugin.
- Do we need an option to force full backups? If we're always doing incremental backups, the incremental chain eventually becomes very long, requiring possibly hundreds of incremental files to be applied for a single restore.
  - For the initial release, we can add a force-full-virt-backup annotation on the Velero backup. Longer term, we can push for a general datamover feature in Velero which could force full backups for both fs-backup and the Velero datamover if `backup.Spec.ForceFullVolumeBackup` is true; once implemented, the qcow2 datamover can use this as well.
### General notes

- SnapshotMoveData must be true on the backup, or DU/DD processing won't work properly
- Longer term, we can probably eliminate some of the custom code in the new controller by refactoring the Velero datamover pluggability features, allowing the node agent to orchestrate this (with a custom image and configuration for datamover pods, etc.)
- The kubevirt enhancement references both "push mode" and "pull mode" -- the initial implementation on the kubevirt side will be push mode only. This OADP proposal is also push mode only for the initial implementation.
> **Contributor:** Critical prerequisite validation is missing from the plugin design. The general notes state that "SnapshotMoveData must be true on the backup", but this prerequisite is buried in the general notes rather than enforced in the VirtualMachine BIA plugin. If SnapshotMoveData is false, the entire datamover flow will fail, yet the design doesn't show where this validation occurs. Recommend adding explicit validation in the VirtualMachine BIA plugin to check SnapshotMoveData and fail fast with a clear error message if it is false, rather than allowing the backup to proceed only to fail later in the controller.
>
> **Contributor:** Major operational concerns are missing from the open questions. That section addresses PVC sizing and force-full backups but omits several critical operational concerns: backup retention and lifecycle, disaster recovery, cross-cluster restore, upgrade path, and testing and validation. These gaps represent significant operational and reliability risks for production deployments.