Production-focused warm migration tooling for VMware to Apache CloudStack.
This repository contains:
- a Go migration engine
- an HTTP API service
- a web UI for batch migration planning and monitoring
The project is designed around near-live migration:
- copy the base image once
- keep the target updated with CBT-driven delta rounds
- cut over only for the final sync and import boundary
For each VM, the engine:
- Connects to vCenter and discovers the source VM, disks, and NICs.
- Ensures VMware CBT is enabled.
- Creates a base snapshot.
- Copies VMware disks directly into QCOW2 on CloudStack primary storage.
- Runs repeated CBT-native delta rounds.
- At finalize time, powers off or waits for shutdown according to policy.
- Runs a final delta sync.
- Optionally runs `virt-v2v-in-place`.
- Imports the root disk into CloudStack, then imports and attaches data disks and NICs.
The source VM downtime is therefore limited to the final shutdown + final sync + import window, not the full disk copy time.
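The downtime bound above can be sanity-checked with rough per-phase numbers (all values here are illustrative placeholders, not measurements):

```shell
# Cutover window = final shutdown + final sync + import, per the statement above.
shutdown=60      # guest shutdown (seconds)
final_sync=120   # last CBT delta round (seconds)
import=90        # CloudStack import (seconds)
echo "estimated downtime: $((shutdown + final_sync + import))s"
```

The full base copy time (often hours) does not appear in this sum; it happens while the source VM is still running.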
- Warm migration using VMware CBT (`QueryChangedDiskAreas`)
- Direct QCOW2 target writes, no RAW intermediate
- Parallel VM and parallel disk execution
- Resume-safe workflow with per-VM state under `/var/lib/vm-migrator`
- Optional `virt-v2v-in-place`
- Multi-disk-aware import and conversion planning
- UI, API, and CLI control for Finalize / Finalize Now
- Pending-action workflow for shutdown decisions when VMware Tools are unavailable
- Retry failed jobs from the UI or API
- Retry failed conversions with `virt-v2v` debug enabled
- CloudStack primary storage support for:
  - NFS
  - Shared Mountpoint
Use saved vCenter and CloudStack profiles to switch quickly between environments.
Select one or more source VMs, then choose which VM is currently active for editing.
Choose CloudStack placement, guest mapping, boot/storage behavior, and migration strategy settings such as finalize schedule, shutdown mode, and snapshot quiesce.
Review NIC mappings and the generated per-VM summary before generating specs or starting migration.
The Progress view shows job status, current stage, next stage, finalize state, throughput, and quick actions like retry, finalize, and finalize-now.
Each job can be expanded to show disk progress and the live stdout/stderr logs from the engine.
Disk-level read progress, estimated used size, throughput, and logs help diagnose whether a migration is copying, converting, waiting on VMware, or blocked on a later stage.
- Source read path: VMware VDDK
- Base copy path: VDDK -> QCOW2
- Delta path: VMware CBT ranges -> targeted QCOW2 updates
- Conversion path: optional `virt-v2v-in-place`
- Import path:
  - `importVm` for the root disk
  - `importVolume` for additional data disks
  - `attachVolume` for imported data disks
The engine persists workflow state and disk progress to:
`/var/lib/vm-migrator/<vm>_<moref>/state.json`
Control markers and logs also live in that same runtime directory.
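A quick way to see where a VM stands is to read the stage back out of the persisted state. The sample file and field names below are illustrative assumptions; inspect your real `state.json` for the actual schema:

```shell
# Sketch: pull the current stage out of a persisted state.json.
dir=$(mktemp -d)   # stand-in for /var/lib/vm-migrator/<vm>_<moref>
printf '{"stage":"delta","overall_progress":82}\n' > "$dir/state.json"
# Extract the "stage" value with sed (jq works too, if installed).
sed -n 's/.*"stage":"\([^"]*\)".*/\1/p' "$dir/state.json"
```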
The UI/API can now surface early stages before copy begins:

- `connecting_vcenter`
- `finding_vm`
- `discovering_vmware_disks`
- `preparing_target_storage`
- `enabling_cbt`
- `creating_base_snapshot`
- `base_copy`
- `delta`
- `awaiting_shutdown_action`
- `final_sync`
- `converting`
- `import_root_disk`
- `import_data_disk`
- `done`
This helps distinguish “not started copying yet” from “slow copy”.
This release supports:
- NFS primary storage
- Shared Mountpoint primary storage
- NFS:
  - The engine mounts the pool path when needed.
  - On Ubuntu, engine-managed mounts default to NFSv3-style options to avoid QCOW2 flush issues observed on some NFSv4 environments.
  - Mount options can be overridden with `V2C_NFS_MOUNT_OPTS`.
- Shared Mountpoint:
  - The engine uses the CloudStack path directly.
  - No mount or unmount is attempted by the engine.
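The mount-option override is a plain environment variable read by the engine. The option string below is only an example (it assumes a filer that behaves well with NFSv3 and `nolock`); tune it for your storage:

```shell
# Example override for engine-managed NFS mounts.
export V2C_NFS_MOUNT_OPTS="vers=3,tcp,nolock,hard"
echo "$V2C_NFS_MOUNT_OPTS"
```

Set it in the service environment (for example via a systemd drop-in) so `v2c-engine` sees it at mount time.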
- Preflight validation checks:
- path exists
- path is a directory
- path is writable
- write/delete works
- free-space check where possible
- Ceph/RBD import flow is not enabled here because current CloudStack `importVm` support is not sufficient for that path.
- Linux host
- VMware VDDK installed
  - must include `include/vixDiskLib.h`
  - must include `lib64/libvixDiskLib.so*`
  - official download: Broadcom VDDK
- Root or sudo access
- vCenter credentials
- CloudStack API access
- Network connectivity from migration host to VMware and CloudStack endpoints
This repository does not redistribute VDDK. Users must obtain it directly from Broadcom and accept Broadcom licensing separately.
For Windows conversions, `virt-v2v-in-place` needs virtio driver assets available through `VIRTIO_WIN`.

This project resolves them from:

- the `virt.virtio_iso` config setting
- `/usr/share/virtio-win/virtio-win.iso`
- `/usr/share/virtio-win`
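The lookup above can be sketched as a first-match scan over those candidates. `resolve_virtio` is a hypothetical helper, not the engine's actual code, and the order (config value first) is an assumption:

```shell
# First-match lookup over the documented virtio-win candidates.
resolve_virtio() {
  for p in "$1" /usr/share/virtio-win/virtio-win.iso /usr/share/virtio-win; do
    if [ -n "$p" ] && [ -e "$p" ]; then echo "$p"; return 0; fi
  done
  return 1   # nothing found; Windows conversion would fail
}
iso=$(mktemp)             # pretend this is a configured virtio_iso path
resolve_virtio "$iso"     # prints the configured path, since it exists
```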
Bootstrap prepares this automatically:
- EL-family hosts:
  - adds the `virtio-win` repo
  - installs `virtio-win`
- Ubuntu hosts:
  - converts the upstream `virtio-win.noarch.rpm` with `alien`
  - installs the resulting package
  - extracts `srvany` helpers into `/usr/share/virt-tools`
Required access for the current implementation:
- Migration host -> vCenter: `443/TCP`
- Migration host -> ESXi hosts serving source VM disks: `902/TCP` and `443/TCP`
- Migration host -> CloudStack API: `80/TCP`, `8080/TCP`, or `443/TCP` depending on configured endpoint
- Migration host -> NFS primary storage:
  - at least the ports required by your NFS version and mount options
- Browser/admin workstation -> migration host:
  - `5173/TCP` for the UI
  - `8000/TCP` for the API, if used directly
Notes:
- CloudStack management server does not need direct VMware connectivity for this tool.
- `qemu-nbd` is used locally via a Unix socket, not as a network listener.
```shell
git clone https://github.com/prashanthr2/vmware-to-cloudstack.git
cd vmware-to-cloudstack
```

Before bootstrap, make sure you have either:
- an extracted VDDK directory, or
- a VDDK tarball
Example with extracted VDDK:
```shell
chmod +x ./scripts/bootstrap.sh
sudo ./scripts/bootstrap.sh --vddk-dir /opt/vmware-vddk/vmware-vix-disklib-distrib --install-service --with-ui
```

Example with VDDK tarball:
```shell
chmod +x ./scripts/bootstrap.sh
sudo ./scripts/bootstrap.sh --vddk-tar /tmp/VMware-vix-disklib-*.tar.gz --install-service --with-ui
```

Edit the configs:

```shell
sudo vi /etc/v2c-engine/config.yaml
sudo vi /etc/v2c-ui/.env.local
```

Set the UI API base in `/etc/v2c-ui/.env.local`:
```shell
VITE_API_BASE=http://<migration-host-ip>:8000
```

Enable and start the services:

```shell
sudo systemctl enable --now v2c-engine v2c-ui
systemctl status v2c-engine v2c-ui
```

- UI: `http://<migration-host-ip>:5173`
- Health check: `curl -s http://<migration-host-ip>:8000/health`
Supported options:
- `--vddk-dir <path>`
- `--vddk-tar <path>`
- `--config <path>`
- `--bin-path <path>`
- `--listen <addr>`
- `--ui-listen <addr>`
- `--install-service`
- `--with-ui`
- `--start-services`
- `--skip-build`
Recommended flow:
- bootstrap without auto-start
- edit config
- enable and start services
```shell
sudo ./scripts/bootstrap.sh --vddk-dir /opt/vmware-vddk/vmware-vix-disklib-distrib --install-service --with-ui
sudo vi /etc/v2c-engine/config.yaml
sudo vi /etc/v2c-ui/.env.local
sudo systemctl enable --now v2c-engine v2c-ui
```

Use `--start-services` only when the config files already contain real values.
- Engine binary: `/usr/local/bin/v2c-engine`
- Engine config: `/etc/v2c-engine/config.yaml`
- Optional build env helper: `/etc/v2c-engine/build.env`
- UI env file: `/etc/v2c-ui/.env.local`
- Runtime state and logs: `/var/lib/vm-migrator`
Bootstrap intentionally does not install a global `LD_LIBRARY_PATH` profile script, because VDDK libraries can interfere with unrelated host tools.
The main behavior is controlled by the `migration:` block in the VM spec.

Use `delta_interval` to keep running incremental rounds before cutover:

```yaml
migration:
  delta_interval: 300
```

Behavior:
- base copy completes first
- engine waits `delta_interval` seconds between rounds
- repeated CBT-native delta rounds continue until finalize is requested
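The loop above can be sketched as follows. This is illustration only; the real loop lives in the Go engine, and `finalize_requested` and the marker file are hypothetical stand-ins for the engine's state handling:

```shell
# Delta rounds repeat until a finalize request appears.
finalize_requested() { [ -f "$1/finalize.requested" ]; }
delta_loop() {
  d=$1; interval=$2; rounds=0
  while ! finalize_requested "$d"; do
    rounds=$((rounds + 1))            # one CBT-native delta round
    # sleep "$interval"               # real loop waits delta_interval seconds
    if [ "$rounds" -ge 3 ]; then      # demo: operator requests cutover here
      touch "$d/finalize.requested"
    fi
  done
  echo "delta rounds before cutover: $rounds"
}
dir=$(mktemp -d)
delta_loop "$dir" 300
```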
Use a scheduled cutover:

```yaml
migration:
  delta_interval: 300
  finalize_at: "2026-03-12T23:30:00+00:00"
  finalize_delta_interval: 30
  finalize_window: 600
  finalize_settle_seconds: 30
```

Fields:
- `finalize_at`
- `finalize_delta_interval`
- `finalize_window`
- `finalize_settle_seconds`
Default settle delay when omitted or 0:
- Windows: `30`
- Linux/other: `15`
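Those defaults can be expressed as a tiny helper. `settle_default` is hypothetical; it just mirrors the documented behavior when `finalize_settle_seconds` is omitted or 0:

```shell
# Default settle delay (seconds) by guest family.
settle_default() {
  case "$1" in
    windows*) echo 30 ;;   # Windows guests get a longer settle before final sync
    *)        echo 15 ;;   # Linux and other guests
  esac
}
settle_default windows2019   # -> 30
settle_default centos7       # -> 15
```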
Supported from:
- CLI
- API
- UI
Finalize:
- requests cutover
- workflow picks it up in normal delta loop
Finalize Now:
- interrupts delta wait
- proceeds to finalization as soon as allowed
If the workflow is still in `base_copy`, the base copy completes first and the engine then goes directly into finalization.
Shutdown policy is controlled by `migration.shutdown_mode`.

Supported values: `auto`, `manual`, `force`
`auto`:
- If VMware Tools are healthy, the engine uses guest shutdown.
- If VMware Tools are unavailable, the engine pauses and asks for operator action instead of forcing power off immediately.
- In that case the UI/API exposes `Force Power Off` and `Manual Shutdown Done`.
- The engine also continues automatically if it observes that the source VM has been powered off manually before the user confirms.

`manual`:
- engine waits for the VM to be powered off externally
- no forced shutdown is attempted

`force`:
- engine force powers off the source VM at finalize time
Snapshot quiesce policy is controlled by `migration.snapshot_quiesce`.

Supported values: `auto`, `true`, `false`
Behavior:
- `auto`:
  - tries a quiesced snapshot when VMware Tools are healthy
  - falls back to non-quiesced when tools are unavailable or quiesce cannot be used
- `true`: requests quiesced snapshots
- `false`: always uses non-quiesced snapshots
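Both policies live in the same `migration:` block as the scheduling fields; a minimal fragment, for example:

```yaml
migration:
  shutdown_mode: auto       # auto | manual | force
  snapshot_quiesce: auto    # auto | true | false
```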
`virt-v2v-in-place` runs after the final sync when enabled.
Planning behavior:
- single-disk guests use boot-disk-only mode
- multi-disk guests are inspected and may use a temporary `libvirtxml` mode when the guest OS spans multiple disks
Safety improvements in current main:
- single-disk conversion fails early if boot-disk inspection finds no guest OS or no root device
- Windows guests get stricter pre-conversion checks
- failed jobs can be retried with `virt-v2v` debug enabled
Failed jobs can be retried from:
- UI
- API
Standard retry:
- creates a new job
- keeps job history
Debug retry:
- runs `virt-v2v-in-place` with `-v -x`
- useful when conversion failures need upstream-quality diagnostics
API examples:
```
POST /migration/retry/{vm}
POST /migration/retry/{vm}?debug=true
```

The UI is served by `v2c-ui` and talks to `v2c-engine serve`.
- `GET /health`
- `GET /vmware/vms`
- `GET /cloudstack/zones`
- `GET /cloudstack/clusters`
- `GET /cloudstack/storage`
- `GET /cloudstack/networks`
- `GET /cloudstack/diskofferings`
- `GET /cloudstack/serviceofferings`
- `POST /migration/spec`
- `POST /migration/start`
- `GET /migration/jobs`
- `GET /migration/status/{vm}`
- `GET /migration/logs/{vm}`
- `POST /migration/finalize/{vm}`
- `POST /migration/finalize/{vm}?now=true`
- `POST /migration/retry/{vm}`
- `POST /migration/retry/{vm}?debug=true`
- `POST /migration/shutdown/{vm}?action=force`
- `POST /migration/shutdown/{vm}?action=manual`
Status fields include:

- `stage`
- `next_stage`
- `overall_progress`
- `transfer_speed_mbps`
- `disk_progress`
- `finalize_requested`
- `finalize_now_requested`
- `awaiting_user_action`
- `required_action`
- `available_actions`
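A status payload can be reduced to a one-line summary for dashboards or scripts. The JSON below is a hand-written sample using those field names, not a live response from the API:

```shell
# Sketch: summarize a status payload (sample data, not a real response).
status='{"stage":"delta","next_stage":"final_sync","transfer_speed_mbps":410}'
stage=$(echo "$status" | sed -n 's/.*"stage":"\([^"]*\)".*/\1/p')
next=$(echo "$status"  | sed -n 's/.*"next_stage":"\([^"]*\)".*/\1/p')
echo "stage=$stage next=$next"
```

In practice you would fetch the payload with `curl` from `GET /migration/status/{vm}` and parse it with `jq` if available.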
- batch VM selection
- per-VM storage and NIC mapping
- strategy settings including:
- delta interval
- finalize schedule
- settle delay
- shutdown mode
- snapshot quiesce
- migration progress table
- pending actions panel
- failed-job retry and retry-with-debug
- logs view
The import order is:
- optional `virt-v2v-in-place`
- `importVm` for the root disk
- attach additional NICs
- `importVolume` for each non-root disk
- `attachVolume` for each imported data disk
- `updateVirtualMachine` for VM-level settings
- optional start of the imported VM
Run:

```shell
./v2c-engine run --spec ./examples/spec.run.single-vm.single-disk.single-nic.yaml --config /etc/v2c-engine/config.yaml
```

Status:

```shell
./v2c-engine status --spec ./examples/spec.run.multi.example.yaml --config /etc/v2c-engine/config.yaml
./v2c-engine status --spec ./examples/spec.run.multi.example.yaml --vm Centos7 --json --config /etc/v2c-engine/config.yaml
```

Finalize:

```shell
./v2c-engine finalize --spec ./examples/spec.run.multi.example.yaml --vm Centos7 --config /etc/v2c-engine/config.yaml
./v2c-engine finalize --spec ./examples/spec.run.multi.example.yaml --vm Centos7 --now --config /etc/v2c-engine/config.yaml
```

Serve API:

```shell
./v2c-engine serve --config ./config.yaml --listen :8000
```

See examples/README.md.
Common templates:
- examples/config.full.example.yaml
- examples/spec.run.single-vm.single-disk.single-nic.yaml
- examples/spec.run.single-vm.multi-disk.multi-nic.yaml
- examples/spec.run.single-vm.defaults-only.yaml
- examples/spec.run.multi-vm.single-disk.single-nic.yaml
- examples/spec.run.multi-vm.multi-disk.multi-nic.yaml
- examples/spec.run.multi-vm.defaults-only.yaml
Known limitations and platform-specific workarounds are tracked in:
Current documented issues include:
- Ubuntu + some NFSv4 environments causing QCOW2 flush I/O errors
- `virt-v2v-in-place` failures for CentOS 7 XFS v4 guests on EL10 guestfs stacks
```shell
go build -o v2c-engine ./cmd/v2c-engine
```

If VDDK is in a non-default path:

```shell
export CGO_CFLAGS="-I/opt/vmware-vddk/include"
export CGO_LDFLAGS="-L/opt/vmware-vddk/lib64 -lvixDiskLib -ldl -lpthread"
```

To uninstall:

```shell
chmod +x ./scripts/uninstall.sh
sudo ./scripts/uninstall.sh --purge-state
```

To print the bootstrap package list for manual review:

```shell
./scripts/uninstall.sh --list-packages
```

The `base-copy` and `delta-sync` expert commands are hidden by default.
To enable direct expert usage:
```shell
export V2C_ENABLE_EXPERT_COMMANDS=1
```





