From 14e99e7ac7246340ae57dbdf327cb96820e36da2 Mon Sep 17 00:00:00 2001 From: Jack Yu Date: Fri, 9 Jan 2026 14:47:00 +0800 Subject: [PATCH 1/2] doc: vm random migration issue Signed-off-by: Jack Yu --- docs/upgrade/v1-5-x-to-v1-6-x.md | 11 ++++++++++- .../version-v1.6/upgrade/v1-5-x-to-v1-6-x.md | 10 +++++++++- .../version-v1.7/upgrade/v1-5-x-to-v1-6-x.md | 10 +++++++++- 3 files changed, 28 insertions(+), 3 deletions(-) diff --git a/docs/upgrade/v1-5-x-to-v1-6-x.md b/docs/upgrade/v1-5-x-to-v1-6-x.md index 6978c3d472..ca2b0d20f6 100644 --- a/docs/upgrade/v1-5-x-to-v1-6-x.md +++ b/docs/upgrade/v1-5-x-to-v1-6-x.md @@ -254,7 +254,15 @@ Before Harvester v1.6.0, the controller patched the MAC address from the VMI int Starting from v1.6.0, to support the CPU and Memory hot-plug feature and to inform users that certain CPU and memory changes might not take effect immediately, we decided to expose the “RestartRequired” condition in the UI. That’s why this message appears after upgrading Harvester or updating the harvester-ui-extension to v1.6.x. -### 7. Change in Default VLAN Behavior for Secondary Pod Interfaces +### 7. Virtual Machine Cannot Be Live Migrated to the Target Node + +After upgrading Harvester to v1.6.x, virtual machines from v1.5.x cannot be live migrated to a user-specified target node. Instead, the virtual machines are migrated to a random node. This issue occurs because of the `RestartRequired` condition, which indicates that the virtual machines require a restart. + +The workaround is to restart the virtual machine. + +Related issue: [#9739](https://github.com/harvester/harvester/issues/9739) + +### 8. Change in Default VLAN Behavior for Secondary Pod Interfaces In v1.6.0 and earlier versions, pods with secondary network interfaces (such as VM networks and storage networks) were automatically assigned to VLAN ID 1 and the VLAN ID configured in the VLAN network. 
This dual-VLAN ID configuration allowed the Harvester network bridge to forward untagged traffic to the veth interfaces of these pods. @@ -263,3 +271,4 @@ This behavior changed in Harvester v1.6.1, which uses v1.8.0 of the CNI bridge p The change affects clusters upgraded from v1.5.x to v1.6.1 if the external switch port is configured as an access port sending untagged frames. Updating the external switch configuration to use a trunk port resolves the issue. Pods with secondary interfaces that are attached to untagged networks or associated with VLAN ID 1 are not affected. Related issue: [#8816](https://github.com/harvester/harvester/issues/8816) + diff --git a/versioned_docs/version-v1.6/upgrade/v1-5-x-to-v1-6-x.md b/versioned_docs/version-v1.6/upgrade/v1-5-x-to-v1-6-x.md index dd4e5b706a..8366b36c8c 100644 --- a/versioned_docs/version-v1.6/upgrade/v1-5-x-to-v1-6-x.md +++ b/versioned_docs/version-v1.6/upgrade/v1-5-x-to-v1-6-x.md @@ -254,7 +254,15 @@ Before Harvester v1.6.0, the controller patched the MAC address from the VMI int Starting from v1.6.0, to support the CPU and Memory hot-plug feature and to inform users that certain CPU and memory changes might not take effect immediately, we decided to expose the “RestartRequired” condition in the UI. That’s why this message appears after upgrading Harvester or updating the harvester-ui-extension to v1.6.x. -### 7.Change in default VLAN Behavior for Secondary Pod Interfaces +### 7. Virtual Machine Cannot Be Live Migrated to the Target Node + +After upgrading Harvester to v1.6.x, virtual machines from v1.5.x cannot be live migrated to a user-specified target node. Instead, the virtual machines are migrated to a random node. This issue occurs because of the `RestartRequired` condition, which indicates that the virtual machines require a restart. + +The workaround is to restart the virtual machine. 
+ +Related issue: [#9739](https://github.com/harvester/harvester/issues/9739) + +### 8. Change in Default VLAN Behavior for Secondary Pod Interfaces In v1.6.0 and earlier versions, pods with secondary network interfaces (such as VM networks and storage networks) were automatically assigned to VLAN ID 1 and the VLAN ID configured in the VLAN network. This dual-VLAN ID configuration allowed the Harvester network bridge to forward untagged traffic to the veth interfaces of these pods. diff --git a/versioned_docs/version-v1.7/upgrade/v1-5-x-to-v1-6-x.md b/versioned_docs/version-v1.7/upgrade/v1-5-x-to-v1-6-x.md index 6978c3d472..751dc76511 100644 --- a/versioned_docs/version-v1.7/upgrade/v1-5-x-to-v1-6-x.md +++ b/versioned_docs/version-v1.7/upgrade/v1-5-x-to-v1-6-x.md @@ -254,7 +254,15 @@ Before Harvester v1.6.0, the controller patched the MAC address from the VMI int Starting from v1.6.0, to support the CPU and Memory hot-plug feature and to inform users that certain CPU and memory changes might not take effect immediately, we decided to expose the “RestartRequired” condition in the UI. That’s why this message appears after upgrading Harvester or updating the harvester-ui-extension to v1.6.x. -### 7. Change in Default VLAN Behavior for Secondary Pod Interfaces +### 7. Virtual Machine Cannot Be Live Migrated to the Target Node + +After upgrading Harvester to v1.6.x, virtual machines from v1.5.x cannot be live migrated to a user-specified target node. Instead, the virtual machines are migrated to a random node. This issue occurs because of the `RestartRequired` condition, which indicates that the virtual machines require a restart. + +The workaround is to restart the virtual machine. + +Related issue: [#9739](https://github.com/harvester/harvester/issues/9739) + +### 8. 
Change in Default VLAN Behavior for Secondary Pod Interfaces In v1.6.0 and earlier versions, pods with secondary network interfaces (such as VM networks and storage networks) were automatically assigned to VLAN ID 1 and the VLAN ID configured in the VLAN network. This dual-VLAN ID configuration allowed the Harvester network bridge to forward untagged traffic to the veth interfaces of these pods. From d7c7ec497b39ed883fb7cfa2e5b5f85447c380c5 Mon Sep 17 00:00:00 2001 From: Jack Yu Date: Mon, 12 Jan 2026 11:07:00 +0800 Subject: [PATCH 2/2] docs: update based on the feedback Signed-off-by: Jack Yu --- docs/upgrade/v1-5-x-to-v1-6-x.md | 2 +- versioned_docs/version-v1.6/upgrade/v1-5-x-to-v1-6-x.md | 2 +- versioned_docs/version-v1.6/vm/live-migration.md | 6 ++++++ versioned_docs/version-v1.7/upgrade/v1-5-x-to-v1-6-x.md | 2 +- 4 files changed, 9 insertions(+), 3 deletions(-) diff --git a/docs/upgrade/v1-5-x-to-v1-6-x.md b/docs/upgrade/v1-5-x-to-v1-6-x.md index ca2b0d20f6..fe11978853 100644 --- a/docs/upgrade/v1-5-x-to-v1-6-x.md +++ b/docs/upgrade/v1-5-x-to-v1-6-x.md @@ -256,7 +256,7 @@ Starting from v1.6.0, to support the CPU and Memory hot-plug feature and to info ### 7. Virtual Machine Cannot Be Live Migrated to the Target Node -After upgrading Harvester to v1.6.x, virtual machines from v1.5.x cannot be live migrated to a user-specified target node. Instead, the virtual machines are migrated to a random node. This issue occurs because of the `RestartRequired` condition, which indicates that the virtual machines require a restart. +After upgrading Harvester to v1.6.x, KubeVirt may report that some virtual machines are pending restart. These virtual machines cannot be live migrated to a user-specified target node from the UI until they are restarted. Once the virtual machines are restarted, subsequent node-specific live migrations work as expected. The workaround is to restart the virtual machine. 
diff --git a/versioned_docs/version-v1.6/upgrade/v1-5-x-to-v1-6-x.md b/versioned_docs/version-v1.6/upgrade/v1-5-x-to-v1-6-x.md index 8366b36c8c..56a9846250 100644 --- a/versioned_docs/version-v1.6/upgrade/v1-5-x-to-v1-6-x.md +++ b/versioned_docs/version-v1.6/upgrade/v1-5-x-to-v1-6-x.md @@ -256,7 +256,7 @@ Starting from v1.6.0, to support the CPU and Memory hot-plug feature and to info ### 7. Virtual Machine Cannot Be Live Migrated to the Target Node -After upgrading Harvester to v1.6.x, virtual machines from v1.5.x cannot be live migrated to a user-specified target node. Instead, the virtual machines are migrated to a random node. This issue occurs because of the `RestartRequired` condition, which indicates that the virtual machines require a restart. +After upgrading Harvester to v1.6.x, KubeVirt may report that some virtual machines are pending restart. These virtual machines cannot be live migrated to a user-specified target node from the UI until they are restarted. Once the virtual machines are restarted, subsequent node-specific live migrations work as expected. The workaround is to restart the virtual machine. diff --git a/versioned_docs/version-v1.6/vm/live-migration.md b/versioned_docs/version-v1.6/vm/live-migration.md index c658e5039f..02e75bd4ac 100644 --- a/versioned_docs/version-v1.6/vm/live-migration.md +++ b/versioned_docs/version-v1.6/vm/live-migration.md @@ -133,6 +133,12 @@ The **Migrate** menu option is not available in the following situations: ::: +:::caution + +Due to a [limitation](https://kubevirt.io/user-guide/user_workloads/vm_rollout_strategies/#restartrequired-condition) in the VM rollout strategy implementation, virtual machines that are pending restart may be live-migrated to a random node instead of the user-specified target node. The virtual machines must be restarted before subsequent node-specific live migrations can succeed. 
+ +::: + ![](/img/v1.2/vm/migrate.png) ## Aborting a Migration diff --git a/versioned_docs/version-v1.7/upgrade/v1-5-x-to-v1-6-x.md b/versioned_docs/version-v1.7/upgrade/v1-5-x-to-v1-6-x.md index 751dc76511..7b1fa776d3 100644 --- a/versioned_docs/version-v1.7/upgrade/v1-5-x-to-v1-6-x.md +++ b/versioned_docs/version-v1.7/upgrade/v1-5-x-to-v1-6-x.md @@ -256,7 +256,7 @@ Starting from v1.6.0, to support the CPU and Memory hot-plug feature and to info ### 7. Virtual Machine Cannot Be Live Migrated to the Target Node -After upgrading Harvester to v1.6.x, virtual machines from v1.5.x cannot be live migrated to a user-specified target node. Instead, the virtual machines are migrated to a random node. This issue occurs because of the `RestartRequired` condition, which indicates that the virtual machines require a restart. +After upgrading Harvester to v1.6.x, KubeVirt may report that some virtual machines are pending restart. These virtual machines cannot be live migrated to a user-specified target node from the UI until they are restarted. Once the virtual machines are restarted, subsequent node-specific live migrations work as expected. The workaround is to restart the virtual machine.
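The `RestartRequired` state that both patches describe is surfaced as a standard Kubernetes-style condition in the VirtualMachine status. As a rough illustration (not part of the patch), the check that determines whether a VM is pending restart can be sketched in Python; the sample VM object below is hypothetical, but the condition shape follows the usual KubeVirt convention:

```python
def needs_restart(vm: dict) -> bool:
    """Return True when a KubeVirt VirtualMachine manifest carries a
    RestartRequired condition with status "True"."""
    conditions = vm.get("status", {}).get("conditions", [])
    return any(
        c.get("type") == "RestartRequired" and c.get("status") == "True"
        for c in conditions
    )

# Hypothetical VM object, as it might look after an upgrade from v1.5.x.
upgraded_vm = {
    "metadata": {"name": "demo-vm", "namespace": "default"},
    "status": {
        "conditions": [
            {"type": "Ready", "status": "True"},
            {"type": "RestartRequired", "status": "True"},
        ]
    },
}

print(needs_restart(upgraded_vm))  # True: restart before a node-specific migration
```

A VM in this state can be restarted from the UI or with `virtctl restart <name>`, after which node-specific live migration behaves normally.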