Summary
This work item enables FeOS to attach NVMe devices, exposed to the host as PCIe Virtual Functions (VFs), directly to a FeOS-managed Virtual Machine (VM). This allows a remote NVMe device, connected via a Bluefield DPU using NVMe-oF (RoCEv2), to be used as a high-performance boot or storage drive for a VM.
The core task is to extend the VM management API and backend to support PCIe device passthrough, identifying the NVMe device by its PCIe address. The scope also includes support for dynamically attaching and detaching these devices while the VM is running (hot-plugging).
Scope
✅ In Scope
- Extend the FeOS VM API to allow specifying one or more NVMe devices via their host PCIe address during VM creation/modification.
- Implement the backend logic for PCIe passthrough of an NVMe VF to a guest VM (e.g., using the IOMMU and `vfio-pci`); a host-readiness sketch follows this list.
- Ensure the guest VM can recognize and utilize the passed-through NVMe device.
- Support using the device as both a primary boot drive and a secondary storage volume.
- Support for dynamic hot-plug and hot-unplug of NVMe VFs to/from a running VM.
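As a hedged illustration of the host-side readiness check implied by the passthrough bullet above, the sketch below uses the standard Linux sysfs layout for PCI devices; the function name and the example VF address are made up for this issue and are not existing FeOS code.

```rust
use std::fs;
use std::path::Path;

/// Check that a PCIe device (BDF form, e.g. "0000:65:00.1") is in an IOMMU
/// group and currently bound to the vfio-pci driver. Uses the standard Linux
/// sysfs layout; the function name itself is illustrative.
fn ready_for_passthrough(bdf: &str) -> bool {
    let dev = Path::new("/sys/bus/pci/devices").join(bdf);

    // The device must be visible on the host, and the kernel only creates the
    // iommu_group symlink when the IOMMU is enabled -- without it, VFIO
    // passthrough is impossible.
    if !dev.exists() || !dev.join("iommu_group").exists() {
        return false;
    }

    // The `driver` symlink points at the currently bound driver; for
    // passthrough it must be vfio-pci rather than the host nvme driver.
    match fs::read_link(dev.join("driver")) {
        Ok(driver) => driver.file_name().map_or(false, |n| n == "vfio-pci"),
        Err(_) => false, // no driver bound at all
    }
}

fn main() {
    // Hypothetical VF address, used only for illustration.
    let bdf = "0000:65:00.1";
    println!("{bdf} ready for passthrough: {}", ready_for_passthrough(bdf));
}
```

Binding the VF to `vfio-pci` in the first place (via `driver_override` and the driver `bind`/`unbind` files) follows the same sysfs pattern and would be part of the same host-level preparation.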
❌ Out of Scope
- Configuration of the Bluefield DPU or the NVMe-oF fabric itself. This issue assumes the NVMe VF is already present and visible on the FeOS host.
- Live migration of VMs with attached PCIe passthrough devices.
Responsible Areas
- FeOS VM Management
- FeOS API
Contributors
Acceptance Criteria
API
- The VM API includes a new field (e.g., `pci_devices`) in the VM specification to accept a list of PCIe addresses; a sketch of such a field follows this list.
- The API validates that the provided PCIe address corresponds to an existing and available device on the host.
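One possible shape for the extension and its validation, sketched only to make the criteria concrete: the `VmSpec` type, the `pci_devices` field name (taken from the example above), and the helper below are assumptions, not the current FeOS API.

```rust
/// Sketch of the extended VM specification; `pci_devices` holds host PCIe
/// addresses in extended BDF form ("domain:bus:device.function").
struct VmSpec {
    name: String,
    // ... other existing fields elided ...
    pci_devices: Vec<String>,
}

/// Validate that a string is a well-formed PCIe address such as "0000:65:00.1".
/// Whether the device actually exists on the host is a separate runtime check.
fn is_valid_bdf(addr: &str) -> bool {
    let parts: Vec<&str> = addr.split(':').collect();
    if parts.len() != 3 {
        return false;
    }
    let (domain, bus, dev_fn) = (parts[0], parts[1], parts[2]);
    let Some((device, function)) = dev_fn.split_once('.') else {
        return false;
    };
    let hex = |s: &str, len: usize| s.len() == len && s.chars().all(|c| c.is_ascii_hexdigit());
    hex(domain, 4)
        && hex(bus, 2)
        && hex(device, 2)
        && function.len() == 1
        && function.chars().all(|c| c.is_digit(8)) // PCI function number is 0-7
}

fn main() {
    let spec = VmSpec {
        name: "vm-with-nvme".into(),
        pci_devices: vec!["0000:65:00.1".into()],
    };
    for addr in &spec.pci_devices {
        assert!(is_valid_bdf(addr), "rejected PCIe address: {addr}");
    }
    println!("spec for {} validated", spec.name);
}
```

Whether the address also refers to an existing and available device on the host would then be a separate runtime check against sysfs, as in the readiness sketch under In Scope.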
VM Runtime (Static Attachment)
- A VM can be successfully launched with an NVMe VF passed through to it.
- The guest operating system inside the VM correctly identifies the NVMe device via its native driver.
- The VM can successfully boot from the passed-through NVMe device when configured as the primary boot disk.
- The VM can mount and perform I/O operations on the device when it is attached as secondary storage.
Dynamic Attachment (Hot-plug)
- A dedicated API endpoint exists to attach a PCIe device to a running VM; a hot-plug sketch follows this list.
- A dedicated API endpoint exists to detach a PCIe device from a running VM.
- The guest OS recognizes the newly attached NVMe device without requiring a reboot.
- The guest OS gracefully handles the removal of a detached NVMe device, provided it is not in active use.
- The VM remains stable during and after hot-plug/unplug operations.
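The issue does not prescribe a hypervisor backend, so purely as one illustration: with a QEMU/KVM backend reachable over QMP, the attach and detach endpoints could translate to QMP `device_add`/`device_del` commands. The function names and the device id below are assumptions.

```rust
/// Render the QMP commands that hot-plug/unplug endpoints could translate to,
/// assuming a QEMU/KVM backend (the issue does not name the hypervisor, so this
/// mapping is illustrative). `device_add` with the vfio-pci driver attaches the
/// host VF to the running guest; `device_del` detaches it again.
fn attach_command(bdf: &str, id: &str) -> String {
    format!(
        r#"{{"execute":"device_add","arguments":{{"driver":"vfio-pci","host":"{bdf}","id":"{id}"}}}}"#
    )
}

fn detach_command(id: &str) -> String {
    format!(r#"{{"execute":"device_del","arguments":{{"id":"{id}"}}}}"#)
}

fn main() {
    // Hypothetical VF address and device id, for illustration only.
    println!("{}", attach_command("0000:65:00.1", "nvme-vf0"));
    println!("{}", detach_command("nvme-vf0"));
}
```

Graceful detach then depends on the guest acknowledging the PCI hot-unplug request, which is why the criterion above is conditioned on the device not being in active use.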
Action Items
- Design the specific API extension for the VM model to include PCIe devices at creation time.
- Implement the necessary host-level checks (e.g., IOMMU enabled, device bound to the `vfio-pci` driver).
- Integrate the passthrough logic into the VM creation and startup process in the FeOS hypervisor backend; a sketch of this step follows this list.
- Add input validation for PCIe addresses in the API layer.
- Design and implement API endpoints for hot-plug and hot-unplug operations.
- Integrate hot-plug/unplug logic with the hypervisor backend.
- Create integration tests that launch a VM with a passthrough NVMe device and verify its presence and functionality within the guest.
- Add integration tests for attaching and detaching devices from a running VM.
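For the integration item above, a sketch of folding the hypothetical `pci_devices` list into the hypervisor's launch arguments. A QEMU-style `-device vfio-pci,host=<BDF>` syntax is assumed purely for illustration and may not match the actual FeOS backend.

```rust
/// Sketch of folding the (hypothetical) `pci_devices` list from the VM spec
/// into hypervisor launch arguments; QEMU-style syntax assumed for illustration.
fn passthrough_args(pci_devices: &[String]) -> Vec<String> {
    let mut args = Vec::new();
    for bdf in pci_devices {
        args.push("-device".to_string());
        // vfio-pci hands the whole VF to the guest, whose native nvme driver
        // then claims it; for a boot disk, the firmware boot order would also
        // need to point at this device.
        args.push(format!("vfio-pci,host={bdf}"));
    }
    args
}

fn main() {
    let devices = vec!["0000:65:00.1".to_string()];
    println!("{}", passthrough_args(&devices).join(" "));
    // prints: -device vfio-pci,host=0000:65:00.1
}
```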