feat: supports to attach/detach block devices and nics on oci #698
|
|
@@ -13,6 +13,7 @@ | |
| LaunchInstanceShapeConfigDetails, | ||
| CreateVolumeDetails, | ||
| AttachIScsiVolumeDetails, | ||
| AttachParavirtualizedVolumeDetails, | ||
| CreateVcnDetails, | ||
| ) | ||
| except ImportError: | ||
|
|
@@ -438,11 +439,10 @@ def is_started(self): | |
| def attach_block(self, disk, target, wait=True, timeout=120): | ||
| try: | ||
| LOG.info("try to attach {} to {}".format(disk.id, self.id)) | ||
| attach_details = AttachIScsiVolumeDetails( | ||
| attach_details = AttachParavirtualizedVolumeDetails( | ||
| display_name=target, | ||
| instance_id=self.id, | ||
| volume_id=disk.id, | ||
| type='iscsi' | ||
| volume_id=disk.id | ||
| ) | ||
| response = self.compute_client.attach_volume(attach_details) | ||
| if wait: | ||
|
|
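The hunk above swaps the iSCSI attachment model for the paravirtualized one, dropping the explicit `type='iscsi'` argument (the paravirtualized model needs no in-guest iSCSI login). A minimal sketch of the resulting orchestration, using a hypothetical `FakeComputeClient` stand-in so it runs without the `oci` SDK installed:

```python
# Stand-in for oci.core.models.AttachParavirtualizedVolumeDetails: unlike the
# iSCSI variant, the caller passes no explicit `type` argument.
class AttachParavirtualizedVolumeDetails:
    def __init__(self, display_name, instance_id, volume_id):
        self.display_name = display_name
        self.instance_id = instance_id
        self.volume_id = volume_id

# Hypothetical client that records the request and reports ATTACHED at once.
class FakeComputeClient:
    def __init__(self):
        self.requests = []

    def attach_volume(self, details):
        self.requests.append(details)
        attachment = type("VolumeAttachment", (), {})()
        attachment.id = "ocid1.volumeattachment.fake"
        attachment.lifecycle_state = "ATTACHED"
        response = type("Response", (), {})()
        response.data = attachment
        return response

def attach_block(client, instance_id, volume_id, target):
    """Mirror of the diff's flow: build the details object, attach, check state."""
    details = AttachParavirtualizedVolumeDetails(
        display_name=target, instance_id=instance_id, volume_id=volume_id)
    response = client.attach_volume(details)
    return response.data.lifecycle_state == "ATTACHED"

if __name__ == "__main__":
    client = FakeComputeClient()
    print(attach_block(client, "ocid1.instance.fake", "ocid1.volume.fake", "sdb"))
```

In the real method the response is additionally polled with `oci.wait_until` until the attachment reaches `ATTACHED`.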
@@ -485,11 +485,70 @@ def detach_block(self, disk, wait=True, force=False): | |
| LOG.error("Failed to detach volume: {}".format(err)) | ||
| return False | ||
|
|
||
| def attach_nic(self, nic, wait=True, timeout=120): | ||
| raise NotImplementedError | ||
| def attach_nic(self, nic, wait=True, timeout=120, **kwargs): | ||
| try: | ||
| LOG.info("try to attach nic to {}".format(self.id)) | ||
| from oci.core.models import AttachVnicDetails, CreateVnicDetails | ||
| attach_vnic_details = AttachVnicDetails( | ||
| instance_id=self.id, | ||
| create_vnic_details=CreateVnicDetails( | ||
| subnet_id=nic.subnet_id, | ||
| assign_public_ip=False, | ||
| display_name=nic.tag, | ||
| freeform_tags={'Name': nic.tag} | ||
| ) | ||
| ) | ||
| response = self.compute_client.attach_vnic(attach_vnic_details) | ||
| nic.vnic_attachment_id = response.data.id | ||
| if wait: | ||
| get_response = oci.wait_until( | ||
| self.compute_client, | ||
| self.compute_client.get_vnic_attachment(nic.vnic_attachment_id), | ||
| 'lifecycle_state', | ||
| 'ATTACHED', | ||
| max_wait_seconds=timeout | ||
| ) | ||
| vnic_attachment = get_response.data | ||
| nic.id = vnic_attachment.vnic_id | ||
| LOG.info("nic attached, vnic_id: {}, attachment_id: {}".format(nic.id, nic.vnic_attachment_id)) | ||
| return True | ||
| except Exception as err: | ||
| LOG.error("Failed to attach nic: {}".format(err)) | ||
| return False | ||
|
|
||
| def detach_nic(self, nic, wait=True, force=False): | ||
| raise NotImplementedError | ||
| try: | ||
| if not nic.vnic_attachment_id: | ||
| LOG.info("No vnic_attachment_id found, searching...") | ||
| vnic_attachments = self.compute_client.list_vnic_attachments( | ||
| compartment_id=self.compartment_id, | ||
| instance_id=self.id | ||
| ).data | ||
| for va in vnic_attachments: | ||
| if va.vnic_id == nic.id and va.lifecycle_state == 'ATTACHED': | ||
| nic.vnic_attachment_id = va.id | ||
| break | ||
| if not nic.vnic_attachment_id: | ||
| LOG.info("No attached vnic found to detach") | ||
| return False | ||
| LOG.info("try to detach nic {} from {}".format(nic.vnic_attachment_id, self.id)) | ||
| self.compute_client.detach_vnic(nic.vnic_attachment_id) | ||
| if wait: | ||
| oci.wait_until( | ||
| self.compute_client, | ||
| self.compute_client.get_vnic_attachment(nic.vnic_attachment_id), | ||
| 'lifecycle_state', | ||
| 'DETACHED', | ||
| max_wait_seconds=120, | ||
| succeed_on_not_found=True | ||
| ) | ||
| nic.vnic_attachment_id = None | ||
| nic.id = None | ||
| LOG.info("nic detached successfully") | ||
| return True | ||
| except Exception as err: | ||
| LOG.error("Failed to detach nic: {}".format(err)) | ||
| return False | ||
|
|
||
| def is_paused(self): | ||
| raise NotImplementedError | ||
|
|
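The fallback in `detach_nic` above — recovering the attachment OCID from `list_vnic_attachments` when the NIC object has not cached one — can be sketched in isolation. The classes here are simplified stand-ins, not the real SDK models:

```python
# Minimal stand-in for an OCI VnicAttachment summary object.
class FakeVnicAttachment:
    def __init__(self, id, vnic_id, lifecycle_state):
        self.id = id
        self.vnic_id = vnic_id
        self.lifecycle_state = lifecycle_state

def resolve_attachment_id(vnic_attachments, vnic_id):
    """Return the attachment id of an ATTACHED vnic, or None if absent.

    Mirrors the search loop in detach_nic: only an attachment that is both
    for the requested vnic and still ATTACHED qualifies for detaching.
    """
    for va in vnic_attachments:
        if va.vnic_id == vnic_id and va.lifecycle_state == "ATTACHED":
            return va.id
    return None

attachments = [
    FakeVnicAttachment("va-old", "vnic-1", "DETACHED"),
    FakeVnicAttachment("va-live", "vnic-2", "ATTACHED"),
]
print(resolve_attachment_id(attachments, "vnic-2"))  # va-live
print(resolve_attachment_id(attachments, "vnic-1"))  # None: stale attachment
```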
@@ -524,11 +583,20 @@ def __init__(self, params): | |
| self.type = None | ||
|
|
||
| def is_free(self): | ||
| self.volume = self.blockstorage_client.get_volume(self.id).data | ||
| if self.volume.lifecycle_state == 'AVAILABLE': | ||
| try: | ||
| volume_attachments = self.compute_client.list_volume_attachments( | ||
| compartment_id=self.compartment_id, | ||
| volume_id=self.id | ||
| ).data | ||
| for va in volume_attachments: | ||
| if va.lifecycle_state in ['ATTACHING', 'ATTACHED']: | ||
| LOG.info("{} volume is attached (state: {})".format(self.id, va.lifecycle_state)) | ||
| return False | ||
|
Comment on lines 585 to +594

suggestion (testing): Add tests for the new `OCIVolume.is_free` behavior. This is a significant behavior change that can affect volume deletion/reuse, so it should be covered by unit tests (with a mocked compute client). That will make the edge-case behavior (errors, multiple attachments, transient states) explicit and protect against regressions.

Suggested implementation:

```python
import types

import pytest

from os_tests.libs.resources_oci import OCIVolume


class FakeVolumeAttachmentsResponse:
    def __init__(self, attachments):
        # OCI SDK returns an object with a `.data` attribute that is an iterable of attachments
        self.data = attachments


class FakeAttachment:
    def __init__(self, lifecycle_state):
        self.lifecycle_state = lifecycle_state


class FakeComputeClient:
    def __init__(self, attachments=None, side_effect=None):
        """
        :param attachments: iterable of FakeAttachment, returned when list_volume_attachments is called
        :param side_effect: exception to raise when list_volume_attachments is called
        """
        self._attachments = attachments or []
        self._side_effect = side_effect
        self.list_volume_attachments_calls = []

    def list_volume_attachments(self, compartment_id, volume_id):
        # Capture calls so we can assert the method is invoked with expected parameters
        self.list_volume_attachments_calls.append(
            {"compartment_id": compartment_id, "volume_id": volume_id}
        )
        if self._side_effect:
            raise self._side_effect
        return FakeVolumeAttachmentsResponse(self._attachments)


def _make_volume_with_compute_client(compute_client,
                                     compartment_id="ocid1.compartment.oc1..test",
                                     volume_id="ocid1.volume.oc1..test"):
    """
    Create an OCIVolume instance bypassing its __init__ so tests do not depend on
    the constructor signature.
    """
    vol = OCIVolume.__new__(OCIVolume)  # type: ignore
    vol.compute_client = compute_client
    vol.compartment_id = compartment_id
    vol.id = volume_id
    return vol


def test_is_free_returns_true_when_no_attachments(monkeypatch, caplog):
    compute_client = FakeComputeClient(attachments=[])
    volume = _make_volume_with_compute_client(compute_client)
    with caplog.at_level("INFO"):
        assert volume.is_free() is True
    # Ensure the compute_client was used to query attachments
    assert compute_client.list_volume_attachments_calls == [
        {"compartment_id": volume.compartment_id, "volume_id": volume.id}
    ]
    # Optional: confirm log message indicating volume is free
    assert any("volume is free" in msg for msg in caplog.messages)


@pytest.mark.parametrize("state", ["ATTACHING", "ATTACHED"])
def test_is_free_returns_false_when_attachment_in_attaching_or_attached(monkeypatch, caplog, state):
    attachments = [FakeAttachment(lifecycle_state=state)]
    compute_client = FakeComputeClient(attachments=attachments)
    volume = _make_volume_with_compute_client(compute_client)
    with caplog.at_level("INFO"):
        assert volume.is_free() is False
    assert compute_client.list_volume_attachments_calls == [
        {"compartment_id": volume.compartment_id, "volume_id": volume.id}
    ]
    # Confirm we logged that the volume is attached
    assert any("volume is attached" in msg for msg in caplog.messages)


def test_is_free_true_when_attachments_only_in_non_attached_states(monkeypatch, caplog):
    # States such as DETACHED and DETACHING should *not* be considered attached by the current logic
    attachments = [
        FakeAttachment(lifecycle_state="DETACHED"),
        FakeAttachment(lifecycle_state="DETACHING"),
    ]
    compute_client = FakeComputeClient(attachments=attachments)
    volume = _make_volume_with_compute_client(compute_client)
    with caplog.at_level("INFO"):
        assert volume.is_free() is True
    assert compute_client.list_volume_attachments_calls == [
        {"compartment_id": volume.compartment_id, "volume_id": volume.id}
    ]
    # Confirm we log the volume as free
    assert any("volume is free" in msg for msg in caplog.messages)


def test_is_free_true_when_list_volume_attachments_raises(monkeypatch, caplog):
    error = RuntimeError("simulated failure")
    compute_client = FakeComputeClient(side_effect=error)
    volume = _make_volume_with_compute_client(compute_client)
    with caplog.at_level("INFO"):
        assert volume.is_free() is True
    assert compute_client.list_volume_attachments_calls == [
        {"compartment_id": volume.compartment_id, "volume_id": volume.id}
    ]
    # Ensure the failure is logged
    assert any("Failed to check volume attachment" in msg for msg in caplog.messages)
```
|
||
| LOG.info("{} volume is free".format(self.id)) | ||
| return True | ||
| except Exception as err: | ||
| LOG.info("Failed to check volume attachment: {}".format(err)) | ||
| return True | ||
| LOG.info("{} volume is in {} state".format(self.id, self.volume.lifecycle_state)) | ||
| return False | ||
|
|
||
| def create(self, wait=True): | ||
| try: | ||
|
|
@@ -625,8 +693,11 @@ def modify_disk_size(self, os_disk_size=10, expand_num=10): | |
| class OCINIC(NetworkResource): | ||
| ''' | ||
| OCI Network class | ||
| On OCI, secondary VNICs are created and attached in one API call | ||
| (attach_vnic) and detached and deleted in one call (detach_vnic). | ||
| So create() just marks the NIC as ready, and delete() is a no-op | ||
| since detach_vnic already removes the VNIC. | ||
| ''' | ||
| _vnic = None | ||
|
|
||
| def __init__(self, params): | ||
| super(OCINIC, self).__init__(params) | ||
|
|
@@ -639,32 +710,39 @@ def __init__(self, params): | |
|
|
||
| self.compartment_id = params.get('compartment_id') | ||
| self.subnet_id = params.get('subnet_id') or params.get('subnet_id_ipv4') | ||
| self.tag = params.get('tagname') or 'os_tests_network_oci' | ||
| self.tag = params.get('tagname') or 'os_tests_nic_oci' | ||
| self.id = None | ||
| self.vnic_attachment_id = None | ||
|
|
||
| def show(self): | ||
| if self.is_exist(): | ||
| LOG.info("VNIC ID: {}".format(self.id)) | ||
|
|
||
| def create(self): | ||
| LOG.info("VNIC creation is handled during instance launch on OCI") | ||
| raise NotImplementedError | ||
| LOG.info("OCINIC created (will be provisioned on attach_nic)") | ||
| return True | ||
|
|
||
| def delete(self, wait=True): | ||
| LOG.info("VNIC deletion is handled during instance termination on OCI") | ||
| raise NotImplementedError | ||
| LOG.info("OCINIC delete (already handled by detach_vnic)") | ||
| self.id = None | ||
| self.vnic_attachment_id = None | ||
| return True | ||
|
|
||
| def get_state(self): | ||
| if not self.id: | ||
| return 'unknown' | ||
| try: | ||
| state = 'unknown' | ||
| vnic = self.network_client.get_vnic(self.id).data | ||
| state = vnic.lifecycle_state | ||
| LOG.info("vnic state: {}".format(state)) | ||
| except Exception as err: | ||
| return state | ||
| return state | ||
| except Exception as err: | ||
| LOG.info("Failed to get vnic state: {}".format(err)) | ||
| return 'unknown' | ||
|
|
||
| def is_exist(self): | ||
| if not self.id: | ||
| return False | ||
| try: | ||
| vnic = self.network_client.get_vnic(self.id).data | ||
| if vnic.lifecycle_state in ['TERMINATING', 'TERMINATED']: | ||
|
|
@@ -677,10 +755,15 @@ def is_exist(self): | |
| return False | ||
|
|
||
| def is_free(self): | ||
| if not self.id: | ||
| return True | ||
| if not self.vnic_attachment_id: | ||
| return True | ||
| try: | ||
| vnic = self.network_client.get_vnic(self.id).data | ||
| if vnic.lifecycle_state == 'AVAILABLE': | ||
| return True | ||
| except Exception as err: | ||
| LOG.info("Failed to check VNIC state: {}".format(err)) | ||
| return False | ||
| va = self.compute_client.get_vnic_attachment(self.vnic_attachment_id).data | ||
| if va.lifecycle_state in ['ATTACHED', 'ATTACHING']: | ||
| LOG.info("{} nic is in use (state: {})".format(self.id, va.lifecycle_state)) | ||
| return False | ||
| except Exception: | ||
| pass | ||
| return True | ||
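The lifecycle described in the `OCINIC` docstring — the VNIC only exists between `attach_vnic` and `detach_vnic`, so `create()` merely marks the object and `delete()` is a no-op — can be sketched as a small state model. Everything here is a simplified stand-in for illustration:

```python
class SketchNIC:
    """Toy model of the OCINIC lifecycle on OCI: the VNIC only exists
    between attach and detach, so create/delete make no API calls."""
    def __init__(self):
        self.id = None
        self.vnic_attachment_id = None

    def create(self):
        # Nothing to provision yet; attach_vnic creates the VNIC later.
        return True

    def attach(self):
        # attach_vnic creates and attaches the VNIC in one call.
        self.vnic_attachment_id = "va-1"
        self.id = "vnic-1"

    def detach(self):
        # detach_vnic detaches and deletes the VNIC in one call.
        self.vnic_attachment_id = None
        self.id = None

    def is_exist(self):
        return self.id is not None

nic = SketchNIC()
nic.create()
print(nic.is_exist())  # False: not provisioned until attach
nic.attach()
print(nic.is_exist())  # True
nic.detach()
print(nic.is_exist())  # False: detach already deleted the VNIC
```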
|
|
@@ -1111,17 +1111,20 @@ def test_stop_start_vm(self): | |
| self.assertTrue(self.vm.is_stopped(), | ||
| "Stop VM error: VM status is not SHUTOFF") | ||
| self._start_vm_and_check() | ||
|
suggestion (testing): Consider making the OCI-specific behavior explicit in the test semantics (skip/xfail or a separate test) so the intent is clearer. Right now this test no longer exercises the guest-initiated shutdown path on OCI (it just logs and returns), but the name and structure still imply that shutdown behavior is being validated for all providers. To avoid this mismatch, either mark it skipped/xfail on OCI, or split it into provider-specific tests.

Suggested implementation:

```python
if self.vm.provider == 'oci':
    pytest.skip("Guest-initiated shutdown does not auto-stop OCI instances; skipping shutdown behavior validation on this provider")
else:
    utils_lib.run_cmd(self, 'sudo shutdown now')
    for count in utils_lib.iterate_timeout(180,
                                           "Timed out waiting for VM to stop.",
                                           wait=0):
        time.sleep(30)
        self.log.info(f"The {count} time 30 second waiting\n"
                      f"The vm status is {self.vm.get_state()}")
        if self.vm.is_stopped():
            break
    time.sleep(120)
self._start_vm_and_check()
```
||
| utils_lib.run_cmd(self, 'sudo shutdown now') | ||
| for count in utils_lib.iterate_timeout(180, | ||
| "Timed out waiting for VM to stop.", | ||
| wait=0): | ||
| time.sleep(30) | ||
| self.log.info(f"The {count} time 30 second waiting\n" | ||
| f"The vm status is {self.vm.get_state()}") | ||
| if self.vm.is_stopped(): | ||
| break | ||
| time.sleep(120) | ||
| self._start_vm_and_check() | ||
| if self.vm.provider == 'oci': | ||
| self.log.info("Skip 'sudo shutdown now' test on OCI as guest-initiated shutdown does not auto-stop the instance") | ||
| else: | ||
| utils_lib.run_cmd(self, 'sudo shutdown now') | ||
| for count in utils_lib.iterate_timeout(180, | ||
| "Timed out waiting for VM to stop.", | ||
| wait=0): | ||
| time.sleep(30) | ||
| self.log.info(f"The {count} time 30 second waiting\n" | ||
| f"The vm status is {self.vm.get_state()}") | ||
| if self.vm.is_stopped(): | ||
| break | ||
| time.sleep(120) | ||
| self._start_vm_and_check() | ||
|
|
||
| def _update_kernel_args(self, boot_param_required): | ||
| if utils_lib.is_ostree_system(self): | ||
|
|
||
suggestion (testing): Add tests covering the new OCI NIC attach/detach behaviors and `OCINIC` lifecycle helpers.

The new `attach_nic`/`detach_nic` logic and `OCINIC` lifecycle changes significantly alter behavior (`vnic_attachment_id` handling, `is_exist`/`is_free`, and `create`/`delete` becoming attach/detach wrappers), but there are no tests covering these flows. Please add unit tests (with mocked OCI clients) that verify at least:

- `attach_nic` calls `attach_vnic` with the expected `CreateVnicDetails` (`assign_public_ip=False`, `display_name`, freeform tags), waits, sets `nic.vnic_attachment_id` and `nic.id`, and returns `True`.
- `attach_nic` returns `False` when `attach_vnic` or `wait_until` raises.
- `detach_nic` resolves `vnic_attachment_id` via `list_vnic_attachments` when missing, handles the "no attached vnic" case, and on success clears both `nic.vnic_attachment_id` and `nic.id`.
- `create`, `delete`, `get_state`, `is_exist`, and `is_free` behave correctly when `id`/`vnic_attachment_id` are `None`.

Mock-based unit tests should be sufficient to validate orchestration and state transitions without real OCI infrastructure.

Suggested implementation:

The proposed tests assume specific API shapes that you may need to align with your actual implementation:

- Class names and locations: replace `resources_oci.OCIInstance` with the real instance class that implements `attach_nic`/`detach_nic`, and `resources_oci.OCINIC` with the actual NIC resource class that exposes `create`, `delete`, `get_state`, `is_exist`, and `is_free`.
- Imported OCI symbols: the tests patch `os_tests.libs.resources_oci.AttachVnicDetails`, `CreateVnicDetails`, and `oci.wait_until`. Ensure `resources_oci.py` does `from oci.core.models import AttachVnicDetails, CreateVnicDetails` and imports `oci` at module scope so these patch targets are correct. If the imports are local or aliased, adjust the patch paths accordingly.
- `OCINIC` implementation details: the tests assume `OCINIC.create()` delegates to `self.instance.attach_nic(self, wait=True)` and returns its boolean result; `OCINIC.delete()` delegates to `self.instance.detach_nic(self, wait=True)`, clears `id` and `vnic_attachment_id` on success, and returns the boolean result; `OCINIC.is_exist()` returns `True` when `id` or `vnic_attachment_id` is set, `False` otherwise; `OCINIC.is_free()` returns the logical inverse of `is_exist()`; and `OCINIC.get_state()` returns different strings depending on whether `id`/`vnic_attachment_id` are `None`. Update the expected strings in the tests if your implementation uses different labels.
- `detach_nic` signature: the tests assume `detach_nic(self, nic, wait=True, timeout=120)` and expect it to use `nic.vnic_attachment_id` when present, otherwise resolve it via `compute_client.list_vnic_attachments`; to return `False` and avoid calling `detach_vnic` when no vnic attachment is found; and on success to clear `nic.vnic_attachment_id` and `nic.id`.

Once you align the names and any minor semantic differences, these tests will give coverage for the new attach/detach orchestration and OCINIC lifecycle behavior described in your review comment.
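One such mock-based test could look like the sketch below. It exercises a local reproduction of the attach orchestration rather than the real `attach_nic` method, and every name here (the standalone `attach_nic` function, the client methods, the OCIDs) is illustrative:

```python
from unittest import mock

def attach_nic(compute_client, nic, instance_id):
    """Local reproduction of the attach flow under test: attach, record the
    attachment id, confirm ATTACHED, then record the resulting vnic id."""
    response = compute_client.attach_vnic(
        {"instance_id": instance_id, "subnet_id": nic.subnet_id})
    nic.vnic_attachment_id = response.data.id
    attachment = compute_client.get_vnic_attachment(nic.vnic_attachment_id).data
    if attachment.lifecycle_state != "ATTACHED":
        return False
    nic.id = attachment.vnic_id
    return True

def test_attach_nic_sets_ids():
    client = mock.Mock()
    client.attach_vnic.return_value.data.id = "va-1"
    client.get_vnic_attachment.return_value.data.lifecycle_state = "ATTACHED"
    client.get_vnic_attachment.return_value.data.vnic_id = "vnic-1"
    nic = mock.Mock(subnet_id="subnet-1", id=None, vnic_attachment_id=None)
    assert attach_nic(client, nic, "inst-1") is True
    assert nic.vnic_attachment_id == "va-1"
    assert nic.id == "vnic-1"
    client.attach_vnic.assert_called_once()
    client.get_vnic_attachment.assert_called_once_with("va-1")

test_attach_nic_sets_ids()
print("attach_nic mock test passed")
```

The same mock setup extends naturally to the failure paths (e.g. a non-`ATTACHED` lifecycle state or a raising `attach_vnic`).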