From 019ccff8ec09ff5ba8586ec6f6a6a63ca720bd4f Mon Sep 17 00:00:00 2001 From: Guillermo Ramos Date: Tue, 26 Nov 2024 19:27:49 +0100 Subject: [PATCH 1/3] F OpenNebula/one#5853: update doc for local driver Signed-off-by: Guillermo Ramos --- .../one_deploy_tutorial_local_ds.rst | 6 ++--- .../one_deploy_tutorial_shared_ds.rst | 2 +- .../frontend_installation/install.rst | 2 +- .../opennebula_services/oned.rst | 4 ++-- .../provision_driver.rst | 4 ++-- .../devel-nm.rst | 2 +- .../infrastructure_drivers_development/sd.rst | 8 +++---- .../release_notes/compatibility.rst | 5 ++++ .../vcenter_driver/vcenter_driver.rst | 4 ++-- .../backups/operations.rst | 6 ++--- .../backups/overview.rst | 2 +- .../backups/restic.rst | 6 ++--- .../host_cluster_management/cluster_guide.rst | 2 +- .../host_cluster_management/hosts.rst | 2 +- .../monitor_alert/install.rst | 12 +++++----- .../references/cli.rst | 6 ++--- .../storage_management/datastores.rst | 18 +++++++------- .../kvm_node/kvm_node_installation.rst | 2 +- .../storage_setup/file_ds.rst | 10 ++++---- .../storage_setup/local_ds.rst | 20 ++++++++++++---- .../storage_setup/overview.rst | 24 +++++++++---------- .../references/template.rst | 4 ++-- .../provision_clusters/references/virtual.rst | 4 ++-- .../provisioning_edge_cluster.rst | 4 ++-- 24 files changed, 87 insertions(+), 72 deletions(-) diff --git a/source/installation_and_configuration/automatic_deployment/one_deploy_tutorial_local_ds.rst b/source/installation_and_configuration/automatic_deployment/one_deploy_tutorial_local_ds.rst index c7fe04c0dd..42eec14f85 100644 --- a/source/installation_and_configuration/automatic_deployment/one_deploy_tutorial_local_ds.rst +++ b/source/installation_and_configuration/automatic_deployment/one_deploy_tutorial_local_ds.rst @@ -412,9 +412,9 @@ Output should be similar to the following: oneadmin@front-end:~$ onedatastore list ID NAME SIZE AVA CLUSTERS IMAGES TYPE DS TM STAT - 2 files 57.1G 94% 0 0 fil fs ssh on - 1 default 57.1G 94% 0 0 img 
fs ssh on - 0 system - - 0 0 sys - ssh on + 2 files 57.1G 94% 0 0 fil fs local on + 1 default 57.1G 94% 0 0 img fs local on + 0 system - - 0 0 sys - local on Again, verify that the last column, ``STAT``, displays ``on`` and not ``err``. diff --git a/source/installation_and_configuration/automatic_deployment/one_deploy_tutorial_shared_ds.rst b/source/installation_and_configuration/automatic_deployment/one_deploy_tutorial_shared_ds.rst index 623b469fb6..08bdcbe517 100644 --- a/source/installation_and_configuration/automatic_deployment/one_deploy_tutorial_shared_ds.rst +++ b/source/installation_and_configuration/automatic_deployment/one_deploy_tutorial_shared_ds.rst @@ -471,7 +471,7 @@ Output should be similar to the following: oneadmin@ubuntu2404fsn:~$ onedatastore list ID NAME SIZE AVA CLUSTERS IMAGES TYPE DS TM STAT - 2 files 28G 87% 0 0 fil fs ssh on + 2 files 28G 87% 0 0 fil fs local on 1 default 28G 87% 0 0 img fs shared on 0 system - - 0 0 sys - shared on diff --git a/source/installation_and_configuration/frontend_installation/install.rst b/source/installation_and_configuration/frontend_installation/install.rst index e4e15ad156..64af47db56 100644 --- a/source/installation_and_configuration/frontend_installation/install.rst +++ b/source/installation_and_configuration/frontend_installation/install.rst @@ -252,7 +252,7 @@ The complete list of operating system services provided by OpenNebula: | **opennebula-ssh-socks-cleaner** | Periodic cleaner of SSH persistent connections | opennebula | +---------------------------------------+------------------------------------------------------------------------+---------------------------+ -.. note:: Since 5.12, the OpenNebula comes with an integrated SSH agent as the ``opennebula-ssh-agent`` service which removes the need to copy oneadmin's SSH private key across your Hosts. For more information, you can look at the :ref:`passwordless login ` section of the manual. 
You can opt to disable this service and configure your environment the old way. +.. note:: Since 5.12, OpenNebula comes with an integrated SSH agent (the ``opennebula-ssh-agent`` service), which removes the need to copy oneadmin's SSH private key across your Hosts. For more information, see the :ref:`passwordless login ` section of the manual. You can opt to disable this service and configure your environment the old way.

You are ready to **start** all OpenNebula services with the following command (NOTE: you might want to remove the services from the command arguments if you skipped their configuration steps above):

diff --git a/source/installation_and_configuration/opennebula_services/oned.rst b/source/installation_and_configuration/opennebula_services/oned.rst index d9d03e3cf0..d7f6143698 100644 --- a/source/installation_and_configuration/opennebula_services/oned.rst +++ b/source/installation_and_configuration/opennebula_services/oned.rst @@ -419,7 +419,7 @@ Sample configuration: TM_MAD = [ EXECUTABLE = "one_tm", - ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt" + ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,local,ceph,dev,vcenter,iscsi_libvirt" ] The configuration for each driver is defined in the ``TM_MAD_CONF`` section. @@ -489,7 +489,7 @@ Sample configuration: DATASTORE_MAD = [ EXECUTABLE = "one_datastore", - ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter -s shared,ssh,ceph,fs_lvm" + ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter -s shared,local,ceph,fs_lvm" ] For more information on this driver and how to customize it, please visit the :ref:`storage configuration ` guide.
diff --git a/source/integration_and_development/edge_provider_drivers_development/provision_driver.rst b/source/integration_and_development/edge_provider_drivers_development/provision_driver.rst index 0818c172d5..44c4841a85 100644 --- a/source/integration_and_development/edge_provider_drivers_development/provision_driver.rst +++ b/source/integration_and_development/edge_provider_drivers_development/provision_driver.rst @@ -261,12 +261,12 @@ You need to add templates to create your provider instances as well as the edge - name: "${provision}-image" type: 'image_ds' ds_mad: 'fs' - tm_mad: 'ssh' + tm_mad: 'local' safe_dirs: "/var/tmp /tmp" - name: "${provision}-system" type: 'system_ds' - tm_mad: 'ssh' + tm_mad: 'local' safe_dirs: "/var/tmp replica_host: "use-first-host" --- diff --git a/source/integration_and_development/infrastructure_drivers_development/devel-nm.rst b/source/integration_and_development/infrastructure_drivers_development/devel-nm.rst index b50c0ab436..199f098b35 100644 --- a/source/integration_and_development/infrastructure_drivers_development/devel-nm.rst +++ b/source/integration_and_development/infrastructure_drivers_development/devel-nm.rst @@ -68,7 +68,7 @@ For example, this is the directory tree of the bridge driver synced to a virtual .. code-block:: text - root@ubuntu1804-ssh-6ee11-2:/var/tmp/one/vnm/bridge# tree ./ + root@ubuntu1804-local-6ee11-2:/var/tmp/one/vnm/bridge# tree ./ ./ ├── clean ├── clean.d diff --git a/source/integration_and_development/infrastructure_drivers_development/sd.rst b/source/integration_and_development/infrastructure_drivers_development/sd.rst index 0a7ca3e353..e62c1c5851 100644 --- a/source/integration_and_development/infrastructure_drivers_development/sd.rst +++ b/source/integration_and_development/infrastructure_drivers_development/sd.rst @@ -326,7 +326,7 @@ Action scripts needed when the TM is used for the system datastore: ] ... -- **monitor_ds**: monitors a **ssh-like** system datastore. 
Distributed system datastores should ``exit 0`` on the previous monitor script. Arguments and return values are the same as the monitor script. +- **monitor_ds**: monitors a **local-like** system datastore. Distributed system datastores should ``exit 0`` on the previous monitor script. Arguments and return values are the same as the monitor script. - **pre_backup** and **pre_backup_live**: These actions needs to generate disk backup images, as well as the VM XML representation in the folder ``remote_system_ds/backup``. Each disk is created in the form ``disk..``. The VM representation is stored in a file named ``vm.xml``. The live version needs to pause/snapshot the VM to create consistent backup images. @@ -386,16 +386,16 @@ The driver plugin ``/monitor`` will report the information for two thing - Total storage metrics for the datastore (``USED_MB`` ``FREE_MB`` ``TOTAL_MB``) - Disk usage metrics (all disks: volatile, persistent and non-persistent) -Non-shared System Datastores (SSH-like) +Local System Datastores (SSH-like) -------------------------------------------------------------------------------- -Non-shared SSH datastores are labeled by including a ``.monitor`` file in the datastore directory in any of the clone or ln operations. Only those datastores are monitored remotely by the monitor_ds.sh probe. The datastore is monitored with ``/monitor_ds``, but ``tm_mad`` is obtained by the probes reading from the .monitor file. +Local datastores are labeled by including a ``.monitor`` file in the datastore directory in any of the clone or ln operations. Only those datastores are monitored remotely by the monitor_ds.sh probe. The datastore is monitored with ``/monitor_ds``, but ``tm_mad`` is obtained by the probes reading from the .monitor file. 
The plugins /monitor_ds + kvm-probes.d/monitor_ds.sh will report the information for two things: - Total storage metrics for the datastore (``USED_MB`` ``FREE_MB`` ``TOTAL_MB``) - Disk usage metrics (all disks volatile, persistent and non-persistent) -.. note:: ``.monitor`` will be only present in SSH datastores to be monitored in the nodes. System Datastores that need to be monitored in the nodes will need to provide a ``monitor_ds`` script and not the ``monitor`` one. This is to prevent errors, and not invoke the shared mechanism for local datastores. +.. note:: ``.monitor`` will only be present in Local datastores to be monitored on the nodes. System Datastores that need to be monitored on the nodes must provide a ``monitor_ds`` script and not the ``monitor`` one. This prevents errors and avoids invoking the shared mechanism for local datastores. The monitor_ds script. -------------------------------------------------------------------------------- diff --git a/source/intro_release_notes/release_notes/compatibility.rst b/source/intro_release_notes/release_notes/compatibility.rst index 25a1315acf..c0e3818276 100644 --- a/source/intro_release_notes/release_notes/compatibility.rst +++ b/source/intro_release_notes/release_notes/compatibility.rst @@ -9,6 +9,11 @@ This guide is aimed at OpenNebula 6.10.x users and administrators who want to up Visit the :ref:`Features list ` and the :ref:`What's New guide ` for a comprehensive list of what's new in OpenNebula 6.10. +New default Local datastore driver +================================================================================ + +Since OpenNebula 6.10.2, the default Local driver is ``local`` instead of ``ssh`` (see :ref:`Local Storage datastore drivers `). The legacy ``ssh`` driver is still supported, and nothing needs to be done for already existing datastores to keep working.
As supporting qcow2 features such as thin provisioning required breaking compatibility with existing datastores, we decided to take the opportunity to write the ``local`` driver from scratch, making the driver more maintainable and easing new feature development. + Check Datastore Capacity During Image Create ================================================================================ diff --git a/source/legacy_components/vcenter_driver/vcenter_driver.rst b/source/legacy_components/vcenter_driver/vcenter_driver.rst index c6c6979c28..3add39210a 100644 --- a/source/legacy_components/vcenter_driver/vcenter_driver.rst +++ b/source/legacy_components/vcenter_driver/vcenter_driver.rst @@ -72,7 +72,7 @@ The following section configures the vCenter datastore drivers, used to copy ima DATASTORE_MAD = [ EXECUTABLE = "one_datastore", - ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter -s shared,ssh,ceph,fs_lvm,qcow2,vcenter" + ARGUMENTS = "-t 15 -d dummy,fs,lvm,ceph,dev,iscsi_libvirt,vcenter -s shared,ssh,local,ceph,fs_lvm,qcow2,vcenter" ] @@ -82,7 +82,7 @@ The following section configures the vCenter datastore transfer drivers, used to TM_MAD = [ EXECUTABLE = "one_tm", - ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,ceph,dev,vcenter,iscsi_libvirt" + ARGUMENTS = "-t 15 -d dummy,lvm,shared,fs_lvm,qcow2,ssh,local,ceph,dev,vcenter,iscsi_libvirt" ] TM_MAD_CONF = [ diff --git a/source/management_and_operations/backups/operations.rst b/source/management_and_operations/backups/operations.rst index eb73540945..3a379214b3 100644 --- a/source/management_and_operations/backups/operations.rst +++ b/source/management_and_operations/backups/operations.rst @@ -237,10 +237,10 @@ The ``SOURCE`` attribute in the backup images (and increments) is an opaque refe $ restic snapshots repository d5b1499c opened (repository version 2) successfully, password is correct - ID Time Host Tags Paths + ID Time Host Tags Paths 
----------------------------------------------------------------------------------------------------------------- - 25f4b298 2022-12-01 13:36:51 ubuntu2204-kvm-ssh-6-5-e795-2.test one-0 /var/lib/one/datastores/0/0/backup - 6968545c 2022-12-01 14:22:44 ubuntu2204-kvm-ssh-6-5-e795-2.test one-0 /var/lib/one/datastores/0/0/backup + 25f4b298 2022-12-01 13:36:51 ubuntu2204-kvm-local-6-5-e795-2.test one-0 /var/lib/one/datastores/0/0/backup + 6968545c 2022-12-01 14:22:44 ubuntu2204-kvm-local-6-5-e795-2.test one-0 /var/lib/one/datastores/0/0/backup ----------------------------------------------------------------------------------------------------------------- **Note**: with the restic driver each snapshot is labeled with the VM id in OpenNebula. diff --git a/source/management_and_operations/backups/overview.rst b/source/management_and_operations/backups/overview.rst index d96250e74b..c04ab460c8 100644 --- a/source/management_and_operations/backups/overview.rst +++ b/source/management_and_operations/backups/overview.rst @@ -48,7 +48,7 @@ Performing a VM backup may require some support from the hypervisor or the disk | vCenter | vCenter\ :sup:`**` | Not supported | +------------+------------------------+---------+-----------+---------+-----------+ -\ :sup:`*` Any datastore based on files with the given format, i.e. NFS/SAN or SSH. +\ :sup:`*` Any datastore based on files with the given format, i.e. NFS/SAN or Local. \ :sup:`**` The legacy vCenter driver is included in the distribution, but no longer receives updates or bug fixes. 
diff --git a/source/management_and_operations/backups/restic.rst b/source/management_and_operations/backups/restic.rst index e281f0ada3..dd04fb406e 100644 --- a/source/management_and_operations/backups/restic.rst +++ b/source/management_and_operations/backups/restic.rst @@ -81,9 +81,9 @@ After some time, the datastore should be monitored: $ onedatastore list ID NAME SIZE AVA CLUSTERS IMAGES TYPE DS TM STAT 100 RBackups 1.5T 91% 0 0 bck restic - on - 2 files 19.8G 84% 0 0 fil fs ssh on - 1 default 19.8G 84% 0 1 img fs ssh on - 0 system - - 0 0 sys - ssh on + 2 files 19.8G 84% 0 0 fil fs local on + 1 default 19.8G 84% 0 1 img fs local on + 0 system - - 0 0 sys - local on That's it, we are all set to make VM backups! diff --git a/source/management_and_operations/host_cluster_management/cluster_guide.rst b/source/management_and_operations/host_cluster_management/cluster_guide.rst index dcbcbc755a..beb49fae1d 100644 --- a/source/management_and_operations/host_cluster_management/cluster_guide.rst +++ b/source/management_and_operations/host_cluster_management/cluster_guide.rst @@ -123,7 +123,7 @@ The System Datastore for a Cluster In order to create a complete environment where the scheduler can deploy VMs, your clusters need to have at least one System Datastore. -You can add the default System Datastore (ID: 0), or create a new one to improve its performance (e.g. balance VM I/O between different servers) or to use different System Datastore types (e.g. ``shared`` and ``ssh``). +You can add the default System Datastore (ID: 0), or create a new one to improve its performance (e.g. balance VM I/O between different servers) or to use different System Datastore types (e.g. ``shared`` and ``local``). To use a specific System Datastore with your cluster, instead of the default one, just create it and associate it just like any other datastore (``onecluster adddatastore``). 
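The cluster System Datastore flow described above can be sketched with a minimal template; the datastore name and cluster ID below are illustrative assumptions:

```text
# systemds.txt -- minimal local System Datastore template
NAME   = local_system
TM_MAD = local
TYPE   = SYSTEM_DS
```

After ``onedatastore create systemds.txt``, the new datastore can be attached to a cluster (e.g. cluster 100) with ``onecluster adddatastore 100 local_system``.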
diff --git a/source/management_and_operations/host_cluster_management/hosts.rst b/source/management_and_operations/host_cluster_management/hosts.rst index b87673150e..f65d123f75 100644 --- a/source/management_and_operations/host_cluster_management/hosts.rst +++ b/source/management_and_operations/host_cluster_management/hosts.rst @@ -118,7 +118,7 @@ The information of a Host contains: * **General information** of the Host including its name and the drivers used to interact with it. * **Capacity** (*Host Shares*) for CPU and memory. -* **Local datastore information** (*Local System Datastore*) if the Host is configured to use a local datastore (e.g. in SSH transfer mode). +* **Local datastore information** (*Local System Datastore*) if the Host is configured to use a local datastore (e.g. in Local transfer mode). * **Monitoring Information**, including PCI devices and NUMA information of the node. You can also find hypervisor specific information here. * **Virtual Machines** allocated to the Host. *Wild* are virtual machines running on the Host but not started by OpenNebula. 
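Since a Host using a local transfer driver serves VM disks from its own storage area, capacity is reported per Host rather than per datastore. The following is a minimal sketch of how ``TOTAL_MB``/``USED_MB``/``FREE_MB`` figures can be derived locally; the datastore path is an assumption, and the real monitor probes do considerably more:

```shell
#!/bin/sh
# Report capacity metrics (in MB) for a datastore directory, in the
# spirit of the datastore monitor probes. The default path below is
# an assumption -- adjust it to your datastores location.
DS_DIR="${1:-/var/lib/one/datastores}"
[ -d "$DS_DIR" ] || DS_DIR=/   # fall back so the sketch runs anywhere

# df -Pk gives POSIX-format output in 1 KiB blocks; the second line
# holds the figures for the filesystem backing DS_DIR.
df -Pk "$DS_DIR" | awk 'NR==2 {
    printf "TOTAL_MB=%d USED_MB=%d FREE_MB=%d\n", $2/1024, $3/1024, $4/1024
}'
```

The authoritative per-Host figures are, of course, the ones shown by ``onehost show``.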
diff --git a/source/management_and_operations/monitor_alert/install.rst b/source/management_and_operations/monitor_alert/install.rst index 9e38fc397b..1ad343b381 100644 --- a/source/management_and_operations/monitor_alert/install.rst +++ b/source/management_and_operations/monitor_alert/install.rst @@ -75,8 +75,8 @@ The OpenNebula Prometheus package comes with a simple script that automatically $ onehost list ID NAME CLUSTER TVM ALLOCATED_CPU ALLOCATED_MEM STAT - 1 kvm-ssh-uimw3-2.test default 0 0 / 100 (0%) 0K / 1.2G (0%) on - 0 kvm-ssh-uimw3-1.test default 0 0 / 100 (0%) 0K / 1.2G (0%) on + 1 kvm-local-uimw3-2.test default 0 0 / 100 (0%) 0K / 1.2G (0%) on + 0 kvm-local-uimw3-1.test default 0 0 / 100 (0%) 0K / 1.2G (0%) on Now, we will generate the prometheus configuration in ``/etc/one/prometheus/prometheus.yml``, as ``root`` (or ``oneadmin``) execute: @@ -118,21 +118,21 @@ This command connects to your cloud as oneadmin to gather the relevant informati - targets: - 127.0.0.1:9100 - targets: - - kvm-ssh-uimw3-2.test:9100 + - kvm-local-uimw3-2.test:9100 labels: one_host_id: '1' - targets: - - kvm-ssh-uimw3-1.test:9100 + - kvm-local-uimw3-1.test:9100 labels: one_host_id: '0' - job_name: libvirt_exporter static_configs: - targets: - - kvm-ssh-uimw3-2.test:9926 + - kvm-local-uimw3-2.test:9926 labels: one_host_id: '1' - targets: - - kvm-ssh-uimw3-1.test:9926 + - kvm-local-uimw3-1.test:9926 labels: one_host_id: '0' diff --git a/source/management_and_operations/references/cli.rst b/source/management_and_operations/references/cli.rst index 27f7a7a680..f1dbcfc28c 100644 --- a/source/management_and_operations/references/cli.rst +++ b/source/management_and_operations/references/cli.rst @@ -187,9 +187,9 @@ For example, in the case of ``onevm list``, by default it looks like this root@supermicro9:~# onevm list ID USER GROUP NAME STAT CPU MEM HOST TIME 9234 oneadmin oneadmin alma8-alma8-6-7-80-e3f1f4b2-6a26f4bd-1825.build unde 0.5 8G 0d 05h57 - 9233 nhansen users 
alma8-kvm-ssh-6-6-pkofu-2.test runn 0.5 1.3G localhost 0d 07h04 - 9232 nhansen users alma8-kvm-ssh-6-6-pkofu-1.test runn 0.5 1.3G localhost 0d 07h04 - 9231 nhansen users alma8-kvm-ssh-6-6-pkofu-0.test runn 0.5 1.8G localhost 0d 07h04 + 9233 nhansen users alma8-kvm-local-6-6-pkofu-2.test runn 0.5 1.3G localhost 0d 07h04 + 9232 nhansen users alma8-kvm-local-6-6-pkofu-1.test runn 0.5 1.3G localhost 0d 07h04 + 9231 nhansen users alma8-kvm-local-6-6-pkofu-0.test runn 0.5 1.8G localhost 0d 07h04 But you can change the default columns, increase the column width and disable expansion to make it look like this diff --git a/source/management_and_operations/storage_management/datastores.rst b/source/management_and_operations/storage_management/datastores.rst index 58234808b7..2593211656 100644 --- a/source/management_and_operations/storage_management/datastores.rst +++ b/source/management_and_operations/storage_management/datastores.rst @@ -21,9 +21,9 @@ By default, OpenNebula will create an image (``default``), system (``system``), $ onedatastore list ID NAME SIZE AVA CLUSTERS IMAGES TYPE DS TM STAT - 2 files 50G 86% 0 0 fil fs ssh on - 1 default 50G 86% 0 2 img fs ssh on - 0 system - - 0 0 sys - ssh on + 2 files 50G 86% 0 0 fil fs local on + 1 default 50G 86% 0 2 img fs local on + 0 system - - 0 0 sys - local on .. 
_datastore_common: @@ -46,7 +46,7 @@ You can access the information about each datastore using the ``onedatastore sho CLUSTERS : 0 TYPE : IMAGE DS_MAD : fs - TM_MAD : ssh + TM_MAD : local BASE PATH : /var/lib/one//datastores/1 DISK_TYPE : FILE STATE : READY @@ -70,7 +70,7 @@ You can access the information about each datastore using the ``onedatastore sho LN_TARGET="SYSTEM" RESTRICTED_DIRS="/" SAFE_DIRS="/" - TM_MAD="ssh" + TM_MAD="local" TYPE="IMAGE_DS" IMAGES @@ -79,7 +79,7 @@ You can access the information about each datastore using the ``onedatastore sho There are four important sections: - * **General Information**, it includes basic information like the name, the file path of the datastore or its type (``IMAGE``). It includes also the set of drivers (``DS_MAD`` and ``TM_MAD``) used to store and transfer images. In this example, the datastores uses a file based driver (``DS_MAD="fs"``) and the SSH protocol for transfers (``TM_MAD=ssh``). + * **General Information**, it includes basic information like the name, the file path of the datastore or its type (``IMAGE``). It also includes the set of drivers (``DS_MAD`` and ``TM_MAD``) used to store and transfer images. In this example, the datastore uses a file-based driver (``DS_MAD="fs"``) and the ``local`` transfer driver (``TM_MAD=local``). * **Capacity**, including basic usage metrics like total, used, and free space. * **Generic Attributes**, under ``DATASTORE TEMPLATE`` you can find configuration attributes and custom tags (see below). * **Images**, the list of images currently stored in this datastore.
@@ -102,7 +102,7 @@ In the case of System Datastore the information is similar: CLUSTERS : 0 TYPE : SYSTEM DS_MAD : - - TM_MAD : ssh + TM_MAD : local BASE PATH : /var/lib/one//datastores/0 DISK_TYPE : FILE STATE : READY @@ -125,14 +125,14 @@ In the case of System Datastore the information is similar: RESTRICTED_DIRS="/" SAFE_DIRS="/var/tmp" SHARED="NO" - TM_MAD="ssh" + TM_MAD="local" TYPE="SYSTEM_DS" IMAGES Note the differences in this case: * Only the transfer driver (``TM_MAD``) is defined. - * For the datastore of this example, there are no overall usage figures. The ``ssh`` driver use the local storage area of each Host. To check the available space in a specific host you need to check the host details with ``onehost show`` command. Note that this behavior may be different for other drivers. + * For the datastore of this example, there are no overall usage figures. The ``local`` driver uses the local storage area of each Host. To check the available space on a specific Host, check the host details with the ``onehost show`` command. Note that this behavior may be different for other drivers. * Images cannot be registered in System Datastores. Basic Configuration diff --git a/source/open_cluster_deployment/kvm_node/kvm_node_installation.rst b/source/open_cluster_deployment/kvm_node/kvm_node_installation.rst index 6a1a8d83b4..350566daea 100644 --- a/source/open_cluster_deployment/kvm_node/kvm_node_installation.rst +++ b/source/open_cluster_deployment/kvm_node/kvm_node_installation.rst @@ -66,7 +66,7 @@ Disable AppArmor on Ubuntu/Debian ------------------------------------------ .. include:: ../common_node/apparmor.txt -.. _kvm_ssh: +.. _kvm_local: Step 4.
Configure Passwordless SSH ================================== diff --git a/source/open_cluster_deployment/storage_setup/file_ds.rst b/source/open_cluster_deployment/storage_setup/file_ds.rst index 1a1d58aeda..db6ccff066 100644 --- a/source/open_cluster_deployment/storage_setup/file_ds.rst +++ b/source/open_cluster_deployment/storage_setup/file_ds.rst @@ -20,7 +20,7 @@ The specific attributes for Kernels & Files Datastore are listed in the followin +------------+---------------------------+ | ``DS_MAD`` | ``fs`` | +------------+---------------------------+ -| ``TM_MAD`` | ``ssh`` | +| ``TM_MAD`` | ``local`` | +------------+---------------------------+ .. note:: The recommended ``DS_MAD`` and ``TM_MAD`` are the ones stated above but any other can be used to fit specific use cases. Regarding this, the same :ref:`configuration guidelines ` defined for Image and System Datastores applies for the Kernels & Files Datastore. @@ -32,7 +32,7 @@ For example, the following illustrates the creation of Kernels & Files. > cat kernels_ds.conf NAME = kernels DS_MAD = fs - TM_MAD = ssh + TM_MAD = local TYPE = FILE_DS SAFE_DIRS = /var/tmp/files @@ -43,14 +43,14 @@ For example, the following illustrates the creation of Kernels & Files. ID NAME CLUSTER IMAGES TYPE DS TM 0 system - 0 sys - dummy 1 default - 0 img dummy dummy - 2 files - 0 fil fs ssh - 100 kernels - 0 fil fs ssh + 2 files - 0 fil fs local + 100 kernels - 0 fil fs local You can check more details of the datastore by issuing the ``onedatastore show `` command. Host Configuration ================== -The recommended ``ssh`` driver for the File Datastore does not need any special configuration for the Hosts. Just make sure that there is enough space under the datastore location (``/var/lib/one/datastores`` by default) to hold the VM files in the Front-end and Hosts. +The recommended ``local`` driver for the File Datastore does not need any special configuration for the Hosts. 
Just make sure that there is enough space under the datastore location (``/var/lib/one/datastores`` by default) to hold the VM files in the Front-end and Hosts. If different ``DS_MAD`` or ``TM_MAD`` attributes are used, refer to the corresponding node set up guide of the corresponding driver. diff --git a/source/open_cluster_deployment/storage_setup/local_ds.rst b/source/open_cluster_deployment/storage_setup/local_ds.rst index f17bedd4b1..d78f5a08e0 100644 --- a/source/open_cluster_deployment/storage_setup/local_ds.rst +++ b/source/open_cluster_deployment/storage_setup/local_ds.rst @@ -39,7 +39,7 @@ To create a new System Datastore, you need to set following (template) parameter +---------------+-------------------------------------------------+ | ``TYPE`` | ``SYSTEM_DS`` | +---------------+-------------------------------------------------+ -| ``TM_MAD`` | ``ssh`` | +| ``TM_MAD`` | ``local`` | +---------------+-------------------------------------------------+ This can be done either in Sunstone or through the CLI; for example, to create a local System Datastore simply enter: @@ -48,7 +48,7 @@ This can be done either in Sunstone or through the CLI; for example, to create a $ cat systemds.txt NAME = local_system - TM_MAD = ssh + TM_MAD = local TYPE = SYSTEM_DS $ onedatastore create systemds.txt @@ -68,7 +68,7 @@ To create a new Image Datastore, you need to set the following (template) parame +---------------+-----------------------------------------------------------------+ | ``DS_MAD`` | ``fs`` | +---------------+-----------------------------------------------------------------+ -| ``TM_MAD`` | ``ssh`` | +| ``TM_MAD`` | ``local`` | +---------------+-----------------------------------------------------------------+ | ``CONVERT`` | ``yes`` (default) or ``no``. 
Change Image format to ``DRIVER`` | +---------------+-----------------------------------------------------------------+ @@ -80,7 +80,7 @@ For example, the following illustrates the creation of a Local Datastore: $ cat ds.conf NAME = local_images DS_MAD = fs - TM_MAD = ssh + TM_MAD = local $ onedatastore create ds.conf ID: 100 @@ -101,12 +101,22 @@ Additional Configuration .. note:: When using a Local Storage Datastore the ``QCOW2_OPTIONS`` attribute is ignored since the cloning script uses the ``tar`` command instead of ``qemu-img``. +.. _local_ds_drivers: + +Datastore Drivers +================================================================================ + +There are currently two Local transfer drivers: + +- **local**: the reference Local driver since OpenNebula 6.10.2, used by default for newly created datastores. It supports features such as thin provisioning for images in qcow2 format. +- **ssh**: the legacy driver, still supported for compatibility reasons; it cannot leverage advanced qcow2 features. + Datastore Internals ================================================================================ .. include:: internals_fs_common.txt -In this case, the System Datastore is distributed among the Hosts. The **ssh** transfer driver uses the Hosts' local storage to place the images of running Virtual Machines. All the operations are then performed locally but images have to always be copied to the Hosts, which in turn can be a very resource-demanding operation. +In this case, the System Datastore is distributed among the Hosts. The **local** transfer driver uses the Hosts' local storage to place the images of running Virtual Machines. All operations are then performed locally, but images always have to be copied to the Hosts, which can be a very resource-demanding operation.
|image2| diff --git a/source/open_cluster_deployment/storage_setup/overview.rst b/source/open_cluster_deployment/storage_setup/overview.rst index 631c7e61fc..814bc54de4 100644 --- a/source/open_cluster_deployment/storage_setup/overview.rst +++ b/source/open_cluster_deployment/storage_setup/overview.rst @@ -33,18 +33,18 @@ System Datastores Each datastore supports different features, here is a basic overview: -+------------------------+-------------------------+-----------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ -| | :ref:`NFS/NAS ` | :ref:`SSH ` | :ref:`OneStor ` | :ref:`Ceph ` | :ref:`SAN ` | :ref:`iSCSI `| -+------------------------+-------------------------+-----------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ -| Disk snapshots | yes | yes | yes | yes | no | no | -+------------------------+-------------------------+-----------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ -| VM snapshots | yes | yes | yes | no | no | no | -+------------------------+-------------------------+-----------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ -| Live migration | yes | yes | yes | yes | yes | yes | -+------------------------+-------------------------+-----------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ -| Fault tolerance | yes | no | no | yes | yes | yes | -| (:ref:`VM ha `) | | | | | | | -+------------------------+-------------------------+-----------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ 
++------------------------+-------------------------+-------------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ +| | :ref:`NFS/NAS ` | :ref:`Local ` | :ref:`OneStor ` | :ref:`Ceph ` | :ref:`SAN ` | :ref:`iSCSI `| ++------------------------+-------------------------+-------------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ +| Disk snapshots | yes | yes | yes | yes | no | no | ++------------------------+-------------------------+-------------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ +| VM snapshots | yes | yes | yes | no | no | no | ++------------------------+-------------------------+-------------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ +| Live migration | yes | yes | yes | yes | yes | yes | ++------------------------+-------------------------+-------------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ +| Fault tolerance | yes | no | no | yes | yes | yes | +| (:ref:`VM ha `) | | | | | | | ++------------------------+-------------------------+-------------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ How Should I Read This Chapter diff --git a/source/provision_clusters/references/template.rst b/source/provision_clusters/references/template.rst index 1d3fca807f..c1f5c01c83 100644 --- a/source/provision_clusters/references/template.rst +++ b/source/provision_clusters/references/template.rst @@ -117,7 +117,7 @@ Example of datastore defined from regular template: $ cat ds.tpl NAME="myprovision-images" - TM_MAD="ssh" + TM_MAD="local" DS_MAD="fs" $ onedatastore create ds.tpl @@ -130,7 +130,7 @@ Example of the same datastore 
defined in provision template: datastores: - name: "myprovision-images" ds_mad: fs - tm_mad: ssh + tm_mad: local OpenNebula virtual objects -------------------------------------------------------------------------------- diff --git a/source/provision_clusters/references/virtual.rst b/source/provision_clusters/references/virtual.rst index 3306c84703..dcd6c1e27d 100644 --- a/source/provision_clusters/references/virtual.rst +++ b/source/provision_clusters/references/virtual.rst @@ -255,10 +255,10 @@ For example: datastores: - name: "test_images" ds_mad: fs - tm_mad: ssh + tm_mad: local - name: "test_system" type: system_ds - tm_mad: ssh + tm_mad: local safe_dirs: "/var/tmp /tmp" images: diff --git a/source/quick_start/operation_basics/provisioning_edge_cluster.rst b/source/quick_start/operation_basics/provisioning_edge_cluster.rst index 1a4efde516..201688242b 100644 --- a/source/quick_start/operation_basics/provisioning_edge_cluster.rst +++ b/source/quick_start/operation_basics/provisioning_edge_cluster.rst @@ -248,8 +248,8 @@ List datastores: ``oneprovision datastore list``. $ oneprovision datastore list ID NAME SIZE AVA CLUSTERS IMAGES TYPE DS TM STAT - 101 aws-cluste - - 100 0 sys - ssh on - 100 aws-cluste 71.4G 90% 100 0 img fs ssh o + 101 aws-cluste - - 100 0 sys - local on + 100 aws-cluste 71.4G 90% 100 0 img fs local on List networks: ``oneprovision network list``. 
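The ``ssh`` → ``local`` driver rename applied throughout this patch is purely mechanical, so existing datastore definitions can be migrated with a small script rather than by hand. Below is a minimal sketch in Python; the ``rename_tm_driver`` helper is illustrative (it is not part of OpenNebula or of this patch) and assumes the two template syntaxes shown above: the key=value datastore template and the YAML provision template.

```python
import re

def rename_tm_driver(text: str) -> str:
    """Rewrite TM_MAD 'ssh' to 'local' in both template syntaxes."""
    # Key=value datastore templates: TM_MAD="ssh" -> TM_MAD="local"
    text = re.sub(r'(TM_MAD\s*=\s*")ssh(")', r'\1local\2', text)
    # YAML provision templates: tm_mad: ssh -> tm_mad: local
    text = re.sub(r'(tm_mad:\s*)ssh\b', r'\1local', text)
    return text

# The datastore template from the example above, before the rename:
ds_tpl = 'NAME="myprovision-images"\nTM_MAD="ssh"\nDS_MAD="fs"\n'
print(rename_tm_driver(ds_tpl))
```

Note that only the TM driver name changed in this release; ``DS_MAD="fs"`` (and ``ds_mad: fs``) lines are intentionally left untouched by the sketch.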
From 6fc2fa7fe0355931b4846d497aa9558ec26e8d89 Mon Sep 17 00:00:00 2001 From: Guillermo Ramos Date: Wed, 27 Nov 2024 10:23:41 +0100 Subject: [PATCH 2/3] Remove references to OneStor Signed-off-by: Guillermo Ramos --- .../storage_setup/index.rst | 1 - .../storage_setup/onestor_ds.rst | 176 ------------------ .../storage_setup/overview.rst | 25 ++- .../edge_clusters/onprem_cluster.rst | 2 +- 4 files changed, 13 insertions(+), 191 deletions(-) delete mode 100644 source/open_cluster_deployment/storage_setup/onestor_ds.rst diff --git a/source/open_cluster_deployment/storage_setup/index.rst b/source/open_cluster_deployment/storage_setup/index.rst index 25bc212a64..abaf03fa26 100644 --- a/source/open_cluster_deployment/storage_setup/index.rst +++ b/source/open_cluster_deployment/storage_setup/index.rst @@ -8,7 +8,6 @@ Open Cloud Storage Setup Overview NFS/NAS Datastore Local Storage Datastore - OneStor Datastore Ceph Datastore SAN Datastore Raw Device Mapping Datastore diff --git a/source/open_cluster_deployment/storage_setup/onestor_ds.rst b/source/open_cluster_deployment/storage_setup/onestor_ds.rst deleted file mode 100644 index f4a1478447..0000000000 --- a/source/open_cluster_deployment/storage_setup/onestor_ds.rst +++ /dev/null @@ -1,176 +0,0 @@ -.. _onestor_ds: -.. _replica_tm: - -================================================================================ -OneStor Datastore -================================================================================ - -Like the Local Storage Datastore this configuration uses the local storage area of each Host to run VMs. On top of it, it provides: - - * Caching features to reduce image transfers and speed up boot times. - * Automatic recovery mechanisms for qcow2 images and KVM hypervisor. - -Additionally you'll need a storage area for the VM disk image repository. Disk images are transferred from the repository to the Hosts and cache areas using the SSH protocol. 
- -Front-end Setup -================================================================================ - -The Front-end needs to prepare the storage area for: - -* **Image Datastores**, to store the image repository. -* **System Datastores**, will hold temporary disks and files for VMs ``stopped`` and ``undeployed``. - -Simply make sure that there is enough space under ``/var/lib/one/datastores`` to store Images and the disks of the ``stopped`` and ``undeployed`` Virtual Machines. Note that ``/var/lib/one/datastores`` **can be mounted from any NAS/SAN server in your network**. - -Host Setup -================================================================================ - -Just make sure that there is enough space under ``/var/lib/one/datastores`` to store the disks of running VMs on that Host. - -.. warning:: Make sure all the Hosts, including the Front-end, can SSH to any other host (including themselves), otherwise migrations will not work. - -One additional Host per cluster needs to be designated as ``REPLICA_HOST`` and it will hold the disk images cache under ``/var/lib/one/datastores``. It is recommended to add extra disk space in this Host. - -OpenNebula Configuration -================================================================================ -Once the Nodes and Front-end storage is setup, the OpenNebula configuration comprises the creation of an Image and System Datastores. 
- -Create System Datastore --------------------------------------------------------------------------------- - -You need to create a System Datastore for each cluster in your cloud, using the following (template) parameters: - -+------------------+-------------------------------------------------+ -| Attribute | Description | -+==================+=================================================+ -| ``NAME`` | Name of datastore | -+------------------+-------------------------------------------------+ -| ``TYPE`` | ``SYSTEM_DS`` | -+------------------+-------------------------------------------------+ -| ``TM_MAD`` | ``ssh`` | -+------------------+-------------------------------------------------+ -| ``REPLICA_HOST`` | hostname of the designated cache Host | -+------------------+-------------------------------------------------+ - -For example, consider a cloud with two clusters; the datastore configuration could be as follows: - -.. prompt:: text $ auto - - # onedatastore list -l ID,NAME,TM,CLUSTERS - ID NAME TM CLUSTERS - 101 system_replica_2 ssh 101 - 100 system_replica_1 ssh 100 - 1 default ssh 0,100,101 - 0 system ssh 0 - -Note that in this case a **single** Image Datastore (``1``) is shared across clusters ``0``, ``100`` and ``101``. Each cluster has its own System Datastore (``100`` and ``101``) with replication enabled, while System Datastore ``0`` does not use replication. - -**Replication is enabled** by the presence of the ``REPLICA_HOST`` key, with the name of one of the Hosts belonging to the cluster. Here's an example of the replica System Datastore settings: - -.. prompt:: text $ auto - - # onedatastore show 100 - ... - DISK_TYPE="FILE" - REPLICA_HOST="cluster100-host1" - TM_MAD="ssh" - TYPE="SYSTEM_DS" - ... - -.. note:: You need to balance your storage transfer patterns (number of VMs created, disk image sizes...) with the number of Hosts per cluster to make an effective use of the caching mechanism. 
- -Create Image Datastore --------------------------------------------------------------------------------- - -To create a new Image Datastore, you need to set the following (template) parameters: - -+---------------+-----------------------------------------------------------------+ -| Attribute | Description | -+===============+=================================================================+ -| ``NAME`` | Name of datastore | -+---------------+-----------------------------------------------------------------+ -| ``DS_MAD`` | ``fs`` | -+---------------+-----------------------------------------------------------------+ -| ``TM_MAD`` | ``ssh`` | -+---------------+-----------------------------------------------------------------+ -| ``CONVERT`` | ``yes`` (default) or ``no``. Change Image format to ``DRIVER`` | -+---------------+-----------------------------------------------------------------+ - -For example, the following illustrates the creation of a Local Datastore: - -.. prompt:: text $ auto - - $ cat ds.conf - NAME = local_images - DS_MAD = fs - TM_MAD = ssh - - $ onedatastore create ds.conf - ID: 100 - -Also note that there are additional attributes that can be set. Check the :ref:`datastore template attributes `. - -.. include:: qcow2_options.txt - - -REPLICA_HOST vs. REPLICA_STORAGE_IP --------------------------------------------------------------------------------- - -OneStor was originally designed for Edge clusters where the hosts are typically reached on their public IPs. However, for copying the image between the hosts (replica -> hypervisor) it might be useful to use a local IP address from the private network. Therefore, in the host template of the replica host, it’s possible to define REPLICA_STORAGE_IP which will be used instead the REPLICA_HOST in that case. 
- - -Additionally, the following attributes can be tuned in configuration files ``/var/lib/one/remotes/etc/tm/ssh/sshrc``: - -+--------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ -| Attribute | Description | -+================================+===================================================================================================================================+ -| ``REPLICA_COPY_LOCK_TIMEOUT`` | Timeout to expire lock operations should be adjusted to the maximum image transfer time between Image Datastores and clusters. | -+--------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ -| ``REPLICA_RECOVERY_SNAPS_DIR`` | Default directory to store the recovery snapshots. These snapshots are used to recover VMs in case of Host failure in a cluster | -+--------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ -| ``REPLICA_SSH_OPTS`` | SSH options when copying from the replica to the hypervisor speed. Weaker ciphers on secure networks are preferred | -+--------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ -| ``REPLICA_SSH_FE_OPTS`` | SSH options when copying from the Front-end to the replica. 
Stronger ciphers on public networks are preferred | -+--------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ -| ``REPLICA_MAX_SIZE_MB`` | Maximum size of cached images on replica in MB | -+--------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ -| ``REPLICA_MAX_USED_PERC`` | Maximum usage in % of the replica filesystem | -+--------------------------------+-----------------------------------------------------------------------------------------------------------------------------------+ - -Recovery Snapshots -================================================================================ - -.. important:: * Recovery Snapshots are only available for KVM and qcow2 Image formats - * As the recovery snapshot are created by the monitoring client and not by a driver action, it requires password-less ssh connection from the hypervisors to the ``REPLICA_HOST``. Which means that also private ssh key of oneadmin user needs to be distributed on the nodes. - -Additionally, in replica mode you can enable recovery snapshots for particular VM disks. You can do it by adding the option ``RECOVERY_SNAPSHOT_FREQ`` to ``DISK`` in the VM template. - -.. prompt:: bash $ auto - - $ onetemplate show 100 - ... - DISK=[ - IMAGE="image-name", - RECOVERY_SNAPSHOT_FREQ="3600" ] - -Using this setting, the disk will be snapshotted every hour and a copy of the snapshot will be prepared on the replica. Should the host where the VM is running later fail, it can be recovered, either manually or through the fault tolerance hooks: - -.. prompt:: bash $ auto - - $ onevm recover --recreate [VMID] - -During the recovery the VM is recreated from the recovery snapshot. - - -Datastore Internals -================================================================================ - -.. 
include:: internals_fs_common.txt - -In this mode the images are cached in each cluster and so are available close to the hypervisors. This effectively reduces the bandwidth pressure of the Image Datastore servers and reduces deployment times. This is especially important for edge-like deployments where copying images from the Front-end to the hypervisor for each VM could be slow. - -This replication mode implements a three-level storage hierarchy: *cloud* (image datastore), *cluster* (replica cache) and *hypervisor* (system datastore). Note that replication occurs at cluster level and a System Datastore needs to be configured for each cluster. - -|image3| - -.. |image3| image:: /images/fs_ssh_replica.png diff --git a/source/open_cluster_deployment/storage_setup/overview.rst b/source/open_cluster_deployment/storage_setup/overview.rst index 814bc54de4..e6f92124d7 100644 --- a/source/open_cluster_deployment/storage_setup/overview.rst +++ b/source/open_cluster_deployment/storage_setup/overview.rst @@ -22,7 +22,6 @@ Image Datastores There are different Image Datastores depending on how the images are stored on the underlying storage technology: - :ref:`NFS/NAS ` - :ref:`Local Storage ` - - :ref:`OneStor ` - :ref:`Ceph ` - :ref:`SAN ` - :ref:`Raw Device Mapping ` @@ -33,18 +32,18 @@ System Datastores Each datastore supports different features, here is a basic overview: -+------------------------+-------------------------+-------------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ -| | :ref:`NFS/NAS ` | :ref:`Local ` | :ref:`OneStor ` | :ref:`Ceph ` | :ref:`SAN ` | :ref:`iSCSI `| -+------------------------+-------------------------+-------------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ -| Disk snapshots | yes | yes | yes | yes | no | no | 
-+------------------------+-------------------------+-------------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ -| VM snapshots | yes | yes | yes | no | no | no | -+------------------------+-------------------------+-------------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ -| Live migration | yes | yes | yes | yes | yes | yes | -+------------------------+-------------------------+-------------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ -| Fault tolerance | yes | no | no | yes | yes | yes | -| (:ref:`VM ha `) | | | | | | | -+------------------------+-------------------------+-------------------------+-----------------------------+-----------------------+--------------------------+-------------------------+ ++------------------------+-------------------------+-------------------------+-----------------------+--------------------------+-------------------------+ +| | :ref:`NFS/NAS ` | :ref:`Local ` | :ref:`Ceph ` | :ref:`SAN ` | :ref:`iSCSI `| ++------------------------+-------------------------+-------------------------+-----------------------+--------------------------+-------------------------+ +| Disk snapshots | yes | yes | yes | no | no | ++------------------------+-------------------------+-------------------------+-----------------------+--------------------------+-------------------------+ +| VM snapshots | yes | yes | no | no | no | ++------------------------+-------------------------+-------------------------+-----------------------+--------------------------+-------------------------+ +| Live migration | yes | yes | yes | yes | yes | ++------------------------+-------------------------+-------------------------+-----------------------+--------------------------+-------------------------+ +| Fault tolerance | yes | no | yes | yes | 
yes | +| (:ref:`VM ha `) | | | | | | ++------------------------+-------------------------+-------------------------+-----------------------+--------------------------+-------------------------+ How Should I Read This Chapter diff --git a/source/provision_clusters/edge_clusters/onprem_cluster.rst b/source/provision_clusters/edge_clusters/onprem_cluster.rst index 062762689f..5351221f23 100644 --- a/source/provision_clusters/edge_clusters/onprem_cluster.rst +++ b/source/provision_clusters/edge_clusters/onprem_cluster.rst @@ -42,7 +42,7 @@ OpenNebula Resources The following resources, associated to each Edge Cluster, will be created in OpenNebula: -* Image and System datastore for the cluster. The storage is configured to use the Hosts :ref:`local storage through OneStor drivers `. On-Premises clusters also include access to the default datastore, so you can easily share images across clusters. +* Image and System datastore for the cluster. The storage is configured to use the hosts' :ref:`Local storage system datastore `. On-Premises clusters also include access to the default datastore, so you can easily share images across clusters. * Public Network, bound to the Internet interface through a Linux Bridge. * Private Networking, implemented using a VXLAN overlay on the management network. From 97e2d766dbcf2aef728662458b461cac9b7c411a Mon Sep 17 00:00:00 2001 From: "Ruben S. 
Montero" Date: Thu, 2 Jan 2025 08:37:54 +0100 Subject: [PATCH 3/3] Update install.rst --- .../frontend_installation/install.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/installation_and_configuration/frontend_installation/install.rst b/source/installation_and_configuration/frontend_installation/install.rst index 64af47db56..c914bd7f7f 100644 --- a/source/installation_and_configuration/frontend_installation/install.rst +++ b/source/installation_and_configuration/frontend_installation/install.rst @@ -252,7 +252,7 @@ The complete list of operating system services provided by OpenNebula: | **opennebula-ssh-socks-cleaner** | Periodic cleaner of SSH persistent connections | opennebula | +---------------------------------------+------------------------------------------------------------------------+---------------------------+ -.. note:: Since 5.12, the OpenNebula comes with an integrated SSH agent as the ``opennebula-ssh-agent`` service which removes the need to copy oneadmin's SSH private key across your Hosts. For more information, you can look at the :ref:`passwordless login ` section of the manual. You can opt to disable this service and configure your environment the old way. +.. note:: Since 5.12, OpenNebula comes with an integrated SSH agent, the ``opennebula-ssh-agent`` service, which removes the need to copy oneadmin's SSH private key across your Hosts. For more information, refer to the :ref:`passwordless login ` section of the manual. You are ready to **start** all OpenNebula services with the following command (NOTE: you might want to remove the services from the command arguments if you skipped their configuration steps above):
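As a sanity check after applying this series, the TM column of the CLI listings can be inspected programmatically to confirm that datastores now report the ``local`` driver. A rough sketch in Python, using the ``oneprovision datastore list`` sample from the quick-start section above; the whitespace-based field splitting is a simplification of the CLI's fixed-width output and only works because the sample names contain no spaces.

```python
# Sample post-rename output (header omitted). Columns:
# ID NAME SIZE AVA CLUSTERS IMAGES TYPE DS TM STAT
listing = """\
101 aws-cluste -     -   100 0 sys -  local on
100 aws-cluste 71.4G 90% 100 0 img fs local on
"""

def tm_drivers(text: str) -> dict:
    """Map datastore ID -> TM driver (9th whitespace-separated field)."""
    result = {}
    for line in text.strip().splitlines():
        fields = line.split()
        result[fields[0]] = fields[8]
    return result

# Every datastore in the sample should report the new "local" TM driver.
assert all(tm == "local" for tm in tm_drivers(listing).values())
print(tm_drivers(listing))
```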