1 change: 1 addition & 0 deletions .custom_wordlist.txt
@@ -128,6 +128,7 @@ auth
balancers
balancer's
backend
backends
backport
backported
br
81 changes: 81 additions & 0 deletions how-to/operations/deploy-pure-storage-backend.rst
@@ -0,0 +1,81 @@
Deploy a Pure Storage backend
=============================

Overview
--------

Use this procedure to deploy a Pure Storage backend for Cinder. The backend is
deployed as the ``cinder-volume-purestorage`` charm.

Requirements
------------

You will need:

* a bootstrapped Canonical OpenStack deployment with storage capability already
in place
* network connectivity from the storage nodes to the Pure Storage array
* a valid Pure Storage API token
* a backend instance name that satisfies Juju application naming rules,
for example ``pure-prod``

Inspect the available options
-----------------------------

If you want to review the supported configuration keys before deploying the
backend, run:

.. code-block:: text

sunbeam storage options purestorage

Create the backend configuration
--------------------------------

You can provide the backend settings in a YAML file or pass the equivalent CLI
options directly to the deployment command. The required keys are ``san-ip``
and ``pure-api-token``.

For example, create a file named ``purestorage.yaml`` with the following
content:

.. code-block:: yaml

san-ip: 192.0.2.10
pure-api-token: 01234567-89ab-cdef-0123-456789abcdef
protocol: iscsi
volume-backend-name: pure-iscsi
backend-availability-zone: az1
pure-iscsi-cidr: 192.0.2.0/24

Set ``protocol`` to ``iscsi``, ``fc``, or ``nvme`` to match your deployment.
For NVMe/TCP deployments, you can also set ``pure-nvme-cidr`` and
``pure-nvme-transport``; only ``tcp`` is supported for
``pure-nvme-transport``.
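For reference, an NVMe/TCP variant of ``purestorage.yaml`` might look like the
following (a sketch only; the addresses, API token, and backend name are
placeholders to replace with your own values):

.. code-block:: yaml

   san-ip: 192.0.2.10
   pure-api-token: 01234567-89ab-cdef-0123-456789abcdef
   protocol: nvme
   volume-backend-name: pure-nvme
   backend-availability-zone: az1
   pure-nvme-cidr: 192.0.2.0/24
   pure-nvme-transport: tcp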

Comment on lines +52 to +54

Copilot AI Mar 6, 2026

The Pure Storage how-to mentions configuring ``pure-nvme-transport``, but the
manifest reference's ``purestorage`` config key list omits it. Please align
these docs (either document ``pure-nvme-transport`` in the manifest reference,
or remove/adjust the how-to guidance if it isn't a supported key).

Suggested change:

- For NVMe/TCP deployments, you can also set ``pure-nvme-cidr`` and
- ``pure-nvme-transport``. Only ``tcp`` is supported for
- ``pure-nvme-transport``.
+ For NVMe/TCP deployments, you can also set ``pure-nvme-cidr`` to specify the
+ NVMe/TCP network used by the array. NVMe transport is TCP.
Deploy the backend
------------------

Deploy the backend with the backend type (``purestorage``), a
Juju-compatible backend instance name, and the configuration file:

.. code-block:: text

sunbeam storage add purestorage pure-prod --config-file purestorage.yaml

If you prefer not to use a file, pass the equivalent options directly on the
command line.

Verify the backend
------------------

Check that the backend has been added:

.. code-block:: text

sunbeam storage list

To inspect the deployed backend in more detail, run:

.. code-block:: text

sunbeam storage show pure-prod
106 changes: 106 additions & 0 deletions how-to/operations/enable-a-gated-storage-backend.rst
@@ -0,0 +1,106 @@
Enable and deploy a gated storage backend
=========================================

Use this procedure to unlock a gated in-tree storage backend in the CLI and
then deploy it. For general information about feature gates, see
:doc:`Manage experimental features </how-to/operations/manage-experimental-features>`.

.. note::

Pure Storage is generally available and does not require a feature gate.
Add it directly with ``sunbeam storage add purestorage ...``.

List the available feature gates
--------------------------------

List the gates that are available in your deployment:

.. code:: text

sunbeam list-feature-gates

Identify the gate key for the storage backend that you want to deploy. Current
gated in-tree storage backends use keys such as
``feature.storage.dellsc`` and ``feature.storage.hitachi``.

Enable the storage backend gate
-------------------------------

Unlock the backend by setting its feature gate to ``true``:

.. code:: text

sudo snap set openstack feature.storage.<backend>=true

Replace ``<backend>`` with the storage backend name, for example
``dellsc`` or ``hitachi``.

.. note::

Unlocking the gate makes the backend visible in the CLI. It does not deploy
the backend.

Verify that the backend is unlocked
-----------------------------------

Run the feature gate command again and confirm that the **Unlocked** column is
set for your storage backend:

.. code:: text

sunbeam list-feature-gates

If the backend does not appear immediately in the CLI, run the command again
in a new invocation.

In local multi-node deployments, gate changes propagate automatically across
nodes in roughly 5 to 10 seconds. In MAAS deployments, you may need to run the
same ``snap set`` command on each node even though the gate state is still
stored in the cluster database.

Review the backend options in the CLI
---------------------------------------

After the gate is unlocked, confirm that the backend is now exposed by
the storage commands:

.. code:: text

sunbeam storage add --help

or:

.. code:: text

sunbeam storage options <backend>

Use ``sunbeam storage options <backend>`` to review the configuration fields
required by the backend before you create its YAML configuration file.

Deploy the backend
------------------

Add the backend by using the backend type, an instance name, and a backend
configuration file:

.. code:: text

sunbeam storage add <backend> <name> --config-file <backend>.yaml

For example, to deploy a Hitachi backend:

.. code:: text

sunbeam storage add hitachi hitachi-prod --config-file hitachi.yaml
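The contents of ``hitachi.yaml`` depend on your array. A minimal sketch using
the keys listed in the manifest file reference (all values below are
placeholders, not real credentials or IDs) might look like:

.. code:: yaml

   hitachi-storage-id: 12345
   hitachi-pools: pool0
   san-ip: 192.0.2.20
   san-username: admin
   san-password: secret
   protocol: fc
   volume-backend-name: hitachi-fc
   backend-availability-zone: az1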

Verify the deployment
---------------------

List deployed storage backends and confirm that the new backend is present:

.. code:: text

sunbeam storage list

Once deployed, the backend is managed independently of the feature gate state.
2 changes: 2 additions & 0 deletions how-to/operations/index.rst
@@ -5,6 +5,8 @@ Operations
:maxdepth: 2

cluster-upgrades
deploy-pure-storage-backend
enable-a-gated-storage-backend
live-migration
maintenance-mode
manage-experimental-features
47 changes: 47 additions & 0 deletions reference/manifest-file-reference.rst
@@ -406,3 +406,50 @@ manifest file with all its supported keys.
# Base64 encoded certificate for unit CSR Unique ID: subject
certificate: <Base64 encoded certificate>
...
storage:

  # Storage is keyed by backend type, then by instance name.
  # Current backend types are dellsc, hitachi, and purestorage.
  dellsc:
    <instance-name>:
      config:
        san-ip: <ip-or-hostname>
        san-username: <username>
        san-password: <password>
        protocol: [fc, iscsi]
        # Shared storage config fields.
        volume-backend-name: <backend-name>
        backend-availability-zone: <availability-zone>
        # Additional Dell Storage Center options also use kebab-case.
      # Same structure as core.software.
      software: {}

  hitachi:
    <instance-name>:
      config:
        hitachi-storage-id: <storage-id>
        hitachi-pools: <pool>,<pool>,...
        san-ip: <ip-or-hostname>
        san-username: <username>
        san-password: <password>
        protocol: [fc, iscsi]
        volume-backend-name: <backend-name>
        backend-availability-zone: <availability-zone>
        # Additional Hitachi options also use kebab-case.
      # Same structure as core.software.
      software: {}

  purestorage:
    <instance-name>:
      config:
        san-ip: <ip-or-hostname>
        pure-api-token: <api-token>
        protocol: [iscsi, fc, nvme]
        pure-iscsi-cidr: <cidr>
        pure-nvme-cidr: <cidr>
        pure-nvme-transport: tcp
        volume-backend-name: <backend-name>
        backend-availability-zone: <availability-zone>
        # Additional Pure Storage options also use kebab-case.
      # Same structure as core.software.
      software: {}
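# For example, a filled-in Pure Storage entry for an instance named
# pure-prod (illustrative values only, mirroring the Pure Storage how-to):
#
#   storage:
#     purestorage:
#       pure-prod:
#         config:
#           san-ip: 192.0.2.10
#           pure-api-token: 01234567-89ab-cdef-0123-456789abcdef
#           protocol: iscsi
#           pure-iscsi-cidr: 192.0.2.0/24
#           volume-backend-name: pure-iscsi
#           backend-availability-zone: az1
#         software: {}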