2 changes: 1 addition & 1 deletion antora.yml
@@ -1,6 +1,6 @@
name: rhods-admin
title: Red Hat OpenShift AI Administration
version: 1.33
version: 2.10
nav:
- modules/ROOT/nav.adoc
- modules/chapter1/nav.adoc
Binary file added modules/chapter1/images/dsproject_removal.png
Binary file added modules/chapter1/images/login_openshift_ai.png
Binary file added modules/chapter1/images/nvidia_gpu_instance1.png
Binary file added modules/chapter1/images/nvidia_gpu_instance2.png
Binary file added modules/chapter1/images/nvidia_gpu_operator1.png
Binary file added modules/chapter1/images/nvidia_gpu_operator2.png
Binary file added modules/chapter1/images/nvidia_gpu_operator3.png
Binary file added modules/chapter1/images/nvidia_gpu_operator5.png
Binary file added modules/chapter1/images/nvidia_gpu_pods.png
Binary file added modules/chapter1/images/openshiftai_pods.png
Binary file added modules/chapter1/images/openshiftai_upgrade1.png
Binary file added modules/chapter1/images/uninstall_authorino1.png
Binary file added modules/chapter1/images/uninstall_authorino2.png
Binary file added modules/chapter1/images/uninstall_dsc_delete.png
113 changes: 88 additions & 25 deletions modules/chapter1/pages/dependencies-install-web-console.adoc
@@ -6,48 +6,111 @@ It is generally recommended to install any dependent operators prior to installing *Red{nbsp}Hat OpenShift AI*.

// This section given below is the same as in the previous chapter. Is the whole section with explanation required here again?

https://www.redhat.com/en/technologies/cloud-computing/openshift/pipelines[Red{nbsp}Hat OpenShift Pipelines Operator]::
The *Red Hat OpenShift Pipelines Operator* is required if you want to install the *Data Science Pipelines* component.
https://www.redhat.com/en/technologies/cloud-computing/openshift/serverless[Red{nbsp}Hat OpenShift Serverless Operator]::
The *Red{nbsp}Hat OpenShift Serverless Operator* is required if you want to install the *single-model serving platform component*.

https://catalog.redhat.com/software/container-stacks/detail/5ec53e8c110f56bd24f2ddc4[Red{nbsp}Hat OpenShift Service Mesh Operator]::
The *Red{nbsp}Hat OpenShift Service Mesh Operator* is required if you want to install the *single-model serving platform* component.

https://developers.redhat.com/articles/2021/06/18/authorino-making-open-source-cloud-native-api-security-simple-and-flexible[Red{nbsp}Hat OpenShift Authorino (technical preview) Operator]::
The *Red{nbsp}Hat Authorino Operator* is required to support enforcing authentication policies in *Red Hat OpenShift AI*.

https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html[NVIDIA GPU Operator]::
The *NVIDIA GPU Operator* is required for GPU support in *Red Hat OpenShift AI*.
https://docs.openshift.com/container-platform/latest/hardware_enablement/psap-node-feature-discovery-operator.html[Node Feature Discovery Operator]::
The *Node Feature Discovery Operator* is a prerequisite for the *NVIDIA GPU Operator*.

This section will discuss the process for installing the dependent operators using the OpenShift Web Console.

== Installation of Data Science Pipelines Dependencies
== Installation of Red Hat OpenShift Serverless Dependencies
Suggested change (review comment): replace `== Installation of Red Hat OpenShift Serverless Dependencies` with `== Installation of Red Hat OpenShift Serverless`.

The following section discusses installing the *Red{nbsp}Hat OpenShift Serverless* operator.

=== Lab: Installation of the *Red{nbsp}Hat OpenShift Serverless* operator

1. Log in to Red{nbsp}Hat OpenShift as a user that has the _cluster-admin_ role assigned.

2. Navigate to **Operators** -> **OperatorHub** and search for *Red{nbsp}Hat OpenShift Serverless*.
+
image::serverless_operator_search.png[width=800]

3. Click on the *Red{nbsp}Hat OpenShift Serverless* operator. In the pop-up window, select the *stable* channel and the most recent version of the Serverless operator. Click **Install** to open the operator's installation view.
+
image::serverless_operator_install1.png[width=600]

4. In the `Install Operator` page, select the default values for all the fields and click *Install*.
+
image::serverless_operator_install2.png[width=800]

5. A window showing the installation progress will pop up.
+
image::serverless_operator_install3.png[width=800]

6. When the installation finishes, the operator is ready to be used by *Red{nbsp}Hat OpenShift AI*.
+
image::serverless_operator_install4.png[width=800]

*Red{nbsp}Hat OpenShift Serverless* is now successfully installed.
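
Although this course installs the operator through the web console, the same result can be expressed declaratively with OLM resources. The following is a minimal sketch, not an official installation manifest: the package name, channel, and namespace follow common Red Hat catalog conventions and should be verified against your cluster's catalog (for example, with `oc describe packagemanifest serverless-operator -n openshift-marketplace`).

[source,yaml]
----
# Sketch: install the Serverless operator via OLM (verify names against your catalog)
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-serverless
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: serverless-operators
  namespace: openshift-serverless
spec: {}                      # empty spec: operator watches all namespaces
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-serverless
spec:
  channel: stable             # assumed channel; confirm in OperatorHub
  name: serverless-operator   # assumed package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----

Apply the manifest with `oc apply -f <file>` and wait until the operator's `ClusterServiceVersion` reports the `Succeeded` phase.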

== Installation of Red Hat OpenShift Service Mesh

The following section discusses installing the *Red{nbsp}Hat OpenShift Service Mesh* operator.

=== Lab: Installation of the *Red{nbsp}Hat OpenShift Service Mesh* operator

1. Log in to Red{nbsp}Hat OpenShift as a user that has the _cluster-admin_ role assigned.

2. Navigate to **Operators** -> **OperatorHub** and search for *Red{nbsp}Hat OpenShift Service Mesh*.
+
image::servicemesh_operator_search.png[width=800]

3. Click on the *Red{nbsp}Hat OpenShift Service Mesh* operator. In the pop-up window, select the *stable* channel and the most recent version of the Service Mesh operator. Click **Install** to open the operator's installation view.
+
image::servicemesh_operator_install1.png[width=600]

4. In the `Install Operator` page, select the default values for all the fields and click *Install*.
+
image::servicemesh_operator_install2.png[width=800]

5. A window showing the installation progress will pop up.
+
image::servicemesh_operator_install3.png[width=800]

6. When the installation finishes, the operator is ready to be used by *Red{nbsp}Hat OpenShift AI*.
+
image::servicemesh_operator_install4.png[width=800]

The Data Science Pipelines component utilizes *Red{nbsp}Hat OpenShift Pipelines* as an execution engine for all pipeline runs, and is required to be installed to take advantage of the Data Science Pipelines component.
*Red{nbsp}Hat OpenShift Service Mesh* is now successfully installed.
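
For a scripted or GitOps-driven install, the console steps above roughly correspond to a single OLM `Subscription`. This is a sketch only; the package name and channel are assumptions based on Red Hat catalog conventions and should be confirmed with `oc describe packagemanifest servicemeshoperator -n openshift-marketplace`.

[source,yaml]
----
# Sketch: subscribe to the Service Mesh operator in the global operator namespace
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator    # assumed package name; verify in your catalog
  namespace: openshift-operators
spec:
  channel: stable              # assumed channel
  name: servicemeshoperator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----

Because `openshift-operators` already has a global `OperatorGroup`, no extra `Namespace` or `OperatorGroup` objects are needed here.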

The following section discusses installing the *Red{nbsp}Hat OpenShift Pipelines* operator.
== Installation of Red Hat Authorino

=== Lab: Installation of the *Red{nbsp}Hat OpenShift Pipelines* operator
=== Lab: Installation of the *Red{nbsp}Hat Authorino* operator

1. Log in to Red{nbsp}Hat OpenShift as a user that has the _cluster-admin_ role assigned.

2. Navigate to **Operators** -> **OperatorHub** and search for *Red{nbsp}Hat OpenShift Pipelines*
2. Navigate to **Operators** -> **OperatorHub** and search for *Red{nbsp}Hat Authorino*.
+
image::pipeline_search.png[width=800]
image::authorino_operator_search.png[width=800]

3. Click on the *Red{nbsp}Hat OpenShift Pipelines* operator. In the pop up window, select the *latest* channel and the most recent version of the pipelines operator. Click on **Install** to open the operator's installation view.
3. Click on the *Red{nbsp}Hat Authorino* operator. In the pop-up window, select the *stable* channel and the most recent version of the Authorino operator. Click **Install** to open the operator's installation view.
+
image::pipeline_install1.png[width=800]
image::authorino_operator_install1.png[width=600]

4. In the `Install Operator` page, select the default values for all the fields and click *Install*.
+
image::pipeline_install2.png[width=800]
image::authorino_operator_install2.png[width=800]

5. A window showing the installation progress will pop up.
+
image::pipeline_install3.png[width=800]
image::authorino_operator_install3.png[width=800]

6. When the installation finishes, the operator is ready to be used by *Red{nbsp}Hat OpenShift AI*.
+
image::pipeline_install4.png[width=800]
image::authorino_operator_install4.png[width=800]

*Red{nbsp}Hat OpenShift Pipelines* is now successfully installed.
*Red{nbsp}Hat Authorino* is now successfully installed.
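
The equivalent declarative install is again a single `Subscription`. Treat the names below as assumptions: Authorino is a technology-preview operator, so its package name and channel may differ between catalog versions; check with `oc get packagemanifests -n openshift-marketplace | grep -i authorino` before applying.

[source,yaml]
----
# Sketch: subscribe to the Authorino operator (names are illustrative)
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: authorino-operator     # assumed package name
  namespace: openshift-operators
spec:
  channel: stable              # may be a preview channel for this operator
  name: authorino-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----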

TIP: For assistance in installing OpenShift Pipelines from YAML or via ArgoCD, refer to examples found in the https://github.com/redhat-cop/gitops-catalog/tree/main/openshift-pipelines-operator[redhat-cop/gitops-catalog/openshift-pipelines-operator] GitHub repo.
The following section discusses installing the *Red{nbsp}Hat Authorino* operator.

== Lab: Installation of GPU Dependencies

@@ -67,33 +130,33 @@ TIP: To view the list of GPU models supported by the *NVIDIA GPU Operator* refer

2. Navigate to **Operators** -> **OperatorHub** and search for *Node Feature Discovery*
+
image::nfd_search.png[width=800]
image::node_feature_discovery_search.png[width=800]

3. Two options for the *Node Feature Discovery* operator will be available. Click the one with *Red Hat* in the top right-hand corner, and in the pop-up window click **Install** to open the operator's installation view.
+
IMPORTANT: Make sure you select *Node Feature Discovery* from *Red{nbsp}Hat* NOT the Community version.
+
image::nfd_install1.png[width=800]
image::node_feature_discovery_install1.png[width=800]

4. In the `Install Operator` page, select the option to *Enable Operator recommended cluster monitoring on this Namespace*, and keep all the rest of the parameters at their default values.
+
NOTE: Some of these options may vary slightly depending on your version of OpenShift. Please refer to the official Node Feature Discovery Documentation for your version of OpenShift for the recommended settings.
+
image::nfd_install2.png[width=800]
image::node_feature_discovery_install2.png[width=800]

5. Click the **Install** button at the bottom of the page to proceed with the installation. A window showing the installation progress will pop up.

6. When the installation finishes, click **View Operator** to configure the `Node Feature Discovery` operator.

7. Click the **Create instance** button for the *NodeFeatureDiscovery* object.
+
image::nfd_configure1.png[width=800]
image::node_feature_discovery_instance1.png[width=800]

8. In the `Create NodeFeatureDiscovery` page, leave all fields at their default values, and click the **Create** button.

9. A new set of pods should appear in the **Workloads** -> **Pods** section managed by the *nfd-worker* DaemonSet. Node Feature Discovery will now be able to automatically detect information about the nodes in the cluster and apply labels to those nodes.
+
image::nfd_verify.png[width=800]
image::nvidia_gpu_pods.png[width=800]

TIP: For assistance in installing the Node Feature Discovery Operator from YAML or via ArgoCD, refer to examples found in the https://github.com/redhat-cop/gitops-catalog/tree/main/nfd[redhat-cop/gitops-catalog/nfd] GitHub repo.
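
As a declarative alternative to the *Create instance* step above, the `NodeFeatureDiscovery` object can be created from a manifest. This is a minimal sketch that relies on operator defaults; the API group, instance name, and namespace are assumptions that should be checked against your installed operator (for example, with `oc api-resources | grep -i nodefeature`).

[source,yaml]
----
# Sketch: a minimal NodeFeatureDiscovery instance; the operator fills in defaults
apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance           # illustrative name
  namespace: openshift-nfd     # assumed operator namespace; match your install
spec: {}
----

After it is applied, the *nfd-worker* DaemonSet pods should appear and node labels (such as PCI vendor labels used by the NVIDIA GPU Operator) should be added to each node.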

@@ -105,29 +168,29 @@ TIP: For assistance in installing the Node Feature Discovery Operator from YAML

2. Navigate to **Operators** -> **OperatorHub** and search for *NVIDIA GPU Operator*
+
image::gpu_search.png[width=800]
image::nvidia_operator_search.png[width=800]

3. Click the `NVIDIA GPU Operator` tile. In the pop up window leave all fields at their default values, and click on **Install** to open the operator's installation view.
+
image::gpu_install1.png[width=800]
image::nvidia_gpu_operator1.png[width=800]

4. In the `Install Operator` page, keep all the parameters at their default values, and click the **Install** button at the bottom of the page to proceed with the installation.
+
image::gpu_install2.png[width=800]
image::nvidia_gpu_operator2.png[width=800]

5. A window showing the installation progress will pop up. Wait while the operator finishes installing.

6. When the installation finishes, click the **View Operator** button.

7. Click the **Create instance** button for the *ClusterPolicy* object.
+
image::gpu_configure1.png[width=800]
image::nvidia_gpu_instance1.png[width=800]

8. In the `Create ClusterPolicy` page, leave all fields at their default values, and click the **Create** button.

9. After the *ClusterPolicy* is created, the *NVIDIA GPU Operator* will update its status to *State: ready*.
+
image::gpu_verify1.png[width=800]
image::nvidia_gpu_instance2.png[width=800]

10. After the *Red{nbsp}Hat OpenShift AI* operator has been installed and configured, users will be able to see an option for "Number of GPUs" when creating a new workbench.
+
2 changes: 1 addition & 1 deletion modules/chapter1/pages/index.adoc
@@ -7,6 +7,6 @@ OpenShift AI is supported in two configurations:
For information about OpenShift AI on a Red Hat managed environment, see https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_cloud_service/1[Product Documentation for Red Hat OpenShift AI Cloud Service 1]

* Self-managed software that you can install on-premise or on the public cloud in a self-managed environment, such as *OpenShift Container Platform*.
For information about OpenShift AI as self-managed software on your OpenShift cluster in a connected or a disconnected environment, see https://access.redhat.com/documentation/en-us/red_hat_openshift_ai_self-managed/2.8[Product Documentation for Red Hat OpenShift AI Self-Managed 2.8]
For information about OpenShift AI as self-managed software on your OpenShift cluster in a connected or a disconnected environment, see https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.10[Product Documentation for Red Hat OpenShift AI Self-Managed].

In this course we cover installation of *Red Hat OpenShift AI self-managed* using the OpenShift Web Console.
12 changes: 9 additions & 3 deletions modules/chapter1/pages/install-general-info.adoc
@@ -7,10 +7,16 @@ Red{nbsp}Hat OpenShift AI is available to install as an operator through the *OperatorHub*.
The product name has been recently changed to *Red{nbsp}Hat OpenShift AI (RHOAI)* (old name *Red{nbsp}Hat OpenShift Data Science*). In this course, most references to the product use the new name. However, references to some UI elements might still use the previous name.
====

In addition to the *Red{nbsp}Hat OpenShift AI* Operator, there are other operators that you may need to install, depending on which features and components of *Red{nbsp}Hat OpenShift AI* you want to install and use.

https://www.redhat.com/en/technologies/cloud-computing/openshift/pipelines[Red{nbsp}Hat OpenShift Pipelines Operator]::
The *Red{nbsp}Hat OpenShift Pipelines Operator* is required if you want to install the *Red{nbsp}Hat OpenShift AI Pipelines* component.
https://www.redhat.com/en/technologies/cloud-computing/openshift/serverless[Red{nbsp}Hat OpenShift Serverless Operator]::
The *Red Hat OpenShift Serverless operator* provides a collection of APIs that enables containers, microservices and functions to run "serverless". The *Red{nbsp}Hat OpenShift Serverless Operator* is required if you want to install the single-model serving platform component.

https://catalog.redhat.com/software/container-stacks/detail/5ec53e8c110f56bd24f2ddc4[Red{nbsp}Hat OpenShift Service Mesh Operator]::
The *Red Hat OpenShift Service Mesh operator* provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. The *Red{nbsp}Hat OpenShift Service Mesh Operator* is required if you want to install the single-model serving platform component.

https://developers.redhat.com/articles/2021/06/18/authorino-making-open-source-cloud-native-api-security-simple-and-flexible[Red{nbsp}Hat Authorino (technical preview) Operator]::
*Red Hat Authorino* is an open source, Kubernetes-native external authorization service to protect APIs. The *Red{nbsp}Hat Authorino Operator* is required to support enforcing authentication policies in Red Hat OpenShift AI.

https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/index.html[NVIDIA GPU Operator]::
The *NVIDIA GPU Operator* is required for GPU support in Red Hat OpenShift AI.
12 changes: 6 additions & 6 deletions modules/chapter1/pages/rhods-install-web-console.adoc
@@ -10,12 +10,12 @@ IMPORTANT: The installation requires a user with the _cluster-admin_ role

. Navigate to **Operators** -> **OperatorHub** and search for *OpenShift AI*.
+
image::rhods_install1.png[title=Search for OpenShift AI operator,width=800]
image::openshift_ai_operator_search.png[title=Search for OpenShift AI operator,width=800]
// The sentence in this image is not captured correctly

. Click on the `Red{nbsp}Hat OpenShift AI` operator. In the pop up window that opens, ensure you select the latest version in the *stable* channel and click on **Install** to open the operator's installation view.
+
image::rhods_install2.png[title=OpenShift AI Operator Details,width=800]
image::openshift_ai_operator_install2.png[title=OpenShift AI Operator Details,width=800]

. In the `Install Operator` page, leave all of the options as default and click on the *Install* button to start the installation.

@@ -27,24 +27,24 @@

. After creating the *DataScienceCluster*, a view showing the *DataScienceCluster* details opens. Wait until the status of the cluster reads *Phase: Ready*. This represents the status of the whole cluster.
+
image::rhods2-clusters.png[title=DataScienceCluster Instance Ready,width=800]
image::openshift_ai_dsc_cluster2.png[title=DataScienceCluster Instance Ready,width=800]

. The operator should be installed and configured now.
In the applications window in the right upper corner of the screen the *Red{nbsp}Hat OpenShift AI* dashboard should be available.
+
image::rhods_verify1.png[title=RHOAI Dashboard]
image::red_hat_openshift_ai1.png[title=RHOAI Dashboard]

. Click the *Red{nbsp}Hat OpenShift AI* button to log in to the *Red{nbsp}Hat OpenShift AI* dashboard. Log in as the *admin* user (with the same password that you used to log in to the OpenShift web console).
+
image::rhods_verify2.png[title=Red Hat OpenShift AI Log in,width=800]

. You should be able to see the Red Hat OpenShift AI home page.
+
image::rhoai-home.png[title=Red Hat OpenShift AI Home Page]
image::openshiftai_dashboard1.png[title=Red Hat OpenShift AI Home Page]
+
IMPORTANT: It may take a while to start all the service pods, so the login window may not be accessible immediately. If you get an error, check the status of the pods in the *redhat-ods-applications* project.
Navigate to *Workloads* -> *Pods* and select the *redhat-ods-applications* project. All pods must be running and ready; if they are not, wait until they are.
+
image::rhods_verify_pods.png[title=Pods in Running state,width=800]
image::openshiftai_pods.png[title=Pods in Running state,width=800]

TIP: For assistance installing the *Red{nbsp}Hat Openshift AI* from YAML or via ArgoCD, refer to examples found in the https://github.com/redhat-cop/gitops-catalog/tree/main/rhods-operator[redhat-cop/gitops-catalog/rhods-operator] GitHub repo.
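
For reference, the `DataScienceCluster` created through the console in the steps above corresponds to a custom resource along the following lines. This is a hedged sketch of the 2.x API: the exact component names and fields vary between OpenShift AI releases, so verify them with `oc explain datasciencecluster.spec.components` on your cluster before using it.

[source,yaml]
----
# Sketch: a DataScienceCluster enabling common components (field names per the 2.x API)
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc            # illustrative name
spec:
  components:
    dashboard:
      managementState: Managed
    workbenches:
      managementState: Managed
    datasciencepipelines:
      managementState: Managed   # requires the OpenShift Pipelines operator
    kserve:
      managementState: Managed   # requires Serverless, Service Mesh, and Authorino
    modelmeshserving:
      managementState: Managed
----

Components you do not need can be set to `managementState: Removed` instead.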