@@ -70,7 +70,7 @@ Set the module's `pluginId` to `catalog` to match the `pluginId` of the `keycloak` plugin.
+
[source,javascript]
----
-backend.add(import(backstage-plugin-catalog-backend-module-keycloak-transformer))
+backend.add(import('backstage-plugin-catalog-backend-module-keycloak-transformer'));
----
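
A sketch of how this registration typically appears in `packages/backend/src/index.ts` (the file path and the surrounding lines are assumptions, not part of this change):

[source,javascript]
----
import { createBackend } from '@backstage/backend-defaults';

const backend = createBackend();
// Register the Keycloak transformer module alongside the catalog backend
backend.add(import('backstage-plugin-catalog-backend-module-keycloak-transformer'));
backend.start();
----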

.Verification
@@ -12,7 +12,7 @@ You can customize the Adoption Insights plugin to suit your needs by disabling or configuring specific settings.

* To customize `maxBufferSize`, `flushInterval`, `debug`, and `licensedUsers` in the Adoption Insights plugin, in your {product} `app-config.yaml` file, update the relevant settings as shown in the following code:
+
-[source,terminal]
+[source,yaml]
----
app:
  analytics:
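    # The rest of this block is truncated in the diff view; a hypothetical
    # completion, assuming the settings nest under an `adoptionInsights` key
    # and that the values shown are the defaults to adjust:
    adoptionInsights:
      maxBufferSize: 20
      flushInterval: 5000
      debug: false
      licensedUsers: 100
----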
@@ -19,7 +19,7 @@ By setting the `spec.monitoring.enabled` field to `true` in your {product} custom resource (CR), you can enable monitoring.
+
[source,bash]
----
-oc edit Backstage <instance-name>
+$ oc edit Backstage <instance-name>
----
. In the CR, locate the `spec` field and add the `monitoring` configuration block.
+
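A minimal sketch of the block, assuming the field layout named in the introduction:
+
[source,yaml]
----
spec:
  monitoring:
    enabled: true
----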
@@ -34,7 +34,7 @@ Operator-backed deployment::
----
# Update OPERATOR_NS accordingly
OPERATOR_NS=rhdh-operator
-kubectl edit configmap backstage-default-config -n "${OPERATOR_NS}"
+$ kubectl edit configmap backstage-default-config -n "${OPERATOR_NS}"
----

. Find the `deployment.yaml` key in the ConfigMap and add the annotations to the `spec.template.metadata.annotations` field as follows:
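+
A hypothetical example; the annotation values, including the port, are assumptions to adapt to the metrics endpoint your deployment exposes:
+
[source,yaml]
----
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/path: '/metrics'
        prometheus.io/port: '9464'
----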
@@ -27,7 +27,7 @@ To verify if the scraping works:
+
[source,bash]
----
-kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090
+$ kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090
----

. Open your web browser and navigate to `pass:c[http://localhost:9090]` to access the Prometheus console.
2 changes: 1 addition & 1 deletion modules/observe/proc-enabling-azure-monitor-metrics.adoc
@@ -7,7 +7,7 @@ To enable managed Prometheus monitoring, use the `--enable-azure-monitor-metrics` flag:

[source,bash]
----
-az aks create/update --resource-group <your-ResourceGroup> --name <your-Cluster> --enable-azure-monitor-metrics
+$ az aks create/update --resource-group <your-ResourceGroup> --name <your-Cluster> --enable-azure-monitor-metrics
----

The previous command installs the metrics add-on, which gathers https://learn.microsoft.com/en-us/azure/azure-monitor/metrics/prometheus-metrics-overview[Prometheus metrics], and enables monitoring of Azure resources through native Azure Monitor metrics. You can view the results in the portal under *Monitoring -> Insights*. For more information, see https://learn.microsoft.com/en-us/azure/azure-monitor/platform/monitor-azure-resource[Monitor Azure resources with Azure Monitor].
12 changes: 6 additions & 6 deletions modules/observe/proc-forward-audit-log-splunk.adoc
@@ -20,7 +20,7 @@ You can use the {logging-brand-name} ({logging-short}) Operator and a `ClusterLogForwarder` resource to forward audit logs to Splunk.
.Example command to switch to a namespace
[source,bash]
----
-oc project openshift-logging
+$ oc project openshift-logging
----
--
. Create a `serviceAccount` named `log-collector` and bind the `collect-application-logs` role to the `serviceAccount`:
@@ -29,13 +29,13 @@ oc project openshift-logging
.Example command to create a `serviceAccount`
[source,bash]
----
-oc create sa log-collector
+$ oc create sa log-collector
----

.Example command to bind a role to a `serviceAccount`
[source,bash]
----
-oc create clusterrolebinding log-collector --clusterrole=collect-application-logs --serviceaccount=openshift-logging:log-collector
+$ oc create clusterrolebinding log-collector --clusterrole=collect-application-logs --serviceaccount=openshift-logging:log-collector
----
--
. Generate a `hecToken` in your Splunk instance.
@@ -45,13 +45,13 @@
.Example command to create a key/value secret with `hecToken`
[source,bash]
----
-oc -n openshift-logging create secret generic splunk-secret --from-literal=hecToken=<HEC_Token>
+$ oc -n openshift-logging create secret generic splunk-secret --from-literal=hecToken=<HEC_Token>
----

.Example command to verify a secret
[source,bash]
----
-oc -n openshift-logging get secret/splunk-secret -o yaml
+$ oc -n openshift-logging get secret/splunk-secret -o yaml
----
--
. Create a basic `ClusterLogForwarder` resource YAML file as follows:
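+
A minimal sketch; the exact schema depends on your {logging-short} version, so verify the field names against the `ClusterLogForwarder` CRD in your cluster:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: log-collector
  outputs:
    - name: splunk-receiver
      type: splunk
      splunk:
        url: <splunk_hec_url>
        authentication:
          token:
            from: secret
            secret:
              name: splunk-secret
              key: hecToken
  pipelines:
    - name: application-logs
      inputRefs:
        - application
      outputRefs:
        - splunk-receiver
----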
@@ -160,7 +160,7 @@ pipelines:
.Example command to apply `ClusterLogForwarder` configuration
[source,bash]
----
-oc apply -f <ClusterLogForwarder-configuration.yaml>
+$ oc apply -f <ClusterLogForwarder-configuration.yaml>
----
--
. Optional: To reduce the risk of log loss, configure your `ClusterLogForwarder` pods using the following options:
@@ -30,7 +30,7 @@ You must use the proxy setup to ensure configuration compatibility if the Roadie
+
[source,bash]
----
-echo -n 'your-atlassian-email:your-jira-api-token' | base64
+$ echo -n 'your-atlassian-email:your-jira-api-token' | base64
----
** Jira datacenter: Create and use a Personal Access Token (PAT) in your Jira datacenter account. For more information, see the https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html[Atlassian] documentation.
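
A hypothetical proxy entry for your `app-config.yaml` (the endpoint path, target, and header names are assumptions modeled on the Roadie Backstage Jira plugin conventions):

[source,yaml]
----
proxy:
  endpoints:
    '/jira/api':
      # Jira Cloud target; use your Jira datacenter base URL instead where applicable
      target: https://<your-company>.atlassian.net
      headers:
        Authorization: Basic ${JIRA_TOKEN}
        Accept: application/json
        Content-Type: application/json
        X-Atlassian-Token: no-check
----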

@@ -3,15 +3,15 @@
[id="proc-populating-the-api-definition-tab_{context}"]
= Populating the API Definition tab in {product-very-short} API entities

-Since {rhoai-short} does not expose the OpenAPI specification by default, the AI platform engineer can take the following steps to provide this valuable information:
+Because {rhoai-short} does not expose the OpenAPI specification by default, the AI platform engineer can take the following steps to provide this valuable information:

.Procedure

. Retrieve OpenAPI JSON: Use a tool like `curl` to fetch the specification directly from the running endpoint of the AI model server. The following command provides the precise endpoint (`/openapi.json`) and shows how to include a `Bearer` token if the model requires authentication for access.
+
[source,bash]
----
-curl -k -H "Authorization: Bearer $MODEL_API_KEY" https://$MODEL_ROOT_URL_INCLUDING_PORT/openapi.json | jq > open-api.json
+$ curl -k -H "Authorization: Bearer $MODEL_API_KEY" https://$MODEL_ROOT_URL_INCLUDING_PORT/openapi.json | jq > open-api.json
----

. Set Property in {rhoai-short}.
@@ -24,4 +24,4 @@ We recommend using *Model Version* instead of *Registered Model* to maintain stability.

.. In the **Properties** section, set a key/value pair where the key is `API Spec` and the value is the entire JSON content from the `open-api.json` file.

-. Propagation: The {openshift-ai-connector-name} periodically polls the {rhoai-short} Model Registry, propagates this JSON, and renders the interactive API documentation in the *Definition* tab of the {product-very-short} API entity.
+. Propagation: The {openshift-ai-connector-name} periodically polls the {rhoai-short} Model Registry, propagates this JSON, and renders the interactive API documentation in the *Definition* tab of the {product-very-short} API entity.
@@ -18,7 +18,7 @@ Validate that the dynamic plugins have been successfully installed into your {pr

[source,bash,subs=+attributes]
----
-oc logs -c install-dynamic-plugins deployment/<your {product-very-short} deployment>
+$ oc logs -c install-dynamic-plugins deployment/<your {product-very-short} deployment>
----

The `install-dynamic-plugins` logs let you check the following entries to confirm a successful installation:
@@ -53,22 +53,22 @@ The {openshift-ai-connector-name-short} sidecars manage the data fetching and storage.
+
[source,bash]
----
-oc get configmap bac-import-model -o json | jq -r '.binaryData | to_entries[] | "=== \(.key) ===\n" + (.value | @base64d | fromjson | .body | @base64d | fromjson | tostring)' | jq -R 'if startswith("=== ") then . else (. | fromjson) end'
+$ oc get configmap bac-import-model -o json | jq -r '.binaryData | to_entries[] | "=== \(.key) ===\n" + (.value | @base64d | fromjson | .body | @base64d | fromjson | tostring)' | jq -R 'if startswith("=== ") then . else (. | fromjson) end'
----

. Check Location Service API: Confirm the location service is providing data to the {product-very-short} Entity Provider.
+
[source,bash,subs=+attributes]
----
-oc rsh -c backstage-backend deployment/<your {product-very-short} deployment>
-curl http://localhost:9090/list
+$ oc rsh -c backstage-backend deployment/<your {product-very-short} deployment>
+$ curl http://localhost:9090/list
----

. Check Sidecar Container Logs:
+
[source,bash]
----
-oc logs -c rhoai-normalizer deployment/<your {product-very-short} deployment>
-oc logs -c storage-rest deployment/<your {product-very-short} deployment>
-oc logs -c location deployment/<your {product-very-short} deployment>
+$ oc logs -c rhoai-normalizer deployment/<your {product-very-short} deployment>
+$ oc logs -c storage-rest deployment/<your {product-very-short} deployment>
+$ oc logs -c location deployment/<your {product-very-short} deployment>
----
@@ -9,40 +9,40 @@ To access the same {rhoai-short} data as the connector, use `curl` to query the {rhoai-short} APIs directly:
+
[source,bash]
----
-curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/registered_models | jq
+$ curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/registered_models | jq
----

* Example showing how to fetch model versions
+
[source,bash]
----
-curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/model_versions | jq
+$ curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/model_versions | jq
----

* Example showing how to fetch model artifacts
+
[source,bash]
----
-curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/model_artifacts | jq
+$ curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/model_artifacts | jq
----

* Example showing how to fetch inference services
+
[source,bash]
----
-curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/inference_services | jq
+$ curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/inference_services | jq
----

* Example showing how to fetch serving environments
+
[source,bash]
----
-curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/serving_environments | jq
+$ curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/serving_environments | jq
----

* Example showing how to fetch catalog sources
+
[source,bash]
----
-curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_CATALOG_URL/api/model_catalog/v1alpha1/sources | jq
+$ curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_CATALOG_URL/api/model_catalog/v1alpha1/sources | jq
----
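
A hypothetical way to set the variables used in these examples (assumes an active `oc` session and that you know the routes exposed in your cluster):

[source,bash]
----
# Token for the Authorization header; any account with registry access works
TOKEN=$(oc whoami -t)
# Base URLs; substitute the routes from your environment
RHOAI_MODEL_REGISTRY_URL=https://<model-registry-route>
RHOAI_MODEL_CATALOG_URL=https://<model-catalog-route>
----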
@@ -15,7 +15,7 @@ To run the script from the root directory of the repository, you must use the `-
+
[source,bash]
----
-./scripts/build.sh --image=quay.io/orchestrator/demo-basic:test -w 01_basic/ -m 01_basic/manifests
+$ ./scripts/build.sh --image=quay.io/orchestrator/demo-basic:test -w 01_basic/ -m 01_basic/manifests
----
+
This build command produces the following two artifacts:
4 changes: 2 additions & 2 deletions modules/orchestrator/proc-creating-and-running-workflows.adoc
@@ -10,14 +10,14 @@ The `kn-workflow` CLI is an essential tool that generates workflow manifests and runs workflows locally.
+
[source,bash]
----
-kn-workflow quarkus create --name <specify project name, for example ,00_new_project>
+$ kn-workflow quarkus create --name <specify project name, for example, 00_new_project>
----

. Edit the workflow, add the schema and any required files, and run it locally from the project folder as shown in the following example:
+
[source,bash]
----
-kn-workflow quarkus run
+$ kn-workflow quarkus run
----
. Run the workflow locally by using the `kn-workflow run` command, which pulls the following image:
+
24 changes: 12 additions & 12 deletions modules/orchestrator/proc-deploying-workflows-on-a-cluster.adoc
@@ -3,7 +3,7 @@
[id="proc-deploying-workflows-on-a-cluster_{context}"]
= Deploying workflows on a cluster

-You can deploy the workflow on a cluster, since the image is pushed to the image registry and the deployment manifests are available.
+You can deploy the workflow on a cluster, because the image is pushed to the image registry and the deployment manifests are available.

.Prerequisites

@@ -24,23 +24,23 @@ For instructions on how to install these components, see the {orchestrator-book-
+
[source,bash]
----
-kubectl create -n <your_namespace> -f ./01_basic/manifests/.
+$ kubectl create -n <your_namespace> -f ./01_basic/manifests/.
----

. After deployment, monitor the status of the workflow pods as shown in the following example:
+
-[source,yaml]
+[source,bash]
----
-kubectl get pods -n <your_namespace> -l app=basic
+$ kubectl get pods -n <your_namespace> -l app=basic
----
+
The pod may initially appear in an `Error` state because of missing or incomplete configuration in the Secret or ConfigMap.

. Inspect the Pod logs as shown in the following example:
+
-[source,yaml]
+[source,bash]
----
-oc logs -n <your_namespace> basic-f7c6ff455-vwl56
+$ oc logs -n <your_namespace> basic-f7c6ff455-vwl56
----
+
The following code is an example of the output:
@@ -57,9 +57,9 @@ The error indicates a missing property: `quarkus.openapi-generator.notifications

. If the logs show the `ConfigurationException: Failed to read configuration properties` error or indicate a missing value, retrieve the ConfigMap as shown in the following example:
+
-[source,yaml]
+[source,bash]
----
-oc get -n <your_namespace> configmaps basic-props -o yaml
+$ oc get -n <your_namespace> configmaps basic-props -o yaml
----
+
The following is an example of the output:
@@ -79,19 +79,19 @@ Resolve the placeholders by supplying values through a Secret.

. You must edit the corresponding Secret and provide appropriate base64-encoded values to resolve the placeholders in `application.properties` as shown in the following example:
+
-[source,yaml]
+[source,bash]
----
-kubectl edit secrets -n <your_namespace> basic-secrets
+$ kubectl edit secrets -n <your_namespace> basic-secrets
----
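+
A hypothetical example of producing a base64-encoded value to paste into the Secret (the property value shown is an assumption):
+
[source,bash]
----
echo -n 'https://notifications.example.com' | base64
----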
. Restart the workflow Pod for Secret changes to take effect in OpenShift Serverless Logic `v1.36`.
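+
One way to trigger the restart is to delete the Pod and let its Deployment recreate it (a sketch, assuming the `app=basic` label used above):
+
[source,bash]
----
oc delete pod -n <your_namespace> -l app=basic
----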

.Verification

. Verify the deployment status by checking the Pods again as shown in the following example:
+
-[source,yaml]
+[source,bash]
----
-oc get pods -n <your_namespace> -l app=basic
+$ oc get pods -n <your_namespace> -l app=basic
----
+
The expected status for a successfully deployed workflow Pod is as shown in the following example:
2 changes: 1 addition & 1 deletion modules/orchestrator/proc-enable-orchestrator-plugin.adoc
@@ -26,7 +26,7 @@ Additionally, the `ref: sonataflow` field installs the OpenShift Serverless and OpenShift Serverless Logic Operators.
====
+
.Example: Complete configuration of the Orchestrator plugin
-[source,subs="+attributes,+quotes"]
+[source,yaml,subs="+attributes,+quotes"]
----
apiVersion: v1
kind: ConfigMap
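metadata:
  name: orchestrator-plugin
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      # The package path below is an assumption (the original example is
      # truncated in the diff view); use the Orchestrator entry from your
      # dynamic-plugins.default.yaml file.
      - package: "./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-orchestrator"
        disabled: false
        dependencies:
          - ref: sonataflow
----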
@@ -11,7 +11,7 @@ You can use Orchestrator Infrastructure for {product} to install components for the Orchestrator plugin.
+
[source,terminal,subs="attributes+"]
----
-helm install <release_name> redhat-developer/redhat-developer-hub-orchestrator-infra
+$ helm install <release_name> redhat-developer/redhat-developer-hub-orchestrator-infra
----
+
[NOTE]
2 changes: 1 addition & 1 deletion modules/orchestrator/proc-helper-script-overview.adoc
@@ -16,7 +16,7 @@ Do not use `plugin-infra.sh` in production.
+
[source,terminal,subs="+attributes,+quotes"]
----
-curl -sSLO https://raw.githubusercontent.com/redhat-developer/rhdh-operator/refs/heads/release-{product-version}/config/profile/rhdh/plugin-infra/plugin-infra.sh # Specify the {product} version in the URL or use main
+$ curl -sSLO https://raw.githubusercontent.com/redhat-developer/rhdh-operator/refs/heads/release-{product-version}/config/profile/rhdh/plugin-infra/plugin-infra.sh # Specify the {product} version in the URL or use main
----

. Run the script:
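+
A minimal sketch; assumes the script was downloaded to the current directory in the previous step:
+
[source,bash]
----
bash ./plugin-infra.sh
----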
@@ -70,7 +70,7 @@ yq -r '.data."controllers_cfg.yaml" | from_yaml | .. | select(tag == "!!str") |
+
[source,terminal,subs="+attributes,+quotes"]
----
-oc-mirror --config=ImageSetConfiguration.yaml file:///path/to/mirror-archive --authfile /path/to/authfile --v2
+$ oc-mirror --config=ImageSetConfiguration.yaml file:///path/to/mirror-archive --authfile /path/to/authfile --v2
----
+
[NOTE]
@@ -82,8 +82,8 @@ The `oc-mirror` command pulls the charts listed in the `ImageSetConfiguration` file.
+
[source,terminal,subs="+attributes,+quotes"]
----
-cd <workspace folder>/working-dir/cluster-resources/
-oc apply -f .
+$ cd <workspace folder>/working-dir/cluster-resources/
+$ oc apply -f .
----
+
. Transfer the generated mirror archive file, for example, `/path/to/mirror-archive/mirror_000001.tar`, to a bastion host within your disconnected environment.
@@ -92,7 +92,7 @@
+
[source,terminal,subs="+attributes,+quotes"]
----
-oc-mirror --v2 --from <mirror-archive-file> docker://<target-registry-url:port> --workspace file://<workspace folder> --authfile /path/to/authfile
+$ oc-mirror --v2 --from <mirror-archive-file> docker://<target-registry-url:port> --workspace file://<workspace folder> --authfile /path/to/authfile
----
+
where: