diff --git a/modules/openshift-ai-connector-for-rhdh/proc-setting-up-openshift-ai-connector-for-rhdh-with-rhoai.adoc b/modules/openshift-ai-connector-for-rhdh/proc-setting-up-openshift-ai-connector-for-rhdh-with-rhoai.adoc index 354d06ea69..62e1aff196 100644 --- a/modules/openshift-ai-connector-for-rhdh/proc-setting-up-openshift-ai-connector-for-rhdh-with-rhoai.adoc +++ b/modules/openshift-ai-connector-for-rhdh/proc-setting-up-openshift-ai-connector-for-rhdh-with-rhoai.adoc @@ -156,8 +156,7 @@ type: kubernetes.io/service-account-token ---- . Update your {product-very-short} dynamic plugin configuration. -The {product-very-short} Pod requires two dynamic plugins. -.. In your {product-very-short} dynamic plugins ConfigMap, add the following code: +The {product-very-short} Pod requires two dynamic plugins. In your {product-very-short} dynamic plugins ConfigMap, add the following code: + [source,yaml] ---- @@ -172,9 +171,7 @@ plugins: ** If {product-very-short} was installed using the Operator, modify your {product-very-short} custom resource (CR) instance. ** If {product-very-short} was installed using the Helm charts, modify the *Deployment* specification. -. The system relies on three sidecar containers ({openshift-ai-connector-name}) running alongside the `backstage-backend` container. - -Add these sidecar containers to your configuration referencing the `rhdh-rhoai-connector-token` Secret: +. The system relies on three sidecar containers ({openshift-ai-connector-name}) running alongside the `backstage-backend` container. Add these sidecar containers to your configuration referencing the `rhdh-rhoai-connector-token` Secret: ** `location`: Provides the REST API for {product-very-short} plugins to fetch model metadata. ** `storage-rest`: Maintains a cache of AI Model metadata in a ConfigMap called `bac-import-model`. ** `rhoai-normalizer`: Acts as a Kubernetes controller and {rhoai-short} client, normalizing {rhoai-short} metadata for the connector. 
The following code block is an example: @@ -199,7 +196,7 @@ spec: envFrom: - secretRef: name: rhdh-rhoai-connector-token - image: quay.io/redhat-ai-dev/model-catalog-location-service@sha256:763311530fb842a1366447e661ca22563e6ef22505d993716aea350bbbfae9a0 + image: quay.io/redhat-ai-dev/model-catalog-location-service@sha256:c4471e07be6e0dbe821613053e6264a552cacda7f8604dbf306e6ac9e81e8ab9 imagePullPolicy: Always name: location ports: @@ -249,7 +246,7 @@ spec: envFrom: - secretRef: name: rhdh-rhoai-connector-token - image: quay.io/redhat-ai-dev/model-catalog-rhoai-normalizer@sha256:fe6c05d57495d6217c4d584940ec552c3727847ff60f39f5d04f94be024576d8 + image: quay.io/redhat-ai-dev/model-catalog-rhoai-normalizer@sha256:9f19742450a3a9c6d9c01d8341a20db7eb5a52a39348f488ae06b6aa49754a26 imagePullPolicy: Always name: rhoai-normalizer volumeMounts: @@ -275,4 +272,4 @@ where: `modelCatalog`:: Specifies the name of the provider. `development`:: Defines future connector capability beyond a single `baseUrl`. -`baseUrl`:: For Developer Preview, this value is the only one supported. Future releases might support external routes. \ No newline at end of file +`baseUrl`:: For Developer Preview, this value is the only one supported. Future releases might support external routes. diff --git a/modules/openshift-ai-connector-for-rhdh/ref-enrich-ai-model-metadata.adoc b/modules/openshift-ai-connector-for-rhdh/ref-enrich-ai-model-metadata.adoc index d4ced2e4b2..41be93ad25 100644 --- a/modules/openshift-ai-connector-for-rhdh/ref-enrich-ai-model-metadata.adoc +++ b/modules/openshift-ai-connector-for-rhdh/ref-enrich-ai-model-metadata.adoc @@ -39,4 +39,10 @@ While {rhoai-short} provides essential data, an AI platform engineer using {rhoa |`License` |Links |A URL to the license file of the model. -|=== \ No newline at end of file + +|`rhdh.modelcatalog.io/model-name` +|Annotations +|A name of the model used when communicating with the model server's REST API. 
{openshift-ai-connector-name} stores this name in the `rhdh.modelcatalog.io/model-name` annotation on the `Resource` entity, and defaults this annotation's value to the combined names of the `RegisteredModel` and `ModelVersion` in the model registry. + +If the model registry is not used, the KServe `InferenceService` name becomes the default value instead, because it is often the same as the model name used when communicating with the model server's REST API. However, the names are not guaranteed to match. If they do not match, set this annotation to the correct model name for model REST API invocations. +|===
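For illustration, an override of this annotation on a generated `Resource` entity could look like the following sketch. The entity name, model name, and owner here are hypothetical placeholders, not values produced by the connector:

[source,yaml]
----
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: example-model # illustrative entity name
  annotations:
    # Overrides the default value (the combined model registry names, or
    # the KServe InferenceService name) with the model name that the
    # model server's REST API actually expects.
    rhdh.modelcatalog.io/model-name: example-model-v1
spec:
  type: ai-model # illustrative type
  owner: user:default/example-owner
----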