diff --git a/artifacts/attributes.adoc b/artifacts/attributes.adoc
index 350ff6215b..1b26be0ede 100644
--- a/artifacts/attributes.adoc
+++ b/artifacts/attributes.adoc
@@ -65,6 +65,7 @@
 :rhdeveloper-name: Red Hat Developer
 :rhel: Red Hat Enterprise Linux
 :rhoai-brand-name: Red Hat OpenShift AI
+:rhoai-short: RHOAI
 :rhoserverless-brand-name: Red Hat OpenShift Serverless
 :rhsso-brand-name: Red Hat Single-Sign On
 :rhsso: RHSSO
@@ -171,8 +172,17 @@
 :plugin-type-name: custom
 :plugin-type-name-uppercase: Custom
+
 :scorecard-plugin-book-link: {product-docs-link}/html-single/understand_and_visualize_red_hat_developer_hub_project_health_using_scorecards/index
 :scorecard-plugin-book-title: Understand and visualize {product} project health using Scorecards
 :model-context-protocol-link: {product-docs-link}/html-single/interacting_with_model_context_protocol_tools_for_red_hat_developer_hub/index
 :model-context-protocol-title: Interacting with Model Context Protocol tools for {product}
+
+:openshift-ai-connector-for-rhdh-link: {product-docs-link}/html-single/integrating_rhdh_with_openshift_ai_connector_for_rhdh/index
+:openshift-ai-connector-for-rhdh-title: Integrate {product} with {openshift-ai-connector-name} to leverage AI models
+
+:openshift-ai-connector-name: OpenShift AI Connector for {product}
+:openshift-ai-connector-name-short: OpenShift AI Connector for {product-very-short}
+
+:rhoai-docs-link: link:https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.25/html-single
\ No newline at end of file
diff --git a/artifacts/snip-developer-preview-rhoai.adoc b/artifacts/snip-developer-preview-rhoai.adoc
new file mode 100644
index 0000000000..fc8cff4911
--- /dev/null
+++ b/artifacts/snip-developer-preview-rhoai.adoc
@@ -0,0 +1,6 @@
+[IMPORTANT]
+====
+This section describes Developer Preview features in the {openshift-ai-connector-name} plugin. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
+
+For more information about the support scope of Red Hat Developer Preview features, see https://access.redhat.com/support/offerings/devpreview/[Developer Preview Support Scope].
+====
diff --git a/modules/openshift-ai-connector-for-rhdh/con-understand-how-ai-assets-map-to-rhdh-catalog.adoc b/modules/openshift-ai-connector-for-rhdh/con-understand-how-ai-assets-map-to-rhdh-catalog.adoc
new file mode 100644
index 0000000000..ae3033dc9d
--- /dev/null
+++ b/modules/openshift-ai-connector-for-rhdh/con-understand-how-ai-assets-map-to-rhdh-catalog.adoc
@@ -0,0 +1,10 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-understand-how-ai-assets-map-to-rhdh-catalog_{context}"]
+= Understand how AI assets map to the {product} Catalog
+
+include::{docdir}/artifacts/snip-developer-preview-rhoai.adoc[]
+
+The {openshift-ai-connector-name} ({openshift-ai-connector-name-short}) serves as a crucial link, making the AI assets managed within {rhoai-brand-name} discoverable and accessible directly within your {product-very-short} instance.
+
+For more information about model registry components, see {rhoai-docs-link}/enabling_the_model_registry_component/index#overview-of-model-registries_model-registry-config[Overview of model registries and model catalog].
\ No newline at end of file
diff --git a/modules/openshift-ai-connector-for-rhdh/proc-populating-the-api-definition-tab.adoc b/modules/openshift-ai-connector-for-rhdh/proc-populating-the-api-definition-tab.adoc
new file mode 100644
index 0000000000..b71596cde7
--- /dev/null
+++ b/modules/openshift-ai-connector-for-rhdh/proc-populating-the-api-definition-tab.adoc
@@ -0,0 +1,27 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-populating-the-api-definition-tab_{context}"]
+= Populating the API Definition tab in {product-very-short} API entities
+
+Because {rhoai-short} does not expose the OpenAPI specification by default, as an AI platform engineer, you can complete the following steps to provide this information:
+
+.Procedure
+
+. Retrieve the OpenAPI JSON: Use a tool such as `curl` to fetch the specification directly from the running endpoint of the AI model server. The following command uses the precise endpoint (`/openapi.json`) and shows how to include a `Bearer` token if the model requires authentication:
++
+[source,bash]
+----
+curl -k -H "Authorization: Bearer $MODEL_API_KEY" https://$MODEL_ROOT_URL_INCLUDING_PORT/openapi.json | jq > open-api.json
+----
+
+. Set the property in {rhoai-short}:
+.. In the {rhoai-short} dashboard, go to *Model Registry* and select the appropriate *Model Version*.
++
+[NOTE]
+====
+Use *Model Version* instead of *Registered Model* to maintain stability if the API changes between versions.
+====
+
+.. In the *Properties* section, set a key/value pair where the key is `API Spec` and the value is the entire JSON content from the `open-api.json` file.
+
+. Propagation: The {openshift-ai-connector-name} periodically polls the {rhoai-short} Model Registry, propagates this JSON, and renders the interactive API documentation in the *Definition* tab of the {product-very-short} API entity.
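Before pasting the fetched specification into the `API Spec` property, it can help to sanity-check and compact it. The following sketch is illustrative only: the sample document written below stands in for a real `/openapi.json` response, and `jq` validates the file and minifies it to a single line, which is easier to paste as a property value:

```shell
# Illustrative sketch: the heredoc below is a hypothetical stand-in for a
# real model server response; in practice, open-api.json comes from curl.
cat > open-api.json <<'EOF'
{
  "openapi": "3.1.0",
  "info": { "title": "example-model-server", "version": "1.0" },
  "paths": {}
}
EOF
# jq exits non-zero on a parse error, so this confirms the file is valid JSON:
jq empty open-api.json && echo "open-api.json is valid JSON"
# Compact to one line for pasting as the "API Spec" property value:
jq -c . open-api.json > api-spec-value.json
```

If `jq empty` reports an error, the endpoint likely returned an HTML error page or a truncated body rather than the specification.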
\ No newline at end of file
diff --git a/modules/openshift-ai-connector-for-rhdh/proc-setting-up-openshift-ai-connector-for-rhdh-with-rhoai.adoc b/modules/openshift-ai-connector-for-rhdh/proc-setting-up-openshift-ai-connector-for-rhdh-with-rhoai.adoc
new file mode 100644
index 0000000000..20ac6a1fdc
--- /dev/null
+++ b/modules/openshift-ai-connector-for-rhdh/proc-setting-up-openshift-ai-connector-for-rhdh-with-rhoai.adoc
@@ -0,0 +1,279 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-setting-up-openshift-ai-connector-for-rhdh-with-rhoai_{context}"]
+= Setting up {openshift-ai-connector-name} with {rhoai-brand-name}
+
+Installing the {openshift-ai-connector-name} requires manual updates to {product-very-short}-related Kubernetes resources.
+
+.{rhoai-short} prerequisites
+
+* To import model cards from the model catalog into TechDocs, you must use {rhoai-short} 2.25.
++
+[NOTE]
+====
+If you upgraded to {rhoai-short} 2.25 from an earlier version, you must manually enable the model catalog dashboard and model registry before you can import model cards.
+====
+
+* If you used the model catalog in earlier versions of {rhoai-short}, TechDocs propagation does not work for models that you registered in the model registry with those earlier versions; only models registered in the model registry from a {rhoai-short} 2.25 model catalog have their model cards transferred to {product-very-short} as TechDocs.
+
+* All other features work with {rhoai-short} 2.20 or later. Enabling the model registry and its associated dashboard provides a more direct way to customize AI model metadata. For the best overall experience, {rhoai-short} 2.25 is recommended.
+
+For more details, see {rhoai-docs-link}/enabling_the_model_registry_component/index[Enabling the model registry component].
+
+.Procedure
+
+. Configure {rhoai-short}-related RBAC and credentials.
+A Kubernetes `ServiceAccount` and a `service-account-token` Secret are required for the connector to retrieve data from {rhoai-short}. Create the following resources, replacing the namespace names (`ai-rhdh` for {product-very-short}, `rhoai-model-registries` for {rhoai-short}) as needed:
+** `ServiceAccount` (`rhdh-rhoai-connector`). For example:
++
+[source,yaml]
+----
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: rhdh-rhoai-connector
+  namespace: ai-rhdh
+----
+** `ClusterRole` and `ClusterRoleBinding` (`rhdh-rhoai-connector`) to allow access to OpenShift resources such as `routes`, `services`, and `inferenceservices`. For example:
++
+[source,yaml]
+----
+# Example for `ClusterRole`
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: rhdh-rhoai-connector
+  annotations:
+    argocd.argoproj.io/sync-wave: "0"
+rules:
+  - apiGroups:
+      - apiextensions.k8s.io
+    resources:
+      - customresourcedefinitions
+    verbs:
+      - get
+  - apiGroups:
+      - route.openshift.io
+    resources:
+      - routes
+    verbs:
+      - get
+      - list
+      - watch
+  - apiGroups: [""]
+    resources:
+      - serviceaccounts
+      - services
+    verbs:
+      - get
+      - list
+      - watch
+  - apiGroups: ["serving.kserve.io"]
+    resources: ["inferenceservices"]
+    verbs: ["get", "list", "watch"]
+----
++
+[source,yaml]
+----
+# Example for `ClusterRoleBinding`
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: rhdh-rhoai-connector
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: rhdh-rhoai-connector
+subjects:
+  - kind: ServiceAccount
+    name: rhdh-rhoai-connector
+    namespace: ai-rhdh
+----
+** `Role` and `RoleBinding` to allow ConfigMap updates within the {product-very-short} namespace. For example:
++
+[source,yaml]
+----
+# Example for `Role`
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: rhdh-rhoai-connector
+  namespace: ai-rhdh
+rules:
+  - apiGroups: [""]
+    resources: ["configmaps"]
+    verbs: ["get", "list", "watch", "create", "update", "patch"]
+----
++
+[source,yaml]
+----
+# Example for `RoleBinding`
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: rhdh-rhoai-connector
+  namespace: ai-rhdh
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: rhdh-rhoai-connector
+subjects:
+  - kind: ServiceAccount
+    name: rhdh-rhoai-connector
+    namespace: ai-rhdh
+----
+** `RoleBinding` in the {rhoai-short} namespace to grant the {product-very-short} `ServiceAccount` read permissions to the model registry data (binding to `registry-user-modelregistry-public`). For example:
++
+[source,yaml]
+----
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: rhdh-rhoai-dashboard-permissions
+  namespace: rhoai-model-registries
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: registry-user-modelregistry-public
+subjects:
+  - apiGroup: rbac.authorization.k8s.io
+    kind: Group
+    name: system:serviceaccounts:ai-rhdh
+----
+** Secret (`rhdh-rhoai-connector-token`) of type `kubernetes.io/service-account-token`, associated with the `rhdh-rhoai-connector` `ServiceAccount`. For example:
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Secret
+metadata:
+  name: rhdh-rhoai-connector-token
+  namespace: ai-rhdh
+  annotations:
+    kubernetes.io/service-account.name: rhdh-rhoai-connector
+type: kubernetes.io/service-account-token
+----
+
+. Update your {product-very-short} dynamic plugin configuration.
+The {product-very-short} Pod requires two dynamic plugins.
+.. In your {product-very-short} dynamic plugins ConfigMap, add the following code:
++
+[source,yaml]
+----
+plugins:
+  - disabled: false
+    package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-catalog-backend-module-model-catalog:bs_1.42.5__0.7.0!red-hat-developer-hub-backstage-plugin-catalog-backend-module-model-catalog
+  - disabled: false
+    package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-catalog-techdoc-url-reader-backend:bs_1.42.5__0.3.0!red-hat-developer-hub-backstage-plugin-catalog-techdoc-url-reader-backend
+----
+
+. Add the `Connector` sidecar containers to the {product-very-short} Pod.
+** If {product-very-short} was installed using the Operator, modify your {product-very-short} custom resource (CR) instance.
+** If {product-very-short} was installed using the Helm charts, modify the *Deployment* specification.
+
+. The system relies on three sidecar containers ({openshift-ai-connector-name}) running alongside the `backstage-backend` container. Add these sidecar containers to your configuration, referencing the `rhdh-rhoai-connector-token` Secret:
+** `location`: Provides the REST API for {product-very-short} plugins to fetch model metadata.
+** `storage-rest`: Maintains a cache of AI Model metadata in a ConfigMap called `bac-import-model`.
+** `rhoai-normalizer`: Acts as a Kubernetes controller and {rhoai-short} client, normalizing {rhoai-short} metadata for the connector.
++
+The following code block is an example:
++
+[source,yaml]
+----
+spec:
+  template:
+    spec:
+      containers:
+        - name: backstage-backend
+        - env:
+            - name: NORMALIZER_FORMAT
+              value: JsonArrayFormat
+            - name: POD_IP
+              valueFrom:
+                fieldRef:
+                  fieldPath: status.podIP
+            - name: POD_NAMESPACE
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.namespace
+          envFrom:
+            - secretRef:
+                name: rhdh-rhoai-connector-token
+          image: quay.io/redhat-ai-dev/model-catalog-location-service@sha256:4f6ab6624a29f627f9f861cfcd5d18177d46aa2c67a81a75a1502c49bc2ff012
+          imagePullPolicy: Always
+          name: location
+          ports:
+            - containerPort: 9090
+              name: location
+              protocol: TCP
+          volumeMounts:
+            - mountPath: /opt/app-root/src/dynamic-plugins-root
+              name: dynamic-plugins-root
+          workingDir: /opt/app-root/src
+        - env:
+            - name: NORMALIZER_FORMAT
+              value: JsonArrayFormat
+            - name: STORAGE_TYPE
+              value: ConfigMap
+            - name: BRIDGE_URL
+              value: http://localhost:9090
+            - name: POD_IP
+              valueFrom:
+                fieldRef:
+                  fieldPath: status.podIP
+            - name: POD_NAMESPACE
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.namespace
+          envFrom:
+            - secretRef:
+                name: rhdh-rhoai-connector-token
+          image: quay.io/redhat-ai-dev/model-catalog-storage-rest@sha256:398095e7469e86d84b1196371286363f4b7668aa3e26370b4d78cb8d4ace1dc9
+          imagePullPolicy: Always
+          name: storage-rest
+          volumeMounts:
+            - mountPath: /opt/app-root/src/dynamic-plugins-root
+              name: dynamic-plugins-root
+          workingDir: /opt/app-root/src
+        - env:
+            - name: NORMALIZER_FORMAT
+              value: JsonArrayFormat
+            - name: POD_IP
+              valueFrom:
+                fieldRef:
+                  fieldPath: status.podIP
+            - name: POD_NAMESPACE
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.namespace
+          envFrom:
+            - secretRef:
+                name: rhdh-rhoai-connector-token
+          image: quay.io/redhat-ai-dev/model-catalog-rhoai-normalizer@sha256:fe6c05d57495d6217c4d584940ec552c3727847ff60f39f5d04f94be024576d8
+          imagePullPolicy: Always
+          name: rhoai-normalizer
+          volumeMounts:
+            - mountPath: /opt/app-root/src/dynamic-plugins-root
+              name: dynamic-plugins-root
+          workingDir: /opt/app-root/src
+----
+
+. Enable the `Connector` in your {product-very-short} `{my-app-config-file}` file.
+In your `app-config.extra.yaml` file, configure the `Entity Provider` under the `catalog.providers` section:
++
+[source,yaml]
+----
+providers:
+  modelCatalog:
+    development:
+      baseUrl: http://localhost:9090
+----
+
+where:
+
+`modelCatalog`:: Specifies the name of the provider.
+`development`:: Reserved for future connector capability beyond a single `baseUrl`.
+`baseUrl`:: For Developer Preview, `http://localhost:9090` is the only supported value. Future releases might support external routes.
\ No newline at end of file
diff --git a/modules/openshift-ai-connector-for-rhdh/proc-troubleshooting-connector-functionality.adoc b/modules/openshift-ai-connector-for-rhdh/proc-troubleshooting-connector-functionality.adoc
new file mode 100644
index 0000000000..ead4a0074e
--- /dev/null
+++ b/modules/openshift-ai-connector-for-rhdh/proc-troubleshooting-connector-functionality.adoc
@@ -0,0 +1,69 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-troubleshooting-connector-functionality_{context}"]
+= Troubleshooting Connector functionality
+
+The connector system consists of two dynamic plugins and three {openshift-ai-connector-name-short} sidecar containers. Generally, the collected logs must be provided to {company-name} Support for analysis.
+
+The actual contents of the diagnostic data are not part of any guaranteed product specification, and can change at any time.
+
+== Checking Dynamic Plugins status
+
+Validate that the dynamic plugins are successfully installed into your {product-very-short} Pod by using the following command:
+
+[source,bash,subs=+attributes]
+----
+oc logs -c install-dynamic-plugins deployment/<rhdh_deployment_name>
+----
+
+Check the `install-dynamic-plugins` logs to verify that the following plugins were installed successfully:
+
+* `red-hat-developer-hub-backstage-plugin-catalog-backend-module-model-catalog` (Entity Provider)
+* `red-hat-developer-hub-backstage-plugin-catalog-techdoc-url-reader-backend` (TechDoc URL Reader)
+
+== Inspecting plugin logs
+
+View the logs of the {openshift-ai-connector-name} plugins in the `backstage-backend` container. Look for the following items:
+
+[cols="3,4,4"]
+|===
+|Plugin Component |Logger Service Target |Common Log Text
+
+|Model Catalog Entity Provider
+|`ModelCatalogResourceEntityProvider`
+|`Discovering ResourceEntities from Model Server...`
+
+|Model Catalog TechDoc URL Reader
+|`ModelCatalogBridgeTechdocUrlReader`
+|`ModelCatalogBridgeTechdocUrlReader.readUrl`
+|===
+
+To enable debug logging, set the `LOG_LEVEL` environment variable to `debug` on the `backstage-backend` container. For more information, see {monitoring-and-logging-book-link}[{monitoring-and-logging-book-title}].
+
+== Inspecting the {openshift-ai-connector-name-short}
+
+The {openshift-ai-connector-name-short} sidecars manage the data fetching and storage:
+
+. Check the cached data (ConfigMap): The processed AI model metadata is stored in the `bac-import-model` ConfigMap.
++
+[source,bash]
+----
+oc get configmap bac-import-model -o json | jq -r '.binaryData | to_entries[] | "=== \(.key) ===\n" + (.value | @base64d | fromjson | .body | @base64d | fromjson | tostring)' | jq -R 'if startswith("=== ") then . else (. | fromjson) end'
+----
+
+. Check the location service API: Confirm that the location service is providing data to the {product-very-short} Entity Provider.
++
+[source,bash,subs=+attributes]
+----
+oc rsh -c backstage-backend deployment/<rhdh_deployment_name>
+curl http://localhost:9090/list
+----
+
+. Check the sidecar container logs:
++
+[source,bash,subs=+attributes]
+----
+oc logs -c rhoai-normalizer deployment/<rhdh_deployment_name>
+oc logs -c storage-rest deployment/<rhdh_deployment_name>
+oc logs -c location deployment/<rhdh_deployment_name>
+----
\ No newline at end of file
diff --git a/modules/openshift-ai-connector-for-rhdh/ref-enrich-ai-model-metadata.adoc b/modules/openshift-ai-connector-for-rhdh/ref-enrich-ai-model-metadata.adoc
new file mode 100644
index 0000000000..d4ced2e4b2
--- /dev/null
+++ b/modules/openshift-ai-connector-for-rhdh/ref-enrich-ai-model-metadata.adoc
@@ -0,0 +1,42 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-enrich-ai-model-metadata_{context}"]
+= Enrich AI model metadata for enhanced {product} experience
+
+While {rhoai-short} provides essential data, an AI platform engineer using {rhoai-short} can enrich the {backstage}/{product-very-short} experience by adding custom properties to the `ModelVersion` or `RegisteredModel`, or annotations to the KServe `InferenceService` if the model registry is not used, so that the {openshift-ai-connector-name} can add the information to the {product-very-short} entities it creates. For more details, see {rhoai-docs-link}/working_with_model_registries/index#editing-model-version-metadata-in-a-model-registry_model-registry[Editing model version metadata in a model registry].
+
+|===
+|Property Key |Entity Field Populated |Description
+
+|`API Spec`
+|API Definition Tab
+|The OpenAPI / Swagger JSON specification for the model REST API.
+
+|`API Type`
+|API Type
+|Correlates to supported {product-very-short}/{backstage} API types (defaults to `openapi`).
+
+|`TechDocs`
+|TechDocs
+|URL pointing to a Git repository that follows {product-very-short} TechDocs conventions for the Model Card. Use this setting only if the *Model Card to TechDocs* mapping is not active.
+
+|`Homepage URL`
+|Links
+|A URL considered the home page for the model.
+
+|`Owner`
+|Owner
+|Overrides the default OpenShift user as the entity owner.
+
+|`Lifecycle`
+|Lifecycle
+|Serves as a means to express the {product-very-short}/{backstage} lifecycle notion.
+
+|`How to use`
+|Links
+|A URL that points to usage documentation.
+
+|`License`
+|Links
+|A URL to the license file of the model.
+|===
\ No newline at end of file
diff --git a/modules/openshift-ai-connector-for-rhdh/ref-model-to-entity-mapping.adoc b/modules/openshift-ai-connector-for-rhdh/ref-model-to-entity-mapping.adoc
new file mode 100644
index 0000000000..aa919e71eb
--- /dev/null
+++ b/modules/openshift-ai-connector-for-rhdh/ref-model-to-entity-mapping.adoc
@@ -0,0 +1,34 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-model-to-entity-mapping_{context}"]
+= Model-to-Entity mapping
+
+Model-to-Entity mapping interfaces with the {openshift-ai-connector-name-short}, the model catalog, and KServe-based model deployments (InferenceServices). This integration automatically converts your AI/ML artifacts into familiar {backstage} entities, simplifying management and providing your developer teams with a unified view of the available AI models.
+
+|===
+|{rhoai-short} Artifact |{product-very-short}/{backstage} Entity Kind |{product-very-short}/{backstage} Entity Type |Purpose
+
+|Model Server (InferenceService)
+|Component
+|`model-server`
+|Represents a running, accessible AI model endpoint. See {rhoai-docs-link}/configuring_your_model-serving_platform/index[Configuring your model-serving platform].
+
+|AI Model (Model Registry Version)
+|Resource
+|`ai-model`
+|Represents the specific AI model artifact, for example, `Llama-3-8B`.
+
+|Model Server API Details
+|API
+|`openapi` (Default)
+|Provides the OpenAPI/Swagger specification for the REST endpoint of the model. See https://access.redhat.com/articles/7047935[Red Hat OpenShift AI: API Tiers].
+
+|Model Cards
+|TechDocs
+|N/A
+|Model cards from the {rhoai-short} model catalog are associated with the Component and Resource entities. See {rhoai-docs-link}/working_with_the_model_catalog/registering-a-model-from-the-model-catalog_working-model-catalog#registering-a-model-from-the-model-catalog_working-model-catalog[Registering a model from the model catalog].
+|===
+
+After the {openshift-ai-connector-name-short} is installed and connected with {rhoai-short}, the transfer of information begins automatically.
\ No newline at end of file
diff --git a/modules/openshift-ai-connector-for-rhdh/ref-openshift-ai-model-registry-and-model-catalog-queries.adoc b/modules/openshift-ai-connector-for-rhdh/ref-openshift-ai-model-registry-and-model-catalog-queries.adoc
new file mode 100644
index 0000000000..b87b896b2b
--- /dev/null
+++ b/modules/openshift-ai-connector-for-rhdh/ref-openshift-ai-model-registry-and-model-catalog-queries.adoc
@@ -0,0 +1,48 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-openshift-ai-model-registry-and-model-catalog-queries_{context}"]
+= OpenShift AI model registry and model catalog queries
+
+To access the same {rhoai-short} data as the connector, use `curl` to query the {rhoai-short} model registry and model catalog APIs. Ensure that the `ServiceAccount` token has the correct access control:
+
+* Example showing how to fetch registered models:
++
+[source,bash]
+----
+curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/registered_models | jq
+----
+
+* Example showing how to fetch model versions:
++
+[source,bash]
+----
+curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/model_versions | jq
+----
+
+* Example showing how to fetch model artifacts:
++
+[source,bash]
+----
+curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/model_artifacts | jq
+----
+
+* Example showing how to fetch inference services:
++
+[source,bash]
+----
+curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/inference_services | jq
+----
+
+* Example showing how to fetch serving environments:
++
+[source,bash]
+----
+curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/serving_environments | jq
+----
+
+* Example showing how to fetch catalog sources:
++
+[source,bash]
+----
+curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_CATALOG_URL/api/model_catalog/v1alpha1/sources | jq
+----
\ No newline at end of file
diff --git a/modules/openshift-ai-connector-for-rhdh/ref-out-of-the-box-details-from-rhoai.adoc b/modules/openshift-ai-connector-for-rhdh/ref-out-of-the-box-details-from-rhoai.adoc
new file mode 100644
index 0000000000..451ef64726
--- /dev/null
+++ b/modules/openshift-ai-connector-for-rhdh/ref-out-of-the-box-details-from-rhoai.adoc
@@ -0,0 +1,16 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-out-of-the-box-details-from-rhoai_{context}"]
+= Out-of-the-box AI asset details synced from {rhoai-short}
+
+The connector propagates the following key data:
+
+* InferenceServices (Component type `model-server`):
+** URL of the OpenShift Route (if exposed).
+** URL of the Kubernetes Service.
+** Authentication requirement status.
+* Model registry (Resource type `ai-model`):
+** Model description, artifact URIs, and author/owner information.
+* Model catalog:
+** Links to the Model Card (as {product-very-short} TechDocs).
+** Model license URL.
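The normalized form of these details can be inspected locally with `jq`. The following sketch is illustrative only: the JSON document below is a hypothetical stand-in for the connector's cached output (the actual shape is not a guaranteed specification), and the filter shows how the fields listed above line up on one asset:

```shell
# Hypothetical sample of the normalized metadata cached for one AI asset.
# Field names are illustrative, not a guaranteed product specification.
cat > sample-asset.json <<'EOF'
{
  "name": "llama-3-8b",
  "routeUrl": "https://llama-3-8b.apps.example.com",
  "serviceUrl": "http://llama-3-8b.ai-rhdh.svc.cluster.local:8080",
  "authRequired": true,
  "owner": "data-science-team",
  "licenseUrl": "https://example.com/llama-license"
}
EOF
# Summarize the fields that surface on the RHDH Component and Resource entities:
jq -r '"\(.name): route=\(.routeUrl) auth=\(.authRequired) owner=\(.owner)"' sample-asset.json
# → llama-3-8b: route=https://llama-3-8b.apps.example.com auth=true owner=data-science-team
```

The same kind of filter can be applied to the real cached data extracted from the `bac-import-model` ConfigMap.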
\ No newline at end of file
diff --git a/titles/openshift-ai-connector-for-rhdh/artifacts b/titles/openshift-ai-connector-for-rhdh/artifacts
new file mode 120000
index 0000000000..f30b6dea60
--- /dev/null
+++ b/titles/openshift-ai-connector-for-rhdh/artifacts
@@ -0,0 +1 @@
+../../artifacts
\ No newline at end of file
diff --git a/titles/openshift-ai-connector-for-rhdh/assemblies b/titles/openshift-ai-connector-for-rhdh/assemblies
new file mode 120000
index 0000000000..91646274db
--- /dev/null
+++ b/titles/openshift-ai-connector-for-rhdh/assemblies
@@ -0,0 +1 @@
+../../assemblies
\ No newline at end of file
diff --git a/titles/openshift-ai-connector-for-rhdh/docinfo.xml b/titles/openshift-ai-connector-for-rhdh/docinfo.xml
new file mode 100644
index 0000000000..5f7fe2ebac
--- /dev/null
+++ b/titles/openshift-ai-connector-for-rhdh/docinfo.xml
@@ -0,0 +1,11 @@
+<title>{title}</title>
+<productname>{product}</productname>
+<productnumber>{product-version}</productnumber>
+<subtitle>{subtitle}</subtitle>
+<abstract>
+    <para>{abstract}</para>
+</abstract>
+<authorgroup>
+    <orgname>{company-name} Customer Content Services</orgname>
+</authorgroup>
diff --git a/titles/openshift-ai-connector-for-rhdh/images b/titles/openshift-ai-connector-for-rhdh/images
new file mode 120000
index 0000000000..5fa6987088
--- /dev/null
+++ b/titles/openshift-ai-connector-for-rhdh/images
@@ -0,0 +1 @@
+../../images
\ No newline at end of file
diff --git a/titles/openshift-ai-connector-for-rhdh/master.adoc b/titles/openshift-ai-connector-for-rhdh/master.adoc
new file mode 100644
index 0000000000..aeed943201
--- /dev/null
+++ b/titles/openshift-ai-connector-for-rhdh/master.adoc
@@ -0,0 +1,23 @@
+include::artifacts/attributes.adoc[]
+:context: openshift-ai-connector-for-rhdh
+:imagesdir: images
+:title: {openshift-ai-connector-for-rhdh-title}
+:subtitle: Installing, configuring, and troubleshooting {openshift-ai-connector-name}
+:abstract: As a developer, when you require access to centralized AI/ML services, you can integrate AI models and model servers from {rhoai-brand-name} directly into the {product} ({product-very-short}) Catalog, so that you can provide a single, consistent hub for discovering, managing, and consuming all components, accelerating time-to-market.
+= {title}
+
+include::modules/openshift-ai-connector-for-rhdh/con-understand-how-ai-assets-map-to-rhdh-catalog.adoc[leveloffset=+1]
+
+include::modules/openshift-ai-connector-for-rhdh/ref-model-to-entity-mapping.adoc[leveloffset=+2]
+
+include::modules/openshift-ai-connector-for-rhdh/ref-out-of-the-box-details-from-rhoai.adoc[leveloffset=+2]
+
+include::modules/openshift-ai-connector-for-rhdh/proc-setting-up-openshift-ai-connector-for-rhdh-with-rhoai.adoc[leveloffset=+1]
+
+include::modules/openshift-ai-connector-for-rhdh/ref-enrich-ai-model-metadata.adoc[leveloffset=+1]
+
+include::modules/openshift-ai-connector-for-rhdh/proc-populating-the-api-definition-tab.adoc[leveloffset=+2]
+
+include::modules/openshift-ai-connector-for-rhdh/proc-troubleshooting-connector-functionality.adoc[leveloffset=+1]
+
+include::modules/openshift-ai-connector-for-rhdh/ref-openshift-ai-model-registry-and-model-catalog-queries.adoc[leveloffset=+2]
\ No newline at end of file
diff --git a/titles/openshift-ai-connector-for-rhdh/modules b/titles/openshift-ai-connector-for-rhdh/modules
new file mode 120000
index 0000000000..36719b9de7
--- /dev/null
+++ b/titles/openshift-ai-connector-for-rhdh/modules
@@ -0,0 +1 @@
+../../modules/