Confidential computing is a technology for securing data in use. It uses a https://en.wikipedia.org/wiki/Trusted_execution_environment[Trusted Execution Environment] provided by the processor hardware to prevent access by other parties that have access to the same system.
https://confidentialcontainers.org/[Confidential containers] is a project to standardize the consumption of confidential computing by making a Kubernetes pod the security boundary for confidential computing. https://katacontainers.io/[Kata containers] is used to establish the boundary via a shim VM.
A core goal of confidential computing is to use this technology to isolate the workload from both Kubernetes and hypervisor administrators.
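For illustration, a workload opts into this isolation boundary by selecting the Kata-based runtime class in its pod specification. The following is a minimal sketch; the runtime class name `kata-remote` and the namespace are assumptions that depend on how the pattern deploys confidential containers.

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: coco-demo
  namespace: coco-demo            # assumed namespace for this sketch
spec:
  runtimeClassName: kata-remote   # assumed runtime class name; use the class installed by the pattern
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
----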
`content/patterns/coco-pattern/coco-pattern-getting-started.adoc`
=== `oc exec` testing
In an OpenShift cluster without confidential containers, Role Based Access Control (RBAC) may be used to prevent users from using `oc exec` to access a container and mutate it.
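As a sketch of such a restriction (the role name and namespace below are assumptions for illustration): `oc exec` requires the `create` verb on the `pods/exec` subresource, so a role that grants only read access to pods denies it.

[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader-no-exec   # assumed role name
  namespace: demo            # assumed namespace
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
# no rule grants "create" on "pods/exec", so `oc exec` into pods in this namespace is denied
----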
However:
1. Cluster admins can always circumvent this capability
== About OpenShift cluster sizing for the {med-pattern}
{aws_node}
To understand cluster sizing requirements for the {med-pattern}, consider the following components that the {med-pattern} deploys on the datacenter or the hub OpenShift cluster:
Use these instructions to add nodes with GPUs to an OpenShift cluster running in the AWS cloud. Nodes with GPUs are tainted so that only pods that require a GPU are scheduled to them.

More details can be found in the following documents: [OpenShift AI](https://ai-on-openshift.io/odh-rhoai/nvidia-gpus/) and [NVIDIA on OpenShift](https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/index.html).

By default, GPU nodes use the instance type `g5.2xlarge`. If you need to change the instance type, for example to address performance requirements, carry out these steps:

1. In your local branch of the `rag-llm-gitops` git repository, change to the `ansible/playbooks/templates` directory.
2. Edit the file `gpu-machine-sets.j2`, changing the `instanceType` to, for example, `g5.4xlarge`. Save and exit.
3. Push the changes to the origin remote repository by running the following command:
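A minimal sketch of that push, assuming the remote is named `origin` and your working branch is `main` (substitute your fork's branch if it differs):

```sh
git add ansible/playbooks/templates/gpu-machine-sets.j2
git commit -m "Change GPU machineset instance type to g5.4xlarge"
git push origin main
```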
## Add machineset

The easiest way to create a GPU machineset manually is to use an existing machineset manifest and update certain elements. Use a worker machineset manifest and modify some of the entries, keeping the others as they are (the naming conventions below are provided as a reference only; use your own if required):

```yaml
          node-role.kubernetes.io/odh-notebook: ''   # <--- put your own label here if needed
      providerSpec:
        value:
          # ........................
          instanceType: g5.2xlarge                   # <---- change the VM type if needed
          # .............
      taints:
        - effect: NoSchedule
          key: odh-notebook                          # <--- use your own taint name, or skip the taint altogether
          value: 'true'
```
Use the `kubectl` or `oc` command line to create the new machineset: `oc apply -f gpu_machineset.yaml`.

Depending on the type of EC2 instance, creation of the new machines may take some time. Note that all nodes with GPUs will automatically have the labels (`node-role.kubernetes.io/odh-notebook` in our case) and taints (`odh-notebook`) that we specified in the machineset applied to them.
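A sketch of how to watch the new machines come up and confirm that the label was applied (adjust the label if you used your own):

```sh
# watch the machines created by the new machineset
oc get machines -n openshift-machine-api -w

# once the nodes join, confirm the label from the machineset is present
oc get nodes -l node-role.kubernetes.io/odh-notebook
```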
## Install Node Feature Discovery Operator

From OperatorHub, install the Node Feature Discovery Operator, accepting the defaults. Once the Operator has been installed, create a `NodeFeatureDiscovery` instance. Use the default entries unless something specific is needed. The Node Feature Discovery Operator adds labels to the nodes based on the available hardware resources.
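As a quick check that Node Feature Discovery has detected the NVIDIA hardware, the GPU nodes should carry the PCI vendor label for NVIDIA (vendor ID `10de`); a sketch:

```sh
# nodes with an NVIDIA PCI device detected by Node Feature Discovery
oc get nodes -l feature.node.kubernetes.io/pci-10de.present=true
```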
## Install NVIDIA GPU Operator
The NVIDIA GPU Operator provisions daemonsets with the drivers for the GPUs to be used by workloads running on these nodes. Detailed instructions are available in the NVIDIA documentation: [NVIDIA on OpenShift](https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/index.html). The following are simplified steps for this specific setup:
- Install the NVIDIA GPU Operator from OperatorHub.
- Once the operator is ready, create the `ClusterPolicy` custom resource. Unless you need something different, you can use the default settings, adding `tolerations` if the machineset in the first section was created with a taint. Failing to add `tolerations` will prevent the drivers from being installed on the GPU-enabled nodes:
```yaml
apiVersion: nvidia.com/v1
kind: ClusterPolicy
metadata:
  name: gpu-cluster-policy
spec:
  vgpuDeviceManager:
    enabled: true
  migManager:
    enabled: true
  operator:
    defaultRuntime: crio
    initContainer: {}
    runtimeClass: nvidia
    use_ocp_driver_toolkit: true
  dcgm:
    enabled: true
  gfd:
    enabled: true
  dcgmExporter:
    config:
      name: ''
    enabled: true
    serviceMonitor:
      enabled: true
  driver:
    certConfig:
      name: ''
    enabled: true
    kernelModuleConfig:
      name: ''
    licensingConfig:
      configMapName: ''
      nlsEnabled: false
    repoConfig:
      configMapName: ''
    upgradePolicy:
      autoUpgrade: true
      drain:
        deleteEmptyDir: false
        enable: false
        force: false
        timeoutSeconds: 300
      maxParallelUpgrades: 1
      maxUnavailable: 25%
      podDeletion:
        deleteEmptyDir: false
        force: false
        timeoutSeconds: 300
      waitForCompletion:
        timeoutSeconds: 0
    virtualTopology:
      config: ''
  devicePlugin:
    config:
      default: ''
      name: ''
    enabled: true
  mig:
    strategy: single
  sandboxDevicePlugin:
    enabled: true
  validator:
    plugin:
      env:
        - name: WITH_WORKLOAD
          value: 'false'
  nodeStatusExporter:
    enabled: true
  daemonsets:
    rollingUpdate:
      maxUnavailable: '1'
    tolerations:
      - effect: NoSchedule
        key: odh-notebook
        value: 'true'
    updateStrategy: RollingUpdate
  sandboxWorkloads:
    defaultWorkload: container
    enabled: false
  gds:
    enabled: false
  vgpuManager:
    enabled: false
  vfioManager:
    enabled: true
  toolkit:
    enabled: true
    installDir: /usr/local/nvidia
```
Provisioning the NVIDIA daemonsets and compiling the drivers may take some time (5-10 minutes).
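A sketch of how to confirm that the rollout finished, assuming the operator was installed into its default `nvidia-gpu-operator` namespace:

```sh
# all GPU operator pods should reach Running or Completed
oc get pods -n nvidia-gpu-operator

# confirm the GPU is visible from one of the driver pods (the pod name will differ in your cluster)
oc exec -n nvidia-gpu-operator <nvidia-driver-daemonset-pod> -- nvidia-smi
```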
`content/patterns/rag-llm-gitops/_index.md`
title: AI Generation with LLM and RAG
date: 2024-07-25
tier: tested
summary: The goal of this demo is to showcase a Chatbot LLM application augmented with data from Red Hat product documentation running on Red Hat OpenShift. It deploys an LLM application that connects to multiple LLM providers such as OpenAI, Hugging Face, and NVIDIA NIM. The application generates a project proposal for a Red Hat product.
rh_products:
- Red Hat OpenShift Container Platform
- Red Hat OpenShift GitOps
ci: ai
---
# Document generation demo with LLM and RAG
## Introduction
The application uses either the EDB Postgres for Kubernetes operator (default), or Redis, to store embeddings of Red Hat product documentation, running on Red Hat OpenShift Container Platform to generate project proposals for specific Red Hat products.
## Pre-requisites
- Podman
- Red Hat OpenShift cluster running in AWS. Supported regions are us-west-2 and us-east-1.
- GPU node to run the Hugging Face Text Generation Inference server on the Red Hat OpenShift cluster.
- Create a fork of the [rag-llm-gitops](https://github.com/validatedpatterns/rag-llm-gitops.git) git repository (see the clone sketch below).
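A sketch of cloning the fork, assuming `<your-org>` is the GitHub organization or user that owns it:

```sh
git clone https://github.com/<your-org>/rag-llm-gitops.git
cd rag-llm-gitops
```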
## Demo Description & Architecture
The goal of this demo is to showcase a Chatbot LLM application augmented with data from Red Hat product documentation running on [Red Hat OpenShift AI](https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai). It deploys an LLM application that connects to multiple LLM providers such as OpenAI, Hugging Face, and NVIDIA NIM.
The application generates a project proposal for a Red Hat product.
### Key Features
- Monitoring dashboard to provide key metrics such as ratings.
- GitOps setup to deploy e2e demo (frontend / vector database / served models).
#### RAG Demo Workflow

_Figure 3. Schematic diagram for workflow of RAG demo with Red Hat OpenShift._
_Figure 6. Proposed demo architecture with OpenShift AI_
### Components deployed
- **Hugging Face Text Generation Inference Server:** The pattern deploys a Hugging Face TGIS server. The server deploys the `mistral-community/Mistral-7B-v0.2` model and requires a GPU node.
- **EDB Postgres for Kubernetes / Redis Server:** A vector database server is deployed to store vector embeddings created from Red Hat product documentation.
- **Populate VectorDB Job:** The job creates the embeddings and populates the vector database.
- **LLM Application:** A chatbot application that generates a project proposal by augmenting the LLM with the Red Hat product documentation stored in the vector database.
- **Prometheus:** Deploys a Prometheus instance to store the various metrics from the LLM application and the TGIS server.
- **Grafana:** Deploys a Grafana instance to visualize the metrics.
## Generate the proposal document using OpenAI provider
Follow the instructions in the section "Generate the proposal document" in [Getting Started](/rag-llm-gitops/getting-started/) to generate the proposal document using the OpenAI provider.