## Task 3: Deploy the Oracle Database 23ai pod

1. Before creating the database, you'll need to create role-based access control (RBAC) for the node. Create a file called **node-rbac.yaml** and paste the following:

    ```
    <copy>
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: oracle-database-operator-manager-role-node
    rules:
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: oracle-database-operator-manager-role-node-cluster-role-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: oracle-database-operator-manager-role-node
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: oracle-database-operator-system
    ---
    </copy>
    ```

2. Create a file called **db-admin-secret.yaml** that will be used to set the DB password upon deployment. Paste the following:

    ```
    <copy>
    apiVersion: v1
    kind: Secret
    metadata:
      name: freedb-admin-secret
      namespace: oracle23ai
    type: Opaque
    stringData:
      oracle_pwd: YOURPASSWORDHERE
    </copy>
    ```

    >Note: Be sure to replace **YOURPASSWORDHERE** above with a value of your own choosing: at least 15 characters, including at least 2 uppercase letters, 2 lowercase letters, 2 numbers, and 2 special characters.

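If you'd like to sanity-check a candidate password against that policy before pasting it in, a short Python snippet (illustrative only, not part of the lab) does the counting:

```python
import re

def meets_policy(pwd: str) -> bool:
    """Check a password against the lab's stated policy:
    >= 15 chars, >= 2 uppercase, >= 2 lowercase, >= 2 digits, >= 2 specials."""
    return (len(pwd) >= 15
            and len(re.findall(r"[A-Z]", pwd)) >= 2
            and len(re.findall(r"[a-z]", pwd)) >= 2
            and len(re.findall(r"\d", pwd)) >= 2
            and len(re.findall(r"[^A-Za-z0-9]", pwd)) >= 2)

print(meets_policy("Example#Pass99word!"))  # True
print(meets_policy("tooshort1!"))           # False
```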
3. Create a file called **db23ai-instance.yaml** and paste the following:

    ```
    <copy>
    apiVersion: database.oracle.com/v1alpha1
    kind: SingleInstanceDatabase
    metadata:
      name: nemo-23ai
      namespace: oracle23ai
    spec:
      sid: FREE
      edition: free
      adminPassword:
        secretName: freedb-admin-secret

      image:
        pullFrom: container-registry.oracle.com/database/free:latest
        prebuiltDB: true

      persistence:
        size: 50Gi
        storageClass: "oci-bv"
        accessMode: "ReadWriteOnce"

      replicas: 1
    ---
    </copy>
    ```

4. Apply the manifests with the following command; this creates the RBAC resources, the admin password secret, and the database pod.

    ```bash
    <copy>
    kubectl apply -n oracle23ai -f node-rbac.yaml,db-admin-secret.yaml,db23ai-instance.yaml
    </copy>
    ```

5. After the command completes, it may take 3-5 minutes for the DB instance to come online. You can check the status with the following command. Do not proceed until the status is **Healthy**.

    ```bash
    <copy>
    kubectl get singleinstancedatabase -n oracle23ai
    </copy>
    ```

    Output:

    ```bash
    kubectl get singleinstancedatabase -n oracle23ai
    NAME        EDITION   STATUS    ROLE      VERSION        CONNECT STR              TCPS CONNECT STR   OEM EXPRESS URL
    nemo-23ai   Free      Healthy   PRIMARY   23.4.0.24.05   10.0.10.246:31452/FREE   Unavailable        Unavailable
    ```

6. Run the following commands to gather details about the DB instance and set them as environment variables.

    ```bash
    <copy>
    export ORA_PASS=$(kubectl get secret/freedb-admin-secret -n oracle23ai -o jsonpath='{.data.oracle_pwd}' | base64 -d)
    export ORACLE_SID=$(kubectl get singleinstancedatabase -n oracle23ai -o 'jsonpath={.items[0].metadata.name}')
    export ORA_POD=$(kubectl get pods -n oracle23ai -o jsonpath='{.items[0].metadata.name}')
    export ORA_CONN=$(kubectl get singleinstancedatabase ${ORACLE_SID} -n oracle23ai -o "jsonpath={.status.connectString}")
    </copy>
    ```

    >Note: If you leave Cloud Shell and return later, you'll need to run the commands above again if you wish to connect to the DB instance directly. That said, after this section, all DB access should be done via Jupyter notebooks.

7. Connect to the DB instance.

    ```bash
    <copy>
    kubectl exec -it pods/${ORA_POD} -n oracle23ai -- sqlplus sys/${ORA_PASS}@${ORACLE_SID} as sysdba
    </copy>
    ```

8. Create a vector DB user that will enable your Python code to access the vector data store.

    ```bash
    create user c##vector identified by <enter a password here>;
    grant create session, db_developer_role, unlimited tablespace to c##vector container=ALL;
    ```

    >Note: You will need to run these commands one at a time. **Don't forget** to replace *<enter a password here>* in the first command with a password of your own choosing, removing the <> brackets.

9. Type *exit* to leave the container.

## Task 4: Prepare the NeMo deployment

1. To prepare for the NeMo deployment, create a new Kubernetes namespace.

    ```bash
    <copy>
    kubectl create ns embedding-nim
    </copy>
    ```

2. Add your NGC API key to an environment variable.

    ```bash
    export NGC_API_KEY=<your api key here>
    ```

3. Confirm that your key gets you access to the NVCR container registry.

    ```bash
    <copy>
    echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
    </copy>
    ```

    You should see `Login Succeeded`.

4. Create a docker-registry secret in Kubernetes. The kubelet will use this secret to download the container images needed to run the pods.

    ```bash
    <copy>
    kubectl -n embedding-nim create secret docker-registry registry-secret --docker-server=nvcr.io --docker-username='$oauthtoken' --docker-password=$NGC_API_KEY
    </copy>
    ```

5. Create a secret for your NGC API key; it will be passed to your pods via an environment variable later.

    ```bash
    <copy>
    kubectl -n embedding-nim create secret generic ngc-api-key --from-literal=ngc-api-key="$NGC_API_KEY"
    </copy>
    ```

6. You can check the value with the following command.

    ```bash
    <copy>
    kubectl -n embedding-nim get secret/ngc-api-key -o jsonpath='{.data.ngc-api-key}' | base64 -d
    </copy>
    ```

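Kubernetes stores Secret data base64-encoded, which is why the check command pipes the output through `base64 -d`. A minimal Python illustration of that round trip (the key value shown is hypothetical):

```python
import base64

api_key = "nvapi-example-key"  # hypothetical NGC key value

# What `kubectl get secret -o jsonpath='{.data...}'` returns:
encoded = base64.b64encode(api_key.encode()).decode()
# What `base64 -d` recovers:
decoded = base64.b64decode(encoded).decode()

print(encoded)
print(decoded)  # nvapi-example-key
```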
7. Next, you'll create three separate files to deploy the NeMo Retriever microservices.

    a. llama3-8b-instruct.yaml

    ```
    <copy>
    apiVersion: v1
    kind: Pod
    metadata:
      name: nim-llama3-8b-instruct
      labels:
        name: nim-llama3-8b-instruct
    spec:
      containers:
      - name: nim-llama3-8b-instruct
        image: nvcr.io/nim/meta/llama3-8b-instruct:latest
        securityContext:
          privileged: true
        env:
        - name: NGC_API_KEY
          valueFrom:
            secretKeyRef:
              name: ngc-api-key
              key: ngc-api-key
        resources:
          limits:
            nvidia.com/gpu: 1
        imagePullPolicy: Always
      hostNetwork: true
      imagePullSecrets:
      - name: registry-secret
    </copy>
    ```

    b. nv-embedqa-e5-v5.yaml

    ```
    <copy>
    apiVersion: v1
    kind: Pod
    metadata:
      name: nim-nv-embedqa-e5-v5
      labels:
        name: nim-nv-embedqa-e5-v5
    spec:
      containers:
      - name: nim-nv-embedqa-e5-v5
        image: nvcr.io/nim/nvidia/nv-embedqa-e5-v5:1.0.1
        securityContext:
          privileged: true
        env:
        - name: NGC_API_KEY
          valueFrom:
            secretKeyRef:
              name: ngc-api-key
              key: ngc-api-key
        resources:
          limits:
            nvidia.com/gpu: 1
        imagePullPolicy: Always
      hostNetwork: true
      imagePullSecrets:
      - name: registry-secret
    </copy>
    ```

    c. nv-rerankqa-mistral-4b-v3.yaml

    ```
    <copy>
    apiVersion: v1
    kind: Pod
    metadata:
      name: nim-nv-rerankqa-mistral-4b-v3
      labels:
        name: nim-nv-rerankqa-mistral-4b-v3
    spec:
      containers:
      - name: nim-nv-rerankqa-mistral-4b-v3
        image: nvcr.io/nim/nvidia/nv-rerankqa-mistral-4b-v3:1.0.1
        securityContext:
          privileged: true
        env:
        - name: NGC_API_KEY
          valueFrom:
            secretKeyRef:
              name: ngc-api-key
              key: ngc-api-key
        resources:
          limits:
            nvidia.com/gpu: 1
        imagePullPolicy: Always
      hostNetwork: true
      imagePullSecrets:
      - name: registry-secret
    </copy>
    ```

8. Apply the three manifest files to your Kubernetes cluster.

    ```bash
    <copy>
    kubectl -n embedding-nim apply -f llama3-8b-instruct.yaml,nv-embedqa-e5-v5.yaml,nv-rerankqa-mistral-4b-v3.yaml
    </copy>
    ```

9. View the pods to ensure they are all running.

    ```bash
    <copy>
    kubectl -n embedding-nim get pods -o wide
    </copy>
    ```

    Output:

    ```bash
    NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE
    nim-llama3-8b-instruct          1/1     Running   0          3m    10.0.10.7    10.0.10.7
    nim-nv-embedqa-e5-v5            1/1     Running   0          3m    10.0.10.11   10.0.10.11
    nim-nv-rerankqa-mistral-4b-v3   1/1     Running   0          3m    10.0.10.18   10.0.10.18
    ```

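Because the pods run with `hostNetwork: true`, each NIM listens on its node's IP, and NIM microservices expose an OpenAI-style HTTP API (by default on port 8000). A hedged sketch of querying the embedding NIM follows; the IP is the example node address from the output above, and the model name and `input_type` field are assumptions based on NVIDIA's embedding NIM API, so verify them against your deployment:

```python
import json
from urllib import request

# Example node IP from the pod listing above -- replace with your own.
# NIM microservices serve an OpenAI-style API on port 8000 by default.
EMBED_URL = "http://10.0.10.11:8000/v1/embeddings"

def build_embed_request(texts, model="nvidia/nv-embedqa-e5-v5"):
    # input_type distinguishes indexing ("passage") from search ("query").
    return {"input": texts, "model": model, "input_type": "passage"}

def embed(texts):
    # Only works from a host that can reach the cluster's node network.
    payload = json.dumps(build_embed_request(texts)).encode()
    req = request.Request(EMBED_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

print(build_embed_request(["What is Oracle Database 23ai?"]))
```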
10. Now that everything is up and running, you can return to your JupyterHub web page and launch a new notebook.

11. Within the notebook, install the python-oracledb library.

    ```bash
    <copy>
    pip install oracledb
    </copy>
    ```
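Once the library is installed, connecting from a notebook might look like the following sketch. It assumes the `ORA_CONN` connect string gathered in Task 3 (the literal default shown is the example value from that task's output) and the `c##vector` user created there; the password is whatever you chose for that user.

```python
import os

def vector_dsn():
    # ORA_CONN was exported in Task 3, e.g. "10.0.10.246:31452/FREE";
    # the fallback below is the example value from that task's output.
    return os.environ.get("ORA_CONN", "10.0.10.246:31452/FREE")

def connect_as_vector(password):
    # python-oracledb's default "thin" mode needs no Oracle client install.
    import oracledb
    return oracledb.connect(user="c##vector", password=password, dsn=vector_dsn())

print(vector_dsn())
```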