
Commit 41fb4fe

Update README.md
1 parent 19fb908 commit 41fb4fe

File tree

1 file changed: +1 -1 lines changed

README.md

Lines changed: 1 addition & 1 deletion
@@ -52,7 +52,7 @@ After you install OCI AI Blueprints to an OKE cluster in your tenancy, you can d
 | [**Multi-node Inference with RDMA and vLLM**](./docs/multi_node_inference) | Deploy Llama-405B sized LLMs across multiple nodes with RDMA using H100 nodes with vLLM and LeaderWorkerSet. |
 | [**Scaled Inference with vLLM**](./docs/auto_scaling) | Serve LLMs with auto-scaling using KEDA, which scales to multiple GPUs and nodes using application metrics like inference latency. |
 | [**LLM Inference with MIG**](./docs/mig_multi_instance_gpu) | Deploy LLMs to a fraction of a GPU with Nvidia’s multi-instance GPUs and serve them with vLLM. |
-| [**Health Check**](./docs/sample_blueprints/gpu-health-check) | Comprehensive evaluation of GPU performance to ensure optimal hardware readiness before initiating any intensive computational workload. |
+| [**Job Queuing**](./docs/sample_blueprints/teams) | Take advantage of job queuing and enforce resource quotas and fair sharing between teams. |
 
 ## Support & Contact
 
