Commit 4403720

Merge branch 'main' into amyles

2 parents 66cb937 + 996344d

149 files changed: +1765 -2094 lines
Lines changed: 99 additions & 0 deletions
# Calling multiple vLLM inference servers using LiteLLM

In this tutorial we explain how to use a LiteLLM Proxy Server to call multiple LLM inference endpoints from a single interface. LiteLLM interacts with 100+ LLMs, such as OpenAI, Cohere, and NVIDIA Triton and NIM. Here we will use two vLLM inference servers.
<!-- ![Hybrid shards](assets/images/litellm.png "LiteLLM") -->

# When to use this asset?

Use this asset to run the inference tutorial with local deployments of Mistral 7B Instruct v0.3, each served by a vLLM inference server powered by NVIDIA A10 GPUs, with a LiteLLM Proxy Server on top.
# How to use this asset?

These are the prerequisites to run this tutorial:

* An OCI tenancy with A10 quota
* A Hugging Face account with a valid Auth Token
* A valid OpenAI API Key
## Introduction

LiteLLM provides a proxy server to manage authentication, load balancing, and spend tracking across 100+ LLMs, all in the OpenAI format.

vLLM is a fast and easy-to-use library for LLM inference and serving.

The first step is to deploy two vLLM inference servers on NVIDIA A10 powered virtual machine instances. In the second step, we will create a LiteLLM Proxy Server on a third, GPU-free instance and explain how to use this interface to call the two LLMs from a single location. For the sake of simplicity, all 3 instances reside in the same public subnet here.

![LiteLLM architecture](assets/images/litellm-architecture.png "LiteLLM")
## vLLM inference servers deployment

For each of the inference nodes, a VM.GPU.A10.2 instance (2 x NVIDIA A10 GPU 24GB) is used in combination with the NVIDIA GPU-Optimized VMI image from the OCI Marketplace. This Ubuntu-based image comes with all the necessary libraries (Docker, NVIDIA Container Toolkit) preinstalled. It is good practice to deploy the two instances in two different fault domains to ensure higher availability.

The vLLM inference server is deployed using the official vLLM container image.
```
docker run --gpus all \
    -e HF_TOKEN=$HF_TOKEN -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --host 0.0.0.0 \
    --port 8000 \
    --model mistralai/Mistral-7B-Instruct-v0.3 \
    --tensor-parallel-size 2 \
    --load-format safetensors \
    --trust-remote-code \
    --enforce-eager
```
where `$HF_TOKEN` is a valid Hugging Face token. In this case we use the 7B Instruct version of the Mistral LLM. The vLLM endpoint can be called directly for verification with:
```
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "mistralai/Mistral-7B-Instruct-v0.3",
        "messages": [
            {"role": "user", "content": "Who won the world series in 2020?"}
        ]
    }' | jq
```
## LiteLLM server deployment

No GPU is required for LiteLLM. Therefore, a CPU-based VM.Standard.E4.Flex instance (4 OCPUs, 64 GB memory) with a standard Ubuntu 22.04 image is used. Here LiteLLM is used as a proxy server calling the vLLM endpoints. Install LiteLLM using `pip`:
```
pip install 'litellm[proxy]'
```
Edit the `config.yaml` file (OpenAI-Compatible Endpoint):
```
model_list:
  - model_name: Mistral-7B-Instruct
    litellm_params:
      model: openai/mistralai/Mistral-7B-Instruct-v0.3
      api_base: http://xxx.xxx.xxx.xxx:8000/v1
      api_key: sk-0123456789
  - model_name: Mistral-7B-Instruct
    litellm_params:
      model: openai/mistralai/Mistral-7B-Instruct-v0.3
      api_base: http://xxx.xxx.xxx.xxx:8000/v1
      api_key: sk-0123456789
```
where `sk-0123456789` is a valid OpenAI API key and `xxx.xxx.xxx.xxx` are the public IP addresses of the two GPU instances. Because both entries share the same `model_name`, LiteLLM load balances requests across the two vLLM endpoints.

Start the LiteLLM Proxy Server with the following command:
```
litellm --config /path/to/config.yaml
```
Once the Proxy Server is ready, call the vLLM endpoints through LiteLLM with:
```
curl http://localhost:4000/chat/completions \
    -H 'Authorization: Bearer sk-0123456789' \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Mistral-7B-Instruct",
        "messages": [
            {"role": "user", "content": "Who won the world series in 2020?"}
        ]
    }' | jq
```
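
Because the proxy exposes an OpenAI-compatible API, you can also list the models it has registered. A minimal check, assuming the proxy runs locally with the key from `config.yaml` (the `/v1/models` route follows the OpenAI API convention):
```
# List the model names the proxy is serving
curl http://localhost:4000/v1/models \
    -H 'Authorization: Bearer sk-0123456789' | jq
```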
## Documentation

* [LiteLLM documentation](https://litellm.vercel.app/docs/providers/openai_compatible)
* [vLLM documentation](https://docs.vllm.ai/en/latest/serving/deploying_with_docker.html)
* [MistralAI](https://mistral.ai/)
Lines changed: 11 additions & 0 deletions
model_list:
  - model_name: Mistral-7B-Instruct
    litellm_params:
      model: openai/mistralai/Mistral-7B-Instruct-v0.3
      api_base: http://public_ip_1:8000/v1
      api_key: sk-0123456789
  - model_name: Mistral-7B-Instruct
    litellm_params:
      model: openai/mistralai/Mistral-7B-Instruct-v0.3
      api_base: http://public_ip_2:8000/v1
      api_key: sk-0123456789
Lines changed: 31 additions & 0 deletions
# OpenShift on OCI

Red Hat OpenShift can be hosted on OCI as a self-managed platform. Oracle provides Terraform templates for easy implementation and platform integration.

# Useful Links

- [Red Hat OpenShift documentation - Installing on OCI](https://docs.openshift.com/container-platform/4.16/installing/installing_oci/installing-oci-assisted-installer.html)
- [Oracle Cloud documentation - Getting started with OpenShift on OCI](https://docs.oracle.com/en-us/iaas/Content/openshift-on-oci/overview.htm)
# Team Publications

- [Using OCI Object Storage for the OpenShift Internal Registry](enable-image-registry/README.md)

# Reusable Assets Overview

- [Terraform script to provision OpenShift on OCI](https://github.com/oracle-quickstart/oci-openshift)
# License

Copyright (c) 2024 Oracle and/or its affiliates.

Licensed under the Universal Permissive License (UPL), Version 1.0.

See [LICENSE](https://github.com/oracle-devrel/technology-engineering/blob/main/LICENSE) for more details.
Lines changed: 90 additions & 0 deletions
# Setting up the OpenShift Image Registry to use an OCI Object Storage Bucket

## Prerequisites

You need to have the OpenShift CLI tool installed and properly configured:

https://docs.openshift.com/container-platform/4.16/cli_reference/openshift_cli/getting-started-cli.html
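
A quick sanity check that the CLI is installed and logged in to your cluster:
```
# Show client and server versions
oc version
# Show the user you are currently logged in as
oc whoami
```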
## 1. What is the OpenShift Image Registry?

The OpenShift Image Registry is a built-in, containerized, enterprise-grade registry that stores Docker-formatted container images in a Red Hat OpenShift Container Platform cluster. It is a critical component for managing container images within the OpenShift environment, providing secure storage and efficient retrieval of the container images required for deployments.

After you have created an OpenShift cluster on OCI, the image registry is not yet configured with the right storage settings. This will result in errors when you try to deploy your projects. You will see error messages like:

```Error starting build: an image stream cannot be used as build output because the integrated image registry is not configured```

<img src="files/1.NoRegistrySetup.png" width=600>
## 2. Configure OCI Object Storage for S3 Compatibility

Oracle Cloud Infrastructure (OCI) Object Storage can be configured to work as an S3-compatible storage backend for the OpenShift Image Registry. This compatibility allows OpenShift to store container images directly in an OCI Object Storage bucket.

### a. Set up the correct compartment you want to use for Object Storage S3 compatibility

OCI Object Storage is S3-compatible by default, so no additional configuration is needed for basic S3 API operations. However, you may need to set the right compartment you want to use for S3-compatible buckets.

Go to your Tenancy Details in the Governance & Administration menu and click on <b>Edit Object Storage settings</b>.

<img src="files/2.OCI-setup-OS-AWS-Compartment.png" width=500>

Create a bucket in the selected compartment.

<img src="files/3.OCICreateBucket.png" width=500>
### b. Create an S3 Access and Secret Key

In the OCI console, navigate to your profile (top right corner) and go to the <b>Customer Secret Keys</b> section.

Create a new secret and make sure you note the secret shown, as it is displayed only once! After the secret is created, you will also see the access key.
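
Before wiring OpenShift to the bucket, you can optionally sanity-check the key pair against the S3-compatible endpoint with the AWS CLI. A sketch, assuming the AWS CLI is installed and the bucket created above is named `os-cluster`, as used later in this guide:
```
# Export the Customer Secret Key pair created in the OCI console
export AWS_ACCESS_KEY_ID=[your_access_key]
export AWS_SECRET_ACCESS_KEY=[your_secret_key]

# List the bucket through OCI's S3-compatible endpoint
aws s3 ls s3://os-cluster \
    --endpoint-url https://[yournamespace].compat.objectstorage.[your-oci-region].oraclecloud.com \
    --region [your-oci-region]
```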
## 3. Create a secret for the Image Registry

Now that you have your S3-compatible access and secret keys, you can create the corresponding secret for the image registry. This secret needs to have the name <b>image-registry-private-configuration-user</b>.

You can create the secret by running the following command, using the OpenShift CLI:

```oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=[your_access_key] --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=[your_secret_key] --namespace openshift-image-registry```
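
You can verify that the secret was created in the right namespace with:
```
oc get secret image-registry-private-configuration-user -n openshift-image-registry
```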
## 4. Configure the Image Registry to use the S3 Object Storage

Last, you need to configure the OpenShift internal image registry to use the OCI S3-compatible Object Storage.

You can do this by running:

```oc edit configs.imageregistry.operator.openshift.io/cluster```

You should see that your storage is currently not configured.

<img src="files/4.Config_default.png" width=500>

Remove the `{}` behind the `storage` item and create the fields for S3 object storage:
```
storage:
  s3:
    bucket: os-cluster
    region: [your-oci-region]
    regionEndpoint: https://[yournamespace].compat.objectstorage.[your-oci-region].oraclecloud.com
```
Replace `[yournamespace]` with your own Object Storage namespace. You can find this namespace on the OCI Tenancy Details page.

Replace both occurrences of `[your-oci-region]` with the OCI region you are using, for example: eu-frankfurt-1.

Finally, change the <b>managementState</b> from <b>Removed</b> to <b>Managed</b>.
<img src="files/5.Config_OCI-objectstorage.png" width=500>

Save and close the file, and OpenShift will automatically update the image registry.
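
If you prefer a non-interactive change over `oc edit`, the same <b>managementState</b> switch can be applied with `oc patch`; a sketch:
```
# Flip the registry operator from Removed to Managed without opening an editor
oc patch configs.imageregistry.operator.openshift.io/cluster \
    --type merge -p '{"spec":{"managementState":"Managed"}}'
```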
## 5. Check the Image Registry operator

You can now check whether the image registry is properly configured. Rerun ```oc edit configs.imageregistry.operator.openshift.io/cluster``` and scroll down to the status section; you should see a reference there to the S3 object storage.

Alternatively, you can navigate to the cluster settings page under Administration in your OpenShift console. Click on <b>ClusterOperators</b> and select <b>image-registry</b>.

Under the conditions you should see that the registry is ready.

<img src="files/6.Configured.png" width=500>
## 6. Ready for deployment of your applications

Your image registry should now be able to store images, and you are ready to start deploying applications and templates.
# License

Copyright (c) 2024 Oracle and/or its affiliates.

Licensed under the Universal Permissive License (UPL), Version 1.0.

See [LICENSE](https://github.com/oracle-devrel/technology-engineering/blob/main/LICENSE) for more details.
