
Commit baedc2f

rename provider service to docker model provider

1 parent: 957cd42

File tree: 1 file changed (+10 / -10 lines)

samples/managed-llm-provider/README.md (10 additions, 10 deletions)
```diff
@@ -1,24 +1,24 @@
-# Managed LLM with Provider
+# Managed LLM with Docker Model Provider
 
 [![1-click-deploy](https://raw.githubusercontent.com/DefangLabs/defang-assets/main/Logos/Buttons/SVG/deploy-with-defang.svg)](https://portal.defang.dev/redirect?url=https%3A%2F%2Fgithub.com%2Fnew%3Ftemplate_name%3Dsample-managed-llm-provider-template%26template_owner%3DDefangSamples)
 
-This sample application demonstrates using Managed LLMs with a Provider Service, deployed with Defang.
+This sample application demonstrates using Managed LLMs with a Docker Model Provider, deployed with Defang.
 
-> Note: This version uses a [Docker Provider Service](https://docs.docker.com/compose/how-tos/model-runner/#provider-services) for managing LLMs. For the version with Defang's [OpenAI Access Gateway](https://docs.defang.io/docs/concepts/managed-llms/openai-access-gateway), please see our [*Managed LLM Sample*](https://github.com/DefangLabs/samples/tree/main/samples/managed-llm) instead.
+> Note: This version uses a [Docker Model Provider](https://docs.docker.com/compose/how-tos/model-runner/#provider-services) for managing LLMs. For the version with Defang's [OpenAI Access Gateway](https://docs.defang.io/docs/concepts/managed-llms/openai-access-gateway), please see our [*Managed LLM Sample*](https://github.com/DefangLabs/samples/tree/main/samples/managed-llm) instead.
 
-The Provider Service allows users to use AWS Bedrock or Google Cloud Vertex AI models with their application.
+The Docker Model Provider allows users to use AWS Bedrock or Google Cloud Vertex AI models with their application. It is a service in the `compose.yaml` file.
 
 You can configure the `MODEL` and `ENDPOINT_URL` for the LLM separately for local development and production environments.
 * The `MODEL` is the LLM Model ID you are using.
 * The `ENDPOINT_URL` is the bridge that provides authenticated access to the LLM model.
 
 Ensure you have the necessary permissions to access the model you intend to use. To do this, you can check your [AWS Bedrock model access](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-modify.html) or [GCP Vertex AI model access](https://cloud.google.com/vertex-ai/generative-ai/docs/control-model-access).
 
-### Provider Service
+### Docker Model Provider
 
-In the `compose.yaml` file, the `llm` service is used to route requests to the LLM API model using a [Provider Service](https://docs.defang.io/docs/concepts/managed-llms/openai-access-gateway#docker-provider-services).
+In the `compose.yaml` file, the `llm` service will route requests to the LLM API model using a [Docker Model Provider](https://docs.defang.io/docs/concepts/managed-llms/openai-access-gateway#docker-provider-services).
 
-The `x-defang-llm` property on the `llm` service must be set to `true` in order to use the Provider Service when deploying with Defang.
+The `x-defang-llm` property on the `llm` service must be set to `true` in order to use the Docker Model Provider when deploying with Defang.
 
 ## Prerequisites
 
@@ -64,10 +64,10 @@ If you want to deploy to your own cloud account, you can [use Defang BYOC](https
 
 ---
 
-Title: Managed LLM with Provider
+Title: Managed LLM with Docker Model Provider
 
-Short Description: An app using Managed LLMs with a Provider Service, deployed with Defang.
+Short Description: An app using Managed LLMs with a Docker Model Provider, deployed with Defang.
 
-Tags: LLM, Python, Bedrock, Vertex
+Tags: LLM, Python, Bedrock, Vertex, Docker Model Provider
 
 Languages: Python
```
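
For context, the README text in this diff describes an `llm` service in `compose.yaml` that acts as a Docker Model Provider, with `x-defang-llm: true` set for Defang deployment. The snippet below is a minimal sketch of what such a file might look like; only the `llm` service name, the `x-defang-llm` property, and the `MODEL`/`ENDPOINT_URL` variable names come from the README, while the `app` service, model ID, and URL value are illustrative assumptions, not taken from this sample:

```yaml
services:
  app:
    build: .
    environment:
      # Variable names (MODEL, ENDPOINT_URL) are from the README;
      # the values shown here are placeholders.
      MODEL: ai/llama3.2
      ENDPOINT_URL: http://llm/engines/v1/
    depends_on:
      - llm

  llm:
    # Docker Model Provider: Compose provisions a model for this service
    # instead of running a container image.
    provider:
      type: model
      options:
        model: ai/llama3.2
    # Defang extension (per the README): must be true so Defang routes
    # this provider to a managed LLM (AWS Bedrock / GCP Vertex AI).
    x-defang-llm: true
```

The sample's actual `compose.yaml` in `samples/managed-llm-provider` is the authoritative version.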
