
Managed LLM with Docker Model Provider


This sample application demonstrates using Managed LLMs with a Docker Model Provider, deployed with Defang.

Note: This version uses a Docker Model Provider for managing LLMs. For the version with Defang's OpenAI Access Gateway, please see our Managed LLM Sample instead.

The Docker Model Provider lets your application use AWS Bedrock or Google Cloud Vertex AI models. It is declared as a service in the compose.yaml file.

You can configure the MODEL and ENDPOINT_URL for the LLM separately for local development and production environments.

  • The MODEL is the ID of the LLM model you are using.
  • The ENDPOINT_URL is the URL through which your application gets authenticated access to that model.
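As an illustration (the helper function and default values below are hypothetical, not part of the sample), an application might read these two values from its environment like this:

```python
import os

# Hypothetical helper: reads the MODEL and ENDPOINT_URL values that
# compose.yaml passes to the application service. The defaults shown
# here are placeholders, not values shipped with the sample.
def llm_config(env=None):
    env = os.environ if env is None else env
    return {
        "model": env.get("MODEL", "anthropic.claude-3-5-sonnet-20241022-v2:0"),
        "endpoint_url": env.get("ENDPOINT_URL", "http://llm/api/v1/"),
    }
```

Because both values come from the environment, the same application code works locally and in production; only the compose files differ.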

Ensure you have enabled model access for the model you intend to use. To do this, you can check your AWS Bedrock model access or GCP Vertex AI model access.

Docker Model Provider

In the compose.yaml file, the llm service routes requests to the configured LLM model using a Docker Model Provider.

The x-defang-llm property on the llm service must be set to true in order to use the Docker Model Provider when deploying with Defang.
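A sketch of what such a service could look like (the service name and fields below are illustrative; check the sample's actual compose.yaml for the exact schema):

```yaml
services:
  llm:
    # Provider-type service: Compose provisions a model endpoint
    # instead of running a container image.
    provider:
      type: model
      options:
        model: ${MODEL}
    # Tells Defang to back this provider with Bedrock / Vertex AI.
    x-defang-llm: true
```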

Prerequisites

  1. Download Defang CLI
  2. (Optional) If you are using Defang BYOC, authenticate with your cloud provider account
  3. (Optional for local development) Docker CLI

Development

To run the application locally, you can use the following command:

docker compose -f compose.dev.yaml up --build

Configuration

For this sample, you will need to provide the following configuration:

Note that if you are using the 1-click deploy option, you can set these values as secrets in your GitHub repository and the deploy workflow will supply them automatically.

MODEL

The Model ID of the LLM you are using for your application. For example, anthropic.claude-3-5-sonnet-20241022-v2:0.

defang config set MODEL
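Once MODEL is set, the application includes it in each request to the LLM endpoint. A minimal sketch (assuming the provider exposes an OpenAI-compatible chat API; the function name is illustrative) of the request body it would send:

```python
import json

# Illustrative only: builds the JSON body of an OpenAI-compatible
# chat completion request, with the configured MODEL as the model field.
def chat_request_body(model, prompt):
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
```

The body would be POSTed to the ENDPOINT_URL, which handles authentication to Bedrock or Vertex AI on the application's behalf.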

Deployment

Note: Download the Defang CLI before deploying.

Defang Playground

Deploy your application to the Defang Playground by opening up your terminal and typing:

defang compose up

BYOC

If you want to deploy to your own cloud account, you can use Defang BYOC.


Title: Managed LLM with Docker Model Provider

Short Description: An app using Managed LLMs with a Docker Model Provider, deployed with Defang.

Tags: LLM, Python, Bedrock, Vertex, Docker Model Provider

Languages: Python