
Commit 418771e

Apply suggestions from code review

1 parent be3be60 commit 418771e

File tree: 4 files changed, +14 -14 lines changed

docs/concepts/managed-llms/managed-language-models.md

Lines changed: 1 addition & 1 deletion
@@ -39,4 +39,4 @@ Assume you have a web service like the following, which uses the cloud native SDK
 
 ## Deploying OpenAI-compatible apps
 
-If you already have an OpenAI-compatible application, Defang makes it easy to deploy on your favourite cloud's managed LLM service. See our [OpenAI Access Gateway](/docs/concepts/managed-llms/openai-access-gateway)
+If you already have an OpenAI-compatible application, Defang makes it easy to deploy on your favourite cloud's managed LLM service. See our [OpenAI Access Gateway](/docs/concepts/managed-llms/openai-access-gateway).

docs/concepts/managed-llms/openai-access-gateway.md

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@ sidebar_position: 3000
 Defang makes it easy to deploy on your favourite cloud's managed LLM service with our [OpenAI Access Gateway](https://github.com/DefangLabs/openai-access-gateway). This service sits between your application and the cloud service and acts as a compatibility layer.
 It handles incoming OpenAI requests, translates those requests to the appropriate cloud-native API, handles the native response, and re-constructs an OpenAI-compatible response.
 
-See [our tutorial](/docs/tutorials/deploy-openai-apps) which describes how to configure the OpenAI Access Gateway for your application
+See [our tutorial](/docs/tutorials/deploy-openai-apps) which describes how to configure the OpenAI Access Gateway for your application.
 
 ## Docker Provider Services
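For context, the proxy arrangement described in the hunk above can be sketched as a minimal compose file. The service names, image tag, and base-URL path below are illustrative assumptions, not values taken from this commit; check the gateway's README for the real ones:

```yaml
services:
  app:
    build: .
    environment:
      # The OpenAI client reads this to talk to the gateway instead of api.openai.com
      OPENAI_BASE_URL: http://llm/api/v1
    depends_on:
      - llm

  llm:
    image: defangio/openai-access-gateway # assumed image name; see the project README
    x-defang-llm: true                    # signals Defang to configure cloud AI roles/permissions
    environment:
      - OPENAI_API_KEY # placeholder value; the cloud platform performs the real auth
      - REGION         # e.g. the region where your managed LLM service is enabled
```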

@@ -37,7 +37,7 @@ The `x-defang-llm` extension is used to configure the appropriate roles and permissions
 
 ## Model Mapping
 
-Defang supports model mapping through the [openai-access-gateway](https://github.com/DefangLabs/openai-access-gateway) on AWS and GCP. This takes a model with a Docker naming convention (e.g. ai/lama3.3) and maps it to the closest matching model name on the target platform. If no such match can be found it can fallback onto a known existing model (e.g. ai/mistral). These environment variables are USE_MODEL_MAPPING (default to true) and FALLBACK_MODEL (no default), respectively.
+Defang supports model mapping through the [openai-access-gateway](https://github.com/DefangLabs/openai-access-gateway) on AWS and GCP. This takes a model with a Docker naming convention (e.g. `ai/llama3.3`) and maps it to the closest matching model name on the target platform. If no such match can be found it can fallback onto a known existing model (e.g. `ai/mistral`). These environment variables are `USE_MODEL_MAPPING` (default to true) and `FALLBACK_MODEL` (no default), respectively.
 
 ## Current Support
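As a sketch of the two environment variables named in the Model Mapping hunk above (the image name is an assumption carried over from the earlier example; the fallback value is illustrative):

```yaml
services:
  llm:
    image: defangio/openai-access-gateway # assumed image name
    x-defang-llm: true
    environment:
      USE_MODEL_MAPPING: "true"  # default; maps e.g. ai/llama3.3 to the closest platform model
      FALLBACK_MODEL: ai/mistral # no default; used only when no close match exists
```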

docs/tutorials/deploy-openai-apps-aws-bedrock.mdx

Lines changed: 7 additions & 7 deletions
@@ -1,9 +1,9 @@
 ---
-title: Deploying your OpenAI Application to AWS Bedrock
+title: Deploy OpenAI Apps to AWS Bedrock
 sidebar_position: 50
 ---
 
-# Deploying Your OpenAI Application to AWS Bedrock
+# Deploy OpenAI Apps to AWS Bedrock
 
 Let's assume you have an app that uses an OpenAI client library and you want to deploy it to the cloud on **AWS Bedrock**.

@@ -50,7 +50,7 @@ Add **Defang's [openai-access-gateway](https://github.com/DefangLabs/openai-access-gateway)** service
 - The container image is based on [aws-samples/bedrock-access-gateway](https://github.com/aws-samples/bedrock-access-gateway), with enhancements.
 - `x-defang-llm: true` signals to **Defang** that this service should be configured to use target platform AI services.
 - New environment variables:
-  - `REGION` is the zone where the services runs (for AWS this is the equvilent of AWS_REGION)
+  - `REGION` is the zone where the services runs (for AWS, this is the equivalent of AWS_REGION)
 
 :::tip
 **OpenAI Key**
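A hedged sketch of the `REGION` variable described in the hunk above, on the gateway service; the region value is an example only, not taken from this commit:

```yaml
services:
  llm:
    image: defangio/openai-access-gateway # assumed image name
    x-defang-llm: true
    environment:
      REGION: us-west-2 # equivalent of AWS_REGION; pick a region where Bedrock is enabled
```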
@@ -110,9 +110,9 @@ Choose the correct `MODEL` depending on which cloud provider you are using.
 - For **AWS Bedrock**, use a Bedrock model ID (e.g., `anthropic.claude-3-sonnet-20240229-v1:0`) [See available Bedrock models](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).
 :::
 
-Alternatively, Defang supports model mapping through the openai-access-gateway. This takes a model with a Docker naming convention (e.g. `ai/lama3.3`) and maps it to
-the closest equilavent on the target platform. If no such match can be found a fallback can be defined to use a known existing model (e.g. ai/mistral). These environment
-variables are USE_MODEL_MAPPING (default to true) and FALLBACK_MODEL (no default), respectively.
+Alternatively, Defang supports model mapping through the [openai-access-gateway](https://github.com/DefangLabs/openai-access-gateway). This takes a model with a Docker naming convention (e.g. `ai/llama3.3`) and maps it to
+the closest equivalent on the target platform. If no such match can be found, a fallback can be defined to use a known existing model (e.g. `ai/mistral`). These environment
+variables are `USE_MODEL_MAPPING` (default to true) and `FALLBACK_MODEL` (no default), respectively.
 
 
 :::info
@@ -151,7 +151,7 @@ services:
 | Variable | AWS Bedrock |
 |--------------------|-------------|
 | `REGION` | Required |
-| `MODEL` | Bedrock model ID or Docker model name, for example `meta.llama3-3-70b-instruct-v1:0` or `ai/lama3.3` |
+| `MODEL` | Bedrock model ID or Docker model name, for example `meta.llama3-3-70b-instruct-v1:0` or `ai/llama3.3` |
 
 ---
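Instantiating the table above with example values (a sketch under the same image-name assumption as the earlier examples, not configuration taken from this commit):

```yaml
services:
  llm:
    image: defangio/openai-access-gateway # assumed image name
    x-defang-llm: true
    environment:
      REGION: us-west-2                      # required
      MODEL: meta.llama3-3-70b-instruct-v1:0 # or a Docker model name such as ai/llama3.3
```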

docs/tutorials/deploy-openai-apps-gcp-vertex.mdx

Lines changed: 4 additions & 4 deletions
@@ -28,7 +28,7 @@ services:
 
 ## Add an LLM Service to Your Compose File
 
-You need to add a new service that acts as a proxy between your app and the backend LLM provider (Vertex).
+You need to add a new service that acts as a proxy between your app and the backend LLM provider (Vertex AI).
 
 Add **Defang's [openai-access-gateway](https://github.com/DefangLabs/openai-access-gateway)** service:
 
@@ -112,11 +112,11 @@ To do this, you can check your [AWS Bedrock model access](https://docs.aws.amazo
 :::info
 **Choosing the Right Model**
 
-- For **GCP Vertex AI**, use a full model path (e.g., `google/gemini-2.5-pro-preview-03-25`) [See available Vertex models](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/call-vertex-using-openai-library#client-setup)
+- For **GCP Vertex AI**, use a full model path (e.g., `google/gemini-2.5-pro-preview-03-25`). [See available Vertex AI models](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/call-vertex-using-openai-library#client-setup).
 :::
 
-Alternatively, Defang supports model mapping through the openai-access-gateway. This takes a model with a Docker naming convention (e.g. ai/lama3.3) and maps it to
-the closest matching one on the target platform. If no such match can be found it can fallback onto a known existing model (e.g. ai/mistral). These environment
+Alternatively, Defang supports model mapping through the [openai-access-gateway](https://github.com/DefangLabs/openai-access-gateway). This takes a model with a Docker naming convention (e.g. `ai/llama3.3`) and maps it to
+the closest matching one on the target platform. If no such match can be found, it can fallback onto a known existing model (e.g. `ai/mistral`). These environment
 variables are `USE_MODEL_MAPPING` (default to true) and `FALLBACK_MODEL` (no default), respectively.
 
 
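On Vertex AI, the same `MODEL` variable carries a full model path instead; a sketch under the same assumptions as the earlier examples:

```yaml
services:
  llm:
    image: defangio/openai-access-gateway # assumed image name
    x-defang-llm: true
    environment:
      MODEL: google/gemini-2.5-pro-preview-03-25 # full Vertex AI model path
```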

0 commit comments