
Commit e6fbc32

move llm pages into folders
1 parent 418771e commit e6fbc32

5 files changed: +31 -28 lines changed

docs/tutorials/deploy-openai-apps.mdx

Lines changed: 0 additions & 11 deletions
This file was deleted.
New file

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+{
+  "label": "Deploy OpenAI Apps on Managed LLMs",
+  "position": 45,
+  "collapsible": true
+}

docs/tutorials/deploy-openai-apps-aws-bedrock.mdx renamed to docs/tutorials/deploy-openai-apps/aws-bedrock.mdx

Lines changed: 8 additions & 9 deletions
@@ -1,15 +1,17 @@
 ---
-title: Deploy OpenAI Apps to AWS Bedrock
+title: AWS Bedrock
 sidebar_position: 50
 ---
+import React from 'react';
+import {useColorMode} from '@docusaurus/theme-common';
 
 # Deploy OpenAI Apps to AWS Bedrock
 
 Let's assume you have an app that uses an OpenAI client library and you want to deploy it to the cloud on **AWS Bedrock**.
 
 This tutorial shows you how **Defang** makes it easy.
 
-Suppose you start with a compose file like this:
+Suppose you start with a Compose file like this:
 
 ```yaml
 services:
@@ -107,16 +109,15 @@ Choose the correct `MODEL` depending on which cloud provider you are using.
 :::info
 **Choosing the Right Model**
 
-- For **AWS Bedrock**, use a Bedrock model ID (e.g., `anthropic.claude-3-sonnet-20240229-v1:0`) [See available Bedrock models](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).
+- For **AWS Bedrock**, use a Bedrock model ID (e.g., `anthropic.claude-3-sonnet-20240229-v1:0`). [See available Bedrock models](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html).
 :::
 
 Alternatively, Defang supports model mapping through the [openai-access-gateway](https://github.com/DefangLabs/openai-access-gateway). This takes a model with a Docker naming convention (e.g. `ai/llama3.3`) and maps it to
 the closest equivalent on the target platform. If no such match can be found, a fallback can be defined to use a known existing model (e.g. `ai/mistral`). These environment
 variables are `USE_MODEL_MAPPING` (defaults to true) and `FALLBACK_MODEL` (no default), respectively.
 
-
-:::info
-# Complete Example Compose File
+
+## Complete Example Compose File
 
 ```yaml
 services:
@@ -146,7 +147,7 @@ services:
 
 ---
 
-# Environment Variable Matrix
+## Environment Variable Matrix
 
 | Variable           | AWS Bedrock |
 |--------------------|-------------|
@@ -160,5 +161,3 @@ You now have a single app that can:
 - Talk to **AWS Bedrock**
 - Use the same OpenAI-compatible client code
 - Easily switch cloud providers by changing a few environment variables
-:::
-
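For context on the model-mapping paragraph in the diff above: here is a minimal, hypothetical sketch of how `MODEL`, `USE_MODEL_MAPPING`, and `FALLBACK_MODEL` might be set on the gateway service in a Compose file. The service name and image are illustrative assumptions, not part of this commit:

```yaml
services:
  llm:
    # Assumed image name for the openai-access-gateway linked in the tutorial.
    image: defangio/openai-access-gateway
    environment:
      # Bedrock model ID taken from the tutorial's example.
      - MODEL=anthropic.claude-3-sonnet-20240229-v1:0
      # Map Docker-style model names (e.g. ai/llama3.3) to the closest Bedrock equivalent.
      - USE_MODEL_MAPPING=true
      # Known existing model to fall back to when no close match is found.
      - FALLBACK_MODEL=ai/mistral
```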
New file

Lines changed: 11 additions & 0 deletions

@@ -0,0 +1,11 @@
+---
+title: Deploy OpenAI Apps on Managed LLMs
+sidebar_position: 45
+---
+
+# Deploy OpenAI Apps on Managed LLMs
+
+Defang currently supports using Managed LLMs with AWS Bedrock and GCP Vertex AI. Follow the links below for your specific platform.
+
+- [AWS Bedrock](/docs/tutorials/deploy-openai-apps/aws-bedrock/)
+- [GCP Vertex AI](/docs/tutorials/deploy-openai-apps/gcp-vertex/)

docs/tutorials/deploy-openai-apps-gcp-vertex.mdx renamed to docs/tutorials/deploy-openai-apps/gcp-vertex.mdx

Lines changed: 7 additions & 8 deletions
@@ -1,15 +1,17 @@
 ---
-title: Deploy OpenAI Apps to GCP Vertex AI
+title: GCP Vertex AI
 sidebar_position: 50
 ---
+import React from 'react';
+import {useColorMode} from '@docusaurus/theme-common';
 
 # Deploy OpenAI Apps to GCP Vertex AI
 
 Let's assume you have an application that uses an OpenAI client library and you want to deploy it to the cloud using **GCP Vertex AI**.
 
 This tutorial shows you how **Defang** makes it easy.
 
-Suppose you start with a compose file like this:
+Suppose you start with a Compose file like this:
 
 ```yaml
 services:
@@ -120,8 +122,7 @@ the closest matching one on the target platform. If no such match can be found,
 variables are `USE_MODEL_MAPPING` (defaults to true) and `FALLBACK_MODEL` (no default), respectively.
 
 
-:::info
-# Complete Example Compose File
+## Complete Example Compose File
 
 ```yaml
 services:
@@ -152,13 +153,13 @@ services:
 
 ---
 
-# Environment Variable Matrix
+## Environment Variable Matrix
 
 | Variable           | GCP Vertex AI |
 |--------------------|---------------|
 | `GCP_PROJECT_ID`   | Required      |
 | `REGION`           | Required      |
-| `MODEL`            | Vertex model or Docker model name, for example `publishers/meta/models/llama-3.3-70b-instruct-maas` or `ai/llama3.3` |
+| `MODEL`            | Vertex model ID or Docker model name, for example `publishers/meta/models/llama-3.3-70b-instruct-maas` or `ai/llama3.3` |
 
 ---
 
@@ -167,5 +168,3 @@ You now have a single app that can:
 - Talk to **GCP Vertex AI**
 - Use the same OpenAI-compatible client code
 - Easily switch cloud providers by changing a few environment variables
-:::
-
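Likewise, a minimal sketch of the Vertex AI variables from the matrix above; the project ID and region values are placeholders, and the image name is an assumption:

```yaml
services:
  llm:
    # Assumed image name for the openai-access-gateway linked in the tutorial.
    image: defangio/openai-access-gateway
    environment:
      - GCP_PROJECT_ID=my-project-id  # required; placeholder value
      - REGION=us-central1            # required; placeholder value
      # Vertex model ID, or a Docker model name such as ai/llama3.3.
      - MODEL=publishers/meta/models/llama-3.3-70b-instruct-maas
```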
