Commit 011266d

add model comment in compose file

1 parent: c0fac2e

2 files changed (+3 −1 lines)

samples/managed-llm-provider/compose.yaml
Lines changed: 1 addition & 0 deletions

@@ -9,6 +9,7 @@ services:
     environment:
       - ENDPOINT_URL=http://llm/api/v1/chat/completions # endpoint to the Provider Service
       - MODEL=us.amazon.nova-micro-v1:0 # LLM model ID used in the Provider Service
+      # For other models, see https://docs.defang.io/docs/concepts/managed-llms/openai-access-gateway#model-mapping
     healthcheck:
       test: ["CMD", "python3", "-c", "import sys, urllib.request; urllib.request.urlopen(sys.argv[1]).read()", "http://localhost:8000/"]
       interval: 30s
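For reference, the python3 one-liner in the healthcheck above is equivalent to the short script below. This is only an unrolled sketch of the same check, assuming the app listens on port 8000 inside the container; a failed request raises, the process exits non-zero, and Docker marks the container unhealthy.

# Unrolled sketch of the compose healthcheck one-liner.
import sys
import urllib.request

# The URL is passed as the first argument in the compose file
# ("http://localhost:8000/"); fall back to it here for standalone runs.
url = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:8000/"

# urlopen raises on connection or HTTP errors, so the process exits
# non-zero and the healthcheck fails.
urllib.request.urlopen(url).read()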

samples/managed-llm/compose.yaml
Lines changed: 2 additions & 1 deletion

@@ -8,7 +8,8 @@ services:
     restart: always
     environment:
       - ENDPOINT_URL=http://llm/api/v1/chat/completions # endpoint to the gateway service
-      - MODEL=us.amazon.nova-micro-v1:0 # LLM model ID used for the gateway
+      - MODEL=us.amazon.nova-micro-v1:0 # LLM model ID used for the gateway.
+      # For other models, see https://docs.defang.io/docs/concepts/managed-llms/openai-access-gateway#model-mapping
       - OPENAI_API_KEY=FAKE_TOKEN # the actual value will be ignored when using the gateway, but it should match the one in the llm service
     healthcheck:
       test: ["CMD", "python3", "-c", "import sys, urllib.request; urllib.request.urlopen(sys.argv[1]).read()", "http://localhost:8000/"]
