@@ -22,7 +22,7 @@ Before starting, ensure you have:
2222
2323## Quick start
2424
25- ### 1. Enable the chat completions feature
25+ ### Enable the chat completions feature
2626
2727First, enable the chat completions experimental feature:
2828
3636 }'
3737```
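Only the tail of the enable-feature request is visible above. For reference, a complete call might look like the following sketch; the `PATCH /experimental-features` route, the `http://localhost:7700` host, and the `chatCompletions` flag name are assumptions rather than details taken from this page:

```sh
# Sketch only: enable the chat completions experimental feature
# (route, host, and flag name are assumed, not confirmed by the hunks above)
curl \
  -X PATCH 'http://localhost:7700/experimental-features' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "chatCompletions": true
  }'
```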
3838
39- ### 2. Configure a chat completions workspace
39+ ### Configure a chat completions workspace
4040
4141Create a workspace with your LLM provider settings. Here are examples for different providers:
4242
4949 -H 'Content-Type: application/json' \
5050 --data-binary '{
5151 "source": "openAi",
52- "apiKey": "sk-...",
52+ "apiKey": "sk-abc ...",
5353 "prompts": {
5454 "system": "You are a helpful assistant. Answer questions based only on the provided context."
5555 }
@@ -105,7 +105,7 @@ curl \
105105 -H 'Authorization: Bearer MASTER_KEY' \
106106 -H 'Content-Type: application/json' \
107107 --data-binary '{
108- "source": "vllm ",
108+ "source": "vLlm ",
109109 "baseUrl": "http://localhost:8000",
110110 "prompts": {
111111 "system": "You are a helpful assistant. Answer questions based only on the provided context."
@@ -115,7 +115,7 @@ curl \
115115
116116</CodeGroup>
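The provider examples above appear only in part. A complete workspace-settings call could look like the sketch below; the `PATCH /chats/{workspace}/settings` route, the `my-assistant` workspace name, and the host are assumptions; check the API reference for the exact route and the accepted `source` values:

```sh
# Sketch only: configure an OpenAI-backed workspace named "my-assistant"
# (route, host, and workspace name are assumed)
curl \
  -X PATCH 'http://localhost:7700/chats/my-assistant/settings' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "source": "openAi",
    "apiKey": "sk-abc...",
    "prompts": {
      "system": "You are a helpful assistant. Answer questions based only on the provided context."
    }
  }'
```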
117117
118- ### 3. Send your first chat completions request
118+ ### Send your first chat completions request
119119
120120Now you can start a conversation:
121121
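A first conversation turn could look like the following sketch. The OpenAI-compatible `POST /chats/{workspace}/chat/completions` route, the host, the model name, and the streaming flag are assumptions, not details confirmed by this page:

```sh
# Sketch only: send a first chat completions request to the "my-assistant" workspace
# (route, host, model, and streaming behaviour are assumed)
curl \
  -X POST 'http://localhost:7700/chats/my-assistant/chat/completions' \
  -H 'Authorization: Bearer MASTER_KEY' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "model": "gpt-3.5-turbo",
    "messages": [
      { "role": "user", "content": "Which documents mention pricing?" }
    ],
    "stream": true
  }'
```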
@@ -147,7 +147,6 @@ Workspaces allow you to create isolated chat configurations for different use ca
147147Each workspace maintains its own:
148148- LLM provider configuration
149149- System prompt
150- - Access permissions
151150
152151## Building a chat interface with OpenAI SDK
153152