Commit 5c2c04b

archerarcher authored and committed

docs(i18n): translate final 9 files

1 parent 27e7575

12 files changed: +1644 -1 lines

document/content/docs/introduction/development/docker.en.mdx: 410 additions, 0 deletions (large diff not rendered by default)

Translated FAQ file below: 393 additions, 0 deletions
---
title: Private Deployment FAQ
description: FastGPT private deployment common issues
---
## 1. Error Troubleshooting

Check [Issues](https://github.com/labring/FastGPT/issues) first, or create a new one. For private deployment errors, provide detailed reproduction steps, logs, and screenshots; without them, the problem is hard to diagnose.

### Get Backend Errors

1. `docker ps -a` lists all containers and their statuses. If a container is not running, use `docker logs container_name` to see why it exited.
2. If all containers are running normally, use `docker logs container_name` on the failing service to view its error logs.

### Frontend Errors

When the frontend crashes, the page shows an error and prompts you to check the console logs. Open the browser console and look at the `console` output; click the log hyperlinks to see the specific file that threw the error. Provide these detailed error messages when asking for help.

### OneAPI Errors

Errors that carry a `requestId` come from OneAPI and are usually caused by model API errors. See [Common OneAPI Errors](/docs/introduction/development/faq/#3-common-oneapi-errors).
## 2. General Issues

### Frontend Page Crash

1. 90% of cases: incorrect model configuration. Ensure each model type has at least one model enabled, and check whether the `object`-type parameters in the model configs (arrays and objects) are abnormal; if a field is empty, try setting it to an empty array or empty object explicitly.
2. Some cases: browser compatibility issues. The project uses modern syntax that may not work in older browsers. Provide the specific steps and console errors in an issue.
3. Disable browser translation. If translation is enabled, it may crash the page.

### Does Sealos deployment have fewer limitations than local deployment?

![](/imgs/faq1.png)

This is the index model's length limit; it is the same regardless of deployment method. Different index models have different configurations, which you can modify in the backend.
### How to Mount Mini Program Config Files

Mount the verification file to the specified location: /app/projects/app/public/xxxx.txt

Then restart the container. For example:

![](/imgs/faq2.png)
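In a docker-compose setup, this can be done with a bind mount. A minimal sketch, assuming the FastGPT service is named `fastgpt` and reusing the doc's `xxxx.txt` placeholder for the verification file name:

```yaml
# Hypothetical docker-compose.yml excerpt: the service name and the host-side
# file name are assumptions; only the container path comes from the step above.
services:
  fastgpt:
    volumes:
      - ./xxxx.txt:/app/projects/app/public/xxxx.txt
```

After editing the compose file, `docker-compose up -d` recreates the container with the mount in place.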
### Database Port 3306 Already in Use, Service Fails to Start

![](/imgs/faq3.png)

Change the host side of the port mapping to an unused port such as 3307, e.g., 3307:3306.
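In docker-compose, the mapping is written as `host:container`, so only the left-hand side changes. A minimal sketch (the service name `mysql` is an assumption):

```yaml
# Hypothetical docker-compose.yml excerpt: expose container port 3306
# on host port 3307 so it no longer collides with the local MySQL.
services:
  mysql:
    ports:
      - "3307:3306"
```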
### Local Deployment Limitations

See details at https://fael3z0zfze.feishu.cn/wiki/OFpAw8XzAi36Guk8dfucrCKUnjg.

### Can It Run Fully Offline?

Yes. You need to prepare your own vector models and LLM models.
### Other Models Can't Do Question Classification/Content Extraction

1. Check the logs. If you see "JSON invalid" or "not support tool", the model does not support tool calling or function calling. Set `toolChoice=false` and `functionCall=false` to fall back to prompt mode. The built-in prompts are only tested against commercial model APIs: question classification mostly works in prompt mode, but content extraction does not work well.
2. If everything is configured correctly and there are no error logs, the built-in prompts may simply not suit the model. Customize them via `customCQPrompt`.
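For reference, these switches live on the model entry in `config.json`. A minimal sketch; the `llmModels` list and surrounding field names are assumptions based on common FastGPT configs, and the model name is hypothetical:

```json
{
  "llmModels": [
    {
      "model": "your-model-name",
      "toolChoice": false,
      "functionCall": false,
      "customCQPrompt": "Your classification prompt here"
    }
  ]
}
```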
### Page Crash

1. Disable browser translation.
2. Check whether the config file loaded properly. If not, system info will be missing, causing null pointer errors in some operations.

- 95% of cases: an incorrect config file. The page will show "xxx undefined".
- "URI malformed" error: report the specific operation and page in an issue. This is caused by a parsing error when encoding special characters.

3. Some API compatibility issues (rare).
### Slow Response After Enabling Question Completion

1. Question completion requires an extra round of AI generation.
2. It performs 3-5 query rounds. If database performance is insufficient, the impact will be noticeable.
### Page Works Fine, API Errors

The page uses stream=true mode, so the API also needs to be tested with stream=true. Some model APIs (mostly domestic ones) have poor non-stream compatibility.

As with the previous issue, test with curl.
### Knowledge Base Indexing Has No Progress/Very Slow

Check the error logs first. Common scenarios:

1. Can chat but no indexing progress: the vector model (vectorModels) is not configured.
2. Can neither chat nor index: the API call failed. FastGPT may not be connected to OneAPI or OpenAI.
3. Has progress but very slow: an API key issue. OpenAI free accounts are limited to 3 or 60 requests per minute and 200 per day.
### Connection Error

Usually a network issue. Domestic servers cannot reach OpenAI directly; check whether the AI model connection is normal.

Alternatively, FastGPT cannot reach OneAPI (they are not on the same network).
### Modified vectorModels But No Effect

1. Restart the container and make sure the model config loaded (check the logs, or check the model list when creating a new knowledge base).
2. Refresh the browser once.
3. For existing knowledge bases, delete and recreate them. The vector model is bound at creation time and does not update dynamically.
## 3. Common OneAPI Errors

Errors that carry a requestId come from OneAPI.

### insufficient_user_quota user quota is not enough

The OneAPI account balance is insufficient. The default root user only has $200, which can be manually increased.

Path: Open OneAPI → Users → Edit root user → Increase remaining balance
### xxx Channel Not Found

The models in FastGPT's config file must match the models in OneAPI's channels, or you will get this error. Check:

1. The model's channel is not configured in OneAPI, or it is disabled.
2. FastGPT's config contains models that are not configured in OneAPI. If OneAPI does not have a model, do not add it to `config.json`, or errors will occur.
3. A knowledge base was created with an old vector model, and the vector model was then changed. Delete the old knowledge bases and recreate them.
### Model Test Click Fails

OneAPI only tests the first model in a channel, and only tests chat models. Vector models cannot be auto-tested; send requests manually to test them. [View test model command examples](/docs/introduction/development/faq/#how-to-check-model-issues)
### get request url failed: Post `"https://xxx"` dial tcp: xxxx

OneAPI cannot reach the model's network; check the network configuration.
### Incorrect API key provided: sk-xxxx.You can find your api Key at xxx

The OneAPI API key is configured incorrectly. Modify the `OPENAI_API_KEY` environment variable and restart the container (first `docker-compose down`, then `docker-compose up -d`).

Use `docker exec` to enter the container, then run `env` to check whether the environment variables took effect.
### bad_response_status_code bad response status code 503

1. The model service is unavailable.
2. The model API parameters are abnormal (temperature, max tokens, etc. may not be compatible).
3. ...
### Tiktoken Download Failed

OneAPI downloads a tiktoken dependency from the network at startup. If the download fails, startup fails. See [OneAPI Offline Deployment](https://blog.csdn.net/wanh/article/details/139039216) for solutions.
## 4. Common Model Issues

### How to Check Model Availability

1. For self-hosted models, first confirm that the deployed model itself is working.
2. Use curl to test whether the upstream model responds normally (test both cloud and private models).
3. Use curl to request OneAPI and test whether the model works through it.
4. Test the model in FastGPT.

Here are some curl test examples:
<Tabs items={['LLM Model','Embedding Model','Rerank Model','TTS Model','Whisper Model']}>
<Tab value="LLM Model">
```bash
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```
</Tab>
<Tab value="Embedding Model">
```bash
curl https://api.openai.com/v1/embeddings \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "The food was delicious and the waiter...",
    "model": "text-embedding-ada-002",
    "encoding_format": "float"
  }'
```
</Tab>
<Tab value="Rerank Model">
```bash
curl --location --request POST 'https://xxxx.com/api/v1/rerank' \
  --header 'Authorization: Bearer {{ACCESS_TOKEN}}' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "model": "bge-rerank-m3",
    "query": "导演是谁",
    "documents": [
      "你是谁?\n我是电影《铃芽之旅》助手"
    ]
  }'
```
</Tab>
<Tab value="TTS Model">
```bash
curl https://api.openai.com/v1/audio/speech \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tts-1",
    "input": "The quick brown fox jumped over the lazy dog.",
    "voice": "alloy"
  }' \
  --output speech.mp3
```
</Tab>
<Tab value="Whisper Model">
```bash
curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F file="@/path/to/file/audio.mp3" \
  -F model="whisper-1"
```
</Tab>
</Tabs>
### Error - Model Response Empty/Model Error

This error occurs when OneAPI ends a stream-mode request without returning any content.

Version 4.8.10 added error logging: when an error occurs, the actual request body that was sent is printed in the logs. Copy these parameters and use curl to send the request to OneAPI directly for testing.

Since OneAPI cannot properly catch errors in stream mode, setting `stream=false` can sometimes surface the precise error.

Possible causes:

1. A domestic model hit content moderation.
2. Unsupported model parameters: keep only messages and the necessary parameters for testing, and remove the rest.
3. Parameters do not meet the model's requirements: e.g., some models do not support temperature 0, and some do not support two decimal places; max_tokens exceeded, context too long, etc.
4. Model deployment issues, such as incompatible stream mode.

Test example (copy the request body from the error logs for testing):
```bash
curl --location --request POST 'https://api.openai.com/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxxx' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "xxx",
    "temperature": 0.01,
    "max_tokens": 1000,
    "stream": true,
    "messages": [
        {
            "role": "user",
            "content": " 你是饿"
        }
    ]
}'
```
### How to Test if a Model Supports Tool Calling

Both the model provider and OneAPI must support tool calling. Test method:

##### 1. Use `curl` to send a first-round stream-mode tool test to OneAPI
```bash
curl --location --request POST 'https://oneapi.xxx/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxxx' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "gpt-5",
    "temperature": 0.01,
    "max_tokens": 8000,
    "stream": true,
    "messages": [
        {
            "role": "user",
            "content": "几点了"
        }
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "hCVbIY",
                "description": "获取用户当前时区的时间。",
                "parameters": {
                    "type": "object",
                    "properties": {},
                    "required": []
                }
            }
        }
    ],
    "tool_choice": "auto"
}'
```
##### 2. Check the Response

If tool calling works, the response contains the corresponding `tool_calls` parameters.
```json
{
  "id": "chatcmpl-A7kwo1rZ3OHYSeIFgfWYxu8X2koN3",
  "object": "chat.completion.chunk",
  "created": 1726412126,
  "model": "gpt-5",
  "system_fingerprint": "fp_483d39d857",
  "choices": [
    {
      "index": 0,
      "delta": {
        "tool_calls": [
          {
            "index": 0,
            "id": "call_0n24eiFk8OUyIyrdEbLdirU7",
            "type": "function",
            "function": {
              "name": "mEYIcFl84rYC",
              "arguments": ""
            }
          }
        ],
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": null
    }
  ],
  "usage": null
}
```
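As a quick sanity check on such a chunk, a short script (illustrative only, not part of FastGPT or OneAPI) can parse one streamed chunk and report whether the model returned tool calls:

```python
import json

def supports_tool_calls(chunk_json: str) -> bool:
    """Return True if a streamed chat.completion.chunk contains a tool call.

    Stream chunks put partial output under choices[i]["delta"]; a model that
    handles tool calling emits a "tool_calls" list there instead of "content".
    """
    chunk = json.loads(chunk_json)
    for choice in chunk.get("choices", []):
        delta = choice.get("delta", {})
        if delta.get("tool_calls"):
            return True
    return False

# Minimal chunk shaped like the response above (ids shortened for brevity).
chunk = json.dumps({
    "object": "chat.completion.chunk",
    "choices": [{
        "index": 0,
        "delta": {"tool_calls": [{"index": 0, "id": "call_xxx",
                                  "type": "function",
                                  "function": {"name": "hCVbIY",
                                               "arguments": ""}}]},
        "finish_reason": None
    }]
})
print(supports_tool_calls(chunk))  # True
```

Paste a real chunk from your curl output in place of the hand-made one; if this never prints True, the chain does not pass tool calls through.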
##### 3. Use `curl` to send a second-round stream-mode tool test to OneAPI

The second round sends the tool's result back to the model. After sending it, you will get the model's final response.
```bash
curl --location --request POST 'https://oneapi.xxxx/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxx' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "gpt-5",
    "temperature": 0.01,
    "max_tokens": 8000,
    "stream": true,
    "messages": [
        {
            "role": "user",
            "content": "几点了"
        },
        {
            "role": "assistant",
            "tool_calls": [
                {
                    "id": "kDia9S19c4RO",
                    "type": "function",
                    "function": {
                        "name": "hCVbIY",
                        "arguments": "{}"
                    }
                }
            ]
        },
        {
            "tool_call_id": "kDia9S19c4RO",
            "role": "tool",
            "name": "hCVbIY",
            "content": "{\n  \"time\": \"2024-09-14 22:59:21 Sunday\"\n}"
        }
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "hCVbIY",
                "description": "获取用户当前时区的时间。",
                "parameters": {
                    "type": "object",
                    "properties": {},
                    "required": []
                }
            }
        }
    ],
    "tool_choice": "auto"
}'
```
### Vector Retrieval Score Greater Than 1

This is caused by the embedding model not normalizing its output vectors. Currently only normalized models are supported.
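To check whether a model is normalized, compute the L2 norm of one of its output vectors: for a normalized model it is approximately 1.0. A minimal sketch; the vector here is made up for illustration, so substitute a real embedding from your model:

```python
import math

def l2_norm(vec):
    """Euclidean length of an embedding vector."""
    return math.sqrt(sum(x * x for x in vec))

def is_normalized(vec, tol=1e-3):
    """True if the vector lies on the unit sphere (norm close to 1)."""
    return abs(l2_norm(vec) - 1.0) <= tol

# Example: a hand-made 2-d "embedding" and its normalized version.
raw = [3.0, 4.0]                        # norm = 5.0, so not normalized
unit = [x / l2_norm(raw) for x in raw]  # norm = 1.0

print(is_normalized(raw))   # False
print(is_normalized(unit))  # True
```

If your model's vectors fail this check, normalize them before storing, or switch to a model that returns unit-length vectors.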
