-d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "this is a test request, write a short poem"}]}'
```

## Debug single request

Pass in `litellm_request_debug=True` in the request body

```bash showLineNumbers
curl -L -X POST 'http://0.0.0.0:4000/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
    "model":"fake-openai-endpoint",
    "messages": [{"role": "user","content": "How many r in the word strawberry?"}],
    "litellm_request_debug": true
}'
```

This will emit the raw request sent by LiteLLM to the API Provider and the raw response received from the API Provider for **just** this request in the logs.

```bash showLineNumbers
INFO: Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)

-d '{'model': 'fake', 'messages': [{'role': 'user', 'content': 'How many r in the word strawberry?'}], 'stream': False}'

20:14:06 - LiteLLM:WARNING: litellm_logging.py:1015 - RAW RESPONSE:
{"id":"chatcmpl-817fc08f0d6c451485d571dab39b26a1","object":"chat.completion","created":1677652288,"model":"gpt-3.5-turbo-0301","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"message":{"role":"assistant","content":"\n\nHello there, how may I assist you today?"},"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":9,"completion_tokens":12,"total_tokens":21}}

INFO: 127.0.0.1:56155 - "POST /chat/completions HTTP/1.1" 200 OK
```
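
If you call the proxy from a script instead of curl, the same flag can be sent as an extra field in the request body. Below is a minimal sketch using the OpenAI Python SDK pointed at the proxy; the model name and key are taken from the curl example above, and it assumes `extra_body` fields are forwarded to the proxy as top-level keys in the request JSON.

```python
# Minimal sketch: send litellm_request_debug via the OpenAI Python SDK (openai>=1.x).
# Assumes the proxy from the example above is running on http://0.0.0.0:4000.
from openai import OpenAI

client = OpenAI(
    base_url="http://0.0.0.0:4000",  # LiteLLM proxy endpoint
    api_key="sk-1234",               # proxy key from the curl example
)

response = client.chat.completions.create(
    model="fake-openai-endpoint",
    messages=[{"role": "user", "content": "How many r in the word strawberry?"}],
    # extra_body keys are merged into the top level of the request body
    extra_body={"litellm_request_debug": True},
)
print(response.choices[0].message.content)
```
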

## JSON LOGS

Set `JSON_LOGS="True"` in your env:

```bash showLineNumbers
export JSON_LOGS="True"
```

**OR**

Set `json_logs: true` in your yaml:

```yaml showLineNumbers
litellm_settings:
  json_logs: true
```

Start proxy

```bash showLineNumbers
$ litellm
```

The proxy will now emit all logs in JSON format.
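
One benefit of JSON logs is that they are machine-readable. As an illustrative sketch (not a LiteLLM API), each log line can be parsed as a JSON object, for example when forwarding captured proxy output to a log aggregator; the `proxy.log` filename and the one-object-per-line assumption are placeholders here.

```python
# Illustrative sketch: parse JSON-formatted proxy log lines from a captured log file.
# Assumes proxy output was redirected to proxy.log, one JSON object per line.
import json

with open("proxy.log") as f:
    for line in f:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip any non-JSON lines (e.g. startup banners)
        print(record)
```
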
Turn off fastapi's default 'INFO' logs

1. Turn on 'json logs'

```yaml showLineNumbers
litellm_settings:
  json_logs: true
```

2. Set `LITELLM_LOG="ERROR"`

Only get logs if an error occurs.

```bash showLineNumbers
LITELLM_LOG="ERROR"
```

3. Start proxy

```bash showLineNumbers
$ litellm
```

Expected Output:

```bash showLineNumbers
# no info statements
```