Context length exceeded error #87
Open
Description
2025-09-01 13:37:24 - app.domain.services.agent_task_runner - ERROR - Agent 1cf90a5b418e43ea task encountered exception: Error code: 400 - {'error': {'message': "This model's maximum context length is 131072 tokens. However, you requested 137610 tokens (129418 in the messages, 8192 in the completion). Please reduce the length of the messages or completion.", 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}
backend-1 | Traceback (most recent call last):
backend-1 | File "/app/app/domain/services/agent_task_runner.py", line 228, in run
backend-1 | async for event in self._run_flow(message_obj):
backend-1 | File "/app/app/domain/services/agent_task_runner.py", line 258, in _run_flow
backend-1 | async for event in self._flow.run(message):
backend-1 | File "/app/app/domain/services/flows/plan_act.py", line 151, in run
backend-1 | async for event in self.executor.execute_step(self.plan, step, message):
backend-1 | File "/app/app/domain/services/agents/execution.py", line 71, in execute_step
backend-1 | async for event in self.execute(message):
backend-1 | File "/app/app/domain/services/agents/base.py", line 87, in execute
backend-1 | message = await self.ask(request, format)
backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1 | File "/app/app/domain/services/agents/base.py", line 195, in ask
backend-1 | return await self.ask_with_messages([
backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1 | File "/app/app/domain/services/agents/base.py", line 166, in ask_with_messages
backend-1 | message = await self.llm.ask(self.memory.get_messages(),
backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1 | File "/app/app/infrastructure/external/llm/openai_llm.py", line 44, in ask
backend-1 | response = await self.client.chat.completions.create(
backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1 | File "/usr/local/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2583, in create
backend-1 | return await self._post(
backend-1 | ^^^^^^^^^^^^^^^^^
backend-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1794, in post
backend-1 | return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1594, in request
backend-1 | raise self._make_status_error_from_response(err.response) from None
backend-1 | openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 131072 tokens. However, you requested 137610 tokens (129418 in the messages, 8192 in the completion). Please reduce the length of the messages or completion.", 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}
When the context grows too long, the model apparently returns this error. Is there a way to compress the context?
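One common workaround (not a feature of this project as far as the issue shows) is to trim the oldest conversation messages before each LLM call so the request stays under the model's context window. The sketch below is a minimal, hypothetical helper: `truncate_history` and `estimate_tokens` are illustrative names, and the character-based token estimate is a rough approximation — an accurate count would need the model's actual tokenizer.

```python
def estimate_tokens(text, chars_per_token=4):
    """Very rough token estimate; a real count needs the model's tokenizer."""
    return max(1, len(text) // chars_per_token)

def truncate_history(messages, max_tokens, chars_per_token=4):
    """Keep the system prompt (if any) plus the most recent messages that
    fit within max_tokens, dropping the oldest non-system messages first."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    budget = max_tokens - sum(
        estimate_tokens(m["content"], chars_per_token) for m in system
    )
    kept = []
    for m in reversed(rest):  # walk from newest to oldest
        cost = estimate_tokens(m["content"], chars_per_token)
        if cost > budget:
            break  # this and all older messages are dropped
        budget -= cost
        kept.append(m)
    return system + list(reversed(kept))

history = [
    {"role": "system", "content": "You are a helpful agent."},
    {"role": "user", "content": "old message " * 100},
    {"role": "user", "content": "recent question"},
]
trimmed = truncate_history(history, max_tokens=40)
```

A more sophisticated variant would summarize the dropped messages with a cheap LLM call instead of discarding them outright; the error in the traceback (129418 message tokens against a 131072 limit with an 8192-token completion reserved) would be avoided by either approach as long as the budget leaves room for the completion.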