Description
Describe the bug
The retry loop in create_chat_completion does not actually perform retries as intended.
In the current implementation in gpt_researcher/utils/llm.py, the function returns as soon as the first call succeeds, and exceptions raised by provider.get_chat_response(...) propagate out of the loop rather than triggering another attempt. As a result, the multi-attempt loop never actually performs a second attempt and does not work as a real retry mechanism.
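To illustrate the failure mode, here is a minimal, simplified sketch (not the actual source; FlakyProvider and the function signature are invented for this example, and the real function is async). The first exception escapes the loop, so a second attempt is never made:

```python
class FlakyProvider:
    """Invented stand-in for an LLM provider that fails once, then succeeds."""
    def __init__(self):
        self.calls = 0

    def get_chat_response(self, messages):
        self.calls += 1
        if self.calls == 1:
            raise ConnectionError("transient network error")
        return "ok"


def create_chat_completion(messages, provider, max_retries=3):
    # Simplified sketch of the buggy shape: there is no try/except inside
    # the loop, so the first exception escapes and the remaining attempts
    # never run.
    for attempt in range(max_retries):
        response = provider.get_chat_response(messages)
        return response  # also returns immediately on the first success


provider = FlakyProvider()
try:
    create_chat_completion([], provider)
except ConnectionError:
    pass

# Only one call was made even though max_retries=3.
assert provider.calls == 1
```

A retry on the second attempt would have succeeded, but the loop never reaches it.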
To Reproduce
Steps to reproduce the behavior:
- Open gpt_researcher/utils/llm.py
- Locate the create_chat_completion function
- Check the retry loop around provider.get_chat_response(...)
- Observe that the function returns immediately after the first successful call
- Observe that exceptions are not retried unless wrapped in try/except
Expected behavior
The function should retry failed LLM requests up to the configured maximum number of attempts, especially for transient provider or network errors. It should also handle empty responses consistently.
Screenshots
Not applicable.
Desktop (please complete the following information):
- OS: Windows 11
- Browser: Edge
- Version: 145.0.7632.160
Smartphone (please complete the following information):
Not applicable.
Additional context
I am preparing a fix that:
- adds actual retry handling with try/except
- retries on empty responses
- uses exponential backoff between attempts
- avoids repeated retries for streaming websocket responses
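A sketch of the shape I have in mind (synchronous and simplified; the real function is async, and the parameter names and FlakyProvider test double here are illustrative, not the actual patch):

```python
import time


def create_chat_completion(messages, provider, max_retries=3, stream=False):
    # Streaming websocket responses should not be silently re-sent, so they
    # get a single attempt; non-streaming calls get the full retry budget.
    attempts = 1 if stream else max_retries
    last_error = None
    for attempt in range(attempts):
        try:
            response = provider.get_chat_response(messages)
            if response:  # treat an empty response as a retryable failure
                return response
            last_error = ValueError("empty response from provider")
        except Exception as exc:
            last_error = exc
        if attempt < attempts - 1:
            # Exponential backoff between attempts: 0.1s, 0.2s, 0.4s, ...
            time.sleep(0.1 * 2 ** attempt)
    raise last_error


class FlakyProvider:
    """Invented test double: fails twice, then succeeds on the third call."""
    def __init__(self):
        self.calls = 0

    def get_chat_response(self, messages):
        self.calls += 1
        if self.calls <= 2:
            raise ConnectionError("transient network error")
        return "ok"


provider = FlakyProvider()
assert create_chat_completion([], provider) == "ok"
assert provider.calls == 3  # two failed attempts, then the successful one
```

With this shape, transient errors and empty responses are retried up to the configured limit, the last error is re-raised if all attempts fail, and streaming calls are exempted from repeat attempts.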