Replies: 1 comment
Hi, thanks a lot for this project! I can confirm the same behavior with a local Ollama setup using vision models: the Ollama provider uses a fixed 60 s timeout, which can cause image analysis to fail on slower or currently loaded hardware (e.g., a J4012 Jetson Orin).

Symptoms (LLM Vision logs): image analysis requests against the local model fail once the 60 s timeout is reached.

Root cause: the timeout for the API call in providers.py is fixed at 60 s, which is too short for local vision workloads on this kind of hardware.

Workaround (worked immediately for me): locally increasing the timeout value in providers.py. After this change, image analysis completes without errors.

Suggestion: if possible, it would be great to make this timeout configurable as proposed above (integration option / provider option / service parameter), or to increase the default for local providers (especially for vision models).

BR
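For illustration, here is a minimal sketch of what a configurable timeout could look like in a provider class. The names (`OllamaProvider`, `request_timeout`, the endpoint path) are hypothetical and not the integration's actual API; the point is only that the timeout becomes a constructor parameter with the current 60 s value as its default:

```python
import urllib.request

DEFAULT_TIMEOUT = 60.0  # the current fixed value


class OllamaProvider:
    """Hypothetical provider with a user-configurable request timeout."""

    def __init__(self, base_url: str, request_timeout: float = DEFAULT_TIMEOUT):
        self.base_url = base_url
        # Falls back to the 60 s default when no override is configured.
        self.request_timeout = request_timeout

    def analyze(self, payload: bytes) -> bytes:
        # The timeout now comes from configuration instead of a literal 60.
        req = urllib.request.Request(f"{self.base_url}/api/generate", data=payload)
        with urllib.request.urlopen(req, timeout=self.request_timeout) as resp:
            return resp.read()
```

A slow local setup could then pass e.g. `OllamaProvider(url, request_timeout=300.0)` without patching the file.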
Please make the timeout for the API call in providers.py configurable.
Thanks.