OpenWebUI failed: 'NoneType' object has no attribute 'get' #591

@jonny190

Description

Checklist

  • I'm running the newest version of LLM Vision https://github.com/valentinfrlch/ha-llmvision/releases/latest
  • I have enabled debug logging for the integration.
  • I have filled out the issue template to the best of my ability.
  • This issue only contains 1 issue (if you have multiple issues, open one issue for each issue).
  • This is a bug and not a feature request.
  • I have searched open issues for my problem.

Describe the issue

Trying to use Actions → Stream Analyzer to test the AI after notification errors. I supplied the Provider, Model, Prompt, and camera, but the action fails with the error in the title.

(Screenshot attached)

Reproduction steps

Run the Stream Analyzer action with the settings shown in the screenshot above; the error in the title is raised.

Debug logs

2026-02-16 17:25:17.261 INFO (MainThread) [homeassistant.helpers.script.websocket_api_script] websocket_api script: Running websocket_api script
2026-02-16 17:25:17.262 INFO (MainThread) [homeassistant.helpers.script.websocket_api_script] websocket_api script: Executing step call service
2026-02-16 17:25:17.262 INFO (MainThread) [custom_components.llmvision.media_handlers] Recording doorcam for 5 seconds
2026-02-16 17:25:17.262 INFO (MainThread) [custom_components.llmvision.media_handlers] Fetching http://192.168.15.10:8123/api/camera_proxy/camera.doorcam?token=da704a9975dbb2eeb4d6583b4251f7ecc6e5462f2b29c2695335f0fbe42ab694 (attempt 1/2)
2026-02-16 17:25:17.330 INFO (MainThread) [custom_components.llmvision.media_handlers] Fetched camera.doorcam in 0.07 seconds
2026-02-16 17:25:17.478 INFO (MainThread) [custom_components.llmvision.media_handlers] Preprocessing took: 0.15 seconds
2026-02-16 17:25:17.478 INFO (MainThread) [custom_components.llmvision.media_handlers] First iteration took: 0.22 seconds, interval adjusted to: 1.7847695350646973
2026-02-16 17:25:18.643 INFO (MainThread) [hass_nabucasa.google_report_state] Timeout while waiting to receive message
2026-02-16 17:25:19.212 INFO (MainThread) [homeassistant.components.automation.front_door_person_night] Front Door Person Night: Running automation actions
2026-02-16 17:25:19.212 INFO (MainThread) [homeassistant.components.automation.front_door_person_night] Front Door Person Night: Executing step call service
2026-02-16 17:25:19.228 INFO (MainThread) [homeassistant.components.automation.front_door_person_night] Front Door Person Night: Executing step delay 0:00:05
2026-02-16 17:25:19.263 INFO (MainThread) [custom_components.llmvision.media_handlers] Fetching http://192.168.15.10:8123/api/camera_proxy/camera.doorcam?token=da704a9975dbb2eeb4d6583b4251f7ecc6e5462f2b29c2695335f0fbe42ab694 (attempt 1/2)
2026-02-16 17:25:19.327 INFO (MainThread) [custom_components.llmvision.media_handlers] Fetched camera.doorcam in 0.06 seconds
2026-02-16 17:25:19.655 INFO (MainThread) [custom_components.llmvision.media_handlers] Preprocessing took: 0.33 seconds
2026-02-16 17:25:21.264 INFO (MainThread) [custom_components.llmvision.media_handlers] Fetching http://192.168.15.10:8123/api/camera_proxy/camera.doorcam?token=da704a9975dbb2eeb4d6583b4251f7ecc6e5462f2b29c2695335f0fbe42ab694 (attempt 1/2)
2026-02-16 17:25:21.402 INFO (MainThread) [custom_components.llmvision.media_handlers] Fetched camera.doorcam in 0.14 seconds
2026-02-16 17:25:21.737 INFO (MainThread) [custom_components.llmvision.media_handlers] Preprocessing took: 0.33 seconds
2026-02-16 17:25:24.247 INFO (MainThread) [homeassistant.components.automation.front_door_person_night] Front Door Person Night: Executing step call service
2026-02-16 17:25:24.602 DEBUG (MainThread) [custom_components.llmvision.memory] Memory(['This is a picture of Jonny at the door', 'This is a picture of Alex at the door'], ['/config/www/people/d4e3c10c-133e-486a-a0d3-90b8aa20d045.jpg', '/config/www/people/9ea831c9-8728-496b-91d1-7c89a3f478b0.jpg'], 0)
2026-02-16 17:25:24.667 DEBUG (MainThread) [custom_components.llmvision.providers] Fallback provider: 01JV4EMK7C5SDYE5T5BXH1SZV2
2026-02-16 17:25:24.667 DEBUG (MainThread) [custom_components.llmvision.providers] Provider initialized: Openai(model=llama3.2-vision:11b, endpoint={'base_url': 'https://ai.***.***:443/api/chat/completions'})
2026-02-16 17:25:24.668 DEBUG (MainThread) [custom_components.llmvision.providers] Request data: {'model': 'llama3.2-vision:11b', 'messages': [{'role': 'system', 'content': "Your task is to analyze a series of images and provide a concise event description based on user instructions. Focus on identifying and describing the actions of people, pet and dynamic objects (e.g., vehicles) rather than static background details. When multiple images are provided, track and summarize movements or changes over time (e.g., 'A person walks to the front door' or 'A car pulls out of the driveway'). Keep responses brief objective, and aligned with the user's prompt. Avoid speculation and prioritize observable activity. The length of the summary must be less than 255 characters, so you must summarise it to the best readability within 255 chaaracters."}, {'role': 'user', 'content': [{'type': 'text', 'text': 'camera0-frame-0:'}, {'type': 'image_url', 'image_url': {'url': '<long_string>'}}, {'type': 'text', 'text': 'camera0-frame-1:'}, {'type': 'image_url', 'image_url': {'url': '<long_string>'}}, {'type': 'text', 'text': 'camera0-frame-2:'}, {'type': 'image_url', 'image_url': {'url': '<long_string>'}}, {'type': 'text', 'text': "The attached images are frames from a live camera feed. Your task is to analyze a series of images and provide a concise event description based on user instructions. Focus on identifying and describing the actions of people, pet and dynamic objects (e.g., vehicles) rather than static background details. When multiple images are provided, track and summarize movements or changes over time (e.g., 'A person walks to the front door' or 'A car pulls out of the driveway'). Keep responses brief objective, and aligned with the user's prompt. Avoid speculation and prioritize observable activity. 
The length of the summary must be less than 255 characters, so you must summarise it to the best readability within 255 chaaracters."}]}], 'max_completion_tokens': 3000, 'temperature': 0.5, 'top_p': 0.9}
2026-02-16 17:25:24.668 DEBUG (MainThread) [custom_components.llmvision.providers] Posting to https://ai.***.***:443/api/chat/completions
2026-02-16 17:25:24.821 DEBUG (MainThread) [custom_components.llmvision.providers] Response data: None
2026-02-16 17:25:24.821 ERROR (MainThread) [custom_components.llmvision.providers] Provider OpenWebUI failed: 'NoneType' object has no attribute 'get'
2026-02-16 17:25:24.822 INFO (MainThread) [homeassistant.helpers.script.websocket_api_script] websocket_api script: Stop script sequence: done
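For context on what the trace suggests: the log shows `Response data: None` immediately before the `'NoneType' object has no attribute 'get'` error, which points to the provider parsing an empty or non-JSON response body and then calling `.get()` on the result without a None check. Below is a hypothetical sketch (not the actual ha-llmvision code; the function name and error messages are illustrative) of a guard that would fail with a diagnosable message instead:

```python
def extract_response_text(response_data):
    """Pull the assistant message out of an OpenAI-style chat completion.

    `response_data` is the parsed JSON body. It can be None when the server
    returns an empty or non-JSON body (e.g. an auth failure or a reverse-proxy
    error page), which is what the debug log above appears to show.
    """
    if response_data is None:
        # Fail loudly instead of "'NoneType' object has no attribute 'get'".
        raise ValueError(
            "Provider returned an empty or non-JSON response body; "
            "check the endpoint URL, API key, and reverse-proxy configuration."
        )
    choices = response_data.get("choices") or []
    if not choices:
        raise ValueError(f"Provider response contained no choices: {response_data}")
    return choices[0].get("message", {}).get("content", "")
```

Since the endpoint here is behind `https://ai.***.***:443`, an HTML error page from the proxy (rather than JSON from OpenWebUI) would produce exactly this `None` body, so the proxy/auth path may be worth checking alongside the missing guard.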

Labels: bug (Something isn't working)