[Support]: Crashing/Freezing Docker Frigate #22057
Replies: 3 comments 8 replies
-
I've identified several configuration issues that could be contributing to your crashes:

**Critical Issues**

1. **Device Mapping Mismatch** — Your Docker Compose maps `LIBVA_DEVICE: "/dev/dri/renderD129"`. This should reference the container path instead.

2. **Shared Memory Size** — The calculation yields approximately 100 MB minimum, but with logs now stored in shared memory the requirement is higher.

3. **FFmpeg Retry Interval** — The default:

```yaml
ffmpeg:
  retry_interval: 30
```

4. **Hardware Acceleration Configuration** — You're using:

```yaml
go2rtc:
  ffmpeg:
    h264_vaapi: >-
      -init_hw_device vaapi=intel:/dev/dri/renderD128
      -filter_hw_device intel
      -c:v h264_vaapi -g 50 -bf 0 -profile:v high -an
      -vf fps=5,format=nv12,hwupload,scale_vaapi=w=640:h=360
      -level:v 4.1
```

5. **Stop Grace Period** — Your `stop_grace_period` setting.

**Recommendations**

The 7-day crash pattern coinciding with your retention period suggests the issue may be triggered during the cleanup of old recordings, which could be exacerbating the resource exhaustion when combined with these configuration issues.
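On point 2 above: Frigate's documentation gives a per-camera formula for the minimum shm size, which can be sanity-checked directly. A minimal sketch, assuming three 640x360 detect streams as described in this report; note the ~100 MB figure quoted above presumably includes log storage and headroom, which this per-camera formula does not cover:

```python
# Per-camera minimum shm estimate, using the formula from Frigate's
# shm-size documentation: width * height * 1.5 bytes/pixel (YUV420)
# * 20 frames, plus a fixed overhead constant.
def min_shm_mb(width: int, height: int) -> float:
    return (width * height * 1.5 * 20 + 270480) / 1048576

# Three detect streams at 640x360, per this report.
cameras = [(640, 360), (640, 360), (640, 360)]
total = sum(min_shm_mb(w, h) for w, h in cameras)
print(f"per camera: {min_shm_mb(640, 360):.2f} MB, total: {total:.2f} MB")
# -> per camera: 6.85 MB, total: 20.55 MB
```

Anything else living in `/dev/shm` (such as the logs this report bind-mounts there) sits on top of this minimum, so a comfortably larger `shm_size` is the usual advice.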
-
I suspect the reason for ffmpeg restarting might be the high I/O issue, but I have no way to be sure at this time.
-
The issue reported here very closely matches what I'm seeing: #19646 (comment)
-
Describe the problem you are having
Frigate becomes unresponsive and crashes approximately every 7 days due to cascading stream failures and resource exhaustion.
Symptoms:
All 3 cameras simultaneously lose RTSP streams with timeout/CSeq errors
FFmpeg processes hang and won't exit gracefully (require force kill after 20-30+ seconds)
New FFmpeg processes spawn while old ones remain stuck, causing process count to climb from normal (~40) to 434 PIDs
Corrupt recording segments accumulate in /tmp/cache (100+ files)
System attempts to probe hundreds of corrupt segments, causing disk I/O to max out at 500MB/s
Memory exhaustion (6GB RAM consumed, 1.8GB swap thrashing)
System becomes completely unresponsive, SSH unusable
Requires hard reset to recover
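Since the climb from ~40 to 434 PIDs is the most measurable early symptom, a small watcher can flag it before the host becomes unresponsive. A minimal sketch, assuming it runs inside the container (run on the host it will count host-wide processes instead); the threshold of 100 is an arbitrary illustration:

```python
import os

def count_procs(needle: str = "") -> int:
    """Count PIDs under /proc, optionally only those whose command
    line contains `needle` (e.g. "ffmpeg")."""
    count = 0
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                cmdline = f.read()
        except OSError:
            continue  # process exited while we were scanning
        if needle.encode() in cmdline:
            count += 1
    return count

total = count_procs()
ffmpeg = count_procs("ffmpeg")
print(f"total PIDs: {total}, ffmpeg PIDs: {ffmpeg}")
if ffmpeg > 100:  # arbitrary threshold, well above the normal ~40 PIDs
    print("WARNING: runaway ffmpeg processes; restart before the host locks up")
```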
Pattern:
Runs normally for ~6-7 days
Crashes predictably around day 7 (coinciding with 7-day recording retention period)
After reset, cycle repeats
Configuration:
3 cameras at 640x360 @ 5fps detection
OpenVINO GPU detector with VA-API hardware encoding
go2rtc restreaming (HD stream for recording, Sub stream for detection)
7 days recording retention with 30 days event retention
Hardwired ethernet cameras
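The restream layout described above can be sketched as a Frigate config fragment; the camera name, credentials, and URLs below are illustrative placeholders, not the reporter's actual values:

```yaml
# Sketch only: one camera shown; names and URLs are placeholders.
go2rtc:
  streams:
    front_hd: rtsp://user:pass@192.168.1.10/stream1    # HD main stream
    front_sub: rtsp://user:pass@192.168.1.10/stream2   # low-res sub stream
cameras:
  front:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/front_hd    # restreamed by go2rtc
          roles: [record]
        - path: rtsp://127.0.0.1:8554/front_sub
          roles: [detect]
    detect:
      width: 640
      height: 360
      fps: 5
record:
  enabled: true
  retain:
    days: 7   # matches the 7-day crash cadence described above
```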
Version
0.16.4-4131252
What browser(s) are you using?
No response
Frigate config file
Relevant Frigate log output
Relevant go2rtc log output
FFprobe output from your camera
Frigate stats
No response
Install method
Docker Compose
docker-compose file or Docker CLI command
Object Detector
OpenVino
Network connection
Wired
Camera make and model
TP-Link
Screenshots of the Frigate UI's System metrics pages
No response
Any other information that may be helpful
Recently enabled the following volume mount after the latest crash, as I lose logs when resetting the VM:
- /mnt/SeagateHDD/DockerData/Frigate/logs:/dev/shm/logs
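For context, the kind of Compose fragment this setup (and the first reply's suggestions) points toward might look like the following; every value here is an illustrative assumption (device node, shm size, grace period), not a confirmed fix:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:0.16.4
    shm_size: "256mb"          # headroom above the computed per-camera minimum
    stop_grace_period: 30s     # give hung ffmpeg children time to exit
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128   # host:container render node
    environment:
      LIBVA_DEVICE: /dev/dri/renderD128           # must match the container path
    volumes:
      - /mnt/SeagateHDD/DockerData/Frigate/logs:/dev/shm/logs   # persist logs across resets
```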