Frigate Tip: Best Practices for Training Face and Custom Classification Models #21374
Thanks for the very good tips, but as a man, of course I only read the manual after implementing this feature, so now I know that I have done everything wrong... From the documentation (https://docs.frigate.video/configuration/face_recognition/) I see that it's possible to start over, but I'm missing the /media/frigate/clips/faces directory. The closest I have is /media/frigate/clips/person. I can't find any other option to delete all the faces.
Thank you again for the great work! I am currently exploring the possibility of replacing doubletake+rekognition with Frigate Face Recognition. After a few weeks of testing, plus reading the documentation and the tips here, I have a few questions.
A million thanks!
I tried to create this tool, which pulls diverse images from Immich using its defined faces (which I believe uses clustering) and then selects the most diverse subset based on Farthest Point Sampling of the image embeddings: https://github.com/ds-sebastian/if_curator. In some testing it worked really well at getting images where I made different facial expressions, etc.
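For readers curious what that selection step looks like, here is a minimal sketch of Farthest Point Sampling over precomputed image embeddings. The embeddings array and k below are placeholders, and this is not code taken from if_curator itself:

```python
import numpy as np

def farthest_point_sampling(embeddings: np.ndarray, k: int) -> list[int]:
    """Greedily pick k indices so each new pick is as far as possible
    (Euclidean distance) from everything already selected."""
    n = embeddings.shape[0]
    selected = [0]  # start from an arbitrary point
    # distance from every point to its nearest already-selected point
    min_dist = np.linalg.norm(embeddings - embeddings[0], axis=1)
    for _ in range(1, min(k, n)):
        next_idx = int(np.argmax(min_dist))
        selected.append(next_idx)
        dist_to_new = np.linalg.norm(embeddings - embeddings[next_idx], axis=1)
        min_dist = np.minimum(min_dist, dist_to_new)
    return selected

# Example: 200 face embeddings of dimension 512, keep the 15 most diverse
embeddings = np.random.rand(200, 512).astype(np.float32)
diverse_indices = farthest_point_sampling(embeddings, k=15)
```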
The Frigate Tips series is a collection of posts where the Frigate developers give tips on how to best configure, use, and maintain your Frigate installation. Frigate 0.16 introduced Face Recognition and Frigate 0.17 beta introduces Custom Classification for state and object classification. This post’s tip focuses on how to train these models effectively and, just as importantly, what not to do.
Training AI models: more images is not always better
A common instinct when training any AI model is to select as many images as possible, especially when the UI presents a large batch of examples. While this feels intuitive, it often leads to worse results for both face recognition and state/object classification in Frigate.
The key idea to understand is this:
Selecting dozens of nearly identical images at once is one of the fastest ways to degrade model performance. This is why Frigate does not implement a way to bulk train faces or classification images.
Face recognition: avoid bulk batches of similar images
Frigate’s face recognition uses an ArcFace-based model, which learns to represent faces as embeddings in a feature space. These models are very good at learning discriminative features, but they can also overfit when trained on narrow, repetitive data.
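To make the idea of embeddings concrete, here is a rough sketch of how an embedding-based recognizer assigns an identity: a new face is mapped to a vector and compared against stored reference vectors. The names, dimensions, and the 0.6 threshold are illustrative only and are not Frigate's internal values:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical: 512-d embeddings produced by an ArcFace-style model
known_faces = {
    "alice": np.random.rand(512),  # in practice: derived from Alice's training images
    "bob": np.random.rand(512),
}
new_embedding = np.random.rand(512)

# The face is assigned to whichever known person it is most similar to,
# provided the similarity clears a confidence threshold.
best_name, best_score = max(
    ((name, cosine_similarity(new_embedding, ref)) for name, ref in known_faces.items()),
    key=lambda item: item[1],
)
print(best_name if best_score > 0.6 else "unknown", best_score)
```

If the training images for a person are all near-identical, their reference vectors occupy a tiny region of that space, and any face captured under different conditions lands far away from it.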
Why bulk training hurts face recognition
If you select a large batch of highly similar images—such as sequential frames from the same clip with the same lighting, pose, and expression—the model has very little incentive to learn robust facial features. Instead, it learns to rely on incidental details that only exist in that capture scenario.
In practice, this means the model can look perfect on the exact scenario it was trained on, yet fail as soon as the lighting, camera angle, pose, or expression changes. This is classic overfitting behavior and is well-documented for ArcFace-style embedding models.
Best practices for Face Recognition training
The biggest thing to keep in mind is that training is a process, not a one-time event: start with a small set of clear, varied face images, then add new examples gradually as you encounter different lighting, angles, and expressions, rather than bulk-selecting near-identical frames from a single clip.
State & object classification: the same principle applies
State and object classification models in Frigate use MobileNetV2, a lightweight convolutional neural network designed for fast CPU inference. While this model is very different from ArcFace, it shares a critical property with all CNN-based classifiers: it learns whatever visual patterns most easily separate the training examples, so narrow, repetitive training data teaches it the specifics of those scenes rather than the state or class itself.
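As a rough illustration (not Frigate's training pipeline), this is how a MobileNetV2 backbone can be adapted to a small set of custom classes with torchvision; the class count, tensor shapes, and hyperparameters are placeholders:

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights

# Hypothetical example: a 2-class state classifier (e.g. "gate_open" / "gate_closed")
num_classes = 2

# Start from pretrained weights and replace the final classification layer
model = mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, num_classes)

# Because the backbone already knows generic visual features, only a small,
# *varied* set of labeled crops is needed to fit the new head.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# images: (N, 3, 224, 224) float tensor of labeled crops, labels: (N,) class indices
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))

model.train()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```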
Why bulk training hurts state and object classification
State classification models are trained on fixed camera crops, and object classification models are trained on tracked object crops. When you select many images that all look nearly identical—same lighting, same shadows, same background noise—you are not teaching the model what defines a state or class. You are teaching it what that exact moment looked like.
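The difference between the two crop types can be illustrated like this; the coordinates and bounding box are hypothetical:

```python
import numpy as np

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # a decoded camera frame (H, W, C)

# State classification: a *fixed* region of the camera view, e.g. where a gate sits.
# The coordinates are part of the configuration and never change.
x, y, w, h = 1200, 400, 300, 300  # hypothetical fixed crop
state_crop = frame[y:y + h, x:x + w]

# Object classification: the bounding box of a tracked object, which moves frame to frame.
box = {"xmin": 640, "ymin": 310, "xmax": 820, "ymax": 700}  # hypothetical detection
object_crop = frame[box["ymin"]:box["ymax"], box["xmin"]:box["xmax"]]
```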
This often leads to models that look highly accurate on the images they were trained on, but flip or misclassify as soon as lighting, weather, or background conditions change. Because MobileNetV2 learns visual patterns directly from pixel data, a lack of intra-class variation causes the model to rely on fragile cues that don't generalize well.
Best practices for state & object classification
Start with the images the wizard offers, keep each class small and varied, and add new examples from the Recent Classifications view as conditions change. In most cases, models begin working well with surprisingly few examples and improve naturally over time.
The wizard is just the starting point
The classification wizard is designed to get a model up and running quickly using readily available data. It is not meant to force you to find and label every possible state or class upfront.
Going back through historical recordings to locate rare conditions would add significant UI and workflow complexity for limited benefit, which is why we've chosen not to implement it.
Instead, Frigate is designed to work iteratively: train on the examples that are available today, then label new states or classes in the Recent Classifications view as they appear and retrain the model.
FAQ
Why does my model score 100% but still behave incorrectly?
A 100% score only means the model is very confident based on what it has already learned. If the training data lacked diversity, the model may be confidently wrong under new conditions. This is a common sign of overfitting. Delete some images from your classes and start again with the suggestions above.
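A tiny illustration of why high confidence is not the same as correctness: a softmax output sits near 100% whenever one raw score is much larger than the others, even if the model keyed on fragile cues learned from narrow training data. The numbers here are made up:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

# A model trained only on sunny daytime images sees a rainy night frame.
# It has never learned features for this condition, but the few cues it does
# key on can still push one raw score far above the others...
logits = np.array([9.2, 1.1, 0.3])  # hypothetical raw outputs for 3 classes
print(softmax(logits))              # ~[0.9996, 0.0003, 0.0001] -> "100%", yet possibly wrong
```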
Why does my state classification flip between states at night or during weather changes?
This usually means the model has not been trained on those conditions. Nighttime, rain, snow, or seasonal lighting changes often introduce visual patterns the model has never seen. Adding a small number of representative examples from those conditions usually stabilizes the model.
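One way to collect such examples (purely illustrative; the paths and hours are placeholders) is to filter exported snapshots by capture time and then label a small, varied handful of the nighttime ones:

```python
from datetime import datetime
from pathlib import Path

# Hypothetical folder of exported snapshots for one camera
snapshot_dir = Path("/path/to/exported/snapshots")

def is_night(ts: float, start_hour: int = 21, end_hour: int = 5) -> bool:
    """True if the file's modification time falls in the late-night window."""
    hour = datetime.fromtimestamp(ts).hour
    return hour >= start_hour or hour < end_hour

night_images = [
    p for p in sorted(snapshot_dir.glob("*.jpg"))
    if is_night(p.stat().st_mtime)
]
# Label a small, varied handful of these and retrain -- a few representative
# examples are usually enough to stabilize nighttime classifications.
print(night_images[:10])
```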
Why don’t I see all states or classes during the wizard?
The wizard only shows a sampling of images that exist in your recordings. Some states or classes may not have occurred yet. This is expected. Don't worry about finding images for every class as you progress through the wizard. If something is obvious, then select it. If you're unsure, you can leave it unselected.
Why don’t I see updates coming into the Recent Classifications tab? I've set up an interval in my config, shouldn't I be seeing them every n seconds?
Images are not saved when the state is stable (the detected state matches the current state) and the score is 100%. This prevents unnecessary storage of redundant, high-confidence classifications. There are a number of other conditions as well; see the official docs.
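In pseudocode terms, the stable-state rule described above looks roughly like this (a sketch only, not Frigate's actual implementation; other conditions also apply, see the official docs):

```python
def should_save_classification(detected_state: str, current_state: str, score: float) -> bool:
    """Skip saving when the state is stable and the model is fully confident."""
    state_is_stable = detected_state == current_state
    if state_is_stable and score >= 1.0:
        return False  # nothing new to learn from a redundant, high-confidence result
    return True       # other conditions also apply in practice; see the official docs

# Example: a stable state at 100% confidence is not written to Recent Classifications
print(should_save_classification("gate_closed", "gate_closed", 1.0))  # False
print(should_save_classification("gate_open", "gate_closed", 0.87))   # True
```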
How do I train missing states or classes later?
Use the Recent Classifications view. When a new state or class appears, label it there and retrain the model. These images are often more valuable than those shown during the initial wizard.
Official Frigate documentation
Find more tips, suggestions, and configuration options in the official documentation.
If this has been a helpful post, give it an upvote and thumbs up. If you have any questions, leave them down below for us!
— The Frigate dev team (Blake @blakeblackshear, Nick @NickM-27, Josh @hawkeye217)