[Feature Request] Add ability to filter images with annotation vs. model prediction mismatches #448
-
Hi Mathieu, have you tried performing a model test with your model on your dataset? You can find the documentation for this here. The left bucket will show images where your model had a lower accuracy than the threshold selected via the slider. One thing that we have considered (and are working towards) is being able to show the annotations and/or predictions on top of the media shown on the media page. Let us know if this is something you'd like us to prioritize.
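To illustrate the bucketing behaviour described above, here is a minimal sketch in plain Python (not taken from the Geti code base): it assumes a hypothetical per-image accuracy score and splits images around a slider threshold the same way the model-test screen's left bucket does.

```python
from dataclasses import dataclass

@dataclass
class TestImage:
    """Hypothetical record for one image in a model test; field names are illustrative."""
    name: str
    score: float  # per-image accuracy reported by the model test

def split_by_threshold(images: list[TestImage], threshold: float):
    """Split images into the 'left bucket' (below threshold) and the rest."""
    below = [img for img in images if img.score < threshold]
    at_or_above = [img for img in images if img.score >= threshold]
    return below, at_or_above

images = [TestImage("img_001.jpg", 0.42), TestImage("img_002.jpg", 0.93)]
low, high = split_by_threshold(images, threshold=0.5)
print([img.name for img in low])  # images the left bucket would surface
```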
-
Makes sense! One quick thing we can do is change that edit button to a link, so that you can open the annotator for that image in a new tab. This wouldn't be a great UX, but it might help if you don't have too many incorrect images. I think we can also look into making this filter accessible from the annotator view. This will require more work though (updating both the UI and REST APIs).
-
Thanks for sharing this feedback, Mathieu. In addition to the quick workaround that Mark mentioned, we will discuss with the team how we can enable users to (1) easily filter images where there is a mismatch between the ground truth and the model's predictions, and (2) review and adjust those annotations while preserving the filter settings, to avoid the back-and-forth. We will get back to you when we have updates on the plan.
-
Hi Mathieu. Coming back to this feature request: we discussed different short-term approaches, and I would like to get your thoughts on a proposal for a UX that lets you review ground truth vs. prediction mismatches without the current back-and-forth. As Mark proposed earlier, the best way to achieve this now is to preserve the filter for the incorrect images when you click 'Edit' from the Test screen, so that you can review and edit all images whose ground truth doesn't match the model's prediction directly in the Annotator view. I added a draft visual below to illustrate this. Let us know if this improvement would help you out.
-
✨ Is your feature request related to a problem? Please describe.
Yes. When reviewing datasets on the Intel® Geti™ platform, it's currently difficult to identify where user-provided annotations differ from the model’s predictions. This makes it harder to detect annotation errors or cases where the model struggles.
💡 Describe the solution you'd like
I'd like a filtering option in the dataset view to highlight or isolate images where there is a mismatch between the ground truth (user annotations) and the model's predictions. Ideally, this would support filtering for false positives, false negatives, and label mismatches.
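To make the three categories concrete, below is a minimal, hypothetical sketch in plain Python (not Geti code, and not the Geti SDK) of how an image-level mismatch filter could classify each image by comparing its ground-truth label set with its predicted label set; the data structures and labels are assumptions for illustration only.

```python
def classify_mismatch(gt_labels: set, pred_labels: set):
    """Return a mismatch category, or None when annotation and prediction agree."""
    if gt_labels == pred_labels:
        return None
    if not gt_labels and pred_labels:
        return "false_positive"   # model predicts labels on an unannotated image
    if gt_labels and not pred_labels:
        return "false_negative"   # annotation has labels the model missed
    return "label_mismatch"       # both sides have labels, but the sets differ

# Hypothetical dataset: image name -> (annotation labels, predicted labels)
dataset = {
    "img_001.jpg": ({"cat"}, {"cat"}),
    "img_002.jpg": ({"dog"}, {"cat"}),
    "img_003.jpg": (set(), {"dog"}),
}

mismatched = {}
for name, (gt, pred) in dataset.items():
    category = classify_mismatch(gt, pred)
    if category is not None:
        mismatched[name] = category

print(mismatched)  # {'img_002.jpg': 'label_mismatch', 'img_003.jpg': 'false_positive'}
```

A filter built on something like this would let the dataset view isolate only the mismatched images, optionally narrowed to a single category.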
🔄 Describe alternatives you've considered
Manually comparing annotations and predictions image-by-image, which is inefficient and not scalable for large datasets.
📄 Additional context
This feature would be extremely useful for:
- Identifying mislabelled data.
- Improving annotation quality.
- Investigating model failure modes.
- Streamlining active learning workflows.
Thanks for considering this!