|
31 | 31 | ) |
32 | 32 |
|
33 | 33 | LONG_DESCRIPTION = """ |
34 | | -Combine results of detection model with classification results performed separately for |
35 | | -each and every bounding box. |
| 34 | +Replace class labels of detection bounding boxes with classes predicted by a classification model applied to cropped regions of those boxes. Combining generic detection results with specialized classification predictions enables two-stage detection, fine-grained classification, and class-refinement workflows where generic detections are refined with specific labels from specialized classifiers.
36 | 35 |
|
37 | | -Bounding boxes without top class predicted by classification model are discarded, |
38 | | -for multi-label classification results, most confident label is taken as bounding box |
39 | | -class. |
| 36 | +## How This Block Works |
| 37 | +
|
| 38 | +This block combines results from a detection model (with bounding boxes and generic classes) with classification predictions (from a specialized classifier applied to cropped regions) to replace generic class labels with specific ones. The block: |
| 39 | +
|
| 40 | +1. Receives two inputs with different dimensionality levels: |
| 41 | + - `object_detection_predictions`: Detection results (dimensionality level 1) containing bounding boxes with generic classes (e.g., "dog", "person", "vehicle") |
| 42 | + - `classification_predictions`: Classification results (dimensionality level 2) from a classifier applied to cropped regions of each detection (e.g., "Golden Retriever", "Labrador" for dog detections) |
| 43 | +2. Matches classifications to detections: |
| 44 | + - Uses `PARENT_ID_KEY` (detection_id) in classification predictions to link each classification result to its source detection |
| 45 | + - Creates a mapping from detection IDs to classification results |
| 46 | +3. Extracts leading class from each classification prediction: |
| 47 | +
|
| 48 | + **For single-label classifications:** |
| 49 | + - Uses the "top" class (predicted class) from the classification result |
| 50 | + - Extracts class name, class ID, and confidence from the classification prediction |
| 51 | +
|
| 52 | + **For multi-label classifications:** |
| 53 | + - Finds the class with the highest confidence score |
| 54 | + - Uses the most confident label as the replacement class |
| 55 | + - Extracts class name, class ID, and confidence from the highest-confidence prediction |
| 56 | +
|
| 57 | +4. Handles missing classifications: |
| 58 | + - Detections without corresponding classification predictions are discarded by default |
| 59 | + - If `fallback_class_name` is provided, detections without classifications use the fallback class instead of being discarded |
| 60 | + - Fallback class ID is set to the provided value, or `sys.maxsize` if not specified or negative |
| 61 | +5. Filters detections: |
| 62 | + - Keeps only detections that have classification results (or fallback if specified) |
| 63 | + - Removes detections that cannot be matched to classification predictions |
| 64 | +6. Replaces class information: |
| 65 | + - Replaces class names in detections with classification class names |
| 66 | + - Replaces class IDs in detections with classification class IDs |
| 67 | + - Replaces confidence scores in detections with classification confidence scores |
| 68 | + - Updates all detection metadata to reflect the new class information |
| 69 | +7. Generates new detection IDs: |
| 70 | + - Creates new unique detection IDs for updated detections (prevents ID conflicts) |
| 71 | + - Ensures detection IDs are unique after class replacement |
| 72 | +8. Returns updated detections: |
| 73 | + - Outputs detections with replaced classes, maintaining bounding box coordinates and other properties |
| 74 | + - Output dimensionality matches input detection predictions (dimensionality level 1) |
| 75 | +
|
| 76 | +The block enables two-stage detection workflows where a generic detection model locates objects and a specialized classification model provides fine-grained labels. This is useful when you need generic localization (e.g., "dog") combined with specific classification (e.g., "Golden Retriever", "German Shepherd") without losing spatial information. |
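The matching, leading-class selection, and fallback steps described above can be sketched as follows. This is a minimal illustration using plain dicts (`detection_id`, `parent_id`, `top`, `predictions` keys are simplified stand-ins for the block's actual `sv.Detections` payload and prediction structures, not its real implementation):

```python
import sys


def replace_detection_classes(
    detections, classifications, fallback_class_name=None, fallback_class_id=None
):
    """Illustrative sketch of the class-replacement logic.

    `detections`: list of dicts with "detection_id", "class_name",
    "class_id", "confidence". `classifications`: list of dicts with
    "parent_id" plus either a single-label "top"/"class_id"/"confidence"
    trio or a multi-label "predictions" list.
    """
    # Step 2: map classification results to their source detections.
    by_parent = {c["parent_id"]: c for c in classifications}
    updated = []
    for det in detections:
        cls = by_parent.get(det["detection_id"])
        if cls is None:
            # Steps 4-5: discard unless a fallback class is configured.
            if fallback_class_name is None:
                continue
            name = fallback_class_name
            class_id = (
                fallback_class_id
                if fallback_class_id is not None and fallback_class_id >= 0
                else sys.maxsize
            )
            confidence = det["confidence"]
        elif "top" in cls:
            # Step 3 (single-label): take the "top" class.
            name, class_id, confidence = cls["top"], cls["class_id"], cls["confidence"]
        else:
            # Step 3 (multi-label): take the most confident label.
            best = max(cls["predictions"], key=lambda p: p["confidence"])
            name, class_id, confidence = (
                best["class_name"], best["class_id"], best["confidence"]
            )
        # Step 6: replace class metadata, keeping box geometry intact.
        updated.append(
            {**det, "class_name": name, "class_id": class_id, "confidence": confidence}
        )
    return updated
```

A detection whose ID has no matching classification is dropped when no fallback is set, which mirrors the default discarding behavior documented above.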
| 77 | +
|
| 78 | +## Common Use Cases |
| 79 | +
|
| 80 | +- **Two-Stage Detection and Classification**: Combine generic detection with specialized classification for fine-grained labeling (e.g., detect "dog" then classify breed, detect "vehicle" then classify type, detect "person" then classify age group)
| 81 | +- **Class Refinement**: Refine generic class labels with specific classifications from specialized models (e.g., refine "animal" to a specific species, "vehicle" to a specific model, or "food" to a specific dish)
| 82 | +- **Multi-Model Workflows**: Combine detection and classification models to leverage the strengths of both (e.g., use a generic detector for localization and a specialist classifier for identification)
| 83 | +- **Hierarchical Classification**: Apply hierarchical classification where detection provides high-level classes and classification provides detailed sub-classes (e.g., detect "mammal" then classify species, detect "plant" then classify variety)
| 84 | +- **Crop-Based Classification**: Use classification results from cropped regions to enhance detection results (e.g., apply specialized classifiers to detected regions to improve detection labels)
| 85 | +- **Fine-Grained Object Recognition**: Combine localization with detailed classification (e.g., recognize specific product models, identify animal breeds, classify vehicle types)
| 86 | +
|
| 87 | +## Connecting to Other Blocks |
| 88 | +
|
| 89 | +This block receives detection and classification predictions and produces detections with replaced classes: |
| 90 | +
|
| 91 | +- **After detection and classification model blocks** to combine generic detection with specialized classification (e.g., object detection plus a chained classifier producing refined detections)
| 92 | +- **After crop blocks** that create crops from detections for classification (e.g., crop detections, classify the crops, then replace classes)
| 93 | +- **Before visualization blocks** to display detections with their refined class labels
| 94 | +- **Before filtering blocks** to filter detections by their refined classes
| 95 | +- **Before analytics blocks** to perform analytics on refined detections (e.g., track metrics per specific class)
| 96 | +- **In workflow outputs** to provide refined detections as the final result of a two-stage workflow
| 97 | +
|
| 98 | +## Requirements |
| 99 | +
|
| 100 | +This block requires object detection predictions (with bounding boxes) and classification predictions from crops of those bounding boxes. The classification predictions must have `PARENT_ID_KEY` (detection_id) to link classifications to their source detections. The block accepts different dimensionality levels: detection predictions at level 1 and classification predictions at level 2 (from crops). For single-label classifications, the "top" class is used. For multi-label classifications, the most confident class is selected. Detections without classification results are discarded unless `fallback_class_name` is provided. The block outputs detections with replaced classes, class IDs, and confidences, with new detection IDs generated. Output dimensionality matches input detection predictions (level 1). |
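To make the wiring concrete, here is a sketch of a workflow definition that chains a detector, a crop step, and a classifier into this block. Step `type` identifiers, model IDs, and selector paths are illustrative assumptions, not the exact registry names:

```python
# Hypothetical workflow definition: detector -> crops -> classifier ->
# class replacement. All step type names and model IDs below are
# placeholders for illustration only.
workflow = {
    "steps": [
        {
            "type": "ObjectDetectionModel",
            "name": "detector",
            "image": "$inputs.image",
            "model_id": "yolov8n-640",  # assumed generic detector
        },
        {
            "type": "Crop",
            "name": "crops",
            "image": "$inputs.image",
            "predictions": "$steps.detector.predictions",
        },
        {
            "type": "ClassificationModel",
            "name": "breed_classifier",
            "image": "$steps.crops.crops",
            "model_id": "dog-breeds/1",  # assumed specialist classifier
        },
        {
            "type": "DetectionsClassesReplacement",
            "name": "class_replacement",
            # Level-1 detections whose classes will be replaced:
            "object_detection_predictions": "$steps.detector.predictions",
            # Level-2 classifications linked to detections via detection_id:
            "classification_predictions": "$steps.breed_classifier.predictions",
            # Keep unmatched detections instead of discarding them:
            "fallback_class_name": "unknown",
        },
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "predictions",
            "selector": "$steps.class_replacement.predictions",
        }
    ],
}
```

Note how the replacement step consumes the detector's predictions directly (not the crops) while the classifier consumes the crops; the block re-joins the two streams by detection ID.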
40 | 101 | """ |
41 | 102 |
|
42 | 103 | SHORT_DESCRIPTION = "Replace classes of detections with classes predicted by a chained classification model." |
@@ -70,26 +131,31 @@ class BlockManifest(WorkflowBlockManifest): |
70 | 131 | ] |
71 | 132 | ) = Field( |
72 | 133 | title="Regions of Interest", |
73 | | - description="The output of a detection model describing the bounding boxes that will have classes replaced.", |
74 | | - examples=["$steps.my_object_detection_model.predictions"], |
| 134 | + description="Detection predictions (object detection, instance segmentation, or keypoint detection) containing bounding boxes with generic class labels that will be replaced with classification results. These detections should correspond to the regions that were cropped and classified. Detections must have detection IDs that match the PARENT_ID_KEY in classification predictions. Detections at dimensionality level 1.", |
| 135 | + examples=[ |
| 136 | + "$steps.object_detection_model.predictions", |
| 137 | + "$steps.instance_segmentation_model.predictions", |
| 138 | + ], |
75 | 139 | ) |
76 | 140 | classification_predictions: Selector(kind=[CLASSIFICATION_PREDICTION_KIND]) = Field( |
77 | 141 | title="Classification results for crops", |
78 | | - description="The output of classification model for crops taken based on RoIs pointed as the other parameter", |
79 | | - examples=["$steps.my_classification_model.predictions"], |
| 142 | + description="Classification predictions from a classifier applied to cropped regions of the detections. Each classification result must have PARENT_ID_KEY (detection_id) linking it to its source detection. Supports both single-label (uses 'top' class) and multi-label (uses most confident class) classifications. Classification results at dimensionality level 2 (one classification per crop/detection).", |
| 143 | + examples=[ |
| 144 | + "$steps.classification_model.predictions", |
| 145 | + "$steps.breed_classifier.predictions", |
| 146 | + ], |
80 | 147 | ) |
81 | 148 | fallback_class_name: Union[Optional[str], Selector(kind=[STRING_KIND])] = Field( |
82 | 149 | default=None, |
83 | 150 | title="Fallback class name", |
84 | | - description="The class name to be used as a fallback if no class is predicted for a bounding box", |
85 | | - examples=["unknown"], |
| 151 | + description="Optional class name to use for detections that don't have corresponding classification predictions. If not provided (default None), detections without classifications are discarded. If provided, detections without classifications use this fallback class name instead of being removed. Useful for preserving detections when classification fails or is unavailable.", |
| 152 | + examples=[None, "unknown", "unclassified"], |
86 | 153 | ) |
87 | 154 | fallback_class_id: Union[Optional[int], Selector(kind=[INTEGER_KIND])] = Field( |
88 | 155 | default=None, |
89 | 156 | title="Fallback class id", |
90 | | - description="The class id to be used as a fallback if no class is predicted for a bounding box;" |
91 | | - f"if not specified or negative, the class id will be set to {sys.maxsize}", |
92 | | - examples=[77], |
| 157 | + description="Optional class ID to use with fallback_class_name for detections without classification predictions. If not specified or negative, the class ID is set to sys.maxsize. Only used when fallback_class_name is provided. Should match the class ID mapping used in your model.", |
| 158 | + examples=[None, 77, 999], |
93 | 159 | ) |
94 | 160 |
|
95 | 161 | @classmethod |
|