diff --git a/docs/docs/Feed-Forward-Guide.md b/docs/docs/Feed-Forward-Guide.md
index a9548f6c66e7..5385d5a69dde 100644
--- a/docs/docs/Feed-Forward-Guide.md
+++ b/docs/docs/Feed-Forward-Guide.md
@@ -310,3 +310,25 @@ a 1-to-1 correspondence with a MOG motion track.
Refer to `runMogThenOcvFaceFeedForwardRegionTest()` in the
[`TestSystemOnDiff`](https://github.com/openmpf/openmpf/blob/master/trunk/mpf-system-tests/src/test/java/org/mitre/mpf/mst/TestSystemOnDiff.java)
class for a system test that demonstrates this behavior.
+
+
+# Feed Forward All Tracks
+
+
+EXPERIMENTAL: This feature is not fully implemented.
+
+The default feed-forward behavior results in one sub-job per track generated in the previous stage. Consider a
+scenario where you need to implement a tracking component that takes individual detections from a stage and groups
+them into tracks. That component needs to accept all of the tracks from the previous stage as input to the same
+sub-job.
+
+Setting `FEED_FORWARD_ALL_TRACKS` to true results in one sub-job that contains all of the tracks generated in the
+previous stage. Refer to the
+[component.get_detections_from_all_video_tracks(video_job)](Python-Batch-Component-API.md#componentget_detections_from_all_video_tracksvideo_job)
+section of the Python Batch Component API for more details. This property works in conjunction with the other
+feed-forward properties discussed in the [Feed Forward Properties](#feed-forward-properties) section.
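+
+For example, a hypothetical tracking action might combine `FEED_FORWARD_ALL_TRACKS` with the other feed-forward
+properties as shown below. The action name, algorithm name, and property values are illustrative only:
+
+```
+EXAMPLE TRACKING (WITH FEED FORWARD ALL TRACKS) ACTION
++ Algorithm: EXAMPLETRACKER
++ FEED_FORWARD_TYPE: SUPERSET_REGION
++ FEED_FORWARD_ALL_TRACKS: TRUE
+```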
+
+Known limitations:
+
+- Only Python supported.
+- Only video supported.
+- Not tested with [triggers](Trigger-Guide.md).
diff --git a/docs/docs/Python-Batch-Component-API.md b/docs/docs/Python-Batch-Component-API.md
index 7502567d6c54..e44a5f8c26be 100644
--- a/docs/docs/Python-Batch-Component-API.md
+++ b/docs/docs/Python-Batch-Component-API.md
@@ -646,7 +646,7 @@ a static method, or a class method.
#### mpf_component_api.VideoJob
-Class containing data used for detection of objects in a video file.
+Class containing data used for detection of objects in a video file. Contains at most one feed-forward track.
* Members:
@@ -713,7 +713,7 @@ Class containing data used for detection of objects in a video file.
feed_forward_track |
None or mpf_component_api.VideoTrack |
- An mpf_component_api.VideoTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide. |
+ An optional mpf_component_api.VideoTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide. |
@@ -733,6 +733,65 @@ they should only be used to specify properties that will not change throughout t
of the service (e.g. Docker container).
+#### component.get_detections_from_all_video_tracks(video_job)
+
+EXPERIMENTAL: This feature is not fully implemented.
+
+Similar to `component.get_detections_from_video(video_job)`, but able to process multiple feed-forward tracks at once.
+Refer to the [Feed Forward All Tracks](Feed-Forward-Guide.md#feed-forward-all-tracks) section of the Feed Forward Guide
+to learn about the `FEED_FORWARD_ALL_TRACKS` property and how it affects feed-forward behavior.
+
+Known limitation: No multi-track `mpf_component_util.VideoCapture` support.
+
+* Method Definition:
+```python
+class MyComponent:
+    def get_detections_from_all_video_tracks(self, video_job):
+        return [mpf_component_api.VideoTrack(...), ...]
+```
+
+`get_detections_from_all_video_tracks`, like all get_detections_from_\* methods, can be implemented either as an
+instance method, a static method, or a class method.
+
+* Parameters:
+
+| Parameter | Data Type | Description |
+|-----------|---------------------------------------|-------------|
+| video_job | `mpf_component_api.AllVideoTracksJob` | Object containing details about the work to be performed. |
+
+* Returns: An iterable of `mpf_component_api.VideoTrack`
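+
+As a sketch of how a component might consume an `mpf_component_api.AllVideoTracksJob`, the hypothetical example below
+pools the `frame_locations` of every feed-forward track and groups them into a single output track. The class name and
+grouping strategy are illustrative; a real tracking component would substitute its own association logic:
+
+```python
+import mpf_component_api as mpf
+
+
+class MyTrackingComponent:
+
+    @staticmethod
+    def get_detections_from_all_video_tracks(video_job):
+        feed_forward_tracks = video_job.feed_forward_tracks or []
+
+        # Pool every fed-forward detection, keyed by frame number. In this
+        # simplified sketch, if two tracks contain a detection in the same
+        # frame, the later track's detection wins.
+        all_locations = {}
+        for track in feed_forward_tracks:
+            all_locations.update(track.frame_locations)
+
+        if not all_locations:
+            return []
+
+        # Trivial grouping: emit one track spanning the earliest and latest
+        # fed-forward frames. A real tracker would apply its own logic here.
+        return [mpf.VideoTrack(
+            min(all_locations),
+            max(all_locations),
+            frame_locations=all_locations)]
+```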
+
+
+#### mpf_component_api.AllVideoTracksJob
+
+EXPERIMENTAL: This feature is not fully implemented.
+
+Class containing data used for detection of objects in a video file. May contain multiple feed-forward tracks.
+
+Members are the same as `mpf_component_api.VideoJob` with the exception that `feed_forward_track` is replaced by
+`feed_forward_tracks`.
+
+* Members:
+
+<table>
+  <thead>
+    <tr>
+      <th>Member</th>
+      <th>Data Type</th>
+      <th>Description</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>feed_forward_tracks</td>
+      <td><code>None</code> or <code>List[mpf_component_api.VideoTrack]</code></td>
+      <td>An optional list of <code>mpf_component_api.VideoTrack</code> objects from the previous pipeline stage. Provided when feed forward is enabled and <code>FEED_FORWARD_ALL_TRACKS</code> is true. See Feed Forward Guide.</td>
+    </tr>
+  </tbody>
+</table>
+
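+Because `AllVideoTracksJob` replaces `feed_forward_track` with `feed_forward_tracks`, a component that implements both
+`get_detections_from_video` and `get_detections_from_all_video_tracks` may want to normalize the two shapes. The helper
+below is a hypothetical sketch based only on the member names documented above:
+
+```python
+def _feed_forward_tracks_as_list(video_job):
+    # mpf_component_api.AllVideoTracksJob: feed_forward_tracks is None or a
+    # list of VideoTrack objects.
+    tracks = getattr(video_job, 'feed_forward_tracks', None)
+    if tracks is not None:
+        return list(tracks)
+    # mpf_component_api.VideoJob: feed_forward_track is None or a single
+    # VideoTrack.
+    track = getattr(video_job, 'feed_forward_track', None)
+    return [track] if track is not None else []
+```
+
+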
#### mpf_component_api.VideoTrack
Class used to store the location of detected objects in a video file.
diff --git a/docs/site/Feed-Forward-Guide/index.html b/docs/site/Feed-Forward-Guide/index.html
index 0da4f9854a64..45f0e6f297e5 100644
--- a/docs/site/Feed-Forward-Guide/index.html
+++ b/docs/site/Feed-Forward-Guide/index.html
@@ -124,6 +124,9 @@
Feed Forward Pipeline Examples
+ Feed Forward All Tracks
+
+
@@ -523,6 +526,24 @@ Feed Forward Pipeline Examples
Refer to runMogThenOcvFaceFeedForwardRegionTest()
in the
TestSystemOnDiff
class for a system test that demonstrates this behavior.
+Feed Forward All Tracks
+EXPERIMENTAL: This feature is not fully implemented.
+
+The default feed-forward behavior will result in generating one sub-job per track generated in the previous stage.
+Consider a scenario where you need to implement a tracking component that takes individual detections from a stage and
+groups them into tracks. That component needs to accept all tracks from the previous stage as an input to the same
+sub-job.
+Setting FEED_FORWARD_ALL_TRACKS
to true will result in generating one sub-job that contains all the tracks generated
+in the previous stage. Refer to the
+component.get_detections_from_all_video_tracks(video_job)
+section of the Python Batch Component API for more details. This property works in conjunction with the other
+feed-forward properties discussed in the Feed Forward Properties section.
+Known limitations:
+
+- Only Python supported.
+- Only video supported.
+- Not tested with triggers.
+
diff --git a/docs/site/Python-Batch-Component-API/index.html b/docs/site/Python-Batch-Component-API/index.html
index 620cbec38f64..2f03183ac4ce 100644
--- a/docs/site/Python-Batch-Component-API/index.html
+++ b/docs/site/Python-Batch-Component-API/index.html
@@ -881,7 +881,7 @@ component.get_detections_fr
Returns: An iterable of mpf_component_api.VideoTrack
mpf_component_api.VideoJob
-Class containing data used for detection of objects in a video file.
+Class containing data used for detection of objects in a video file. Contains at most one feed-forward track.
@@ -948,7 +948,7 @@ mpf_component_api.VideoJob
feed_forward_track |
None or mpf_component_api.VideoTrack |
- An mpf_component_api.VideoTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide. |
+ An optional mpf_component_api.VideoTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide. |
@@ -967,6 +967,70 @@ mpf_component_api.VideoJob
possible to change the value of properties set via environment variables at runtime and therefore
they should only be used to specify properties that will not change throughout the entire lifetime
of the service (e.g. Docker container).
+component.get_detections_from_all_video_tracks(video_job)
+EXPERIMENTAL: This feature is not fully implemented.
+
+Similar to component.get_detections_from_video(video_job)
, but able to process multiple feed-forward tracks at once.
+Refer to the Feed Forward All Tracks section of the Feed Forward Guide
+to learn about the FEED_FORWARD_ALL_TRACKS
property and how it affects feed-forward behavior.
+Known limitation: No multi-track mpf_component_util.VideoCapture
support.
+
+class MyComponent:
+    def get_detections_from_all_video_tracks(self, video_job):
+        return [mpf_component_api.VideoTrack(...), ...]
+
+get_detections_from_all_video_tracks
, like all get_detections_from_* methods, can be implemented either as an
+instance method, a static method, or a class method.
+
+
+
+
+Parameter |
+Data Type |
+Description |
+
+
+
+
+video_job |
+mpf_component_api.AllVideoTracksJob |
+Object containing details about the work to be performed. |
+
+
+
+
+- Returns: An iterable of
mpf_component_api.VideoTrack
+
+mpf_component_api.AllVideoTracksJob
+EXPERIMENTAL: This feature is not fully implemented.
+
+Class containing data used for detection of objects in a video file. May contain multiple feed-forward tracks.
+Members are the same as mpf_component_api.VideoJob
with the exception that feed_forward_track
is replaced by
+feed_forward_tracks
.
+
+
+
+
+ Member |
+ Data Type |
+ Description |
+
+
+
+
+ feed_forward_tracks |
+ None or List[mpf_component_api.VideoTrack] |
+ An optional list of mpf_component_api.VideoTrack objects from the previous pipeline stage. Provided when feed forward is enabled and FEED_FORWARD_ALL_TRACKS is true. See Feed Forward Guide. |
+
+
+
+
mpf_component_api.VideoTrack
Class used to store the location of detected objects in a video file.
diff --git a/docs/site/index.html b/docs/site/index.html
index 528c29c93d47..28e22a0867c2 100644
--- a/docs/site/index.html
+++ b/docs/site/index.html
@@ -408,5 +408,5 @@ Overview
diff --git a/docs/site/search/search_index.json b/docs/site/search/search_index.json
index 092d13157363..f781857d289d 100644
--- a/docs/site/search/search_index.json
+++ b/docs/site/search/search_index.json
@@ -362,7 +362,7 @@
},
{
"location": "/Feed-Forward-Guide/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nFeed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be\ndirectly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways:\n\n\n\n\n\n\nThe next stage will only look at the frames that had detections in the previous stage. The default segmenting\n behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end\n frames of the feed forward track, regardless of whether a detection was actually found in those frames.\n\n\n\n\n\n\nThe next stage can be configured to only look at the detection regions for the frames in the feed forward track. The\n default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks\n at the whole frame region for every frame in the segment.\n\n\n\n\n\n\nThe next stage will process one sub-job per track generated in the previous stage. If the previous stage generated\n more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed\n forward can be configured such that only the detection regions for those tracks are processed. If they are\n non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that\n captures the frame associated with all 3 tracks.\n\n\n\n\n\n\nMotivation\n\n\nConsider using feed forward for the following reasons:\n\n\n\n\n\n\nYou have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face\n detection can take a whole frame and generate a separate detection region for each face in the frame. On the other\n hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and\n generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce\n better results if it operates on smaller regions that only capture the desired object to be classified. Using feed\n forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.\n\n\n\n\n\n\nYou wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.\n For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which\n may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will\n speed up run times.\n\n\n\n\n\n\n\n\nNOTE:\n Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and\ncomponents than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the\noverhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be\nprocessed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.\n\n\n\n\nThe output of a feed forward pipeline is the intersection of each stage's output. 
For example, running a feed forward\npipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected\nin the first stage and a face was detected in the second stage.\n\n\nFirst Stage and Combining Properties\n\n\nWhen feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there\nis no track to feed in. In other words, the first stage will process the media file as though feed forward was not\nenabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take\nadvantage of the feed forward behavior.\n\n\n\n\nNOTE:\n When \nFEED_FORWARD_TYPE\n is set to anything other than \nNONE\n, the following properties will be ignored:\n\nFRAME_INTERVAL\n, \nUSE_KEY_FRAMES\n, \nSEARCH_REGION_*\n.\n\n\n\n\nIf you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure\nthat \nFEED_FORWARD_TYPE\n is set to \nNONE\n, or not specified, for the first stage. You can then configure each subsequent\nstage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from\nthe first stage, the subsequent stages will inherit the effects of those properties set on the first stage. \n\n\nFeed Forward Properties\n\n\nComponents that support feed forward have three algorithm properties that control the feed forward behavior:\n\nFEED_FORWARD_TYPE\n, \nFEED_FORWARD_TOP_QUALITY_COUNT\n, and \nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n.\n\n\nFEED_FORWARD_TYPE\n can be set to the following values:\n\n\n\n\nNONE\n: Feed forward is disabled (default setting).\n\n\nFRAME\n: For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored.\n\n\nSUPERSET_REGION\n: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the \nSuperset\n Region\n section for more details. For each detection in the feed forward track, search the superset\n region.\n\n\nREGION\n: For each detection in the feed forward track, search the exact detection region.\n\n\n\n\n\n\nNOTE:\n When using \nREGION\n, the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, \nREGION\n should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use\n\nSUPERSET_REGION\n instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track.\n\n\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed.\n\n\nWhen \nFEED_FORWARD_TOP_QUALITY_COUNT\n is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property\n\nQUALITY_SELECTION_PROPERTY\n, which defaults to \nCONFIDENCE\n, but may be set to a different detection property. Refer to\nthe \nQuality Selection Guide\n. 
If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.\n\n\nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n allows you to include detections based on properties in addition to those\nchosen with the \nQUALITY_SELECTION_PROPERTY\n. It consists of a string that is a semi-colon separated list of detection\nproperties. For example, you may want to use something other than \nCONFIDENCE\n for the \nQUALITY_SELECTION_PROPERTY\n, but\nyou also want to include the detection with the highest confidence in your feed-forward track. If the component executing\nin the first stage of the pipeline adds a \nBEST_CONFIDENCE\n property to the detection with highest confidence in each track,\nyou can then set the \nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n property to \nBEST_CONFIDENCE\n, and the detections with\nthat property will be added to the feed-forward track.\n\n\nSuperset Region\n\n\nA \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a\ntrack. This is also known as a \u201cunion\u201d or \n\u201cminimum bounding\nrectangle\"\n.\n\n\n\n\nFor example, consider a track representing a person moving from the upper left to the lower right. The track consists of\n3 frames that have the following detection regions:\n\n\n\n\nFrame 0: \n(x = 10, y = 10, width = 10, height = 10)\n\n\nFrame 1: \n(x = 15, y = 15, width = 10, height = 10)\n\n\nFrame 2: \n(x = 20, y = 20, width = 10, height = 10)\n\n\n\n\nEach detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame\nregion. The superset region for the track is \n(x = 10, y = 10, width = 20, height = 20)\n, and is drawn with a dotted red\nline.\n\n\nThe major advantage of using a superset region is constant size. Some algorithms require the search space in each frame\nto be a constant size in order to successfully track objects.\n\n\nA disadvantage is that the superset region will often be larger than any specific detection region, so the search space\nis not restricted to the smallest possible size in each frame; however, in many cases the search space will be\nsignificantly smaller than the whole frame.\n\n\nIn the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a\nvideo to the lower right corner. In that case the superset region will be the entire width and height of the frame, so\n\nSUPERSET_REGION\n devolves into \nFRAME\n.\n\n\nIn a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In\nthat case \nSUPERSET_REGION\n is able to filter out 75% of the rest of the frame data. In the example shown in the above\ndiagram, \nSUPERSET_REGION\n is able to filter out 83% of the rest of the frame data.\n\n\n\n \n\n \n\n \nYour browser does not support the embedded video tag.\n\n \nClick here to download the video.\n\n \n\n\n\n\n\nThe above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box\nthat does not. The inner bounding box represents the face detection in that frame, while the outer bounding box\nrepresents the superset region for the track associated with that face. Note that the bounding box for each face uses a\ndifferent color. 
The colors are not related to those used in the above diagram.\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\nWhen developing a component, the \nC++ Batch Component API\n and \nPython Batch\nComponent API\n include utilities that make it easier to support feed forward in\nyour components. They work similarly, but only the C++ tools will be discussed here. The \nMPFVideoCapture\n class is a\nwrapper around OpenCV's \ncv::VideoCapture\n class. \nMPFVideoCapture\n works very similarly to \ncv::VideoCapture\n, except\nthat it might modify the video frames based on job properties. From the point of view of someone using\n\nMPFVideoCapture\n, these modifications are mostly transparent. \nMPFVideoCapture\n makes it look like you are reading the\noriginal video file.\n\n\nConceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless\nthere was a detection in every frame) and possibly a smaller frame size.\n\n\nFor example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found\ndetections in frames 4, 7, and 10, then \nMPFVideoCapture\n will make it look like the video only has those 3 frames. If\nthe feed forward type is \nSUPERSET_REGION\n or \nREGION,\n and each detection is 30x50 pixels, then \nMPFVideoCapture\n will\nmake it look like the video's original resolution was 30x50 pixels.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the modified\nvideo, not the original. To make the detections relative to the original video the\n\nMPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)\n function must be used.\n\n\nThe general pattern for using \nMPFVideoCapture\n is as follows:\n\n\nstd::vector OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n\nstd::vector tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process frames and detections to tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n}\n\n\n\nMPFVideoCapture\n makes it look like the user is processing the original video, when in reality they are processing a\nmodified version. To avoid confusion, this means that \nMPFVideoCapture\n should always be returning frames that are the\nsame size because most users expect each frame of a video to be the same size.\n\n\nWhen using \nSUPERSET_REGION\n this is not an issue, since one bounding box is used for the entire track. However, when\nusing \nREGION\n, each detection can be a different size, so it is not possible for \nMPFVideoCapture\n to return frames\nthat are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of\n\nMPFVideoCapture\n, \nSUPERSET_REGION\n should usually be preferred over \nREGION\n. The \nREGION\n setting should only be used\nwith components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region\ntracking, so processing frames of various sizes is not a problem.\n\n\nThe \nMPFImageReader\n class is similar to \nMPFVideoCapture\n, but it works on images instead of videos. \nMPFImageReader\n\nmakes it look like the user is processing an original image, when in reality they are processing a modified version\nwhere the frame region is generated based on a detection (\nMPFImageLocation\n) fed forward from the previous stage of a\npipeline. 
Note that \nSUPERSET_REGION\n and \nREGION\n have the same effect when working with images. \nMPFImageReader\n also\nhas a reverse transform function.\n\n\nOpenCV DNN Component Tracking\n\n\nThe OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking\nbehavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process\nthe entire region of each frame of a video. If one or more consecutive frames has the same highest confidence\nclassification, then a new track is generated that contains those frames.\n\n\nWhen feed forward is enabled, the OpenCV DNN component will process the region of each frame of feed forward track\naccording to the \nFEED_FORWARD_TYPE\n. It will generate one track that contains the same frames as the feed forward\ntrack. If \nFEED_FORWARD_TYPE\n is set to \nREGION\n then the OpenCV DNN track will contain (inherit) the same detection\nregions as the feed forward track. In any case, the \ndetectionProperties\n map for the detections in the OpenCV DNN track\nwill include the \nCLASSIFICATION\n entries and possibly other OpenCV DNN component properties.\n\n\nFeed Forward Pipeline Examples\n\n\nGoogLeNet Classification with MOG Motion Detection and Feed Forward Region\n\n\nFirst, create the following action:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION\n\n\n\nThen create the following task:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nCAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n\n\n\nRunning this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN \nCLASSIFICATION\n entry. Each track has a 1-to-1 correspondence with a\nMOG motion track.\n\n\nRefer to \nrunMogThenCaffeFeedForwardExactRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior. Refer to \nrunMogThenCaffeFeedForwardSupersetRegionTest()\n in\nthat class for a system test that uses \nSUPERSET_REGION\n instead. Refer to \nrunMogThenCaffeFeedForwardFullFrameTest()\n\nfor a system test that uses \nFRAME\n instead.\n\n\n\n\nNOTE:\n Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the \nMERGE_TRACKS\n, \nMIN_GAP_BETWEEN_TRACKS\n, and \nMIN_TRACK_LENGTH\n properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks.\n\n\nNOTE:\n It doesn\u2019t make sense to use \nFEED_FORWARD_TOP_QUALITY_COUNT\n on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n(\nCONFIDENCE\n being the default value for the \nQUALITY_SELECTION_PROPERTY\n job property). 
Instead,\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested \nQUALITY_SELECTION_PROPERTY\n set.\n\n\n\n\nOCV Face Detection with MOG Motion Detection and Feed Forward Superset Region\n\n\nFirst, create the following action:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION\n\n\n\nThen create the following task:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nOCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n\n\n\nRunning this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has\na 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenOcvFaceFeedForwardRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior.",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nFeed forward is an optional behavior of OpenMPF that allows tracks from one detection stage of the pipeline to be\ndirectly \u201cfed into\u201d the next stage. It differs from the default segmenting behavior in the following major ways:\n\n\n\n\n\n\nThe next stage will only look at the frames that had detections in the previous stage. The default segmenting\n behavior results in \u201cfilling the gaps\u201d so that the next stage looks at all the frames between the start and end\n frames of the feed forward track, regardless of whether a detection was actually found in those frames.\n\n\n\n\n\n\nThe next stage can be configured to only look at the detection regions for the frames in the feed forward track. The\n default segmenting behavior does not pass the detection region information to the next stage, so the next stage looks\n at the whole frame region for every frame in the segment.\n\n\n\n\n\n\nThe next stage will process one sub-job per track generated in the previous stage. If the previous stage generated\n more than one track in a frame, say 3 tracks, then the next stage will process that frame a total of 3 times. Feed\n forward can be configured such that only the detection regions for those tracks are processed. If they are\n non-overlapping then there is no duplication of work. The default segmenting behavior will result in one sub-job that\n captures the frame associated with all 3 tracks.\n\n\n\n\n\n\nMotivation\n\n\nConsider using feed forward for the following reasons:\n\n\n\n\n\n\nYou have an algorithm that isn\u2019t capable of breaking down a frame into regions of interest. For example, face\n detection can take a whole frame and generate a separate detection region for each face in the frame. On the other\n hand, performing classification with the OpenCV Deep Neural Network (DNN) component will take that whole frame and\n generate a single detection that\u2019s the size of the frame\u2019s width and height. The OpenCV DNN component will produce\n better results if it operates on smaller regions that only capture the desired object to be classified. Using feed\n forward, you can create a pipeline so that OpenCV DNN component only processes regions with motion in them.\n\n\n\n\n\n\nYou wish to reduce processing time by creating a pipeline in which algorithms are chained from fastest to slowest.\n For example, a pipeline that starts with motion detection will only feed regions with motion to the next stage, which\n may be a compute-intensive face detection algorithm. Reducing the amount of data that algorithm needs to process will\n speed up run times.\n\n\n\n\n\n\n\n\nNOTE:\n Enabling feed forward results in more sub-jobs and more message passing between the Workflow Manager and\ncomponents than the default segmenting behavior. Generally speaking, the more feed forward tracks, the greater the\noverhead cost. The cost may be outweighed by how feed forward can \u201cfilter out\u201d pixel data that doesn\u2019t need to be\nprocessed. Often, the greater the media resolution, the more pixel data is filtered out, and the greater the benefit.\n\n\n\n\nThe output of a feed forward pipeline is the intersection of each stage's output. 
For example, running a feed forward\npipeline that contains a motion detector and a face detector will ultimately output detections where motion was detected\nin the first stage and a face was detected in the second stage.\n\n\nFirst Stage and Combining Properties\n\n\nWhen feed forward is enabled on a job, there is no change in behavior for the first stage of the pipeline because there\nis no track to feed in. In other words, the first stage will process the media file as though feed forward was not\nenabled. The tracks generated by the first stage will be passed to the second stage which will then be able to take\nadvantage of the feed forward behavior.\n\n\n\n\nNOTE:\n When \nFEED_FORWARD_TYPE\n is set to anything other than \nNONE\n, the following properties will be ignored:\n\nFRAME_INTERVAL\n, \nUSE_KEY_FRAMES\n, \nSEARCH_REGION_*\n.\n\n\n\n\nIf you wish to use the above properties, then you can configure them for the first stage of the pipeline, making sure\nthat \nFEED_FORWARD_TYPE\n is set to \nNONE\n, or not specified, for the first stage. You can then configure each subsequent\nstage to use feed forward. Because only the frames with detections, and those detection regions, are passed forward from\nthe first stage, the subsequent stages will inherit the effects of those properties set on the first stage. \n\n\nFeed Forward Properties\n\n\nComponents that support feed forward have three algorithm properties that control the feed forward behavior:\n\nFEED_FORWARD_TYPE\n, \nFEED_FORWARD_TOP_QUALITY_COUNT\n, and \nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n.\n\n\nFEED_FORWARD_TYPE\n can be set to the following values:\n\n\n\n\nNONE\n: Feed forward is disabled (default setting).\n\n\nFRAME\n: For each detection in the feed forward track, search the entire frame associated with that detection. The\n track's detection regions are ignored.\n\n\nSUPERSET_REGION\n: Using the feed forward track, generate a superset region (minimum area rectangle) that captures all\n of the detection regions in that track across all of the frames in that track. Refer to the \nSuperset\n Region\n section for more details. For each detection in the feed forward track, search the superset\n region.\n\n\nREGION\n: For each detection in the feed forward track, search the exact detection region.\n\n\n\n\n\n\nNOTE:\n When using \nREGION\n, the location of the region within the frame, and the size of the region, may be\ndifferent for each detection in the feed forward track. Thus, \nREGION\n should not be used by algorithms that perform\nregion tracking and require a consistent coordinate space from detection to detection. For those algorithms, use\n\nSUPERSET_REGION\n instead. That will ensure that each detection region is relative to the upper right corner of the\nsuperset region for that track.\n\n\n\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n allows you to drop low quality detections from feed forward tracks. Setting the\nproperty to a value less than or equal to 0 has no effect. In that case all detections in the feed forward track will be\nprocessed.\n\n\nWhen \nFEED_FORWARD_TOP_QUALITY_COUNT\n is set to a number greater than 0, say 5, then the top 5 highest quality\ndetections in the feed forward track will be processed. Determination of quality is based on the job property\n\nQUALITY_SELECTION_PROPERTY\n, which defaults to \nCONFIDENCE\n, but may be set to a different detection property. Refer to\nthe \nQuality Selection Guide\n. 
If the track contains less than 5 detections then all\nof the detections in the track will be processed. If one or more detections have the same quality value, then the\ndetection(s) with the lower frame index take precedence.\n\n\nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n allows you to include detections based on properties in addition to those\nchosen with the \nQUALITY_SELECTION_PROPERTY\n. It consists of a string that is a semi-colon separated list of detection\nproperties. For example, you may want to use something other than \nCONFIDENCE\n for the \nQUALITY_SELECTION_PROPERTY\n, but\nyou also want to include the detection with the highest confidence in your feed-forward track. If the component executing\nin the first stage of the pipeline adds a \nBEST_CONFIDENCE\n property to the detection with highest confidence in each track,\nyou can then set the \nFEED_FORWARD_BEST_DETECTION_PROP_NAMES_LIST\n property to \nBEST_CONFIDENCE\n, and the detections with\nthat property will be added to the feed-forward track.\n\n\nSuperset Region\n\n\nA \u201csuperset region\u201d is the smallest region of interest that contains all of the detections for all of the frames in a\ntrack. This is also known as a \u201cunion\u201d or \n\u201cminimum bounding\nrectangle\"\n.\n\n\n\n\nFor example, consider a track representing a person moving from the upper left to the lower right. The track consists of\n3 frames that have the following detection regions:\n\n\n\n\nFrame 0: \n(x = 10, y = 10, width = 10, height = 10)\n\n\nFrame 1: \n(x = 15, y = 15, width = 10, height = 10)\n\n\nFrame 2: \n(x = 20, y = 20, width = 10, height = 10)\n\n\n\n\nEach detection region is drawn with a solid green line in the above diagram. The blue line represents the full frame\nregion. The superset region for the track is \n(x = 10, y = 10, width = 20, height = 20)\n, and is drawn with a dotted red\nline.\n\n\nThe major advantage of using a superset region is constant size. Some algorithms require the search space in each frame\nto be a constant size in order to successfully track objects.\n\n\nA disadvantage is that the superset region will often be larger than any specific detection region, so the search space\nis not restricted to the smallest possible size in each frame; however, in many cases the search space will be\nsignificantly smaller than the whole frame.\n\n\nIn the worst case, a feed forward track might, for example, capture a person moving from the upper left corner of a\nvideo to the lower right corner. In that case the superset region will be the entire width and height of the frame, so\n\nSUPERSET_REGION\n devolves into \nFRAME\n.\n\n\nIn a more typical case, a feed forward track might capture a person moving in the upper left quadrant of a video. In\nthat case \nSUPERSET_REGION\n is able to filter out 75% of the rest of the frame data. In the example shown in the above\ndiagram, \nSUPERSET_REGION\n is able to filter out 83% of the rest of the frame data.\n\n\n\n \n\n \n\n \nYour browser does not support the embedded video tag.\n\n \nClick here to download the video.\n\n \n\n\n\n\n\nThe above video shows three faces. For each face there is an inner bounding box that moves and an outer bounding box\nthat does not. The inner bounding box represents the face detection in that frame, while the outer bounding box\nrepresents the superset region for the track associated with that face. Note that the bounding box for each face uses a\ndifferent color. 
The colors are not related to those used in the above diagram.\n\n\nMPFVideoCapture and MPFImageReader Tools\n\n\nWhen developing a component, the \nC++ Batch Component API\n and \nPython Batch\nComponent API\n include utilities that make it easier to support feed forward in\nyour components. They work similarly, but only the C++ tools will be discussed here. The \nMPFVideoCapture\n class is a\nwrapper around OpenCV's \ncv::VideoCapture\n class. \nMPFVideoCapture\n works very similarly to \ncv::VideoCapture\n, except\nthat it might modify the video frames based on job properties. From the point of view of someone using\n\nMPFVideoCapture\n, these modifications are mostly transparent. \nMPFVideoCapture\n makes it look like you are reading the\noriginal video file.\n\n\nConceptually, consider generating a new video from a feed forward track. The new video would have fewer frames (unless\nthere was a detection in every frame) and possibly a smaller frame size.\n\n\nFor example, the original video file might be 30 frames long with 640x480 resolution. If the feed forward track found\ndetections in frames 4, 7, and 10, then \nMPFVideoCapture\n will make it look like the video only has those 3 frames. If\nthe feed forward type is \nSUPERSET_REGION\n or \nREGION,\n and each detection is 30x50 pixels, then \nMPFVideoCapture\n will\nmake it look like the video's original resolution was 30x50 pixels.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the modified\nvideo, not the original. To make the detections relative to the original video the\n\nMPFVideoCapture::ReverseTransform(MPFVideoTrack &videoTrack)\n function must be used.\n\n\nThe general pattern for using \nMPFVideoCapture\n is as follows:\n\n\nstd::vector OcvDnnDetection::GetDetections(const MPFVideoJob &job) {\n\nstd::vector tracks;\n MPFVideoCapture video_cap(job);\n\n cv::Mat frame;\n while (video_cap.Read(frame)) {\n // Process frames and detections to tracks vector\n }\n\n for (MPFVideoTrack &track : tracks) {\n video_cap.ReverseTransform(track);\n }\n\n return tracks;\n}\n\n\n\nMPFVideoCapture\n makes it look like the user is processing the original video, when in reality they are processing a\nmodified version. To avoid confusion, this means that \nMPFVideoCapture\n should always be returning frames that are the\nsame size because most users expect each frame of a video to be the same size.\n\n\nWhen using \nSUPERSET_REGION\n this is not an issue, since one bounding box is used for the entire track. However, when\nusing \nREGION\n, each detection can be a different size, so it is not possible for \nMPFVideoCapture\n to return frames\nthat are always the same size. Since this is a deviation from the expected behavior, and breaks the transparency of\n\nMPFVideoCapture\n, \nSUPERSET_REGION\n should usually be preferred over \nREGION\n. The \nREGION\n setting should only be used\nwith components that explicitly state they support it (e.g. OcvDnnDetection). Those components may not perform region\ntracking, so processing frames of various sizes is not a problem.\n\n\nThe \nMPFImageReader\n class is similar to \nMPFVideoCapture\n, but it works on images instead of videos. \nMPFImageReader\n\nmakes it look like the user is processing an original image, when in reality they are processing a modified version\nwhere the frame region is generated based on a detection (\nMPFImageLocation\n) fed forward from the previous stage of a\npipeline. 
Note that \nSUPERSET_REGION\n and \nREGION\n have the same effect when working with images. \nMPFImageReader\n also\nhas a reverse transform function.\n\n\nOpenCV DNN Component Tracking\n\n\nThe OpenCV DNN component does not generate detection regions of its own when performing classification. Its tracking\nbehavior depends on whether feed forward is enabled or not. When feed forward is disabled, the component will process\nthe entire region of each frame of a video. If one or more consecutive frames has the same highest confidence\nclassification, then a new track is generated that contains those frames.\n\n\nWhen feed forward is enabled, the OpenCV DNN component will process the region of each frame of feed forward track\naccording to the \nFEED_FORWARD_TYPE\n. It will generate one track that contains the same frames as the feed forward\ntrack. If \nFEED_FORWARD_TYPE\n is set to \nREGION\n then the OpenCV DNN track will contain (inherit) the same detection\nregions as the feed forward track. In any case, the \ndetectionProperties\n map for the detections in the OpenCV DNN track\nwill include the \nCLASSIFICATION\n entries and possibly other OpenCV DNN component properties.\n\n\nFeed Forward Pipeline Examples\n\n\nGoogLeNet Classification with MOG Motion Detection and Feed Forward Region\n\n\nFirst, create the following action:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION\n\n\n\nThen create the following task:\n\n\nCAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nCAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n\n\n\nRunning this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN \nCLASSIFICATION\n entry. Each track has a 1-to-1 correspondence with a\nMOG motion track.\n\n\nRefer to \nrunMogThenCaffeFeedForwardExactRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior. Refer to \nrunMogThenCaffeFeedForwardSupersetRegionTest()\n in\nthat class for a system test that uses \nSUPERSET_REGION\n instead. Refer to \nrunMogThenCaffeFeedForwardFullFrameTest()\n\nfor a system test that uses \nFRAME\n instead.\n\n\n\n\nNOTE:\n Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the \nMERGE_TRACKS\n, \nMIN_GAP_BETWEEN_TRACKS\n, and \nMIN_TRACK_LENGTH\n properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks.\n\n\nNOTE:\n It doesn\u2019t make sense to use \nFEED_FORWARD_TOP_QUALITY_COUNT\n on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n(\nCONFIDENCE\n being the default value for the \nQUALITY_SELECTION_PROPERTY\n job property). 
Instead,\n\nFEED_FORWARD_TOP_QUALITY_COUNT\n could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested \nQUALITY_SELECTION_PROPERTY\n set.\n\n\n\n\nOCV Face Detection with MOG Motion Detection and Feed Forward Superset Region\n\n\nFirst, create the following action:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION\n\n\n\nThen create the following task:\n\n\nOCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n\n\n\nThen create the following pipeline:\n\n\nOCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n\n\n\nRunning this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has\na 1-to-1 correspondence with a MOG motion track.\n\n\nRefer to \nrunMogThenOcvFaceFeedForwardRegionTest()\n in the\n\nTestSystemOnDiff\n\nclass for a system test that demonstrates this behavior.\n\n\nFeed Forward All Tracks\n\n\nEXPERIMENTAL:\n This feature is not fully implemented.\n\n\n\nThe default feed-forward behavior will result in generating one sub-job per track generated in the previous stage.\nConsider a scenario where you need to implement a tracking component that takes individual detections from a stage and\ngroups them into tracks. That component needs to accept all tracks from the previous stage as an input to the same\nsub-job.\n\n\nSetting \nFEED_FORWARD_ALL_TRACKS\n to true will result in generating one sub-job that contains all the tracks generated\nin the previous stage. Refer to the\n\ncomponent.get_detections_from_all_video_tracks(video_job)\n\nsection of the Python Batch Component API for more details. This property works in conjunction with the other\nfeed-forward properties discussed in the \nFeed Forward Properties\n section.\n\n\nKnown limitations:\n\n\n\n\nOnly Python supported.\n\n\nOnly video supported.\n\n\nNot tested with \ntriggers\n.",
"title": "Feed Forward Guide"
},
{
@@ -405,6 +405,11 @@
"text": "GoogLeNet Classification with MOG Motion Detection and Feed Forward Region First, create the following action: CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION\n+ Algorithm: DNNCV\n+ MODEL_NAME: googlenet\n+ SUBTRACT_BLUE_VALUE: 104.0\n+ SUBTRACT_GREEN_VALUE: 117.0\n+ SUBTRACT_RED_VALUE: 123.0\n+ FEED_FORWARD_TYPE: REGION Then create the following task: CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) ACTION Then create the following pipeline: CAFFE GOOGLENET DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ CAFFE GOOGLENET DETECTION (WITH FEED FORWARD REGION) TASK Running this pipeline will result in OpenCV DNN tracks that contain detections where there was MOG motion. Each\ndetection in each track will have an OpenCV DNN CLASSIFICATION entry. Each track has a 1-to-1 correspondence with a\nMOG motion track. Refer to runMogThenCaffeFeedForwardExactRegionTest() in the TestSystemOnDiff \nclass for a system test that demonstrates this behavior. Refer to runMogThenCaffeFeedForwardSupersetRegionTest() in\nthat class for a system test that uses SUPERSET_REGION instead. Refer to runMogThenCaffeFeedForwardFullFrameTest() \nfor a system test that uses FRAME instead. NOTE: Short and/or spurious MOG motion tracks will result in more overhead work when performing feed forward. To\nmitigate this, consider setting the MERGE_TRACKS , MIN_GAP_BETWEEN_TRACKS , and MIN_TRACK_LENGTH properties to\ngenerate longer motion tracks and discard short and/or spurious motion tracks. NOTE: It doesn\u2019t make sense to use FEED_FORWARD_TOP_QUALITY_COUNT on a pipeline stage that follows a MOG or\nSuBSENSE motion detection stage. That\u2019s because those motion detectors don\u2019t generate tracks with confidence values\n( CONFIDENCE being the default value for the QUALITY_SELECTION_PROPERTY job property). Instead, FEED_FORWARD_TOP_QUALITY_COUNT could potentially be used when feeding person tracks into a face detector, for\nexample, if the detections in those person tracks have the requested QUALITY_SELECTION_PROPERTY set. OCV Face Detection with MOG Motion Detection and Feed Forward Superset Region First, create the following action: OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION\n+ Algorithm: FACECV\n+ FEED_FORWARD_TYPE: SUPERSET_REGION Then create the following task: OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) ACTION Then create the following pipeline: OCV FACE DETECTION (WITH MOG MOTION TRACKING AND FEED FORWARD SUPERSET REGION) PIPELINE\n+ MOG MOTION DETECTION (WITH TRACKING) TASK\n+ OCV FACE DETECTION (WITH FEED FORWARD SUPERSET REGION) TASK Running this pipeline will result in OCV face tracks that contain detections where there was MOG motion. Each track has\na 1-to-1 correspondence with a MOG motion track. Refer to runMogThenOcvFaceFeedForwardRegionTest() in the TestSystemOnDiff \nclass for a system test that demonstrates this behavior.",
"title": "Feed Forward Pipeline Examples"
},
+ {
+ "location": "/Feed-Forward-Guide/index.html#feed-forward-all-tracks",
+ "text": "EXPERIMENTAL: This feature is not fully implemented. The default feed-forward behavior will result in generating one sub-job per track generated in the previous stage.\nConsider a scenario where you need to implement a tracking component that takes individual detections from a stage and\ngroups them into tracks. That component needs to accept all tracks from the previous stage as an input to the same\nsub-job. Setting FEED_FORWARD_ALL_TRACKS to true will result in generating one sub-job that contains all the tracks generated\nin the previous stage. Refer to the component.get_detections_from_all_video_tracks(video_job) \nsection of the Python Batch Component API for more details. This property works in conjunction with the other\nfeed-forward properties discussed in the Feed Forward Properties section. Known limitations: Only Python supported. Only video supported. Not tested with triggers .",
+ "title": "Feed Forward All Tracks"
+ },
{
"location": "/Derivative-Media-Guide/index.html",
"text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nIntroduction\n\n\nThis guide covers the derivative media feature, which allows users to create pipelines where a component in one of\nthe initial stages of the pipeline generates one or more derivative (aka child) media from the source (aka parent)\nmedia. A common scenario is to extract images from PDFs or other document formats. Once extracted, the Workflow Manager\n(WFM) can perform the subsequent pipeline stages on the source media (if necessary) as well as the derivative media.\nThis differs from typical pipeline execution, which only acts on one or more pieces of source media.\n\n\nComponent actions can be configured to only be performed on source media or derivative media. This is often necessary\nbecause the source media has a different media type than the derivative media, and therefore different actions are\nrequired to process each type of media. For example, PDFs are assigned the \nUNKNOWN\n media type (since the WFM is not\ndesigned to handle them in any special way), while the images extracted from a PDF are assigned the \nIMAGE\n media type.\nAn action for the TikaTextDetection component can process the \nUNKNOWN\n source media to generate \nTEXT\n tracks by\ndetecting the embedded raw character data in the PDF itself, while an action for the TesseractOCRTextDetection component\ncan process the \nIMAGE\n derivative media to generate \nTEXT\n tracks by detecting text in the image data.\n\n\nText Detection Example\n\n\nConsider the following diagram which depicts a pipeline to accomplish generating \nTEXT\n tracks for PDFs which contain\nembedded raw character data and embedded images with text:\n\n\n\n\nEach block represents a single action performed in that stage of the pipeline. (Technically, a pipeline consists of\ntasks executed in sequence, but in this case each task consists of only one action, so we just show the actions.)\nActions that have \nSOURCE MEDIA ONLY\n in their name have the \nSOURCE_MEDIA_ONLY\n property set to \nTRUE\n, which will\nresult in completely skipping that action for derivative media. The component associated with the action will not\nreceive sub-job messages and there will be no representation of the action being executed on derivative media in the\nJSON output object.\n\n\nSimilarly, actions that have \nDERIVATIVE MEDIA ONLY\n in their name have the \nDERIVATIVE_MEDIA_ONLY\n property set\nto \nTRUE\n, which will result in completely skipping that action for source media. Note that setting both properties\nto \nTRUE\n will result in skipping the action for both derivative and source media, which means it will never be\nexecuted. Not setting either property will result in executing the action on both source and derivative media, as you\nsee in the diagram with the \nKEYWORD TAGGING\n action.\n\n\nNote that the actions shown in the source media flow and derivative media flow are \nnot\n executed at the same time.\nThe flows are shown in different rows in the diagram to illustrate the logical separation, not to illustrate\nconcurrency. To be clear, each action in the pipeline is executed sequentially. If an action is missing from a flow it\njust means that no sub-job messages are generated for that kind of media during that stage of the pipeline. 
If an action\nis shown in both flows then sub-jobs will be performed on both the source and derivative media during that stage.\n\n\nTo break down each stage of this pipeline:\n\n\n\n\nTIKA IMAGE DETECTION ACTION\n: The TikaImageDetection component will extract images from PDFs (or other document\n formats) and place them in \n$MPF_HOME/share/tmp/derivative-media/\n. One \nMEDIA\n track will be generated for\n each image and it will have \nDERIVATIVE_MEDIA_TEMP_PATH\n and \nPAGE_NUM\n track properties.\n\n\nIf remote storage is enabled, the WFM will upload the objects to the object store after this action is performed.\n Refer to the \nObject Storage Guide\n for more information.\n\n\nThe WFM will perform media inspection on the images at this time.\n\n\nEach piece of derivative media will have a parent media id set to the media id value of the source media. It will\n appear as \nmedia.parentMediaId\n in the JSON output object. For source media the value will be -1.\n\n\nEach piece of derivative media will have a \nmedia.mediaMetadata\n property of \nIS_DERIVATIVE_MEDIA\n set to \nTRUE\n.\n The metadata will also contain the \nPAGE_NUM\n property.\n \n\n\nTIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION\n: The TikaTextDetection component will generate \nTEXT\n tracks by\n detecting the embedded raw character data in the PDF.\n \n\n\nEAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION\n: The EastTextDetection component will generate \nTEXT REGION\n tracks\n for each text region in the extracted images.\n \n\n\nTESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION\n: The TesseractOCRTextDetection component\n will generate \nTEXT\n tracks by performing OCR on the text regions passed forward from the previous EAST action.\n \n\n\nKEYWORD TAGGING (WITH FF REGIONS) ACTION\n: The KeywordTagging component will take the \nTEXT\n tracks from the\n previous \nTIKA TEXT\n and \nTESSERACT OCR\n actions and perform keyword tagging. This will add the \nTAGS\n\n , \nTRIGGER_WORDS\n, and \nTRIGGER_WORDS_OFFSET\n properties to each track.\n \n\n\nOCV GENERIC MARKUP DERIVATIVE MEDIA ONLY ACTION\n: The Markup component will take the keyword-tagged \nTEXT\n tracks for\n the derivative media and draw bounding boxes on the extracted images.\n\n\n\n\nTask Merging\n\n\nThe large blue rectangles in the diagram represent tasks that are merged together. The purpose of task merging is to\nconsolidate how tracks are represented in the JSON output object by hiding redundant track information, and to make it\nappear that the behaviors of two or more actions are the result of a single algorithm.\n\n\nFor example, keyword tagging behavior is supplemental to the text detection behavior. It's more important that \nTEXT\n\ntracks are associated with the algorithm that performed text detection than the \nKEYWORDTAGGING\n algorithm. Note that in\nour pipeline only the \nKEYWORD TAGGING\n action has the \nOUTPUT_MERGE_WITH_PREVIOUS_TASK\n property set to \nTRUE\n. It has\na similar effect in the source media flow and derivative media flow.\n\n\nIn the source media flow the \nTIKA TEXT\n action is at the start of the merge chain while the \nKEYWORD TAGGING\n action is\nat the end of the merge chain. The tracks generated by the action at the end of the merge chain inherit the algorithm\nand track type from the tracks at the beginning of the merge chain. The effect is that in the JSON output object the\ntracks from the \nTIKA TEXT\n action will not be shown. Instead that action will be listed under \nTRACKS MERGED\n. 
The\ntracks from the \nKEYWORD TAGGING\n action will be shown with the \nTIKATEXT\n algorithm and \nTEXT\n track type:\n\n\n\"output\": {\n \"TRACKS MERGED\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION#TIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION\",\n \"algorithm\": \"TIKATEXT\"\n }\n ],\n \"MEDIA\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION\",\n \"algorithm\": \"TIKAIMAGE\",\n \"tracks\": [ ... ]\n }\n ],\n \"TEXT\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION#TIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TIKATEXT\",\n \"tracks\": [ ... ]\n }\n ]\n}\n\n\n\nIn the derivative media flow the \nTESSERACT OCR\n action is at the start of the merge chain while the \nKEYWORD TAGGING\n\naction is at the end of the merge chain. The effect is that in the JSON output object the tracks from\nthe \nTESSERACT OCR\n action will not be shown. The tracks from the \nKEYWORD TAGGING\n action will be shown with\nthe \nTESSERACTOCR\n algorithm and \nTEXT\n track type:\n\n\n\"output\": {\n \"NO TRACKS\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION#OCV GENERIC MARKUP DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"MARKUPCV\"\n }\n ],\n \"TRACKS MERGED\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"TESSERACTOCR\"\n }\n ],\n \"TEXT\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TESSERACTOCR\",\n \"tracks\": [ ... ]\n }\n ],\n \"TEXT REGION\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"EAST\",\n \"tracks\": [ ... ]\n }\n ]\n}\n\n\n\nNote that a \nMARKUP\n action will never generate new tracks. It simply fills out the \nmedia.markupResult\n field in the\nJSON output object (not shown above).\n\n\nOutput Last Task Only\n\n\nIf you want to omit all tracks from the JSON output object but the respective \nTEXT\n tracks for the source and\nderivative media, then in you can also set the \nOUTPUT_LAST_TASK_ONLY\n job property to \nTRUE\n. Note that the WFM only\nconsiders tasks that use \nDETECTION\n algorithms as the final task, so \nMARKUP\n is ignored. Setting this property will\nresult in the following JSON for the source media:\n\n\n\"output\": {\n \"TRACKS SUPPRESSED\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION\",\n \"algorithm\": \"TIKAIMAGE\"\n },\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION#TIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION\",\n \"algorithm\": \"TIKATEXT\"\n }\n ],\n \"TEXT\": [\n {\n \"source\": \"+#TIKA IMAGE DETECTION ACTION#TIKA TEXT DETECTION SOURCE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TIKATEXT\", \n \"tracks\": [ ... 
]\n }\n ]\n}\n\n\n\nAnd the following JSON for the derivative media:\n\n\n\"output\": {\n \"NO TRACKS\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION#OCV GENERIC MARKUP DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"MARKUPCV\"\n }\n ],\n \"TRACKS SUPPRESSED\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"EAST\"\n },\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION\",\n \"algorithm\": \"TESSERACTOCR\"\n }\n ],\n \"TEXT\": [\n {\n \"source\": \"+#EAST TEXT DETECTION DERIVATIVE MEDIA ONLY ACTION#TESSERACT OCR TEXT DETECTION (WITH FF REGION) DERIVATIVE MEDIA ONLY ACTION#KEYWORD TAGGING (WITH FF REGION) ACTION\",\n \"algorithm\": \"TESSERACTOCR\",\n \"tracks\": [ ... ]\n }\n ]\n}\n\n\n\nDeveloping Media Extraction Components\n\n\nThe WFM is not limited to working only with the TikaImageDetection component. Any component can be designed to generate\nderivative media. The requirement is that it must generate \nMEDIA\n tracks, one piece of derivative media per track.\nMinimally, each track must have a \nDERIVATIVE_MEDIA_TEMP_PATH\n property set to the location of the media. By convention,\nthe media should be placed in a top-level directory of the form \n$MPF_HOME/share/tmp/derivative-media/\n. When\nthe job is done running, the media will be moved to persistent storage in \n$MPF_HOME/share/derivative-media/\n if\nremote storage is not enabled.\n\n\nSpecifically, TikaImageDetection uses paths of the\nform \n$MPF_HOME/share/tmp/derivative-media//tika-extracted//image.\n. The \n\n part ensures\nthat the results of two different actions executed within the same job on the same source media, or actions executed\nwithin the same job on different source media files, do not conflict with each other. A new \n\n is generated for\neach invocation of \nGetDetections()\n on the component.\n\n\nYour media extraction component can optionally include other track properties. These will get added to the derivative\nmedia metadata. For example, TikaImageDetection adds the \nPAGE_NUM\n property.\n\n\nNote that although this guide only talks about derivative images, your component can generate any kind of media. Be sure\nthat components in the subsequent pipeline stages can handle the media type detected by WFM media inspection.\n\n\nDefault Pipelines\n\n\nOpenMPF comes with some default pipelines for detecting text in documents and other pipelines for detecting faces in documents. Refer to the TikaImageDetection \ndescriptor.json\n.",
@@ -977,7 +982,7 @@
},
{
"location": "/Python-Batch-Component-API/index.html",
- "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used detect\nobjects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n.\nDevelopers create component libraries that encapsulate the component detection logic.\nEach instance of the Component Executable loads one of these libraries and uses it to service job requests\nsent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes methods on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\ncomponent_cls = locate_component_class()\ncomponent = component_cls()\n\nwhile True:\n job = receive_job()\n\n if is_image_job(job) and hasattr(component, 'get_detections_from_image'):\n detections = component.get_detections_from_image(job)\n send_job_response(detections)\n\n elif is_video_job(job) and hasattr(component, 'get_detections_from_video'):\n detections = component.get_detections_from_video(job)\n send_job_response(detections)\n\n elif is_audio_job(job) and hasattr(component, 'get_detections_from_audio'):\n detections = component.get_detections_from_audio(job)\n send_job_response(detections)\n\n elif is_generic_job(job) and hasattr(component, 'get_detections_from_generic'):\n detections = component.get_detections_from_generic(job)\n send_job_response(detections)\n\n\n\nEach instance of a Component Executable runs as a separate process.\n\n\nThe Component Executable receives and parses requests from the WFM, invokes methods on the Component Logic to get\ndetection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by creating a class that defines one or more of the\nget_detections_from_* methods. See the \nAPI Specification\n for more information.\n\n\nThe figures below present high-level component diagrams of the Python Batch Component API.\nThis figure shows the basic structure:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe Component Executor determines that it is running a Python component so it creates an instance of the\n\nPythonComponentHandle\n\nclass. The \nPythonComponentHandle\n class creates an instance of the component class and calls one of the\n\nget_detections_from_*\n methods on the component instance. 
The example\nabove is an image component, so \nPythonComponentHandle\n calls \nExampleImageFaceDetection.get_detections_from_image\n\non the component instance. The component instance creates an instance of\n\nmpf_component_util.ImageReader\n to access the image. Components that support video\nwould implement \nget_detections_from_video\n and use\n\nmpf_component_util.VideoCapture\n instead.\n\n\nThis figure show the structure when the mixin classes are used:\n\n\n\n\nThe figure above shows a video component, \nExampleVideoFaceDetection\n, that extends the\n\nmpf_component_util.VideoCaptureMixin\n class. \nPythonComponentHandle\n will\ncall \nget_detections_from_video\n on an instance of \nExampleVideoFaceDetection\n. \nExampleVideoFaceDetection\n does not\nimplement \nget_detections_from_video\n, so the implementation inherited from \nmpf_component_util.VideoCaptureMixin\n\ngets called. \nmpf_component_util.VideoCaptureMixin.get_detections_from_video\n creates an instance of\n\nmpf_component_util.VideoCapture\n and calls\n\nExampleVideoFaceDetection.get_detections_from_video_capture\n, passing in the \nmpf_component_util.VideoCapture\n it\njust created. \nExampleVideoFaceDetection.get_detections_from_video_capture\n is where the component reads the video\nusing the passed-in \nmpf_component_util.VideoCapture\n and attempts to find detections. Components that support images\nwould extend \nmpf_component_util.ImageReaderMixin\n, implement\n\nget_detections_from_image_reader\n, and access the image using the passed-in\n\nmpf_component_util.ImageReader\n.\n\n\nDuring component registration a \nvirtualenv\n is created for each component.\nThe virtualenv has access to the built-in Python libraries, but does not have access to any third party packages\nthat might be installed on the system. When creating the virtualenv for a setuptools-based component the only packages\nthat get installed are the component itself and any dependencies specified in the setup.cfg\nfile (including their transitive dependencies). When creating the virtualenv for a basic Python component the only\npackage that gets installed is \nmpf_component_api\n. \nmpf_component_api\n is the package containing the job classes\n(e.g. \nmpf_component_api.ImageJob\n,\n\nmpf_component_api.VideoJob\n) and detection result classes\n(e.g. \nmpf_component_api.ImageLocation\n,\n\nmpf_component_api.VideoTrack\n).\n\n\nHow to Create a Python Component\n\n\nThere are two types of Python components that are supported, setuptools-based components and basic Python components.\nBasic Python components are quicker to set up, but have no built-in support for dependency management.\nAll dependencies must be handled by the developer. Setuptools-based components are recommended since they use\nsetuptools and pip for dependency management.\n\n\nEither way, the end goal is to create a Docker image. This document describes the steps for developing a component\noutside of Docker. Many developers prefer to do that first and then focus on building and running their component\nwithin Docker after they are confident it works in a local environment. Alternatively, some developers feel confident\ndeveloping their component entirely within Docker. When you're ready for the Docker steps, refer to the\n\nREADME\n.\n\n\nGet openmpf-python-component-sdk\n\n\nIn order to create a Python component you will need to clone the\n\nopenmpf-python-component-sdk repository\n if you don't\nalready have it. 
While not technically required, it is recommended to also clone the\n\nopenmpf-build-tools repository\n.\nThe rest of the steps assume you cloned openmpf-python-component-sdk to\n\n~/openmpf-projects/openmpf-python-component-sdk\n. The rest of the steps also assume that if you cloned the\nopenmpf-build-tools repository, you cloned it to \n~/openmpf-projects/openmpf-build-tools\n.\n\n\nSetup Python Component Libraries\n\n\nThe component packaging steps require that wheel files for \nmpf_component_api\n, \nmpf_component_util\n, and\ntheir dependencies are available in the \n~/mpf-sdk-install/python/wheelhouse\n directory.\n\n\nIf you have openmpf-build-tools, then you can run:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk\n\n\n\nTo setup the libraries manually you can run:\n\n\npip3 wheel -w ~/mpf-sdk-install/python/wheelhouse ~/openmpf-projects/openmpf-python-component-sdk/detection/api\npip3 wheel -w ~/mpf-sdk-install/python/wheelhouse ~/openmpf-projects/openmpf-python-component-sdk/detection/component_util\n\n\n\nHow to Create a Setuptools-based Python Component\n\n\nIn this example we create a setuptools-based video component named \"MyComponent\". An example of a setuptools-based\nPython component can be found\n\nhere\n.\n\n\nThis is the recommended project structure:\n\n\nComponentName\n\u251c\u2500\u2500 pyproject.toml\n\u251c\u2500\u2500 setup.cfg\n\u251c\u2500\u2500 component_name\n\u2502 \u251c\u2500\u2500 __init__.py\n\u2502 \u2514\u2500\u2500 component_name.py\n\u2514\u2500\u2500 plugin-files\n \u251c\u2500\u2500 descriptor\n \u2502 \u2514\u2500\u2500 descriptor.json\n \u2514\u2500\u2500 wheelhouse # optional\n \u2514\u2500\u2500 my_prebuilt_lib-0.1-py3-none-any.whl\n\n\n\n1. Create directory structure:\n\n\nmkdir MyComponent\nmkdir MyComponent/my_component\nmkdir -p MyComponent/plugin-files/descriptor\ntouch MyComponent/pyproject.toml\ntouch MyComponent/setup.cfg\ntouch MyComponent/my_component/__init__.py\ntouch MyComponent/my_component/my_component.py\ntouch MyComponent/plugin-files/descriptor/descriptor.json\n\n\n\n2. Create pyproject.toml file in project's top-level directory:\n\n\npyproject.toml\n should contain the following content:\n\n\n[build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\"\n\n\n\n3. Create setup.cfg file in project's top-level directory:\n\n\nExample of a minimal setup.cfg file:\n\n\n[metadata]\nname = MyComponent\nversion = 0.1\n\n[options]\npackages = my_component\ninstall_requires =\n mpf_component_api>=0.1\n mpf_component_util>=0.1\n\n[options.entry_points]\nmpf.exported_component =\n component = my_component.my_component:MyComponent\n\n[options.package_data]\nmy_component=models/*\n\n\n\nThe \nname\n parameter defines the distribution name. Typically the distribution name matches the component name.\n\n\nAny dependencies that component requires should be listed in the \ninstall_requires\n field.\n\n\nThe Component Executor looks in the \nentry_points\n element and uses the \nmpf.exported_component\n field to determine\nthe component class. The right hand side of \ncomponent =\n should be the dotted module name, followed by a \n:\n,\nfollowed by the name of the class. The general pattern is\n\n'mpf.exported_component': 'component = .:'\n. In the above example,\n\nMyComponent\n is the class name. 
The module is listed as \nmy_component.my_component\n because the \nmy_component\n\npackage contains the \nmy_component.py\n file and the \nmy_component.py\n file contains the \nMyComponent\n class.\n\n\nThe \n[options.package_data]\n section is optional. It should be used when there are non-Python files\nin a package directory that should be included when the component is installed.\n\n\n4. Create descriptor.json file in MyComponent/plugin-files/descriptor:\n\n\nThe \nbatchLibrary\n field should match the distribution name from the setup.cfg file. In this example the\nfield should be: \n\"batchLibrary\" : \"MyComponent\"\n.\nSee the \nComponent Descriptor Reference\n for details about\nthe descriptor format.\n\n\n5. Implement your component class:\n\n\nBelow is an example of the structure of a simple component. This component extends\n\nmpf_component_util.VideoCaptureMixin\n to simplify the use of\n\nmpf_component_util.VideoCapture\n. You would replace the call to\n\nrun_detection_algorithm_on_frame\n with your component-specific logic.\n\n\nimport logging\n\nimport mpf_component_api as mpf\nimport mpf_component_util as mpf_util\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent(mpf_util.VideoCaptureMixin):\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n for result_track in run_detection_algorithm_on_frame(frame_index, frame):\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield result_track\n\n\n\n6. Optional: Add prebuilt wheel files if not available on PyPi:\n\n\nIf your component depends on Python libraries that are not available on PyPi, the libraries can be manually added to\nyour project. The prebuilt libraries must be placed in your project's \nplugin-files/wheelhouse\n directory.\nThe prebuilt library names must be listed in your \nsetup.cfg\n file's \ninstall_requires\n field.\nIf any of the prebuilt libraries have transitive dependencies that are not available on PyPi, then those libraries\nmust also be added to your project's \nplugin-files/wheelhouse\n directory.\n\n\n7. 
Optional: Create the plugin package for non-Docker deployments:\n\n\nThe directory structure of the .tar.gz file will be:\n\n\nMyComponent\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 wheelhouse\n \u251c\u2500\u2500 MyComponent-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_api-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_util-0.1-py3-none-any.whl\n \u251c\u2500\u2500 numpy-2.2.6-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl\n \u2514\u2500\u2500 opencv_python-4.12.0.88-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl\n\n\n\nTo create the plugin packages you can run the build script as follows:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk -c MyComponent\n\n\n\nThe plugin package can also be built manually using the following commands:\n\n\nmkdir -p plugin-packages/MyComponent/wheelhouse\ncp -r MyComponent/plugin-files/* plugin-packages/MyComponent/\npip3 wheel -w plugin-packages/MyComponent/wheelhouse -f ~/mpf-sdk-install/python/wheelhouse -f plugin-packages/MyComponent/wheelhouse ./MyComponent/\ncd plugin-packages\ntar -zcf MyComponent.tar.gz MyComponent\n\n\n\n8. Create the component Docker image:\n\n\nSee the \nREADME\n.\n\n\nHow to Create a Basic Python Component\n\n\nIn this example we create a basic Python component that supports video. An example of a basic Python component can be\nfound\n\nhere\n.\n\n\nThis is the recommended project structure:\n\n\nComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json\n\n\n\n1. Create directory structure:\n\n\nmkdir MyComponent\nmkdir MyComponent/descriptor\ntouch MyComponent/descriptor/descriptor.json\ntouch MyComponent/my_component.py\n\n\n\n2. Create descriptor.json file in MyComponent/descriptor:\n\n\nThe \nbatchLibrary\n field should be the full path to the Python file containing your component class.\nIn this example the field should be: \n\"batchLibrary\" : \"${MPF_HOME}/plugins/MyComponent/my_component.py\"\n.\nSee the \nComponent Descriptor Reference\n for details about\nthe descriptor format.\n\n\n3. Implement your component class:\n\n\nBelow is an example of the structure of a simple component that does not use\n\nmpf_component_util.VideoCaptureMixin\n. You would replace the call to\n\nrun_detection_algorithm\n with your component-specific logic.\n\n\nimport logging\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_video(video_job):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n return run_detection_algorithm(video_job)\n\nEXPORT_MPF_COMPONENT = MyComponent\n\n\n\nThe Component Executor looks for a module-level variable named \nEXPORT_MPF_COMPONENT\n to specify which class\nis the component.\n\n\n4. 
Optional: Create the plugin package for non-Docker deployments:\n\n\nThe directory structure of the .tar.gz file will be:\n\n\nComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json\n\n\n\nTo create the plugin packages you can run the build script as follows:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -c MyComponent\n\n\n\nThe plugin package can also be built manually using the following command:\n\n\ntar -zcf MyComponent.tar.gz MyComponent\n\n\n\n5. Create the component Docker image:\n\n\nSee the \nREADME\n.\n\n\nAPI Specification\n\n\nAn OpenMPF Python component is a class that defines one or more of the get_detections_from_* methods.\n\n\ncomponent.get_detections_from_* methods\n\n\nAll get_detections_from_* methods are invoked through an instance of the component class. The only parameter passed\nin is an appropriate job object (e.g. \nmpf_component_api.ImageJob\n, \nmpf_component_api.VideoJob\n). Since the methods\nare invoked through an instance, instance methods and class methods end up with two arguments, the first is either the\ninstance or the class, respectively. All get_detections_from_* methods can be implemented either as an instance method,\na static method, or a class method.\nFor example:\n\n\ninstance method:\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nstatic method:\n\n\nclass MyComponent:\n @staticmethod\n def get_detections_from_image(image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nclass method:\n\n\nclass MyComponent:\n @classmethod\n def get_detections_from_image(cls, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nAll get_detections_from_* methods must return an iterable of the appropriate detection type\n(e.g. \nmpf_component_api.ImageLocation\n, \nmpf_component_api.VideoTrack\n). The return value is normally a list or generator,\nbut any iterable can be used.\n\n\nImage API\n\n\ncomponent.get_detections_from_image(image_job)\n\n\nUsed to detect objects in an image file.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nget_detections_from_image\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nimage_job\n\n\nmpf_component_api.ImageJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.ImageLocation\n\n\n\n\nmpf_component_api.ImageJob\n\n\nClass containing data used for detection of objects in an image file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\".\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. 
The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \n\n \nfeed_forward_location\n\n \nNone\n or \nmpf_component_api.ImageLocation\n\n \nAn \nmpf_component_api.ImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.ImageLocation\n\n\nClass used to store the location of detected objects in a image file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, x_left_upper, y_left_upper, width, height, confidence=-1.0, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. 
For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nSee here for information about rotation and horizontal flipping.\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is\n\nCLASSIFICATION\n and the value is the type of object detected.\n\n\nmpf_component_api.ImageLocation(0, 0, 100, 100, 1.0, {'CLASSIFICATION': 'backpack'})\n\n\n\nmpf_component_util.ImageReader\n\n\nmpf_component_util.ImageReader\n is a utility class for accessing images. It is the image equivalent to\n\nmpf_component_util.VideoCapture\n. Like \nmpf_component_util.VideoCapture\n,\nit may modify the read-in frame data based on job_properties. From the point of view of someone using\n\nmpf_component_util.ImageReader\n, these modifications are mostly transparent. \nmpf_component_util.ImageReader\n makes\nit look like you are reading the original image file as though it has already been rotated, flipped, cropped, etc.\n\n\nOne issue with this approach is that the detection bounding boxes will be relative to the\nmodified frame data, not the original. To make the detections relative to the original image\nthe \nmpf_component_util.ImageReader.reverse_transform(image_location)\n method must be called on each\n\nmpf_component_api.ImageLocation\n. Since the use of \nmpf_component_util.ImageReader\n is optional, the framework\ncannot automatically perform the reverse transform for the developer.\n\n\nThe general pattern for using \nmpf_component_util.ImageReader\n is as follows:\n\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_image(image_job):\n image_reader = mpf_component_util.ImageReader(image_job)\n image = image_reader.get_image()\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_image_locations = run_component_specific_algorithm(image)\n for result in result_image_locations:\n image_reader.reverse_transform(result)\n yield result\n\n\n\nAlternatively, see the documentation for \nmpf_component_util.ImageReaderMixin\n for a more concise way to use\n\nmpf_component_util.ImageReader\n below.\n\n\nmpf_component_util.ImageReaderMixin\n\n\nA mixin class that can be used to simplify the usage of \nmpf_component_util.ImageReader\n.\n\nmpf_component_util.ImageReaderMixin\n takes care of initializing a \nmpf_component_util.ImageReader\n and\nperforming the reverse transform.\n\n\nThere are some requirements to properly use \nmpf_component_util.ImageReaderMixin\n:\n\n\n\n\nThe component must extend \nmpf_component_util.ImageReaderMixin\n.\n\n\nThe component must implement \nget_detections_from_image_reader(image_job, image_reader)\n.\n\n\nThe component must read the image using the \nmpf_component_util.ImageReader\n\n that is passed in to \nget_detections_from_image_reader(image_job, image_reader)\n.\n\n\nThe component must NOT implement \nget_detections_from_image(image_job)\n.\n\n\nThe component must NOT call \nmpf_component_util.ImageReader.reverse_transform\n.\n\n\n\n\nThe general pattern for using \nmpf_component_util.ImageReaderMixin\n is as follows:\n\n\nclass MyComponent(mpf_component_util.ImageReaderMixin):\n\n @staticmethod # Can also be a regular instance method or a class method\n def get_detections_from_image_reader(image_job, image_reader):\n image = image_reader.get_image()\n\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm 
with your component's detection logic\n return run_component_specific_algorithm(image)\n\n\n\nmpf_component_util.ImageReaderMixin\n is a mixin class so it is designed in a way that does not prevent the subclass\nfrom extending other classes. If a component supports both videos and images, and it uses\n\nmpf_component_util.VideoCaptureMixin\n, it should also use\n\nmpf_component_util.ImageReaderMixin\n.\n\n\nVideo API\n\n\ncomponent.get_detections_from_video(video_job)\n\n\nUsed to detect objects in a video file. Prior to being sent to the component, videos are split into logical \"segments\"\nof video data and each segment (containing a range of frames) is assigned to a different job. Components are not\nguaranteed to receive requests in any order. For example, the first request processed by a component might receive a\nrequest for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_video(self, video_job):\n return [mpf_component_api.VideoTrack(...), ...]\n\n\n\nget_detections_from_video\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nvideo_job\n\n\nmpf_component_api.VideoJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.VideoTrack\n\n\n\n\nmpf_component_api.VideoJob\n\n\nClass containing data used for detection of objects in a video file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n \n\n \n\n \nstart_frame\n\n \nint\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstop_frame\n\n \nint\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. 
For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.VideoTrack\n\n \nAn \nmpf_component_api.VideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support.\nFor frame intervals greater than 1, the component must look for detections starting with the first\nframe, and then skip frames as specified by the frame interval, until or before it reaches the stop frame.\nFor example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component\nmust look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.VideoTrack\n\n\nClass used to store the location of detected objects in a video file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, start_frame, stop_frame, confidence=-1.0, frame_locations=None, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\nframe_locations\n\n\ndict[int, mpf_component_api.ImageLocation]\n\n\nA dict of individual detections. 
The key for each entry is the frame number where the detection was generated, and the value is a \nmpf_component_api.ImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nmpf_component_api.VideoTrack.detection_properties\n do not show up in the JSON output object or\nare used by the WFM in any way.\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is\n\nCLASSIFICATION\n and the value is the type of object detected.\n\n\ntrack = mpf_component_api.VideoTrack(0, 1)\ntrack.frame_locations[0] = mpf_component_api.ImageLocation(0, 0, 100, 100, 0.75, {'CLASSIFICATION': 'backpack'})\ntrack.frame_locations[1] = mpf_component_api.ImageLocation(10, 10, 110, 110, 0.95, {'CLASSIFICATION': 'backpack'})\ntrack.confidence = max(il.confidence for il in track.frame_locations.itervalues())\n\n\n\nmpf_component_util.VideoCapture\n\n\nmpf_component_util.VideoCapture\n is a utility class for reading videos. \nmpf_component_util.VideoCapture\n works very\nsimilarly to \ncv2.VideoCapture\n, except that it might modify the video frames based on job properties. From the point\nof view of someone using \nmpf_component_util.VideoCapture\n, these modifications are mostly transparent.\n\nmpf_component_util.VideoCapture\n makes it look like you are reading the original video file as though it has already\nbeen rotated, flipped, cropped, etc. Also, if frame skipping is enabled, such as by setting the value of the\n\nFRAME_INTERVAL\n job property, it makes it look like you are reading the video as though it never contained the\nskipped frames.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the\nmodified video, not the original. To make the detections relative to the original video\nthe \nmpf_component_util.VideoCapture.reverse_transform(video_track)\n method must be called on each\n\nmpf_component_api.VideoTrack\n. 
Since the use of \nmpf_component_util.VideoCapture\n is optional, the framework\ncannot automatically perform the reverse transform for the developer.\n\n\nThe general pattern for using \nmpf_component_util.VideoCapture\n is as follows:\n\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_video(video_job):\n video_capture = mpf_component_util.VideoCapture(video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_tracks = run_component_specific_algorithm(frame_index, frame)\n for track in result_tracks:\n video_capture.reverse_transform(track)\n yield track\n\n\n\nAlternatively, see the documentation for \nmpf_component_util.VideoCaptureMixin\n for a more concise way to use\n\nmpf_component_util.VideoCapture\n below.\n\n\nmpf_component_util.VideoCaptureMixin\n\n\nA mixin class that can be used to simplify the usage of \nmpf_component_util.VideoCapture\n.\n\nmpf_component_util.VideoCaptureMixin\n takes care of initializing a \nmpf_component_util.VideoCapture\n and\nperforming the reverse transform.\n\n\nThere are some requirements to properly use \nmpf_component_util.VideoCaptureMixin\n:\n\n\n\n\nThe component must extend \nmpf_component_util.VideoCaptureMixin\n.\n\n\nThe component must implement \nget_detections_from_video_capture(video_job, video_capture)\n.\n\n\nThe component must read the video using the \nmpf_component_util.VideoCapture\n\n that is passed in to \nget_detections_from_video_capture(video_job, video_capture)\n.\n\n\nThe component must NOT implement \nget_detections_from_video(video_job)\n.\n\n\nThe component must NOT call \nmpf_component_util.VideoCapture.reverse_transform\n.\n\n\n\n\nThe general pattern for using \nmpf_component_util.VideoCaptureMixin\n is as follows:\n\n\nclass MyComponent(mpf_component_util.VideoCaptureMixin):\n\n @staticmethod # Can also be a regular instance method or a class method\n def get_detections_from_video_capture(video_job, video_capture):\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_tracks = run_component_specific_algorithm(frame_index, frame)\n for track in result_tracks:\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield track\n\n\n\nmpf_component_util.VideoCaptureMixin\n is a mixin class so it is designed in a way that does not prevent the subclass\nfrom extending other classes. 
If a component supports both videos and images, and it uses\n\nmpf_component_util.VideoCaptureMixin\n, it should also use\n\nmpf_component_util.ImageReaderMixin\n.\nFor example:\n\n\nclass MyComponent(mpf_component_util.VideoCaptureMixin, mpf_component_util.ImageReaderMixin):\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n ...\n\n @staticmethod\n def get_detections_from_image_reader(image_job, image_reader):\n ...\n\n\n\nAudio API\n\n\ncomponent.get_detections_from_audio(audio_job)\n\n\nUsed to detect objects in an audio file.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_audio(self, audio_job):\n return [mpf_component_api.AudioTrack(...), ...]\n\n\n\nget_detections_from_audio\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\naudio_job\n\n\nmpf_component_api.AudioJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.AudioTrack\n\n\n\n\nmpf_component_api.AudioJob\n\n\nClass containing data used for detection of objects in an audio file.\nCurrently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.mp3\".\n\n \n\n \n\n \nstart_time\n\n \nint\n\n \nThe time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstop_time\n\n \nint\n\n \nThe time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.AudioTrack\n\n \nAn \nmpf_component_api.AudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. 
This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.AudioTrack\n\n\nClass used to store the location of detected objects in an audio file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, start_time, stop_time, confidence, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstop_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nmpf_component_api.AudioTrack.detection_properties\n do not show up in the JSON output object or\nare used by the WFM in any way.\n\n\n\n\nGeneric API\n\n\ncomponent.get_detections_from_generic(generic_job)\n\n\nUsed to detect objects in files that are not video, image, or audio files. Such files are of the UNKNOWN type and\nhandled generically.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_generic(self, generic_job):\n return [mpf_component_api.GenericTrack(...), ...]\n\n\n\nget_detections_from_generic\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ngeneric_job\n\n\nmpf_component_api.GenericJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.GenericTrack\n\n\n\n\nmpf_component_api.GenericJob\n\n\nClass containing data used for detection of objects in a file that isn't a video, image, or audio file. The file is not\nlogically segmented, so a job will contain the entirety of the file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.txt\".\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. 
Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.GenericTrack\n\n \nAn \nmpf_component_api.GenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.GenericTrack\n\n\nClass used to store the location of detected objects in a file that is not a video, image, or audio file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, confidence=-1.0, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nHow to Report Errors\n\n\nThe following is an example of how to throw an exception:\n\n\nimport mpf_component_api as mpf\n\n...\nraise mpf.DetectionError.MISSING_PROPERTY.exception(\n 'The REALLY_IMPORTANT property must be provided as a job property.')\n\n\n\nThe Python Batch Component API supports all of the same error types\nlisted \nhere\n for the C++ Batch Component API. Be sure to omit\nthe \nMPF_\n prefix. You can replace the \nMISSING_PROPERTY\n part in the above code with any other error type. When\ngenerating an exception, choose the type that best describes your error.\n\n\nPython Component Build Environment\n\n\nAll Python components must work with CPython 3.12. Also, Python components\nmust work with the Linux version that is used by the OpenMPF Component\nExecutable. At this writing, OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30). Pure Python code should work on any\nOS, but incompatibility issues can arise when using Python libraries that\ninclude compiled extension modules. Python libraries are typically distributed\nas wheel files. The wheel format requires that the file name follows the pattern\nof \n----.whl\n.\n\n--\n are called\n\ncompatibility tags\n. For example,\n\nmpf_component_api\n is pure Python, so the name of its wheel file is\n\nmpf_component_api-0.1-py3-none-any.whl\n. 
\npy3\n means it will work with any\nPython 3 implementation because it does not use any implementation-specific\nfeatures. \nnone\n means that it does not use the Python ABI. \nany\n means it will\nwork on any platform.\n\n\nThe acceptable Python version tags are:\n\n\n\n\ncp312\n (or lower)\n\n\npy312\n (or lower)\n\n\n\n\nThe \nONLY\n acceptable ABI tags are:\n\n\n\n\ncp312\n\n\nabi3\n\n\nnone\n\n\n\n\nThe acceptable platform tags are:\n\n\n\n\nany\n\n\nlinux_x86_64\n\n\nmanylinux1_x86_64\n\n\nmanylinux2010_x86_64\n\n\nmanylinux2014_x86_64\n\n\nmanylinux_2_5_x86_64\n through \nmanylinux_2_39_x86_64\n\n\n\n\nThe full list of compatible tags can be listed by running: \npip3 debug --verbose\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or\nfiles needed for execution. This includes all other non-standard libraries used by the component\n(aside from the standard Python libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through\nmultiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input\n(i.e. when processing the same job).\n\n\nLogging\n\n\nIt recommended that components use Python's built-in\n\nlogging\n module.\n The component should\n\nimport logging\n and call \nlogging.getLogger('')\n to get a logger instance.\nThe component should not configure logging itself. The Component Executor will configure the\n\nlogging\n module for the component. The logger will write log messages to standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n. Note that multiple instances of the\nsame component can log to the same file. Also, logging content can span multiple lines.\n\n\nThe following log levels are supported: \nFATAL, ERROR, WARN, INFO, DEBUG\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nThe format of the log messages is:\n\n\nDATE TIME LEVEL [SOURCE_FILE:LINE_NUMBER] - MESSAGE\n\n\n\nFor example:\n\n\n2018-05-03 14:41:11,703 INFO [test_component.py:44] - Logged message",
+ "text": "NOTICE:\n This software (or technical data) was produced for the U.S. Government under contract, and is subject to the\nRights in Data-General Clause 52.227-14, Alt. IV (DEC 2007). Copyright 2024 The MITRE Corporation. All Rights Reserved.\n\n\nAPI Overview\n\n\nIn OpenMPF, a \ncomponent\n is a plugin that receives jobs (containing media), processes that media, and returns results.\n\n\nThe OpenMPF Batch Component API currently supports the development of \ndetection components\n, which are used detect\nobjects in image, video, audio, or other (generic) files that reside on disk.\n\n\nUsing this API, detection components can be built to provide:\n\n\n\n\nDetection (Localizing an object)\n\n\nTracking (Localizing an object across multiple frames)\n\n\nClassification (Detecting the type of object and optionally localizing that object)\n\n\nTranscription (Detecting speech and transcribing it into text)\n\n\n\n\nHow Components Integrate into OpenMPF\n\n\nComponents are integrated into OpenMPF through the use of OpenMPF's \nComponent Executable\n.\nDevelopers create component libraries that encapsulate the component detection logic.\nEach instance of the Component Executable loads one of these libraries and uses it to service job requests\nsent by the OpenMPF Workflow Manager (WFM).\n\n\nThe Component Executable:\n\n\n\n\nReceives and parses job requests from the WFM\n\n\nInvokes methods on the component library to obtain detection results\n\n\nPopulates and sends the respective responses to the WFM\n\n\n\n\nThe basic pseudocode for the Component Executable is as follows:\n\n\ncomponent_cls = locate_component_class()\ncomponent = component_cls()\n\nwhile True:\n job = receive_job()\n\n if is_image_job(job) and hasattr(component, 'get_detections_from_image'):\n detections = component.get_detections_from_image(job)\n send_job_response(detections)\n\n elif is_video_job(job) and hasattr(component, 'get_detections_from_video'):\n detections = component.get_detections_from_video(job)\n send_job_response(detections)\n\n elif is_audio_job(job) and hasattr(component, 'get_detections_from_audio'):\n detections = component.get_detections_from_audio(job)\n send_job_response(detections)\n\n elif is_generic_job(job) and hasattr(component, 'get_detections_from_generic'):\n detections = component.get_detections_from_generic(job)\n send_job_response(detections)\n\n\n\nEach instance of a Component Executable runs as a separate process.\n\n\nThe Component Executable receives and parses requests from the WFM, invokes methods on the Component Logic to get\ndetection objects, and subsequently populates responses with the component output and sends them to the WFM.\n\n\nA component developer implements a detection component by creating a class that defines one or more of the\nget_detections_from_* methods. See the \nAPI Specification\n for more information.\n\n\nThe figures below present high-level component diagrams of the Python Batch Component API.\nThis figure shows the basic structure:\n\n\n\n\nThe Node Manager is only used in a non-Docker deployment. In a Docker deployment the Component Executor is started by the Docker container itself.\n\n\nThe Component Executor determines that it is running a Python component so it creates an instance of the\n\nPythonComponentHandle\n\nclass. The \nPythonComponentHandle\n class creates an instance of the component class and calls one of the\n\nget_detections_from_*\n methods on the component instance. 
The example\nabove is an image component, so \nPythonComponentHandle\n calls \nExampleImageFaceDetection.get_detections_from_image\n\non the component instance. The component instance creates an instance of\n\nmpf_component_util.ImageReader\n to access the image. Components that support video\nwould implement \nget_detections_from_video\n and use\n\nmpf_component_util.VideoCapture\n instead.\n\n\nThis figure shows the structure when the mixin classes are used:\n\n\n\n\nThe figure above shows a video component, \nExampleVideoFaceDetection\n, that extends the\n\nmpf_component_util.VideoCaptureMixin\n class. \nPythonComponentHandle\n will\ncall \nget_detections_from_video\n on an instance of \nExampleVideoFaceDetection\n. \nExampleVideoFaceDetection\n does not\nimplement \nget_detections_from_video\n, so the implementation inherited from \nmpf_component_util.VideoCaptureMixin\n\ngets called. \nmpf_component_util.VideoCaptureMixin.get_detections_from_video\n creates an instance of\n\nmpf_component_util.VideoCapture\n and calls\n\nExampleVideoFaceDetection.get_detections_from_video_capture\n, passing in the \nmpf_component_util.VideoCapture\n it\njust created. \nExampleVideoFaceDetection.get_detections_from_video_capture\n is where the component reads the video\nusing the passed-in \nmpf_component_util.VideoCapture\n and attempts to find detections. Components that support images\nwould extend \nmpf_component_util.ImageReaderMixin\n, implement\n\nget_detections_from_image_reader\n, and access the image using the passed-in\n\nmpf_component_util.ImageReader\n.\n\n\nDuring component registration a \nvirtualenv\n is created for each component.\nThe virtualenv has access to the built-in Python libraries, but does not have access to any third party packages\nthat might be installed on the system. When creating the virtualenv for a setuptools-based component the only packages\nthat get installed are the component itself and any dependencies specified in the setup.cfg\nfile (including their transitive dependencies). When creating the virtualenv for a basic Python component the only\npackage that gets installed is \nmpf_component_api\n. \nmpf_component_api\n is the package containing the job classes\n(e.g. \nmpf_component_api.ImageJob\n,\n\nmpf_component_api.VideoJob\n) and detection result classes\n(e.g. \nmpf_component_api.ImageLocation\n,\n\nmpf_component_api.VideoTrack\n).\n\n\nHow to Create a Python Component\n\n\nThere are two types of Python components that are supported: setuptools-based components and basic Python components.\nBasic Python components are quicker to set up, but have no built-in support for dependency management.\nAll dependencies must be handled by the developer. Setuptools-based components are recommended since they use\nsetuptools and pip for dependency management.\n\n\nEither way, the end goal is to create a Docker image. This document describes the steps for developing a component\noutside of Docker. Many developers prefer to do that first and then focus on building and running their component\nwithin Docker after they are confident it works in a local environment. Alternatively, some developers feel confident\ndeveloping their component entirely within Docker. When you're ready for the Docker steps, refer to the\n\nREADME\n.\n\n\nGet openmpf-python-component-sdk\n\n\nIn order to create a Python component you will need to clone the\n\nopenmpf-python-component-sdk repository\n if you don't\nalready have it. 
While not technically required, it is recommended to also clone the\n\nopenmpf-build-tools repository\n.\nThe rest of the steps assume you cloned openmpf-python-component-sdk to\n\n~/openmpf-projects/openmpf-python-component-sdk\n. The rest of the steps also assume that if you cloned the\nopenmpf-build-tools repository, you cloned it to \n~/openmpf-projects/openmpf-build-tools\n.\n\n\nSetup Python Component Libraries\n\n\nThe component packaging steps require that wheel files for \nmpf_component_api\n, \nmpf_component_util\n, and\ntheir dependencies are available in the \n~/mpf-sdk-install/python/wheelhouse\n directory.\n\n\nIf you have openmpf-build-tools, then you can run:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk\n\n\n\nTo setup the libraries manually you can run:\n\n\npip3 wheel -w ~/mpf-sdk-install/python/wheelhouse ~/openmpf-projects/openmpf-python-component-sdk/detection/api\npip3 wheel -w ~/mpf-sdk-install/python/wheelhouse ~/openmpf-projects/openmpf-python-component-sdk/detection/component_util\n\n\n\nHow to Create a Setuptools-based Python Component\n\n\nIn this example we create a setuptools-based video component named \"MyComponent\". An example of a setuptools-based\nPython component can be found\n\nhere\n.\n\n\nThis is the recommended project structure:\n\n\nComponentName\n\u251c\u2500\u2500 pyproject.toml\n\u251c\u2500\u2500 setup.cfg\n\u251c\u2500\u2500 component_name\n\u2502 \u251c\u2500\u2500 __init__.py\n\u2502 \u2514\u2500\u2500 component_name.py\n\u2514\u2500\u2500 plugin-files\n \u251c\u2500\u2500 descriptor\n \u2502 \u2514\u2500\u2500 descriptor.json\n \u2514\u2500\u2500 wheelhouse # optional\n \u2514\u2500\u2500 my_prebuilt_lib-0.1-py3-none-any.whl\n\n\n\n1. Create directory structure:\n\n\nmkdir MyComponent\nmkdir MyComponent/my_component\nmkdir -p MyComponent/plugin-files/descriptor\ntouch MyComponent/pyproject.toml\ntouch MyComponent/setup.cfg\ntouch MyComponent/my_component/__init__.py\ntouch MyComponent/my_component/my_component.py\ntouch MyComponent/plugin-files/descriptor/descriptor.json\n\n\n\n2. Create pyproject.toml file in project's top-level directory:\n\n\npyproject.toml\n should contain the following content:\n\n\n[build-system]\nrequires = [\"setuptools\"]\nbuild-backend = \"setuptools.build_meta\"\n\n\n\n3. Create setup.cfg file in project's top-level directory:\n\n\nExample of a minimal setup.cfg file:\n\n\n[metadata]\nname = MyComponent\nversion = 0.1\n\n[options]\npackages = my_component\ninstall_requires =\n mpf_component_api>=0.1\n mpf_component_util>=0.1\n\n[options.entry_points]\nmpf.exported_component =\n component = my_component.my_component:MyComponent\n\n[options.package_data]\nmy_component=models/*\n\n\n\nThe \nname\n parameter defines the distribution name. Typically the distribution name matches the component name.\n\n\nAny dependencies that component requires should be listed in the \ninstall_requires\n field.\n\n\nThe Component Executor looks in the \nentry_points\n element and uses the \nmpf.exported_component\n field to determine\nthe component class. The right hand side of \ncomponent =\n should be the dotted module name, followed by a \n:\n,\nfollowed by the name of the class. The general pattern is\n\n'mpf.exported_component': 'component = .:'\n. In the above example,\n\nMyComponent\n is the class name. 
The module is listed as \nmy_component.my_component\n because the \nmy_component\n\npackage contains the \nmy_component.py\n file and the \nmy_component.py\n file contains the \nMyComponent\n class.\n\n\nThe \n[options.package_data]\n section is optional. It should be used when there are non-Python files\nin a package directory that should be included when the component is installed.\n\n\n4. Create descriptor.json file in MyComponent/plugin-files/descriptor:\n\n\nThe \nbatchLibrary\n field should match the distribution name from the setup.cfg file. In this example the\nfield should be: \n\"batchLibrary\" : \"MyComponent\"\n.\nSee the \nComponent Descriptor Reference\n for details about\nthe descriptor format.\n\n\n5. Implement your component class:\n\n\nBelow is an example of the structure of a simple component. This component extends\n\nmpf_component_util.VideoCaptureMixin\n to simplify the use of\n\nmpf_component_util.VideoCapture\n. You would replace the call to\n\nrun_detection_algorithm_on_frame\n with your component-specific logic.\n\n\nimport logging\n\nimport mpf_component_api as mpf\nimport mpf_component_util as mpf_util\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent(mpf_util.VideoCaptureMixin):\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n for result_track in run_detection_algorithm_on_frame(frame_index, frame):\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield result_track\n\n\n\n6. Optional: Add prebuilt wheel files if not available on PyPi:\n\n\nIf your component depends on Python libraries that are not available on PyPi, the libraries can be manually added to\nyour project. The prebuilt libraries must be placed in your project's \nplugin-files/wheelhouse\n directory.\nThe prebuilt library names must be listed in your \nsetup.cfg\n file's \ninstall_requires\n field.\nIf any of the prebuilt libraries have transitive dependencies that are not available on PyPi, then those libraries\nmust also be added to your project's \nplugin-files/wheelhouse\n directory.\n\n\n7. 
Optional: Create the plugin package for non-Docker deployments:\n\n\nThe directory structure of the .tar.gz file will be:\n\n\nMyComponent\n\u251c\u2500\u2500 descriptor\n\u2502 \u2514\u2500\u2500 descriptor.json\n\u2514\u2500\u2500 wheelhouse\n \u251c\u2500\u2500 MyComponent-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_api-0.1-py3-none-any.whl\n \u251c\u2500\u2500 mpf_component_util-0.1-py3-none-any.whl\n \u251c\u2500\u2500 numpy-2.2.6-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl\n \u2514\u2500\u2500 opencv_python-4.12.0.88-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl\n\n\n\nTo create the plugin packages you can run the build script as follows:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -psdk ~/openmpf-projects/openmpf-python-component-sdk -c MyComponent\n\n\n\nThe plugin package can also be built manually using the following commands:\n\n\nmkdir -p plugin-packages/MyComponent/wheelhouse\ncp -r MyComponent/plugin-files/* plugin-packages/MyComponent/\npip3 wheel -w plugin-packages/MyComponent/wheelhouse -f ~/mpf-sdk-install/python/wheelhouse -f plugin-packages/MyComponent/wheelhouse ./MyComponent/\ncd plugin-packages\ntar -zcf MyComponent.tar.gz MyComponent\n\n\n\n8. Create the component Docker image:\n\n\nSee the \nREADME\n.\n\n\nHow to Create a Basic Python Component\n\n\nIn this example we create a basic Python component that supports video. An example of a basic Python component can be\nfound\n\nhere\n.\n\n\nThis is the recommended project structure:\n\n\nComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json\n\n\n\n1. Create directory structure:\n\n\nmkdir MyComponent\nmkdir MyComponent/descriptor\ntouch MyComponent/descriptor/descriptor.json\ntouch MyComponent/my_component.py\n\n\n\n2. Create descriptor.json file in MyComponent/descriptor:\n\n\nThe \nbatchLibrary\n field should be the full path to the Python file containing your component class.\nIn this example the field should be: \n\"batchLibrary\" : \"${MPF_HOME}/plugins/MyComponent/my_component.py\"\n.\nSee the \nComponent Descriptor Reference\n for details about\nthe descriptor format.\n\n\n3. Implement your component class:\n\n\nBelow is an example of the structure of a simple component that does not use\n\nmpf_component_util.VideoCaptureMixin\n. You would replace the call to\n\nrun_detection_algorithm\n with your component-specific logic.\n\n\nimport logging\n\nlogger = logging.getLogger('MyComponent')\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_video(video_job):\n logger.info('[%s] Received video job: %s', video_job.job_name, video_job)\n return run_detection_algorithm(video_job)\n\nEXPORT_MPF_COMPONENT = MyComponent\n\n\n\nThe Component Executor looks for a module-level variable named \nEXPORT_MPF_COMPONENT\n to specify which class\nis the component.\n\n\n4. 
Optional: Create the plugin package for non-Docker deployments:\n\n\nThe directory structure of the .tar.gz file will be:\n\n\nComponentName\n\u251c\u2500\u2500 component_name.py\n\u251c\u2500\u2500 dependency.py\n\u2514\u2500\u2500 descriptor\n \u2514\u2500\u2500 descriptor.json\n\n\n\nTo create the plugin packages you can run the build script as follows:\n\n\n~/openmpf-projects/openmpf-build-tools/build-openmpf-components/build_components.py -c MyComponent\n\n\n\nThe plugin package can also be built manually using the following command:\n\n\ntar -zcf MyComponent.tar.gz MyComponent\n\n\n\n5. Create the component Docker image:\n\n\nSee the \nREADME\n.\n\n\nAPI Specification\n\n\nAn OpenMPF Python component is a class that defines one or more of the get_detections_from_* methods.\n\n\ncomponent.get_detections_from_* methods\n\n\nAll get_detections_from_* methods are invoked through an instance of the component class. The only parameter passed\nin is an appropriate job object (e.g. \nmpf_component_api.ImageJob\n, \nmpf_component_api.VideoJob\n). Since the methods\nare invoked through an instance, instance methods and class methods end up with two arguments, the first is either the\ninstance or the class, respectively. All get_detections_from_* methods can be implemented either as an instance method,\na static method, or a class method.\nFor example:\n\n\ninstance method:\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nstatic method:\n\n\nclass MyComponent:\n @staticmethod\n def get_detections_from_image(image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nclass method:\n\n\nclass MyComponent:\n @classmethod\n def get_detections_from_image(cls, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nAll get_detections_from_* methods must return an iterable of the appropriate detection type\n(e.g. \nmpf_component_api.ImageLocation\n, \nmpf_component_api.VideoTrack\n). The return value is normally a list or generator,\nbut any iterable can be used.\n\n\nImage API\n\n\ncomponent.get_detections_from_image(image_job)\n\n\nUsed to detect objects in an image file.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_image(self, image_job):\n return [mpf_component_api.ImageLocation(...), ...]\n\n\n\nget_detections_from_image\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nimage_job\n\n\nmpf_component_api.ImageJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.ImageLocation\n\n\n\n\nmpf_component_api.ImageJob\n\n\nClass containing data used for detection of objects in an image file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.jpg\".\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. 
The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of the image in pixels\n\n \nFRAME_HEIGHT\n : the height of the image in pixels\n\n \n\n May include the following key-value pairs:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \nHORIZONTAL_FLIP\n : true if the image is mirrored across the Y-axis, otherwise false\n\n \nEXIF_ORIENTATION\n : the standard EXIF orientation tag; a value between 1 and 8\n\n \n\n \n\n \n\n \nfeed_forward_location\n\n \nNone\n or \nmpf_component_api.ImageLocation\n\n \nAn \nmpf_component_api.ImageLocation\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.ImageLocation\n\n\nClass used to store the location of detected objects in an image file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, x_left_upper, y_left_upper, width, height, confidence=-1.0, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nx_left_upper\n\n\nint\n\n\nUpper left X coordinate of the detected object.\n\n\n\n\n\n\ny_left_upper\n\n\nint\n\n\nUpper left Y coordinate of the detected object.\n\n\n\n\n\n\nwidth\n\n\nint\n\n\nThe width of the detected object.\n\n\n\n\n\n\nheight\n\n\nint\n\n\nThe height of the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. 
For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nSee here for information about rotation and horizontal flipping.\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is\n\nCLASSIFICATION\n and the value is the type of object detected.\n\n\nmpf_component_api.ImageLocation(0, 0, 100, 100, 1.0, {'CLASSIFICATION': 'backpack'})\n\n\n\nmpf_component_util.ImageReader\n\n\nmpf_component_util.ImageReader\n is a utility class for accessing images. It is the image equivalent to\n\nmpf_component_util.VideoCapture\n. Like \nmpf_component_util.VideoCapture\n,\nit may modify the read-in frame data based on job_properties. From the point of view of someone using\n\nmpf_component_util.ImageReader\n, these modifications are mostly transparent. \nmpf_component_util.ImageReader\n makes\nit look like you are reading the original image file as though it has already been rotated, flipped, cropped, etc.\n\n\nOne issue with this approach is that the detection bounding boxes will be relative to the\nmodified frame data, not the original. To make the detections relative to the original image\nthe \nmpf_component_util.ImageReader.reverse_transform(image_location)\n method must be called on each\n\nmpf_component_api.ImageLocation\n. Since the use of \nmpf_component_util.ImageReader\n is optional, the framework\ncannot automatically perform the reverse transform for the developer.\n\n\nThe general pattern for using \nmpf_component_util.ImageReader\n is as follows:\n\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_image(image_job):\n image_reader = mpf_component_util.ImageReader(image_job)\n image = image_reader.get_image()\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_image_locations = run_component_specific_algorithm(image)\n for result in result_image_locations:\n image_reader.reverse_transform(result)\n yield result\n\n\n\nAlternatively, see the documentation for \nmpf_component_util.ImageReaderMixin\n for a more concise way to use\n\nmpf_component_util.ImageReader\n below.\n\n\nmpf_component_util.ImageReaderMixin\n\n\nA mixin class that can be used to simplify the usage of \nmpf_component_util.ImageReader\n.\n\nmpf_component_util.ImageReaderMixin\n takes care of initializing a \nmpf_component_util.ImageReader\n and\nperforming the reverse transform.\n\n\nThere are some requirements to properly use \nmpf_component_util.ImageReaderMixin\n:\n\n\n\n\nThe component must extend \nmpf_component_util.ImageReaderMixin\n.\n\n\nThe component must implement \nget_detections_from_image_reader(image_job, image_reader)\n.\n\n\nThe component must read the image using the \nmpf_component_util.ImageReader\n\n that is passed in to \nget_detections_from_image_reader(image_job, image_reader)\n.\n\n\nThe component must NOT implement \nget_detections_from_image(image_job)\n.\n\n\nThe component must NOT call \nmpf_component_util.ImageReader.reverse_transform\n.\n\n\n\n\nThe general pattern for using \nmpf_component_util.ImageReaderMixin\n is as follows:\n\n\nclass MyComponent(mpf_component_util.ImageReaderMixin):\n\n @staticmethod # Can also be a regular instance method or a class method\n def get_detections_from_image_reader(image_job, image_reader):\n image = image_reader.get_image()\n\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm 
with your component's detection logic\n return run_component_specific_algorithm(image)\n\n\n\nmpf_component_util.ImageReaderMixin\n is a mixin class so it is designed in a way that does not prevent the subclass\nfrom extending other classes. If a component supports both videos and images, and it uses\n\nmpf_component_util.VideoCaptureMixin\n, it should also use\n\nmpf_component_util.ImageReaderMixin\n.\n\n\nVideo API\n\n\ncomponent.get_detections_from_video(video_job)\n\n\nUsed to detect objects in a video file. Prior to being sent to the component, videos are split into logical \"segments\"\nof video data and each segment (containing a range of frames) is assigned to a different job. Components are not\nguaranteed to receive requests in any order. For example, the first request processed by a component might receive a\nrequest for frames 300-399 of a Video A, while the next request may cover frames 900-999 of a Video B.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_video(self, video_job):\n return [mpf_component_api.VideoTrack(...), ...]\n\n\n\nget_detections_from_video\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nvideo_job\n\n\nmpf_component_api.VideoJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.VideoTrack\n\n\n\n\nmpf_component_api.VideoJob\n\n\nClass containing data used for detection of objects in a video file. Contains at most one feed-forward track.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\".\n\n \n\n \n\n \nstart_frame\n\n \nint\n\n \nThe first frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \nstop_frame\n\n \nint\n\n \nThe last frame number (0-based index) of the video that should be processed to look for detections.\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. 
For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of video in milliseconds\n\n \nFPS\n : frames per second (averaged for variable frame rate video)\n\n \nFRAME_COUNT\n : the number of frames in the video\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \nFRAME_WIDTH\n : the width of a frame in pixels\n\n \nFRAME_HEIGHT\n : the height of a frame in pixels\n\n \nHAS_CONSTANT_FRAME_RATE\n : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined\n\n \n\n May include the following key-value pair:\n \n\n \nROTATION\n : A floating point value in the interval \n[0.0, 360.0)\n indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction.\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.VideoTrack\n\n \nAn optional \nmpf_component_api.VideoTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\n\n\nIMPORTANT:\n \nFRAME_INTERVAL\n is a common job property that many components support.\nFor frame intervals greater than 1, the component must look for detections starting with the first\nframe, and then skip frames as specified by the frame interval, until or before it reaches the stop frame.\nFor example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component\nmust look for objects in frames numbered 0, 2, 4, 6, ..., 98.\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. 
Docker container).\n\n\ncomponent.get_detections_from_all_video_tracks(video_job)\n\n\nEXPERIMENTAL:\n This feature is not fully implemented.\n\n\n\nSimilar to \ncomponent.get_detections_from_video(video_job)\n, but able to process multiple feed-forward tracks at once.\nRefer to the \nFeed Forward All Tracks\n section of the Feed Forward Guide\nto learn about the \nFEED_FORWARD_ALL_TRACKS\n property and how it affects feed-forward behavior.\n\n\nKnown limitation: No multi-track \nmpf_component_util.VideoCapture\n support.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_all_video_tracks(self, video_job):\n return [mpf_component_api.VideoTrack(...), ...]\n\n\n\nget_detections_from_all_video_tracks\n, like all get_detections_from_* methods, can be implemented either as an\ninstance method, a static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nvideo_job\n\n\nmpf_component_api.AllVideoTracksJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.VideoTrack\n\n\n\n\nmpf_component_api.AllVideoTracksJob\n\n\nEXPERIMENTAL:\n This feature is not fully implemented.\n\n\n\nClass containing data used for detection of objects in a video file. May contain multiple feed-forward tracks.\n\n\nMembers are the same as \nmpf_component_api.VideoJob\n with the exception that \nfeed_forward_track\n is replaced by\n\nfeed_forward_tracks\n.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \nfeed_forward_tracks\n\n \nNone\n or \nList[mpf_component_api.VideoTrack]\n\n \nAn optional list of \nmpf_component_api.VideoTrack\n objects from the previous pipeline stage. Provided when feed forward is enabled and \nFEED_FORWARD_ALL_TRACKS\n is true. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nmpf_component_api.VideoTrack\n\n\nClass used to store the location of detected objects in a video file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, start_frame, stop_frame, confidence=-1.0, frame_locations=None, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_frame\n\n\nint\n\n\nThe first frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nstop_frame\n\n\nint\n\n\nThe last frame number (0-based index) that contained the detected object.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\nframe_locations\n\n\ndict[int, mpf_component_api.ImageLocation]\n\n\nA dict of individual detections. The key for each entry is the frame number where the detection was generated, and the value is a \nmpf_component_api.ImageLocation\n calculated as if that frame was a still image. Note that a key-value pair is \nnot\n required for every frame between the track start frame and track stop frame.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. 
For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nmpf_component_api.VideoTrack.detection_properties\n do not show up in the JSON output object,\nnor are they used by the WFM in any way.\n\n\n\n\n\n\nExample:\n\n\n\n\nA component that performs generic object classification can add an entry to \ndetection_properties\n where the key is\n\nCLASSIFICATION\n and the value is the type of object detected.\n\n\ntrack = mpf_component_api.VideoTrack(0, 1)\ntrack.frame_locations[0] = mpf_component_api.ImageLocation(0, 0, 100, 100, 0.75, {'CLASSIFICATION': 'backpack'})\ntrack.frame_locations[1] = mpf_component_api.ImageLocation(10, 10, 110, 110, 0.95, {'CLASSIFICATION': 'backpack'})\ntrack.confidence = max(il.confidence for il in track.frame_locations.values())\n\n\n\nmpf_component_util.VideoCapture\n\n\nmpf_component_util.VideoCapture\n is a utility class for reading videos. \nmpf_component_util.VideoCapture\n works very\nsimilarly to \ncv2.VideoCapture\n, except that it might modify the video frames based on job properties. From the point\nof view of someone using \nmpf_component_util.VideoCapture\n, these modifications are mostly transparent.\n\nmpf_component_util.VideoCapture\n makes it look like you are reading the original video file as though it has already\nbeen rotated, flipped, cropped, etc. Also, if frame skipping is enabled, such as by setting the value of the\n\nFRAME_INTERVAL\n job property, it makes it look like you are reading the video as though it never contained the\nskipped frames.\n\n\nOne issue with this approach is that the detection frame numbers and bounding box will be relative to the\nmodified video, not the original. To make the detections relative to the original video\nthe \nmpf_component_util.VideoCapture.reverse_transform(video_track)\n method must be called on each\n\nmpf_component_api.VideoTrack\n. 
Since the use of \nmpf_component_util.VideoCapture\n is optional, the framework\ncannot automatically perform the reverse transform for the developer.\n\n\nThe general pattern for using \nmpf_component_util.VideoCapture\n is as follows:\n\n\nclass MyComponent:\n\n @staticmethod\n def get_detections_from_video(video_job):\n video_capture = mpf_component_util.VideoCapture(video_job)\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_tracks = run_component_specific_algorithm(frame_index, frame)\n for track in result_tracks:\n video_capture.reverse_transform(track)\n yield track\n\n\n\nAlternatively, see the documentation for \nmpf_component_util.VideoCaptureMixin\n for a more concise way to use\n\nmpf_component_util.VideoCapture\n below.\n\n\nmpf_component_util.VideoCaptureMixin\n\n\nA mixin class that can be used to simplify the usage of \nmpf_component_util.VideoCapture\n.\n\nmpf_component_util.VideoCaptureMixin\n takes care of initializing a \nmpf_component_util.VideoCapture\n and\nperforming the reverse transform.\n\n\nThere are some requirements to properly use \nmpf_component_util.VideoCaptureMixin\n:\n\n\n\n\nThe component must extend \nmpf_component_util.VideoCaptureMixin\n.\n\n\nThe component must implement \nget_detections_from_video_capture(video_job, video_capture)\n.\n\n\nThe component must read the video using the \nmpf_component_util.VideoCapture\n\n that is passed in to \nget_detections_from_video_capture(video_job, video_capture)\n.\n\n\nThe component must NOT implement \nget_detections_from_video(video_job)\n.\n\n\nThe component must NOT call \nmpf_component_util.VideoCapture.reverse_transform\n.\n\n\n\n\nThe general pattern for using \nmpf_component_util.VideoCaptureMixin\n is as follows:\n\n\nclass MyComponent(mpf_component_util.VideoCaptureMixin):\n\n @staticmethod # Can also be a regular instance method or a class method\n def get_detections_from_video_capture(video_job, video_capture):\n # If frame index is not required, you can just loop over video_capture directly\n for frame_index, frame in enumerate(video_capture):\n # run_component_specific_algorithm is a placeholder for this example.\n # Replace run_component_specific_algorithm with your component's detection logic\n result_tracks = run_component_specific_algorithm(frame_index, frame)\n for track in result_tracks:\n # Alternatively, while iterating through the video, add tracks to a list. When done, return that list.\n yield track\n\n\n\nmpf_component_util.VideoCaptureMixin\n is a mixin class so it is designed in a way that does not prevent the subclass\nfrom extending other classes. 
If a component supports both videos and images, and it uses\n\nmpf_component_util.VideoCaptureMixin\n, it should also use\n\nmpf_component_util.ImageReaderMixin\n.\nFor example:\n\n\nclass MyComponent(mpf_component_util.VideoCaptureMixin, mpf_component_util.ImageReaderMixin):\n\n @staticmethod\n def get_detections_from_video_capture(video_job, video_capture):\n ...\n\n @staticmethod\n def get_detections_from_image_reader(image_job, image_reader):\n ...\n\n\n\nAudio API\n\n\ncomponent.get_detections_from_audio(audio_job)\n\n\nUsed to detect objects in an audio file.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_audio(self, audio_job):\n return [mpf_component_api.AudioTrack(...), ...]\n\n\n\nget_detections_from_audio\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\naudio_job\n\n\nmpf_component_api.AudioJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.AudioTrack\n\n\n\n\nmpf_component_api.AudioJob\n\n\nClass containing data used for detection of objects in an audio file.\nCurrently, audio files are not logically segmented, so a job will contain the entirety of the audio file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.mp3\".\n\n \n\n \n\n \nstart_time\n\n \nint\n\n \nThe time (0-based index, in milliseconds) associated with the beginning of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \nstop_time\n\n \nint\n\n \nThe time (0-based index, in milliseconds) associated with the end of the segment of the audio file that should be processed to look for detections.\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pairs:\n \n\n \nDURATION\n : length of audio file in milliseconds\n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.AudioTrack\n\n \nAn \nmpf_component_api.AudioTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. 
This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.AudioTrack\n\n\nClass used to store the location of detected objects in an audio file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, start_time, stop_time, confidence, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nstart_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event started.\n\n\n\n\n\n\nstop_time\n\n\nint\n\n\nThe time (0-based index, in ms) when the audio detection event stopped.\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\n\n\nNOTE:\n Currently, \nmpf_component_api.AudioTrack.detection_properties\n do not show up in the JSON output object,\nnor are they used by the WFM in any way.\n\n\n\n\nGeneric API\n\n\ncomponent.get_detections_from_generic(generic_job)\n\n\nUsed to detect objects in files that are not video, image, or audio files. Such files are of the UNKNOWN type and\nhandled generically.\n\n\n\n\nMethod Definition:\n\n\n\n\nclass MyComponent:\n def get_detections_from_generic(self, generic_job):\n return [mpf_component_api.GenericTrack(...), ...]\n\n\n\nget_detections_from_generic\n, like all get_detections_from_* methods, can be implemented either as an instance method,\na static method, or a class method.\n\n\n\n\nParameters:\n\n\n\n\n\n\n\n\n\n\nParameter\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\ngeneric_job\n\n\nmpf_component_api.GenericJob\n\n\nObject containing details about the work to be performed.\n\n\n\n\n\n\n\n\n\n\nReturns: An iterable of \nmpf_component_api.GenericTrack\n\n\n\n\nmpf_component_api.GenericJob\n\n\nClass containing data used for detection of objects in a file that isn't a video, image, or audio file. The file is not\nlogically segmented, so a job will contain the entirety of the file.\n\n\n\n\nMembers:\n\n\n\n\n\n \n\n \n\n \nMember\n\n \nData Type\n\n \nDescription\n\n \n\n \n\n \n\n \n\n \njob_name\n\n \nstr\n\n \nA specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes.\n\n \n\n \n\n \ndata_uri\n\n \nstr\n\n \nThe URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.txt\".\n\n \n\n \n\n \njob_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the \nComponent Descriptor Reference\n. 
Values are determined when creating a pipeline or when submitting a job.\n \n\n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n\n \n\n \nmedia_properties\n\n \ndict[str, str]\n\n \n\n Contains a dict with keys and values of type \nstr\n of metadata about the media associated with the job.\n \n\n Includes the following key-value pair:\n \n\n \nMIME_TYPE\n : the MIME type of the media\n\n \n\n \n\n \n\n \nfeed_forward_track\n\n \nNone\n or \nmpf_component_api.GenericTrack\n\n \nAn \nmpf_component_api.GenericTrack\n from the previous pipeline stage. Provided when feed forward is enabled. See \nFeed Forward Guide\n.\n\n \n\n \n\n\n\n\n\nJob properties can also be set through environment variables prefixed with \nMPF_PROP_\n. This allows\nusers to set job properties in their\n\ndocker-compose files.\n\nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).\n\n\nmpf_component_api.GenericTrack\n\n\nClass used to store the location of detected objects in a file that is not a video, image, or audio file.\n\n\n\n\nConstructor:\n\n\n\n\ndef __init__(self, confidence=-1.0, detection_properties=None):\n ...\n\n\n\n\n\nMembers:\n\n\n\n\n\n\n\n\n\n\nMember\n\n\nData Type\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\nconfidence\n\n\nfloat\n\n\nRepresents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0.\n\n\n\n\n\n\ndetection_properties\n\n\ndict[str, str]\n\n\nA dict with keys and values of type \nstr\n containing optional additional information about the detected object. For best practice, keys should be in all CAPS.\n\n\n\n\n\n\n\n\nHow to Report Errors\n\n\nThe following is an example of how to throw an exception:\n\n\nimport mpf_component_api as mpf\n\n...\nraise mpf.DetectionError.MISSING_PROPERTY.exception(\n 'The REALLY_IMPORTANT property must be provided as a job property.')\n\n\n\nThe Python Batch Component API supports all of the same error types\nlisted \nhere\n for the C++ Batch Component API. Be sure to omit\nthe \nMPF_\n prefix. You can replace the \nMISSING_PROPERTY\n part in the above code with any other error type. When\ngenerating an exception, choose the type that best describes your error.\n\n\nPython Component Build Environment\n\n\nAll Python components must work with CPython 3.12. Also, Python components\nmust work with the Linux version that is used by the OpenMPF Component\nExecutable. At this writing, OpenMPF runs on\nUbuntu 20.04 (kernel version 5.13.0-30). Pure Python code should work on any\nOS, but incompatibility issues can arise when using Python libraries that\ninclude compiled extension modules. Python libraries are typically distributed\nas wheel files. The wheel format requires that the file name follows the pattern\nof \n<distribution>-<version>-<python tag>-<abi tag>-<platform tag>.whl\n.\n\n<python tag>-<abi tag>-<platform tag>\n are called\n\ncompatibility tags\n. For example,\n\nmpf_component_api\n is pure Python, so the name of its wheel file is\n\nmpf_component_api-0.1-py3-none-any.whl\n. 
\npy3\n means it will work with any\nPython 3 implementation because it does not use any implementation-specific\nfeatures. \nnone\n means that it does not use the Python ABI. \nany\n means it will\nwork on any platform.\n\n\nThe acceptable Python version tags are:\n\n\n\n\ncp312\n (or lower)\n\n\npy312\n (or lower)\n\n\n\n\nThe \nONLY\n acceptable ABI tags are:\n\n\n\n\ncp312\n\n\nabi3\n\n\nnone\n\n\n\n\nThe acceptable platform tags are:\n\n\n\n\nany\n\n\nlinux_x86_64\n\n\nmanylinux1_x86_64\n\n\nmanylinux2010_x86_64\n\n\nmanylinux2014_x86_64\n\n\nmanylinux_2_5_x86_64\n through \nmanylinux_2_39_x86_64\n\n\n\n\nThe full list of compatible tags can be listed by running: \npip3 debug --verbose\n\n\nComponents should be supplied as a tar file, which includes not only the component library, but any other libraries or\nfiles needed for execution. This includes all other non-standard libraries used by the component\n(aside from the standard Python libraries), and any configuration or data files.\n\n\nComponent Development Best Practices\n\n\nSingle-threaded Operation\n\n\nImplementations are encouraged to operate in single-threaded mode. OpenMPF will parallelize components through\nmultiple instantiations of the component, each running as a separate service.\n\n\nStateless Behavior\n\n\nOpenMPF components should be stateless in operation and give identical output for a provided input\n(i.e. when processing the same job).\n\n\nLogging\n\n\nIt is recommended that components use Python's built-in\n\nlogging\n module.\n The component should\n\nimport logging\n and call \nlogging.getLogger('')\n to get a logger instance.\nThe component should not configure logging itself. The Component Executor will configure the\n\nlogging\n module for the component. The logger will write log messages to standard error and\n\n${MPF_LOG_PATH}/${THIS_MPF_NODE}/log/.log\n. Note that multiple instances of the\nsame component can log to the same file. Also, logging content can span multiple lines.\n\n\nThe following log levels are supported: \nFATAL, ERROR, WARN, INFO, DEBUG\n.\nThe \nLOG_LEVEL\n environment variable can be set to one of the log levels to change the logging\nverbosity. When \nLOG_LEVEL\n is absent, \nINFO\n is used.\n\n\nThe format of the log messages is:\n\n\nDATE TIME LEVEL [SOURCE_FILE:LINE_NUMBER] - MESSAGE\n\n\n\nFor example:\n\n\n2018-05-03 14:41:11,703 INFO [test_component.py:44] - Logged message",
"title": "Python Batch Component API"
},
{
@@ -1067,9 +1072,19 @@
},
{
"location": "/Python-Batch-Component-API/index.html#mpf_component_apivideojob",
- "text": "Class containing data used for detection of objects in a video file. Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n job_name \n str \n A specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes. \n \n \n data_uri \n str \n The URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\". \n \n \n start_frame \n int \n The first frame number (0-based index) of the video that should be processed to look for detections. \n \n \n stop_frame \n int \n The last frame number (0-based index) of the video that should be processed to look for detections. \n \n \n job_properties \n dict[str, str] \n \n Contains a dict with keys and values of type str which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the Component Descriptor Reference . Values are determined when creating a pipeline or when submitting a job.\n \n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n \n \n media_properties \n dict[str, str] \n \n Contains a dict with keys and values of type str of metadata about the media associated with the job.\n \n Includes the following key-value pairs:\n \n DURATION : length of video in milliseconds \n FPS : frames per second (averaged for variable frame rate video) \n FRAME_COUNT : the number of frames in the video \n MIME_TYPE : the MIME type of the media \n FRAME_WIDTH : the width of a frame in pixels \n FRAME_HEIGHT : the height of a frame in pixels \n HAS_CONSTANT_FRAME_RATE : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined \n \n May include the following key-value pair:\n \n ROTATION : A floating point value in the interval [0.0, 360.0) indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction. \n \n \n \n \n feed_forward_track \n None or mpf_component_api.VideoTrack \n An mpf_component_api.VideoTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide . \n \n IMPORTANT: FRAME_INTERVAL is a common job property that many components support.\nFor frame intervals greater than 1, the component must look for detections starting with the first\nframe, and then skip frames as specified by the frame interval, until or before it reaches the stop frame.\nFor example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component\nmust look for objects in frames numbered 0, 2, 4, 6, ..., 98. Job properties can also be set through environment variables prefixed with MPF_PROP_ . This allows\nusers to set job properties in their docker-compose files. \nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).",
+ "text": "Class containing data used for detection of objects in a video file. Contains at most one feed-forward track. Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n job_name \n str \n A specific name given to the job by the OpenMPF framework. This value may be used, for example, for logging and debugging purposes. \n \n \n data_uri \n str \n The URI of the input media file to be processed. Currently, this is a file path. For example, \"/opt/mpf/share/remote-media/test-file.avi\". \n \n \n start_frame \n int \n The first frame number (0-based index) of the video that should be processed to look for detections. \n \n \n stop_frame \n int \n The last frame number (0-based index) of the video that should be processed to look for detections. \n \n \n job_properties \n dict[str, str] \n \n Contains a dict with keys and values of type str which represent the property name and the property value. The key corresponds to the property name specified in the component descriptor file described in the Component Descriptor Reference . Values are determined when creating a pipeline or when submitting a job.\n \n Note: The job_properties dict may not contain the full set of job properties. For properties not contained in the dict, the component must use a default value.\n \n \n \n media_properties \n dict[str, str] \n \n Contains a dict with keys and values of type str of metadata about the media associated with the job.\n \n Includes the following key-value pairs:\n \n DURATION : length of video in milliseconds \n FPS : frames per second (averaged for variable frame rate video) \n FRAME_COUNT : the number of frames in the video \n MIME_TYPE : the MIME type of the media \n FRAME_WIDTH : the width of a frame in pixels \n FRAME_HEIGHT : the height of a frame in pixels \n HAS_CONSTANT_FRAME_RATE : set to true if the video has a constant frame rate; otherwise, omitted or set to false if the video has variable frame rate or the type of frame rate cannot be determined \n \n May include the following key-value pair:\n \n ROTATION : A floating point value in the interval [0.0, 360.0) indicating the orientation of the media in degrees in the counter-clockwise direction. In order to view the media in the upright orientation, it must be rotated the given number of degrees in the clockwise direction. \n \n \n \n \n feed_forward_track \n None or mpf_component_api.VideoTrack \n An optional mpf_component_api.VideoTrack from the previous pipeline stage. Provided when feed forward is enabled. See Feed Forward Guide . \n \n IMPORTANT: FRAME_INTERVAL is a common job property that many components support.\nFor frame intervals greater than 1, the component must look for detections starting with the first\nframe, and then skip frames as specified by the frame interval, until or before it reaches the stop frame.\nFor example, given a start frame of 0, a stop frame of 99, and a frame interval of 2, then the detection component\nmust look for objects in frames numbered 0, 2, 4, 6, ..., 98. Job properties can also be set through environment variables prefixed with MPF_PROP_ . This allows\nusers to set job properties in their docker-compose files. \nThese will take precedence over all other property types (job, algorithm, media, etc). It is not\npossible to change the value of properties set via environment variables at runtime and therefore\nthey should only be used to specify properties that will not change throughout the entire lifetime\nof the service (e.g. Docker container).",
"title": "mpf_component_api.VideoJob"
},
+ {
+ "location": "/Python-Batch-Component-API/index.html#componentget_detections_from_all_video_tracksvideo_job",
+ "text": "EXPERIMENTAL: This feature is not fully implemented. Similar to component.get_detections_from_video(video_job) , but able to process multiple feed-forward tracks at once.\nRefer to the Feed Forward All Tracks section of the Feed Forward Guide\nto learn about the FEED_FORWARD_ALL_TRACKS property and how it affects feed-forward behavior. Known limitation: No multi-track mpf_component_util.VideoCapture support. Method Definition: class MyComponent:\n def get_detections_from_all_video_tracks(self, video_job):\n return [mpf_component_api.VideoTrack(...), ...] get_detections_from_all_video_tracks , like all get_detections_from_* methods, can be implemented either as an\ninstance method, a static method, or a class method. Parameters: Parameter Data Type Description video_job mpf_component_api.AllVideoTracksJob Object containing details about the work to be performed. Returns: An iterable of mpf_component_api.VideoTrack",
+ "title": "component.get_detections_from_all_video_tracks(video_job)"
+ },
+ {
+ "location": "/Python-Batch-Component-API/index.html#mpf_component_apiallvideotracksjob",
+ "text": "EXPERIMENTAL: This feature is not fully implemented. Class containing data used for detection of objects in a video file. May contain multiple feed-forward tracks. Members are the same as mpf_component_api.VideoJob with the exception that feed_forward_track is replaced by feed_forward_tracks . Members: \n \n \n Member \n Data Type \n Description \n \n \n \n \n feed_forward_tracks \n None or List[mpf_component_api.VideoTrack] \n An optional list of mpf_component_api.VideoTrack objects from the previous pipeline stage. Provided when feed forward is enabled and FEED_FORWARD_ALL_TRACKS is true. See Feed Forward Guide .",
+ "title": "mpf_component_api.AllVideoTracksJob"
+ },
{
"location": "/Python-Batch-Component-API/index.html#mpf_component_apivideotrack",
"text": "Class used to store the location of detected objects in a video file. Constructor: def __init__(self, start_frame, stop_frame, confidence=-1.0, frame_locations=None, detection_properties=None):\n ... Members: Member Data Type Description start_frame int The first frame number (0-based index) that contained the detected object. stop_frame int The last frame number (0-based index) that contained the detected object. confidence float Represents the \"quality\" of the detection. The range depends on the detection algorithm. 0.0 is lowest quality. Higher values are higher quality. Using a standard range of [0.0 - 1.0] is advised. If the component is unable to supply a confidence value, it should return -1.0. frame_locations dict[int, mpf_component_api.ImageLocation] A dict of individual detections. The key for each entry is the frame number where the detection was generated, and the value is a mpf_component_api.ImageLocation calculated as if that frame was a still image. Note that a key-value pair is not required for every frame between the track start frame and track stop frame. detection_properties dict[str, str] A dict with keys and values of type str containing optional additional information about the detected object. For best practice, keys should be in all CAPS. NOTE: Currently, mpf_component_api.VideoTrack.detection_properties do not show up in the JSON output object or\nare used by the WFM in any way. Example: A component that performs generic object classification can add an entry to detection_properties where the key is CLASSIFICATION and the value is the type of object detected. track = mpf_component_api.VideoTrack(0, 1)\ntrack.frame_locations[0] = mpf_component_api.ImageLocation(0, 0, 100, 100, 0.75, {'CLASSIFICATION': 'backpack'})\ntrack.frame_locations[1] = mpf_component_api.ImageLocation(10, 10, 110, 110, 0.95, {'CLASSIFICATION': 'backpack'})\ntrack.confidence = max(il.confidence for il in track.frame_locations.itervalues())",
diff --git a/docs/site/sitemap.xml b/docs/site/sitemap.xml
index 5244a1a861f1..7951b12c741f 100644
--- a/docs/site/sitemap.xml
+++ b/docs/site/sitemap.xml
@@ -2,162 +2,162 @@
[Regenerated sitemap: the <lastmod> value for all 32 pages listed in sitemap.xml changes from 2025-09-10 to 2025-09-19; the <loc> paths and <changefreq>daily</changefreq> entries are unchanged. No newline at end of file.]