dlclive/benchmark_pytorch.py (4 additions, 4 deletions)

This change moves the save_dir parameter of benchmark() earlier in the signature, next to the other core arguments, and relocates its docstring entry to match the new order.
@@ -90,6 +90,7 @@ def benchmark(
     model_type: str,
     device: str,
     single_animal: bool,
+    save_dir=None,
     precision: str="FP32",
     display=True,
     pcutoff=0.5,
@@ -98,7 +99,6 @@ def benchmark(
     cropping=None, # Adding cropping to the function parameters
     dynamic=(False, 0.5, 10),
     save_poses=False,
-    save_dir=None,
     draw_keypoint_names=False,
     cmap="bmy",
     get_sys_info=True,
@@ -119,6 +119,9 @@ def benchmark(
         Device to run the model on ('cpu' or 'cuda').
     single_animal: bool
         Whether the video contains only one animal (True) or multiple animals (False).
+    save_dir : str, optional
+        Directory to save output data and labeled video.
+        If not specified, will use the directory of video_path, by default None
     precision : str, optional, default='FP32'
         Precision type for the model ('FP32' or 'FP16').
     display : bool, optional, default=True
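The added save_dir docstring describes a fallback to the directory of video_path when no directory is given. A minimal sketch of that behavior, using a hypothetical helper name that is not part of the diff:

from pathlib import Path
from typing import Optional

def resolve_save_dir(video_path: str, save_dir: Optional[str] = None) -> Path:
    # Hypothetical helper illustrating the documented default: when save_dir
    # is None, outputs are written next to the input video.
    return Path(save_dir) if save_dir is not None else Path(video_path).parent

For example, resolve_save_dir("videos/test.mp4") yields Path("videos"), while passing an explicit save_dir returns that directory unchanged.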
@@ -135,9 +138,6 @@ def benchmark(
         Parameters for dynamic cropping. If the state is true, then dynamic cropping will be performed. That means that if an object is detected (i.e. any body part > detectiontreshold), then object boundaries are computed according to the smallest/largest x position and smallest/largest y position of all body parts. This window is expanded by the margin and from then on only the posture within this crop is analyzed (until the object is lost, i.e. <detection treshold). The current position is utilized for updating the crop window for the next frame (this is why the margin is important and should be set large enough given the movement of the animal).
     save_poses : bool, optional, default=False
         Whether to save the detected poses to CSV and HDF5 files.
-    save_dir : str, optional
-        Directory to save output data and labeled video.
-        If not specified, will use the directory of video_path, by default None
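The dynamic parameter's docstring above describes the dynamic-cropping logic: body parts above the detection threshold define a bounding box, which is padded by a margin and reused as the crop window for the next frame. A minimal illustrative sketch of that bounding-box step, with hypothetical names (not the dlclive implementation):

import numpy as np

def dynamic_crop_bounds(pose, threshold=0.5, margin=10, frame_size=(640, 480)):
    # pose: (n_bodyparts, 3) array of (x, y, confidence); names are hypothetical.
    detected = pose[pose[:, 2] > threshold]
    if len(detected) == 0:
        return None  # object "lost": no body part exceeds the detection threshold
    w, h = frame_size
    x0 = max(int(detected[:, 0].min()) - margin, 0)
    x1 = min(int(detected[:, 0].max()) + margin, w)
    y0 = max(int(detected[:, 1].min()) - margin, 0)
    y1 = min(int(detected[:, 1].max()) + margin, h)
    return x0, x1, y0, y1  # crop window to apply to the next frame

As the docstring notes, the margin must be set large enough that the animal's movement between frames stays inside the reused window.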