This is the version of PoseForge in its initial development stage (up until 2025-10-23). It relies on spotlight-tools v0.1.1 (before a major refactoring of the Spotlight postprocessing pipeline that merged part of what PoseForge does into spotlight-tools).
This version of poseforge and v0.1.1 of spotlight-control are meant to provide snapshots of the two code bases that are compatible with one another and can reproduce the results from PoseForge's initial development until now.
Data from poseforge/bulk_data for this version is backed up at smb://sv-nas1.rcp.epfl.ch/upramdya/data/SW/poseforge/initial_development_oct2025 (see readme.txt under it).
Use of Spotlight data in this version of PoseForge
- I postprocessed Spotlight recordings acquired on 2025-06-13 using spotlight-control/tools/src/spotlight_tools/scripts/postprocess_recording.py (spotlight-control v0.1.1). These data are backed up at smb://sv-nas1.rcp.epfl.ch/upramdya/data/SW/poseforge/initial_development_oct2025/spotlight_20250613_trials, including the postprocessing outputs. In this process,
  - Behavior recordings are decoded from pseudo-BGR into MKV videos at the original recording dimensions.
  - Muscle images are warped to be consistent with the behavior images using the Spotlight calibration parameters, and they are saved as individual TIFF files.
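The warping step above can be illustrated with a minimal sketch, assuming the calibration parameters amount to a 3x3 homography between the muscle and behavior cameras (the real pipeline would likely use cv2.warpPerspective; this nearest-neighbor version only shows the idea):

```python
import numpy as np

def warp_nn(img, H, out_shape):
    """Nearest-neighbor perspective warp of `img` by the 3x3 homography `H`,
    a minimal stand-in for cv2.warpPerspective. Output pixels that map
    outside `img` are left at 0."""
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    # Inverse-map each output pixel back into the source image.
    src = np.linalg.inv(H) @ np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    valid = (0 <= sx) & (sx < img.shape[1]) & (0 <= sy) & (sy < img.shape[0])
    out = np.zeros(out_shape, dtype=img.dtype)
    out.reshape(-1)[valid] = img[sy[valid], sx[valid]]
    return out
```

With the identity homography the image passes through unchanged; a pure-translation homography shifts it, which is easy to eyeball when sanity-checking a calibration matrix.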
- 2D pose estimation is run using the SLEAP model trained by Tommy for his setup. The model is backed up at smb://sv-nas1.rcp.epfl.ch/upramdya/data/SW/poseforge/initial_development_oct2025/spotlight_20250613_trials/sleap_model/tl_bottom_cam_1024_250327_002024.tar.gz. This model detects 37 keypoints on the fly, but because it was trained only on Tommy's data, the predictions are not great.
- Nonetheless, I used the predicted 2D poses (where available) to detect the thorax position and orientation of the fly and align the behavior images: I rotated the images so that the fly always faces up and cropped them to a 900x900 px square centered at the thorax. This is done in poseforge/spotlight/scripts/crop_and_align_frames.py. It generated data under poseforge/bulk_data/behavior_images/spotlight_aligned_and_cropped/<recording_trial_name>/all/ (a JPEG file for the cropped image and a JSON file with cropping metadata for each frame).
- I manually labeled frames where the fly is on the ceiling by making a flipped folder and a not_flipped folder under poseforge/bulk_data/behavior_images/spotlight_aligned_and_cropped/<recording_trial_name>/manual_label/ and copying or symlinking the appropriate images from all/ into these folders. Then I used poseforge/spotlight/scripts/train_flip_detection_model.py to train a flip detection model. Finally, I ran poseforge/spotlight/scripts/detect_flipped_flies.py to generate model predictions; this symlinks data from all/ into a new model_prediction/ folder, which also contains flipped and not_flipped subfolders.
- A backup of the spotlight_aligned_and_cropped folder is at smb://sv-nas1.rcp.epfl.ch/upramdya/data/SW/poseforge/initial_development_oct2025/behavior_images/spotlight_aligned_and_cropped.tar.gz (though the manual labels are saved as TXT files listing filenames instead of folders containing symlinks).
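The rotate-and-crop alignment described above boils down to a single affine transform per frame. A minimal sketch, assuming image coordinates with y pointing down and using only the thorax and neck keypoints (function names are mine, not the script's):

```python
import numpy as np

CROP = 900  # crop size in pixels, as used in crop_and_align_frames.py

def alignment_matrix(thorax, neck, crop=CROP):
    """2x3 affine matrix that rotates about the thorax so the fly faces up
    (neck directly above thorax) and places the thorax at the center of a
    crop x crop output frame."""
    thorax = np.asarray(thorax, float)
    neck = np.asarray(neck, float)
    v = neck - thorax
    # In image coordinates (y down), "up" corresponds to angle -pi/2;
    # rotate the thorax->neck vector onto it.
    theta = -np.pi / 2 - np.arctan2(v[1], v[0])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = np.array([crop / 2, crop / 2]) - R @ thorax
    return np.hstack([R, t[:, None]])  # shape (2, 3)

def apply_affine(M, pt):
    """Apply a 2x3 affine matrix to a 2D point."""
    return M[:, :2] @ np.asarray(pt, float) + M[:, 2]
```

The resulting matrix can be fed to something like cv2.warpAffine(frame, M, (CROP, CROP)) to produce the aligned, cropped image, and apply_affine maps keypoints into the cropped coordinate frame.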
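The model_prediction/ symlink layout described above can be materialized with a small stdlib-only helper (a sketch; the actual interface of detect_flipped_flies.py may differ):

```python
from pathlib import Path

def write_predictions(trial_dir, predictions):
    """Write flip-detection predictions as the symlink layout used under
    <recording_trial_name>/: model_prediction/{flipped,not_flipped}/<frame>.jpg
    pointing back to ../../all/<frame>.jpg.

    `predictions` maps a frame filename to True if the fly is flipped."""
    trial = Path(trial_dir)
    for label in ("flipped", "not_flipped"):
        (trial / "model_prediction" / label).mkdir(parents=True, exist_ok=True)
    for name, flipped in predictions.items():
        label = "flipped" if flipped else "not_flipped"
        link = trial / "model_prediction" / label / name
        if not link.exists():
            # Relative link so the layout survives moving the trial folder.
            link.symlink_to(Path("..") / ".." / "all" / name)
```

Relative symlinks keep the folder relocatable, which matters when the whole spotlight_aligned_and_cropped tree is archived and restored elsewhere (the backup instead flattens these into TXT filename lists, since symlinks do not survive every transport).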
- I carried on training PoseForge models from poseforge/bulk_data/behavior_images/spotlight_aligned_and_cropped/<recording_trial_name>/model_prediction/.
- After the models were trained, I extracted muscle traces from the Spotlight data using src/poseforge/spotlight/scripts/map_segmentation_to_muscle.py. This script expects the warped muscle images from the Spotlight data to be at the original recording dimensions of the behavior images (a behavior that will change in later versions).
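The core of mapping a segmentation to muscle traces is averaging each warped muscle frame within each mask, which only works if frames and masks share the same pixel grid; that is why the script needs the muscle images at the behavior recording's dimensions. A minimal sketch under that assumption (the real script may weight or normalize differently):

```python
import numpy as np

def extract_muscle_traces(frames, masks):
    """Per-muscle fluorescence traces: mean pixel intensity of each warped
    muscle frame within each segmentation mask.

    frames: (T, H, W) array of warped muscle images.
    masks:  dict mapping muscle name -> (H, W) boolean mask on the SAME grid.
    Returns dict mapping muscle name -> (T,) trace."""
    frames = np.asarray(frames, dtype=float)
    # Boolean-mask indexing over (H, W) yields a (T, n_pixels) view per muscle.
    return {name: frames[:, m].mean(axis=1) for name, m in masks.items()}
```

A mask defined on a differently sized grid would raise an indexing error here, which is the failure mode the dimension requirement above avoids.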
Plan for refactoring
- Moving on, I will move the fly alignment step (rotation and cropping) from poseforge.spotlight to spotlight-tools under spotlight-control. The postprocessing procedure in spotlight-tools will therefore save the cropped behavior frames directly as an MKV file. Within poseforge.spotlight, I think it still makes sense to convert the video to individual JPEGs for speed during model training. The flip detection training/inference code therefore needs to be rewritten.
- Instead of Tommy's 37-keypoint SLEAP model, I will train a tiny 3-keypoint, single-animal SLEAP model tracking only the neck, the thorax, and the abdomen tip. This is already enough to align the fly and make the crop.
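The MKV-to-JPEG expansion could be done with ffmpeg; a minimal sketch (the quality setting and output naming are my assumptions, not a decided interface):

```python
from pathlib import Path

def video_to_jpegs_cmd(video, out_dir, quality=2):
    """Build the ffmpeg command that expands a cropped behavior MKV into
    per-frame JPEGs for fast random access during training.
    -qscale:v ranges 2-31 for JPEG; lower means better quality."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    return ["ffmpeg", "-i", str(video), "-qscale:v", str(quality),
            str(out / "frame_%06d.jpg")]
```

The returned list would be executed with subprocess.run(cmd, check=True); keeping command construction separate from execution makes the step easy to log and dry-run.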
Reproducibility for the 3-keypoint SLEAP model
The dependency is a bit circular:
- The training videos for the 3-keypoint SLEAP model are the full-size behavior videos generated by the old spotlight-tools postprocessing pipeline.
- The purpose of having the 3 keypoint positions is that they tell us the fly's position and orientation, so we can align and crop the behavior frames during postprocessing.
- Therefore, the full-size behavior video will no longer be saved in the new postprocessing pipeline.
... so technically spotlight-tools v0.1.1 is required to reproduce the 3-keypoint SLEAP model. However, in reality, the full-size videos are backed up at smb://sv-nas1.rcp.epfl.ch/upramdya/data/SW/poseforge/initial_development_oct2025/spotlight_20250613_trials/20250613-fly1b-*/processed/behavior_video.mkv.