
Initial PoseForge development, using spotlight-tools v0.1.1

@sibocw released this 28 Oct 11:06

This is the version of PoseForge in its initial development stage (up until 2025-10-23). It relies on spotlight-tools v0.1.1, which predates a major refactoring of the Spotlight postprocessing pipeline that merged part of PoseForge's functionality into spotlight-tools.

This version of PoseForge and v0.1.1 of spotlight-tools are meant to provide snapshots of the two codebases that are compatible with one another and can reproduce the results of PoseForge's initial development up to this point.

Data from poseforge/bulk_data for this version is backed up at smb://sv-nas1.rcp.epfl.ch/upramdya/data/SW/poseforge/initial_development_oct2025 (see readme.txt in that directory).

Use of Spotlight data in this version of PoseForge

  1. I postprocessed Spotlight recordings acquired on 2025-06-13 using spotlight-control/tools/src/spotlight_tools/scripts/postprocess_recording.py (spotlight-control v0.1.1). These data are backed up at smb://sv-nas1.rcp.epfl.ch/upramdya/data/SW/poseforge/initial_development_oct2025/spotlight_20250613_trials, including the postprocessing outputs. In this process:
    1. Behavior recordings are decoded from pseudo-BGR into MKV videos at the original recording dimension. Muscle images are warped to be consistent with behavior images using the Spotlight calibration parameters, and they are saved as individual TIFF files.
    2. 2D pose estimation is run using the SLEAP model trained by Tommy for his setup. The model is backed up at smb://sv-nas1.rcp.epfl.ch/upramdya/data/SW/poseforge/initial_development_oct2025/spotlight_20250613_trials/sleap_model/tl_bottom_cam_1024_250327_002024.tar.gz. This model detects 37 keypoints on the fly, but because it's trained only on Tommy's data, the predictions are not great.
  2. Nonetheless, I used the predicted 2D poses (where available) to detect the thorax position and orientation of the fly and align the behavior images: I rotated the images so that the fly always faces up and cropped them to a 900x900px square centered at the thorax. This is done in poseforge/spotlight/scripts/crop_and_align_frames.py. It generated data under poseforge/bulk_data/behavior_images/spotlight_aligned_and_cropped/<recording_trial_name>/all/ (a JPEG file for the cropped image and a JSON file with cropping metadata for each frame). The aligned_and_cropped folder is backed up at smb://sv-nas1.rcp.epfl.ch/upramdya/data/SW/poseforge/initial_development_oct2025/behavior_images/spotlight_aligned_and_cropped.tar.gz.
  3. I manually labeled frames where the fly is on the ceiling by making a flipped folder and a not_flipped folder under poseforge/bulk_data/behavior_images/spotlight_aligned_and_cropped/<recording_trial_name>/manual_label/ and copying or symlinking the appropriate images from all/ into these folders. Then, I used poseforge/spotlight/scripts/train_flip_detection_model.py to train a flip detection model. Finally, I ran poseforge/spotlight/scripts/detect_flipped_flies.py to generate model predictions; this symlinks data from all/ into a new model_prediction/ folder, which also contains flipped and not_flipped subfolders.
    • A backup of the spotlight_aligned_and_cropped folder is at smb://sv-nas1.rcp.epfl.ch/upramdya/data/SW/poseforge/initial_development_oct2025/behavior_images/spotlight_aligned_and_cropped.tar.gz (though in the backup, the manual labels are saved as TXT files containing filenames instead of folders containing symlinks).
  4. I then trained PoseForge models on the data from poseforge/bulk_data/behavior_images/spotlight_aligned_and_cropped/<recording_trial_name>/model_prediction/.
  5. After the models were trained, I extracted muscle traces from the Spotlight data using src/poseforge/spotlight/scripts/map_segmentation_to_muscle.py. This script expects the warped muscle images from the Spotlight data to be at the original recording dimensions of the behavior images (a behavior that will change in later versions).

Plan for refactoring

  1. Moving forward, I will move the fly alignment step (rotation and cropping) from poseforge.spotlight to spotlight-tools under spotlight-control. The postprocessing procedure in spotlight-tools will then save the cropped behavior frames directly as an MKV file. Within poseforge.spotlight, I think it still makes sense to convert the video to individual JPEGs for speed during model training. The flip detection training/inference code will therefore need to be rewritten.
  2. Instead of Tommy's 37-keypoint SLEAP model, I will train a tiny 3-keypoint, single-animal SLEAP model tracking only the neck, the thorax, and the abdomen tip. These three keypoints are sufficient to align the fly and make the crop.
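A sketch of how the three keypoints yield the fly's heading for alignment; the fallback to the abdomen-to-thorax axis when the neck is undetected is my assumption about why the third keypoint is useful, not a stated design goal:

```python
import numpy as np

def fly_heading_deg(thorax, neck=None, abdomen=None):
    """Heading of the fly in degrees (0 = facing up in image coordinates,
    where y grows downward), estimated from the thorax-to-neck axis.
    Falls back to the abdomen-to-thorax axis if the neck is missing."""
    if neck is not None:
        dx, dy = neck[0] - thorax[0], neck[1] - thorax[1]
    elif abdomen is not None:
        dx, dy = thorax[0] - abdomen[0], thorax[1] - abdomen[1]
    else:
        raise ValueError("need at least one of neck/abdomen to estimate heading")
    return float(np.degrees(np.arctan2(dx, -dy)))
```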

Reproducibility for the 3-keypoint SLEAP model

The dependency is a bit circular:

  • The training videos for the 3-keypoint SLEAP model are the full-size behavior videos generated by the old spotlight-tools postprocessing pipeline.
  • The purpose of the 3 keypoint positions is to tell us the fly's position and orientation, so that we can align and crop the behavior frames during postprocessing.
  • Therefore, the full-size behavior video will no longer be saved in the new postprocessing pipeline.

... so technically, spotlight-tools v0.1.1 is required to reproduce the 3-keypoint SLEAP model. In practice, however, the full-size videos are backed up at smb://sv-nas1.rcp.epfl.ch/upramdya/data/SW/poseforge/initial_development_oct2025/spotlight_20250613_trials/20250613-fly1b-*/processed/behavior_video.mkv.