
@vedantvakharia (Contributor)

Summary

This PR implements Phase 1 of the skeleton visualization feature requested in #693, enabling researchers to visualize skeletal connections between tracked keypoints without sharing sensitive video content.

Key Achievement: Skeleton connections now render as lines (using napari's Vectors layer), providing a clean, anatomically accurate visualization of animal poses.


Motivation and Context

Closes #693 (Phase 1)

Pose tracking datasets contain rich information about animal movement, but sharing and interpreting this data can be challenging:

  • Sharing original videos may violate privacy/ethical guidelines
  • Keypoint clouds alone are difficult to interpret without anatomical context
  • Existing tools either require video overlays or produce cluttered visualizations

This implementation solves these problems by adding a skeleton visualization layer that:

  1. Renders connections as clean, thin lines
  2. Works entirely with pose data (no video required)
  3. Integrates seamlessly with Movement's existing napari plugin
  4. Supports customizable skeleton structures for different species/experiments

Implementation Details

New Module: movement/napari/skeleton/

Core Components:

  1. PrecomputedRenderer (renderers/precomputed.py)

    • Pre-computes all skeleton vectors for smooth playback
    • Correct napari vector format: (N, 2, D+1), where N is the total number of vectors and D is the number of spatial dimensions
    • Handles coordinate transformation: Movement [x, y] → napari [t, y, x]
    • Gracefully skips connections with NaN keypoints
  2. SkeletonState (state.py)

    • Manages skeleton configuration (connections, colors, widths, segments)
    • Supports JSON embedding in NetCDF dataset attributes
    • Provides methods for adding/removing connections dynamically
    • Validates configuration against dataset structure
  3. Configuration I/O (config.py)

    • YAML export/import for sharing skeleton configs
    • Hex ↔ RGBA color conversion utilities
    • Validation against dataset keypoints
    • Configuration-to-arrays conversion for renderers
  4. Templates (templates.py)

    • Pre-defined skeleton templates (mouse, rat)
  5. Main API (__init__.py)

    • add_skeleton_layer(viewer, dataset, connections, **kwargs)
    • Returns napari Vectors layer (ensures line rendering)
    • Supports templates by name or custom configurations
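The hex ↔ RGBA conversion mentioned under config.py can be sketched as follows; the function name and signature here are illustrative, not the module's actual API:

```python
# Illustrative sketch of a hex -> RGBA utility like the one config.py
# provides. The name `hex_to_rgba` is an assumption for this example.
def hex_to_rgba(hex_color: str, alpha: float = 1.0) -> tuple[float, ...]:
    """Convert '#RRGGBB' to an (r, g, b, a) tuple of floats in [0, 1]."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    return (r, g, b, alpha)

print(hex_to_rgba("#FF0000"))  # (1.0, 0.0, 0.0, 1.0)
```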

Technical Highlights

Napari Vector Format (Critical):

# Each vector: [[t, y, x], [0, dy, dx]]
# Shape: (N, 2, 3) for 2D+time
# Shape: (N, 2, 4) for 3D+time

This format ensures:

  • Skeletons render as lines (Vectors layer), not solid shapes
  • Automatic time-slicing with napari's time slider
  • Proper coordinate system for napari visualization
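As a concrete illustration of this format, the following sketch (shapes and variable names are assumptions for the example, not taken from the PR) builds the (N, 2, 3) array for a single connection tracked over three frames of 2D data:

```python
import numpy as np

T = 3  # number of frames
# Start/end keypoint positions per frame, in Movement's [x, y] order
start_xy = np.array([[10.0, 20.0], [11.0, 21.0], [12.0, 22.0]])
end_xy = np.array([[15.0, 25.0], [16.0, 26.0], [17.0, 27.0]])

vectors = np.empty((T, 2, 3))
# Origin of each vector: [t, y, x] (note the x/y swap for napari)
vectors[:, 0, 0] = np.arange(T)
vectors[:, 0, 1] = start_xy[:, 1]
vectors[:, 0, 2] = start_xy[:, 0]
# Direction of each vector: [0, dy, dx] (no displacement along time)
vectors[:, 1, 0] = 0.0
vectors[:, 1, 1] = end_xy[:, 1] - start_xy[:, 1]
vectors[:, 1, 2] = end_xy[:, 0] - start_xy[:, 0]

print(vectors.shape)  # (3, 2, 3)
```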

NaN Handling:

  • Connections with NaN keypoints are skipped (not rendered)
  • No partial/broken lines in visualization
  • No NaN values propagate to napari layer
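A minimal sketch of this filtering step, assuming the flattened (N, 2, 3) vector array described above (this is not the PR's actual code):

```python
import numpy as np

# Toy vector array: one valid connection, two touched by NaN keypoints
vectors = np.array([
    [[0.0, 20.0, 10.0], [0.0, 5.0, 5.0]],      # valid
    [[1.0, np.nan, 11.0], [0.0, 5.0, 5.0]],    # NaN origin -> skipped
    [[2.0, 22.0, 12.0], [0.0, np.nan, 5.0]],   # NaN direction -> skipped
])
# Keep only vectors with no NaN in either origin or direction
valid = ~np.isnan(vectors).any(axis=(1, 2))
clean = vectors[valid]
print(clean.shape)  # (1, 2, 3)
```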

Multi-Individual Support:

  • Each individual gets its own skeleton
  • Vectors are flattened across frames, individuals, and connections
  • Properly colored per connection, not per individual
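The flattening across frames, individuals, and connections can be illustrated like this (dimension names and sizes are assumptions for the example):

```python
import numpy as np

T, I, C = 5, 2, 3  # frames, individuals, connections (assumed sizes)
# One (origin, direction) vector per frame/individual/connection
per_frame = np.zeros((T, I, C, 2, 3))
# Flatten everything into the single (N, 2, 3) array napari expects
flat = per_frame.reshape(-1, 2, 3)
print(flat.shape)  # (30, 2, 3)
```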

Features

Implemented in Phase 1

  • Skeleton rendering as lines (napari Vectors layer)
  • Pre-defined templates (mouse, rat)
  • Programmatic API: add_skeleton_layer()
  • 2D and 3D pose data support
  • Multiple individuals handling
  • NaN/missing keypoint handling
  • Configuration persistence (NetCDF + YAML)
  • Color coding by segment/connection
  • Customizable line widths
  • Full type hints and Google-style docstrings
  • Comprehensive test suite

Planned for Phase 2 & 3

Phase 2: Multi-Renderer Support

  • CachedRenderer for medium datasets (LRU caching)
  • GPUDirectRenderer for large datasets (vispy shaders)
  • Automatic renderer recommendation
  • Renderer switching UI

Phase 3: Polish & Advanced Features

  • Interactive connection editor
  • Dock widget UI for skeleton configuration
  • Additional species templates
  • Performance optimizations

Usage Example

import napari
from movement.io import load_poses
from movement.napari.skeleton import add_skeleton_layer

# Load pose data
dataset = load_poses.from_dlc_file("path/to/poses.h5")

# Create napari viewer
viewer = napari.Viewer()

# Add keypoints (optional)
from movement.napari.convert import ds_to_napari_layers
points_data, _, properties = ds_to_napari_layers(dataset)
viewer.add_points(points_data[:, 1:], properties=properties, size=5)

# Add skeleton using template
skeleton_layer = add_skeleton_layer(viewer, dataset, connections="mouse")

# Or use custom configuration
custom_config = {
    "keypoints": ["nose", "ear_left", "ear_right", "tail_base"],
    "connections": [
        {"start": "nose", "end": "ear_left", "color": "#FF0000", "width": 2.0, "segment": "head"},
        {"start": "nose", "end": "ear_right", "color": "#FF0000", "width": 2.0, "segment": "head"},
    ]
}
skeleton_layer = add_skeleton_layer(viewer, dataset, connections=custom_config)

napari.run()

Testing

Test Suite

29 tests total (all passing):

  • 20 unit tests: Config, renderer, state management
  • 9 integration tests: napari layer functionality

Coverage:

  • Config I/O (YAML save/load, validation)
  • Color conversion (hex ↔ RGBA)
  • PrecomputedRenderer (vector format, coordinate order, NaN handling)
  • SkeletonState (add/remove connections, validation, persistence)
  • napari integration (layer type, multiple individuals, 3D support)
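An illustrative pytest-style unit test for the coordinate-order convention covered by the suite (the helper name `to_napari_origin` is hypothetical, not the PR's API):

```python
import numpy as np

def to_napari_origin(t: int, x: float, y: float) -> np.ndarray:
    """Map Movement's [x, y] plus a frame index to napari's [t, y, x]."""
    return np.array([t, y, x])

def test_coordinate_order():
    # Movement stores x first, y second; napari expects [t, y, x]
    origin = to_napari_origin(t=0, x=10.0, y=20.0)
    assert origin.tolist() == [0.0, 20.0, 10.0]
```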

Manual Verification

Tested with:

  • Synthetic datasets (circular, linear, random walk motion)
  • 2D and 3D pose data
  • Multiple individuals (1-5)
  • Missing keypoints (NaN values)
  • Configuration persistence (NetCDF save/load)

Visual Confirmation:

  • Skeleton renders as thin lines (Vectors layer), not solid shapes
  • Smooth animation with time slider
  • Correct color coding per connection
  • No broken/partial lines with NaN keypoints

Screenshot

This screenshot was taken using the mouse skeleton template. Each line in the screenshot means the following:
  • Head (Blue):
    • nose → ear_left
    • nose → ear_right
  • Body (Green):
    • ear_left → neck
    • ear_right → neck
    • hip_left → neck
    • hip_right → neck
  • Tail (Orange):
    • hip_left → tail_base (first orange line)
    • hip_right → tail_base (second orange line)

Files Changed

New Files (12 total)

Core Module (movement/napari/skeleton/):

  • __init__.py - Main API and public exports
  • state.py - SkeletonState class
  • config.py - Configuration I/O and validation
  • templates.py - Pre-defined skeleton templates
  • renderers/__init__.py - Renderer exports
  • renderers/base.py - BaseRenderer abstract class
  • renderers/precomputed.py - PrecomputedRenderer implementation

Tests:

  • tests/fixtures/skeleton.py - Test fixtures and synthetic data generators
  • tests/test_unit/test_napari_plugin/test_skeleton/__init__.py
  • tests/test_unit/test_napari_plugin/test_skeleton/test_config.py - Config tests
  • tests/test_unit/test_napari_plugin/test_skeleton/test_precomputed_renderer.py - Renderer tests
  • tests/test_integration/test_skeleton_napari.py - napari integration tests

Breaking Changes

None. This is a new feature with no impact on existing functionality.


Checklist

  • Code follows project style guidelines
  • Added tests for new functionality
  • All tests passing
  • Added docstrings (Google style)
  • Added type hints
  • No breaking changes
  • Documentation updated (docstrings in code)
  • Tested manually with napari
  • Pre-commit hooks passing

Next Steps

After Phase 1 review and merge:

  1. Phase 2 PR: Multi-renderer support for performance optimization
  2. Phase 3 PR: UI widgets and advanced features

Questions for Reviewers

  1. Module location: Is movement/napari/skeleton/ the right place, or would you prefer a different structure?

  2. Template repository: Should additional species templates be in the codebase or maintained separately?

  3. Performance warnings: Should we add warnings for large datasets (>10K frames) in Phase 1, or wait for Phase 2's optimized renderers?


Thank you for reviewing! Looking forward to feedback and happy to make adjustments.

- Implement skeleton rendering system with PrecomputedRenderer
- Add configuration validation and I/O (YAML support)
- Include predefined templates (mouse, rat)
- Add SkeletonState management for datasets
- Comprehensive unit and integration tests
- Fix: Add proper docstrings to abstract properties instead of noqa
- Fix: Remove noqa:E402 by using try/except pattern for napari import
@vedantvakharia (Contributor, Author)

Note: I accidentally deleted the branch for PR #745 and could not revive it. This PR is the same as the earlier one, with only minor edits.

Also, whenever the maintainers get time, could you give feedback on the overall architecture and implementation approach for Phase 1? The Phase 2 implementation direction depends on Phase 1, and any changes to Phase 1 will directly affect how I implement Phase 2.

@sonarqubecloud

Remove the Attributes section from the BaseRenderer class docstring to prevent Sphinx from documenting abstract properties twice (once in the class Attributes section and once at the @property level).


Successfully merging this pull request may close these issues.

2D (and maybe 3D) skeletons in the Movement GUI