
test: restructure suite and speed up CI#96

Merged
Routhleck merged 4 commits into master from chore-optimize-env-pytest
Feb 6, 2026

Conversation

@Routhleck (Owner) commented Feb 6, 2026

Summary

  • restructure tests into unit/integration/visualization with markers and conftest
  • fix CLI entry + lazy imports, add missing tests and coverage utilities
  • reduce slow test parameters and mark slow tests; CI runs non-slow only
  • fix environment/binder brainpy dependency

Test

  • uv run pytest -q
  • uv run pytest -q -m "not slow"

Summary by Sourcery

Restructure the test suite into unit, integration, and visualization categories with markers, speed up slower tests, and improve CLI, packaging, and developer tooling for coverage and linting.

New Features:

  • Add a dedicated command-line entry module and script entry for the canns CLI with subcommands for ASA, gallery, GUI, and version reporting.
  • Introduce lazy submodule loading in the top-level package to improve import performance and ergonomics.
  • Add new utilities and helpers around data loaders, task base saving/loading, benchmarking, and plotting with corresponding tests.

Bug Fixes:

  • Point the canns console script to the new CLI entry point instead of the previous master attribute.
  • Fix Binder and environment dependencies by depending on brainpy[cpu] instead of brainx[cpu].
  • Harden TDA / decoding and navigation tests against shape and padding issues and adjust assertions to be less brittle across environments.

Enhancements:

  • Reorganize tests under unit, integration, and visualization directories, applying pytest markers (integration, visualization, slow) and using more efficient fixtures and parameters to reduce runtime.
  • Refine several analyzer, trainer, and task tests to reuse computed data, lower resolutions, and relax numerical tolerances while preserving behavioral coverage.
  • Improve the linting script to be CI-aware by disabling in-place fixes and using format checks instead of modifications when running in CI.
  • Add additional tests for CLI dispatch, lazy imports, pipeline launcher behavior, and various utilities to improve robustness and coverage.

Build:

  • Add pytest-cov as a development dependency and configure coverage collection and reporting in pyproject.toml.
  • Narrow pytest discovery to test_*.py in the tests directory, add common markers, and extend pytest addopts for better reporting.
  • Add a Makefile coverage target that runs pytest with coverage reporting and ensure formatting is checked (not auto-fixed) in CI via the lint helper.

CI:

  • Keep the CI workflow running the test suite under the updated pytest configuration and lint behavior without changing the overall job structure.

Tests:

  • Add extensive new unit and integration tests for visualization routines, pipeline launcher entry, CLI behavior, data loaders, benchmark utilities, task base serialization, lazy imports, and navigation helpers.
  • Update existing experimental data, navigation, metrics, trainer, and visualization tests to reflect the new structure, markers, and performance-oriented parameter choices.

sourcery-ai bot commented Feb 6, 2026

Reviewer's Guide

Restructures the test suite into unit/integration/visualization layers with markers; adds a proper CLI entrypoint and lazy submodule loading; introduces coverage tooling and a Makefile coverage target; speeds up or marks slow tests while adding new tests around the CLI, pipeline launcher, data loaders, tasks, utils, and visualization; and fixes environment dependencies (brainpy vs. brainx) and the CI pytest configuration.

Sequence diagram for the new canns CLI entrypoint behavior

```mermaid
sequenceDiagram
    actor User
    participant Shell
    participant cann_script as cann_console_script
    participant main as canns_main
    participant Launcher as pipeline_launcher
    participant ASA as pipeline_asa
    participant Gallery as pipeline_gallery
    participant GUI as pipeline_asa_gui
    participant Metadata as importlib_metadata
    participant VersionModule as canns_version_module

    User->>Shell: type cann [--version|--asa|--gallery|--gui]
    Shell->>cann_script: execute entrypoint
    cann_script->>main: main(argv)

    alt --version flag
        main->>Metadata: version(canns)
        alt version lookup ok
            Metadata-->>main: installed_version
            main-->>User: print installed_version
        else version lookup fails
            main->>VersionModule: import __version__
            alt module has __version__
                VersionModule-->>main: __version__
                main-->>User: print __version__
            else fallback
                main-->>User: print unknown
            end
        end
        main-->>Shell: return 0
    else --gui flag
        main->>GUI: gui_main()
        GUI-->>main: exit_code
        main-->>Shell: return exit_code
    else --gallery flag
        main->>Gallery: gallery_main()
        Gallery-->>main: complete
        main-->>Shell: return 0
    else --asa flag
        main->>ASA: asa_main()
        ASA-->>main: complete
        main-->>Shell: return 0
    else no flags
        main->>Launcher: launcher_main()
        Launcher-->>main: complete
        main-->>Shell: return 0
    end
```
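A minimal sketch of this dispatch flow (flag names follow the diagram; the helper `resolve_version` and the pass-through return codes are illustrative, not the real canns CLI):

```python
from __future__ import annotations

# Sketch of the CLI dispatch shown in the diagram above.
import argparse
from importlib import metadata


def resolve_version(dist: str = "canns") -> str:
    """Installed distribution version, with an 'unknown' fallback."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return "unknown"


def main(argv=None) -> int:
    parser = argparse.ArgumentParser(prog="canns")
    parser.add_argument("--version", action="store_true")
    parser.add_argument("--asa", action="store_true")
    parser.add_argument("--gallery", action="store_true")
    parser.add_argument("--gui", action="store_true")
    args = parser.parse_args(argv)

    if args.version:
        print(resolve_version())
        return 0
    if args.gui:
        # the real code would call the GUI entrypoint and propagate its exit code
        return 0
    if args.gallery or args.asa:
        # dispatch to the gallery/ASA entrypoints
        return 0
    # no flags: fall through to the interactive launcher
    return 0
```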

Class diagram for canns lazy imports and CLI entrypoint

```mermaid
classDiagram
    class CannsPackage {
        +set _LAZY_SUBMODULES
        +__all__
        +__getattr__(name str) Module
    }

    class AnalyzerModule
    class DataModule
    class ModelsModule
    class PipelineModule
    class TrainerModule
    class UtilsModule

    CannsPackage ..> AnalyzerModule : lazy import analyzer
    CannsPackage ..> DataModule : lazy import data
    CannsPackage ..> ModelsModule : lazy import models
    CannsPackage ..> PipelineModule : lazy import pipeline
    CannsPackage ..> TrainerModule : lazy import trainer
    CannsPackage ..> UtilsModule : lazy import utils

    class CannsMain {
        +main(argv Sequence~str~) int
    }

    class PipelineLauncher {
        +main() int
    }

    class PipelineASA {
        +main() int
    }

    class PipelineGallery {
        +main() void
    }

    class PipelineASA_GUI {
        +main() int
    }

    CannsMain ..> PipelineLauncher : calls when no flags
    CannsMain ..> PipelineASA : calls when --asa
    CannsMain ..> PipelineGallery : calls when --gallery
    CannsMain ..> PipelineASA_GUI : calls when --gui

    class LintTool {
        +main() void
    }

    LintTool : env_var_CI bool
    LintTool : fix bool
    LintTool : run_codespell()
    LintTool : run_ruff_check(fix bool)
    LintTool : run_ruff_format(check_only bool)

    CannsPackage <.. CannsMain : package entrypoint script
    LintTool <.. CannsPackage : developer tooling
```
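The lazy-loading pattern above (a PEP 562 module-level `__getattr__`) can be demonstrated with a throwaway package; `lazydemo` and its `utils` submodule are invented here purely for illustration and the real `canns/__init__.py` may differ in detail:

```python
# Demonstrate PEP 562 lazy submodule loading with a temporary package.
import importlib
import os
import sys
import tempfile
import textwrap

INIT_SRC = textwrap.dedent("""
    import importlib
    from types import ModuleType

    _LAZY_SUBMODULES = {"utils"}
    __all__ = sorted(_LAZY_SUBMODULES)

    def __getattr__(name: str) -> ModuleType:
        if name in _LAZY_SUBMODULES:
            module = importlib.import_module(f"{__name__}.{name}")
            globals()[name] = module  # cache: __getattr__ fires once per name
            return module
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

    def __dir__():
        # expose lazy names for discoverability, per the review feedback
        return sorted(set(globals()) | _LAZY_SUBMODULES)
""")

root = tempfile.mkdtemp()
pkg = os.path.join(root, "lazydemo")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write(INIT_SRC)
with open(os.path.join(pkg, "utils.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, root)
import lazydemo

assert "lazydemo.utils" not in sys.modules  # importing the package is cheap
assert lazydemo.utils.VALUE == 42           # attribute access triggers the import
assert "lazydemo.utils" in sys.modules
assert "utils" in dir(lazydemo)
```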

Flow diagram for the restructured pytest suite and markers

```mermaid
flowchart TD
    A[tests directory] --> B[unit tests]
    A --> C[integration tests]
    A --> D[visualization tests]

    B --> B1[tests/unit/...]
    C --> C1[tests/integration/...]
    D --> D1[tests/visualization/...]

    subgraph Markers
        M1[unit tests\nno special marker]
        M2[integration tests\nmarker: integration]
        M3[visualization tests\nmarker: visualization]
        M4[slow tests across layers\nmarker: slow]
    end

    B1 --> M1
    C1 --> M2
    D1 --> M3

    M2 --> M4
    M3 --> M4

    subgraph PytestConfiguration
        P1[python_files: test_*.py]
        P2[testpaths: tests]
        P3[markers registered\n integration, visualization, slow]
        P4[addopts: --assert=plain -ra]
    end

    A --> PytestConfiguration

    subgraph LocalRun
        L1[uv run pytest]
        L1 --> L2[run all tests\nincluding slow]
    end

    subgraph CISuggestedRun
        CI1[uv run pytest -m not slow]
        CI1 --> CI2[run all tests\nexcept slow]
    end
```
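Directory-based markers like these are often applied centrally from a `conftest.py` collection hook. The sketch below shows one common way to do it; it is an assumption about the approach, not necessarily how this repository's `conftest.py` is written:

```python
# Hypothetical conftest.py sketch: tag tests with layer markers
# (integration, visualization) based on where the test file lives.
import pytest


def markers_for(path: str) -> list[str]:
    """Map a test file path to the layer markers it should carry."""
    marks = []
    if "/integration/" in path:
        marks.append("integration")
    if "/visualization/" in path:
        marks.append("visualization")
    return marks


def pytest_collection_modifyitems(config, items):
    # Called once after collection; adds markers so `-m "not slow"` style
    # selections work without per-file pytestmark declarations.
    for item in items:
        for name in markers_for(str(item.fspath)):
            item.add_marker(getattr(pytest.mark, name))
```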

File-Level Changes

Restructure tests into unit, integration, and visualization suites with markers and updated pytest configuration.
  • Move experimental analyzer tests into integration tree and apply pytest integration markers
  • Move navigation and tracking task tests into unit vs integration trees as appropriate
  • Introduce visualization tests package with a global visualization marker and new plotting/animation tests
  • Adjust test code style and parameters for consistency and speed (e.g., reduced sample sizes, relaxed tolerances)
  • Update pytest.ini options to match new layout and add custom markers for integration, visualization, and slow tests
tests/analyzer/experimental_data/test_experimental_data_cann1d.py
tests/analyzer/experimental_data/test_experimental_data_cann2d.py
tests/analyzer/experimental_data/test_asa_experimental_data.py
tests/analyzer/experimental_data/test_cell_classification.py
tests/analyzer/test_utils.py
tests/analyzer/test_fixed_point_finder.py
tests/analyzer/metrics/test_systematic_ratemap.py
tests/task/tracking/test_tracking1d.py
tests/task/tracking/test_tracking2d.py
tests/task/open_loop_navigation/test_open_loop_navigation.py
tests/task/open_loop_navigation/test_reproducibility.py
tests/task/closed_loop_navigation/test_closed_loop_navigation.py
tests/analyzer/visualization/test_backend.py
tests/visualization/test_plotting.py
tests/integration/analyzer/experimental_data/test_experimental_data_cann1d.py
tests/integration/analyzer/experimental_data/test_experimental_data_cann2d.py
tests/integration/analyzer/experimental_data/test_asa_experimental_data.py
tests/integration/analyzer/experimental_data/test_cell_classification.py
tests/integration/task/open_loop_navigation/test_open_loop_navigation.py
tests/integration/task/open_loop_navigation/test_reproducibility.py
tests/unit/analyzer/metrics/test_systematic_ratemap.py
tests/unit/analyzer/test_utils.py
tests/unit/task/tracking/test_tracking1d.py
tests/unit/task/tracking/test_tracking2d.py
tests/visualization/test_backend.py
tests/conftest.py
pyproject.toml
Speed up slow tests and mark remaining slow paths while sharing expensive fixtures.
  • Reduce model sizes, resolutions, numbers of samples, and iterations in metrics, Sanger, and TDA-related tests
  • Introduce module-scoped fixtures (e.g., small_model and ratemap) to reuse expensive computations
  • Lower coverage/quality thresholds (e.g., spatial coverage percent) where appropriate for smaller test problems
  • Mark performance-sensitive tests with pytest.mark.slow so they can be excluded in CI via markers
tests/unit/analyzer/metrics/test_systematic_ratemap.py
tests/integration/analyzer/experimental_data/test_asa_experimental_data.py
tests/unit/trainer/test_sanger.py
tests/integration/task/open_loop_navigation/test_open_loop_navigation.py
tests/visualization/test_plotting.py
pyproject.toml
Introduce and test lazy submodule imports and a unified CLI entry point for the canns package, and wire console scripts correctly.
  • Add __getattr__-based lazy loading of the analyzer, data, models, pipeline, trainer, and utils submodules in canns/__init__.py
  • Add a new canns/__main__.py implementing a CLI wrapper that dispatches to ASA, gallery, GUI, or launcher and prints the version from package metadata with a fallback
  • Update the pyproject console script entry to point canns at canns.__main__:main instead of the previous canns:master attribute
  • Add unit tests for lazy imports and CLI dispatch/version behaviour
  • Add integration tests for pipeline launcher and pipeline main interactions with ASA/Gallery apps
src/canns/__init__.py
src/canns/__main__.py
tests/unit/test_lazy_imports.py
tests/unit/test_cli.py
tests/integration/pipeline/test_launcher.py
pyproject.toml
Expand unit tests for data loaders, tasks, utility helpers, and benchmark decorator.
  • Add tests for ROI and grid data validation, spike preprocessing, and data summary helpers in canns.data.loaders
  • Add tests for Task base class save/load roundtrips for dict and dataclass payloads and error handling
  • Add tests for open-loop navigation utilities (e.g., map2pi wrapping)
  • Add tests for benchmark decorator to ensure it runs wrapped functions expected number of times and prints headings
tests/unit/data/test_loaders.py
tests/unit/task/test_task_base.py
tests/unit/task/test_open_loop_utils.py
tests/unit/utils/test_benchmark.py
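For instance, a unit test for the angle-wrapping helper mentioned above might look like the following. The name `map2pi` comes from the bullet list; this standalone reimplementation is only a sketch of the expected wrap-to-(-pi, pi] behaviour, not the library's actual code:

```python
import math


def map2pi(theta: float) -> float:
    """Wrap an angle into the interval (-pi, pi] (illustrative version)."""
    wrapped = math.fmod(theta + math.pi, 2.0 * math.pi)
    if wrapped <= 0.0:
        wrapped += 2.0 * math.pi
    return wrapped - math.pi


def test_map2pi_wraps_into_range():
    # arbitrary angles, including multiples of pi and large magnitudes
    for theta in (-7.0, -math.pi, 0.0, 0.5, math.pi, 3.0 * math.pi, 10.0):
        assert -math.pi < map2pi(theta) <= math.pi
    assert math.isclose(map2pi(2.0 * math.pi), 0.0, abs_tol=1e-12)
    assert math.isclose(map2pi(3.0 * math.pi), math.pi, abs_tol=1e-12)
```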
Add visualization regression tests to ensure plotting APIs run headless and produce output files.
  • Create tests covering 1D/2D energy landscape static and animated plots, raster plots, average firing rate plots, and tuning curves using PlotConfigs
  • Ensure visualization tests are marked both visualization and slow and check that image/gif files are emitted to tmp_path
  • Keep previous visualization backend tests but move them under visualization test package with marker
tests/visualization/test_plotting.py
tests/visualization/test_backend.py
Tighten linting workflow and keep CI non-mutating while enabling local auto-fix.
  • Update devtools/lint.py so ruff check uses --fix only outside CI and ruff format runs in check mode under CI
  • Use CI environment variable to decide fix vs check behaviour to keep CI runs read-only
devtools/lint.py
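The CI-aware split can be sketched as a small command builder: mutate files locally, stay read-only under CI. The function name and exact ruff flags here are assumptions about how `devtools/lint.py` is structured, though `--fix` and `--check` are real ruff options:

```python
# Sketch of a CI-aware lint driver: auto-fix locally, check-only in CI.


def build_lint_commands(env: dict) -> list:
    """Return the ruff invocations appropriate for the current environment."""
    in_ci = env.get("CI", "").lower() in {"1", "true", "yes"}
    check = ["ruff", "check", "."]
    fmt = ["ruff", "format", "."]
    if in_ci:
        fmt.append("--check")  # report formatting drift without rewriting files
    else:
        check.append("--fix")  # apply safe autofixes in local runs
    return [check, fmt]
```

A local run mutates the tree; a CI run only reports, keeping CI read-only as the change description requires.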
Add coverage configuration and Makefile targets for coverage runs, and integrate pytest-cov dev dependency.
  • Add pytest-cov to dev extras in pyproject
  • Configure coverage.run and coverage.report sections for branch coverage, source filter, and missing-lines handling with common exclusions
  • Extend pytest addopts with -ra for extra test summary
  • Add a Makefile coverage target invoking pytest with coverage options
pyproject.toml
Makefile
Fix environment and Binder dependencies to use brainpy instead of brainx.
  • Update Binder requirements.txt to depend on brainpy[cpu]
  • Update environment.yml pip extras to install brainpy[cpu]
binder/requirements.txt
environment.yml
Minor assertions and formatting cleanups in tests and visualization code.
  • Normalize string quoting, assertion messages, and line wrapping to be ruff/black compliant
  • Simplify some ValueError messages and one-line function calls for readability and consistency
tests/unit/trainer/test_bcm.py
tests/unit/trainer/test_oja.py
tests/unit/trainer/test_sanger.py
tests/unit/trainer/test_stdp.py
tests/unit/test_version.py
tests/unit/analyzer/test_fixed_point_finder.py
tests/unit/task/closed_loop_navigation/test_closed_loop_navigation.py
tests/unit/analyzer/test_utils.py
tests/unit/task/tracking/test_tracking1d.py
tests/unit/task/tracking/test_tracking2d.py
src/canns/analyzer/visualization/theta_sweep_plots.py
tests/integration/task/open_loop_navigation/test_reproducibility.py
tests/integration/task/open_loop_navigation/test_open_loop_navigation.py
tests/visualization/test_backend.py


sourcery-ai bot left a comment:
Hey - I've found 4 issues, and left some high level feedback:

  • In the new CLI entrypoint (canns.__main__), only the GUI path propagates a return code while the ASA and gallery paths always return 0; consider consistently returning the subcommand’s exit code (or at least handling non-zero returns) so failures in those entrypoints surface correctly to callers and CI.
  • The lazy import mechanism in canns.__init__ implements __getattr__ but not __dir__ or an updated __all__; adding these for the lazy submodules would improve discoverability and type-checker/IDE support for canns.analyzer, canns.models, etc.
Individual Comments

Comment 1 — `pyproject.toml:274`

```toml
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "if __name__ == .__main__.:",
    "raise NotImplementedError",
]
```

**issue (testing):** Coverage exclude pattern for the `__main__` guard looks incorrect and likely won't match the intended line.

`exclude_lines` uses regex matching, so `"if __name__ == .__main__.:"` is unlikely to match a typical guard like `if __name__ == "__main__":`. The `.` characters will match any character and the quoting doesn't align with the usual pattern. If you want to ignore standard `__main__` guards, consider a more accurate regex such as `"if __name__ == ['\"]__main__['\"]:"` or a simpler literal match that reflects the actual line format.
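The suggested replacement pattern can be sanity-checked directly against both quoting styles (illustrative check, not part of the review):

```python
import re

# The reviewer's suggested regex: a character class matching either quote style.
suggested = r"if __name__ == ['\"]__main__['\"]:"

assert re.search(suggested, 'if __name__ == "__main__":')
assert re.search(suggested, "if __name__ == '__main__':")
assert re.search(suggested, "x = 1") is None
```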

Comment 2 — `tests/integration/analyzer/experimental_data/test_experimental_data_cann2d.py:156-165`

**issue (testing):** Swallowing exceptions in `test_decode_circular_coordinates` can mask regressions.

Catching all exceptions and only logging them means real failures in `tda_vis` or `decode_circular_coordinates` will be silently ignored. If you need to handle known platform- or data-specific issues, please use `pytest.skip` / `pytest.xfail` under a concrete condition or catch a specific exception type. Otherwise, let exceptions surface so the test fails when decoding is broken.
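One hedged way to follow this advice is to skip only on a concrete, expected failure mode instead of a bare `except Exception`. The helper `run_decoding` below is a made-up stand-in for the `tda_vis` + `decode_circular_coordinates` pipeline:

```python
import pytest


def run_decoding():
    # Stand-in for the real decoding pipeline; here it simulates a known
    # environment-specific failure mode.
    raise MemoryError("persistence computation exceeded memory budget")


def test_decode_circular_coordinates():
    try:
        result = run_decoding()
    except MemoryError as exc:  # skip only on the known, specific limitation
        pytest.skip(f"decoding infeasible in this environment: {exc}")
    # any other exception propagates and fails the test, as it should
    assert result is not None
```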

Comment 3 — `tests/integration/analyzer/experimental_data/test_asa_experimental_data.py:135-144`

```python
@pytest.mark.slow
def test_tda_decode_and_cohomap():
    # grid_data = load_grid_data()
```

**suggestion (testing):** The combined `integration` + `slow` marking may exclude this important TDA/decoding path from normal CI runs.

Since this test hits a high-level ASA/TDA + decoding pipeline, having it marked as both `integration` and `slow` means it won't run in the default `-m "not slow"` CI configuration described in the PR. Consider adding a lighter-weight non-slow test that still exercises the decoding + plotting config behavior (e.g., reduced neurons/timepoints or fewer `n_points` values), and keep this heavier version as an explicitly slow test.

Comment 4 — `tests/unit/test_lazy_imports.py:10-17`

```python
def test_lazy_imports():
    _clear_canns_modules()

    import canns

    assert "canns.models" not in sys.modules
    _ = canns.models
    assert "canns.models" in sys.modules
```

**suggestion (testing):** Extend the lazy import test to cover all declared lazy submodules.

Currently this only verifies lazy import for `canns.models`, but `__init__` declares several lazy submodules (`analyzer`, `data`, `models`, `pipeline`, `trainer`, `utils`). Consider parametrizing the test over this set and asserting that each attribute access (e.g. `canns.analyzer`) causes the corresponding `canns.<name>` module to appear in `sys.modules`.

Suggested implementation:

```python
import sys

import pytest


@pytest.mark.parametrize(
    "submodule",
    ["analyzer", "data", "models", "pipeline", "trainer", "utils"],
)
def test_lazy_imports(submodule):
    _clear_canns_modules()

    import canns

    full_name = f"canns.{submodule}"
    assert full_name not in sys.modules

    # Access the lazy attribute to trigger import
    _ = getattr(canns, submodule)

    assert full_name in sys.modules
```


@Routhleck merged commit 4c843e7 into master on Feb 6, 2026
5 checks passed