Conversation
Pull request overview
This PR modernizes Asimov’s workflow infrastructure by standardizing the analysis directory layout (moving away from C01_offline), adding new monitoring/scheduler abstractions, and expanding blueprint capabilities (strategies, optional deps, prior validation).
Changes:
- Rename/standardize the run category directory to `analyses` across pipelines, tests, and docs.
- Introduce new workflow features: strategy expansion, optional dependency handling, monitor state machine helpers, and a programmatic monitor API.
- Modernize packaging/docs: pyproject updates, additional docs pages, and improved logging behavior.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| `asimov/__init__.py` | Lazy, rotating file logging setup to avoid log creation for read-only commands |
| `asimov/condor.py` / `asimov/scheduler_utils.py` | Begins moving Condor operations behind a scheduler abstraction |
| `asimov/strategies.py` | Adds “matrix strategy” blueprint expansion |
| `asimov/priors.py` + pipeline/template updates | Adds pydantic-validated priors and injects pipeline prior interfaces into templates |
| `asimov/monitor_helpers.py` / `asimov/monitor_api.py` | Refactors monitor logic into reusable helpers and provides a Python API |
| `tests/*` + `docs/*` | Updates expectations/docs for the `analyses` directory and adds coverage for new features |
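The “matrix strategy” expansion listed above is not shown in this diff; as a rough sketch under assumed semantics (the `expand_matrix` helper and the axis names below are hypothetical, not the real `asimov.strategies` API), such an expansion typically takes the cartesian product of the listed axes:

```python
import itertools

def expand_matrix(matrix):
    """Expand a matrix-strategy mapping into one dict per combination.

    ``matrix`` maps axis names to lists of values. Hypothetical helper;
    the real asimov.strategies API may differ.
    """
    keys = list(matrix)
    return [
        dict(zip(keys, values))
        for values in itertools.product(*(matrix[key] for key in keys))
    ]

combos = expand_matrix({
    "approximant": ["IMRPhenomXPHM", "SEOBNRv5PHM"],
    "sampler": ["dynesty", "bilby_mcmc"],
})
# 2 axes with 2 values each yield 4 expanded blueprints
```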
This BayesWave test previously staged Prod1_test.ini, but now stages analyses/Prod0_test.ini. That looks inconsistent with the rest of the file, which still expects Prod1 output paths (e.g., the DAG path under .../Prod1/...). Update this to stage the correct Prod1 ini (or update the rest of the test to consistently use Prod0).
```python
os.system("git add analyses/Prod1_test.ini analyses/s000000xx_gpsTime.txt")
```
pipe.submit_dag() can return a tuple/list in some pipelines (e.g., (cluster_id,) or (cluster_id, PipelineLogger)), but this branch casts the full return value to int, which will raise TypeError. Normalize cluster_id here the same way you already do later in this file (extract the first element when a tuple/list is returned).
```python
cluster_id = pipe.submit_dag(dryrun=dryrun)
# Normalize cluster_id: submit_dag may return a tuple/list, e.g. (cluster_id, ...)
if isinstance(cluster_id, (tuple, list)):
    cluster_id = cluster_id[0]
```
The template checks/reads chirp-mass, but the rest of the file (and typical blueprint keys) use space-separated names (e.g., chirp mass). With the new LALInferencePriorInterface.convert() currently passing keys through, this will silently omit chirp-mass bounds even when provided. Fix by standardizing the key name in the template (recommended) or mapping chirp mass -> chirp-mass during prior conversion.
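A minimal sketch of the suggested mapping during prior conversion — `KEY_MAP` and this `convert` shape are illustrative assumptions, not the actual `LALInferencePriorInterface.convert()` signature:

```python
# Hypothetical mapping from space-separated blueprint keys to the
# hyphenated names the template reads; extend as needed.
KEY_MAP = {"chirp mass": "chirp-mass", "mass ratio": "mass-ratio"}

def convert(priors):
    """Return priors with blueprint keys renamed for the template."""
    return {KEY_MAP.get(key, key): value for key, value in priors.items()}

converted = convert({"chirp mass": {"minimum": 10, "maximum": 100}})
# converted now exposes the bounds under "chirp-mass"
```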
self.production.job_id is assigned directly from cluster_id without normalization/casting. Other pipelines in this PR store job_id as an int, and other parts of the code (and tests) may assume it’s numeric. Consider casting to int (or ensuring cluster_id is always an int) for consistency, and keep return types consistent across pipelines.
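One possible normalization helper, sketched from the review suggestion (the name `normalize_job_id` is hypothetical):

```python
def normalize_job_id(result):
    """Coerce a submit_dag return value to an int cluster id.

    submit_dag may return a bare id, or a tuple/list whose first element
    is the cluster id (per the review comment); always store an int so
    job_id is numeric across pipelines.
    """
    if isinstance(result, (tuple, list)):
        result = result[0]
    return int(result)
```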
self.event is referenced in the warning message but isn’t defined on this class in the diff shown. If default-branch detection hits this exception path, this will raise a new AttributeError and mask the original error. Use a known attribute (e.g., self.directory or self.url) in the log message instead.
```python
self.logger.warning(f"Could not detect default branch for {self.directory}: {e}")
```
On validation failure this command prints an error but still exits with status code 0, which makes it hard to use in CI/pre-commit hooks. Return a non-zero exit code (e.g., raise click.ClickException(...) or ctx.exit(1) / sys.exit(1)), ideally preserving the validation details.
```python
raise click.ClickException(f"Blueprint '{file_path}' is invalid: {e}")
```
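A minimal sketch of the suggested behavior — `validate` and `check_blueprint` here are hypothetical stand-ins for asimov's actual command and validation logic, shown only to illustrate that `ClickException` produces a non-zero exit status:

```python
import click
from click.testing import CliRunner

def check_blueprint(path):
    # Hypothetical validator that always fails, for demonstration.
    raise ValueError("missing 'pipeline' key")

@click.command()
@click.argument("file_path")
def validate(file_path):
    """Validate a blueprint, exiting non-zero on failure (sketch)."""
    try:
        check_blueprint(file_path)
    except ValueError as e:
        # ClickException prints the message and exits with status 1,
        # which CI and pre-commit hooks can detect.
        raise click.ClickException(f"Blueprint '{file_path}' is invalid: {e}")
    click.echo(f"Blueprint '{file_path}' is valid.")

result = CliRunner().invoke(validate, ["bp.yaml"])
# result.exit_code is 1 because validation failed
```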
The testing extra is defined but empty, while asimov.pipelines.testing is always included in the distributed packages. If the intent is for testing pipelines to be available only under asimov[testing], this configuration won’t achieve that (extras don’t control which packages are shipped). Either remove/adjust the claim and the empty extra, or move testing pipelines into a separate distribution/package that can be optionally installed.
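An illustrative `pyproject.toml` fragment (the package and extra names below are assumed, not taken from the PR) showing that an extra only adds dependencies; shipping the testing pipelines optionally would require a separate distribution:

```toml
# Extras control what gets *installed alongside* asimov, not which of
# asimov's own packages are shipped. To gate asimov.pipelines.testing,
# it would need to live in its own distribution:
[project.optional-dependencies]
testing = ["asimov-testing-pipelines"]  # hypothetical separate package
```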
This docstring states the testing pipelines are only installed with the testing extra, but pyproject.toml currently includes asimov.pipelines.testing in the default packages list. Update the documentation to match actual packaging behavior, or change packaging so the statement becomes true.
```rst
The testing pipelines are installed as part of the standard asimov
installation and are available after installing the base package:

.. code-block:: bash

   pip install asimov

These pipelines are intended primarily for testing, development, and
example purposes rather than production analyses.
```
Description
Brief description of what this PR does.
Type of Change
Motivation and Context
Why is this change needed? What problem does it solve?
Fixes #(issue number) (if applicable)
Changes Made
List the main changes:
Testing
How has this been tested?
Test configuration:
Checklist
Documentation
Breaking Changes
Does this PR introduce any breaking changes? If yes, describe:
Additional Notes
Any additional information, concerns, or considerations for reviewers.
Screenshots (if applicable)
For UI or output changes, include before/after screenshots or terminal output.
Reviewer Notes: