Merge 'develop' into 'stable' for SDK 1.14 & Infrahub 1.4 #509
Conversation
Merge stable into develop
Merge stable into develop
* Update dependencies
* Add init command
* Add testing changelog
* Add docs
* Add copier to all extras
* update lock file
* update lock file
* update lock file
* update lock file
* update lock file
* update lock file
Merge stable into develop
Finalize typing on ctl.schema
Merge stable into develop
Use correct case for Infrahub
Send in the correct and expected types
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](actions/checkout@v4...v5)
---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <[email protected]>
…p/actions/checkout-5 Bump actions/checkout from 4 to 5
Remove unnecessary call (rewrite as a literal)
Update packages in local lock file
Merge stable into develop with resolved conflicts
Walkthrough

The PR updates multiple GitHub Actions workflows to use actions/checkout@v5. It introduces a new async CLI command `infrahubctl repository init` using copier, with documentation and a changelog entry. Schema CLI logic switches from content to payload for schema data, refactors error reporting, and updates public function signatures. Utilities adjust JsonDecodeError to store URL as str and update GitRepoManager initialization. Packaging adds optional copier, includes it in extras, and removes a mypy override and a Ruff ignore. Tests add coverage for repository init and make minor stylistic edits.
Codecov Report

❌ Patch coverage is
@@ Coverage Diff @@
## stable #509 +/- ##
==========================================
+ Coverage 75.76% 75.83% +0.07%
==========================================
Files 100 100
Lines 8825 9303 +478
Branches 1728 1826 +98
==========================================
+ Hits 6686 7055 +369
- Misses 1664 1756 +92
- Partials 475 492 +17
Flags with carried forward coverage won't be shown.
... and 7 files with indirect coverage changes
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
infrahub_sdk/pytest_plugin/items/graphql_query.py (1)
29-41: Bug: wrong exception caught when parsing non-JSON HTTP responses in `repr_failure`.

`httpx.Response.json()` raises ValueError (and stdlib json.JSONDecodeError, a subclass) — not ujson.JSONDecodeError. As written, invalid JSON in an error response can bubble up and mask the original HTTPStatusError. Parse first, then pretty-print, and catch ValueError.
Apply this diff within the shown lines:
-        try:
-            response_content = ujson.dumps(excinfo.value.response.json(), indent=4)
-        except ujson.JSONDecodeError:
-            response_content = excinfo.value.response.text
+        try:
+            parsed = excinfo.value.response.json()
+            response_content = ujson.dumps(parsed, indent=4)
+        except ValueError:
+            response_content = excinfo.value.response.text

If you prefer to be extra defensive (e.g., in case of a custom json backend), also catch TypeError when dumping:

-            response_content = ujson.dumps(parsed, indent=4)
+            try:
+                response_content = ujson.dumps(parsed, indent=4)
+            except (TypeError, ValueError):
+                response_content = str(parsed)

infrahub_sdk/testing/docker.py (1)
24-28: InvalidVersion branch likely inverted: skips when no max bound, runs when a max exists.

When the image version is a branch name like "stable"/"develop" (unparseable), the code returns `max_infrahub_version is None`, which:
- Skips tests when there is no max (probably wrong; with “latest” we generally want to run when there’s no upper bound).
- Runs tests when a max exists (risky; “latest” could exceed the intended maximum).
This contradicts the comment’s intent (“skip if … should not be ran beyond a maximum version”). Flip the condition to skip when a max is specified.
Apply this diff:
@@
-        except InvalidVersion:
+        except InvalidVersion:
@@
-            # For now, we consider this means we are testing against the most recent version of infrahub,
-            # so we skip if the test should not be ran against a maximum version.
-            return max_infrahub_version is None
+            # For now, treat this as testing against the most recent version of Infrahub,
+            # so skip only when a maximum version is defined (we may exceed it).
+            return max_infrahub_version is not None

infrahub_sdk/ctl/schema.py (1)
72-84: Error display can crash on unexpected shapes; also improve node label formatting.
- Indexing `node[loc_type][loc_path[6]]` can raise KeyError/IndexError if the server returns a slightly different error path or the payload lacks the expected keys.
- Printed node identifier concatenates namespace and name without a separator.
Apply this diff:
-            if len(loc_path) == 6:
-                loc_type = loc_path[-1]
-                input_str = error.get("input", None)
-                error_message = f"{loc_type} ({input_str}) | {error['msg']} ({error['type']})"
-                console.print(f"  Node: {node.get('namespace', None)}{node.get('name', None)} | {error_message}")
-
-            elif len(loc_path) > 6:
-                loc_type = loc_path[5]
-                input_label = node[loc_type][loc_path[6]].get("name", None)
-                input_str = error.get("input", None)
-                error_message = f"{loc_type[:-1].title()}: {input_label} ({input_str}) | {error['msg']} ({error['type']})"
-                console.print(f"  Node: {node.get('namespace', None)}{node.get('name', None)} | {error_message}")
+            if len(loc_path) == 6:
+                loc_type = loc_path[-1]
+                input_str = error.get("input")
+                node_ns = node.get("namespace") or ""
+                node_name = node.get("name") or ""
+                node_label = f"{node_ns}.{node_name}" if node_ns and node_name else node_ns or node_name
+                error_message = f"{loc_type} ({input_str}) | {error['msg']} ({error['type']})"
+                console.print(f"  Node: {node_label} | {error_message}")
+
+            elif len(loc_path) > 6:
+                loc_type = loc_path[5]  # e.g., 'attributes' or 'relationships'
+                items = node.get(loc_type) or {}
+                entry = None
+                if isinstance(items, list) and len(items) > int(loc_path[6]):
+                    entry = items[int(loc_path[6])]
+                elif isinstance(items, dict):
+                    entry = items.get(loc_path[6])
+                input_label = (entry or {}).get("name")
+                input_str = error.get("input")
+                node_ns = node.get("namespace") or ""
+                node_name = node.get("name") or ""
+                node_label = f"{node_ns}.{node_name}" if node_ns and node_name else node_ns or node_name
+                error_message = f"{loc_type[:-1].title()}: {input_label} ({input_str}) | {error['msg']} ({error['type']})"
+                console.print(f"  Node: {node_label} | {error_message}")
🧹 Nitpick comments (19)
infrahub_sdk/pytest_plugin/items/graphql_query.py (1)
50-51: Specify file encoding when reading the GraphQL query.

Explicit encoding avoids environment-dependent decoding issues on non-UTF-8 locales.
-        query = (self.session.infrahub_config_path.parent / self.test.spec.path).read_text()  # type: ignore[attr-defined,union-attr]
+        query = (self.session.infrahub_config_path.parent / self.test.spec.path).read_text(encoding="utf-8")  # type: ignore[attr-defined,union-attr]

infrahub_sdk/testing/docker.py (2)
25-27: Minor: capitalize “Infrahub” in comments for consistency.

The nearby comments still use lowercase “infrahub”. Align with the docstring and repository style.
Apply this diff:
-        # unreleased versions of infrahub, like `stable` or `develop` branch.
-        # For now, we consider this means we are testing against the most recent version of infrahub,
+        # unreleased versions of Infrahub, like `stable` or `develop` branch.
+        # For now, we consider this means we are testing against the most recent version of Infrahub,
30-34: Optional: validate min/max strings before Version() to surface clearer errors.

If callers accidentally pass an invalid `min_infrahub_version` or `max_infrahub_version`, `Version(...)` will raise and bubble up here. Consider guarding with a small helper to raise a `ValueError` with a clearer message, or to ignore invalid bounds with a warning. Not a blocker.
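For illustration, a minimal sketch of such a guard, assuming the bounds arrive as optional strings; the helper name and error wording are hypothetical, not part of the PR:

```python
from typing import Optional

from packaging.version import InvalidVersion, Version


def parse_version_bound(value: Optional[str], *, label: str) -> Optional[Version]:
    """Parse an optional version bound, surfacing a clearer error on invalid input."""
    if value is None:
        return None
    try:
        return Version(value)
    except InvalidVersion as exc:
        # Re-raise with a message that names the offending setting instead of a bare parse error.
        raise ValueError(f"{label} is not a valid version string: {value!r}") from exc
```

Callers could then parse `min_infrahub_version` and `max_infrahub_version` once and compare the resulting `Version` objects directly.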
.github/workflows/labels.yml (1)

20-20: Optional hardening: pin actions to a commit SHA and set minimal permissions
- Supply-chain: Pin actions/checkout to its commit SHA to avoid tag drift.
- Permissions: github-labeler typically needs write on contents and issues. Consider setting explicit, least-privilege permissions at job scope to avoid relying on repo defaults.
Example pin within the changed line:
-uses: actions/checkout@v5
+uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac # v5

Outside the selected line range, add minimal permissions under the job:
jobs:
  labeler:
    permissions:
      contents: write
      issues: write
      pull-requests: write

.github/workflows/repository-dispatch.yml (1)
45-45: Checkout v5 bump is fine; consider removing the checkout step entirely.

The job only invokes peter-evans/repository-dispatch and doesn’t use repository files. You can drop the checkout to shave seconds and avoid unnecessary token exposure.
Proposed change:
-      - name: Checkout code
-        uses: actions/checkout@v5

Optional hardening: pin to a SHA if you keep the step.
-        uses: actions/checkout@v5
+        uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac # v5

Also consider explicit minimal permissions for the job since it uses a PAT for dispatch and doesn’t need GITHUB_TOKEN writes:
jobs:
  repository-dispatch:
    permissions:
      contents: read

.github/workflows/update-submodule.yml (1)
36-36: Checkout v5 bump is OK; the step appears unnecessary.

This job only runs a curl to the GitHub API and doesn’t read the repo contents. You can remove the checkout step to simplify and speed up the workflow.
Proposed change:
-      - name: Checkout code
-        uses: actions/checkout@v5

If you prefer to keep it, pin the action to a commit SHA:
-        uses: actions/checkout@v5
+        uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac # v5

.github/workflows/sync-docs.yml (1)
19-19: Both checkout steps updated to v5 — looks good; consider pinning to SHAs.

Changes are consistent with other workflows. Given this workflow commits to a separate repo, pinning to SHAs is recommended for supply-chain hardening. No other changes needed.
Suggested pins:
-        uses: actions/checkout@v5
+        uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac # v5

Optional job-level minimal permissions (push to target repo is via PAT so GITHUB_TOKEN can stay read-only):
jobs:
  sync:
    permissions:
      contents: read

Also applies to: 24-24
.github/workflows/release.yml (1)
24-24: Checkout v5 upgrade LGTM.

No functional changes beyond the version bump. Keep an eye on any steps that might require full history or tags; add `fetch-depth: 0` only if needed.

Example (only if you later rely on tags/merge-base in this job):
- - name: "Check out repository code" - uses: "actions/checkout@v5" + - name: "Check out repository code" + uses: "actions/checkout@v5" + with: + fetch-depth: 0 submodules: truechangelog/466.added.md (1)
1-1: Enrich changelog entry for discoverability.

Consider stating the extra required for the CLI and a minimal usage sample to help users.
-Added `infrahubctl repository init` command to allow the initialization of an Infrahub repository using [infrahub-template](https://github.com/opsmill/infrahub-template).
+Add `infrahubctl repository init` to initialize an Infrahub repository from the official [infrahub-template](https://github.com/opsmill/infrahub-template).
+
+Notes:
+- Requires installing the `ctl` extra (includes `copier`): `pip install infrahub-sdk[ctl]`
+- Example: `infrahubctl repository init --template https://github.com/opsmill/infrahub-template.git ./my-repo`

.github/workflows/ci.yml (1)
49-49: Consistent upgrade to actions/checkout@v5 across CI.

Looks good and consistent, including commented sections. If any step later depends on tags or deep history (e.g., versioning or change detection), add `fetch-depth: 0` for that specific step only; otherwise default shallow clones are faster.

Optional example if needed in a specific job:
- - name: "Check out repository code" - uses: "actions/checkout@v5" + - name: "Check out repository code" + uses: "actions/checkout@v5" + with: + fetch-depth: 0Also applies to: 66-66, 79-79, 97-97, 113-113, 137-137, 164-164, 193-193, 230-230, 279-279, 319-319, 327-327
tests/integration/test_infrahub_client.py (1)
68-71: Turn the TODO into a version-gated skip to keep intent executable.

Instead of leaving tests commented out, mark them with a conditional skip that activates once the minimum Infrahub version is present. This keeps the tests visible and self-documenting.
If the client exposes a server version (or a feature flag), you can do something like:
-# TODO enable these tests for Infrahub version containing this commit
-# https://github.com/opsmill/infrahub/commit/5a4d6860196b5bfb51fb8a124f33125f4a0b6753
-# when we support testing against multiple Infrahub versions.
+import pytest
+from packaging.version import Version
+
+# Replace `get_server_version()` with the actual helper/fixture from infrahub_sdk.pytest_plugin
+MIN_VERSION = Version("1.4.0")  # adjust to the first release including the linked commit
+
+pytestmark = pytest.mark.skipif(
+    Version(get_server_version()) < MIN_VERSION,  # noqa: F821
+    reason="Feature not available on current Infrahub test target",
+)

Would you like me to wire this to the plugin fixture you use for integration tests (e.g., pulling the version from the running container/client)?
tests/unit/sdk/test_topological_sort.py (1)
7-7: LGTM on empty-input assertion; consider broadening type hints in implementation.

The switch to a literal `{}` is fine and clearer. Note: tests exercise integer and string keys, while `topological_sort` is annotated for `Mapping[str, Iterable[str]]`. Consider generalizing the implementation to accept `Hashable` keys to align with tests.

As a follow-up outside this file, you can adjust the signature in `infrahub_sdk/topological_sort.py`:

-def topological_sort(dependency_dict: Mapping[str, Iterable[str]]) -> list[set[str]]:
+from collections.abc import Hashable
+
+def topological_sort(dependency_dict: Mapping[Hashable, Iterable[Hashable]]) -> list[set[Hashable]]:
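For illustration only, a quick sketch of the kind of call the tests exercise, assuming the implementation itself already accepts any hashable key (the example graph is made up):

```python
from infrahub_sdk.topological_sort import topological_sort

# Integer keys, as the unit tests use; the str-only annotation would flag this
# even though the algorithm only needs the keys to be hashable.
dependency_graph = {1: [2, 3], 2: [3], 3: []}
print(topological_sort(dependency_graph))
```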
infrahub_sdk/utils.py (1)

235-236: Avoid returning "None" as a string when no active branch is found.
`repo.active_branch` is `str | None`. `return str(repo.active_branch)` will return the literal "None" if the repo is in a detached HEAD state — likely unintended.

Suggested change:
-    return str(repo.active_branch)
+    branch_name = repo.active_branch
+    if branch_name is None:
+        # Detached HEAD or repo not initialized; choose the behavior that best fits callers.
+        # Option A: raise to force explicit handling
+        raise ValueError("No active git branch found (detached HEAD).")
+        # Option B: return empty string (document this contract)
+        # return ""
+    return branch_name

Please confirm which behavior callers expect; I can update usages or add tests accordingly.
tests/unit/ctl/test_repository_app.py (2)
329-363: Assert the CLI result and clean up the temporary YAML file; fix a misspelling.
- Capture `runner.invoke(...)` to assert `exit_code` for clearer failures.
- Remove the temp YAML file to avoid leaving stray files (you used `delete=False`).
- Rename `coppied_answers` -> `copied_answers`.

Apply:
-    runner.invoke(app, ["repository", "init", str(dst), "--data", str(yaml_path), "--vcs-ref", commit])
-    coppied_answers = yaml.safe_load((dst / ".copier-answers.yml").read_text())
-    coppied_answers.pop("_src_path")
+    result = runner.invoke(
+        app, ["repository", "init", str(dst), "--data", str(yaml_path), "--vcs-ref", commit]
+    )
+    assert result.exit_code == 0, result.stdout
+    copied_answers = yaml.safe_load((dst / ".copier-answers.yml").read_text())
+    copied_answers.pop("_src_path")
@@
-    assert coppied_answers == answers
+    assert copied_answers == answers
     assert (dst / "generators").is_dir()
     assert (dst / "queries").is_dir()
     assert (dst / "scripts").is_dir()
     assert (dst / "pyproject.toml").is_file()
+    yaml_path.unlink(missing_ok=True)
363-397: Local-template test: add cleanup for the temporary YAML file.

Mirror the cleanup to prevent leaking files when `delete=False` is set.

@@
     assert result.exit_code == 0, result.stdout
@@
     assert output_file.is_file()
     assert output_file.read_text() == "Hello local-test"
+    yaml_path.unlink(missing_ok=True)

infrahub_sdk/ctl/repository.py (2)
172-189: Init command: add a debug flag for consistency; make trust a plain bool; call init_logging.

Other subcommands in this module accept `--debug` and call `init_logging(debug=...)`. Mirror that for `init`. Also, `trust` should be `bool` (Typer will generate `--trust/--no-trust` automatically); `Optional[bool]` is unnecessary.

Apply this diff:
-@app.command()
-async def init(
-    directory: Path = typer.Argument(help="Directory path for the new project."),
+@app.command()
+async def init(
+    directory: Path = typer.Argument(help="Directory path for the new project."),
     template: str = typer.Option(
         default="https://github.com/opsmill/infrahub-template.git",
         help="Template to use for the new repository. Can be a local path or a git repository URL.",
     ),
     data: Optional[Path] = typer.Option(default=None, help="Path to YAML file containing answers to CLI prompt."),
     vcs_ref: Optional[str] = typer.Option(
         default="HEAD",
         help="VCS reference to use for the template. Defaults to HEAD.",
     ),
-    trust: Optional[bool] = typer.Option(
-        default=False,
-        help="Trust the template repository. If set, the template will be cloned without verification.",
-    ),
+    trust: bool = typer.Option(
+        default=False,
+        help="Trust the template to run its tasks (unsafe). Use with caution.",
+    ),
+    debug: bool = False,
     _: str = CONFIG_PARAM,
 ) -> None:
-    """Initialize a new Infrahub repository."""
+    """Initialize a new Infrahub repository."""
+    init_logging(debug=debug)
202-206: Template source resolution: minor nit.
`template_source = template or ""` is redundant since Typer ensures a non-empty default. Not harmful, just noise.

Apply this diff:
-    template_source = template or ""
+    template_source = template

infrahub_sdk/ctl/schema.py (2)
57-61: Validate expected locator shape to avoid mis-indexing in later logic.

Currently you only validate length and initial segments. Consider also checking that
loc_path[3] == "nodes"before indexingloc_path[4]as a node index.Suggested adjustment (illustrative):
-        if not valid_error_path(loc_path=loc_path):
+        if not valid_error_path(loc_path=loc_path) or (len(loc_path) >= 4 and loc_path[3] != "nodes"):
             continue
173-184: Unify output via Rich console for consistency; clarify “no diff” path.

Elsewhere you use
console.print. Stick to that here and pretty-print YAML.Apply this diff:
-    success, response = await client.schema.check(schemas=[item.payload for item in schemas_data], branch=branch)
+    success, response = await client.schema.check(schemas=[item.payload for item in schemas_data], branch=branch)
@@
-    else:
+    else:
         for schema_file in schemas_data:
             console.print(f"[green] schema '{schema_file.location}' is Valid!")
         if response == {"diff": {"added": {}, "changed": {}, "removed": {}}}:
-            print("No diff")
+            console.print("[green] No diff")
         else:
-            print(yaml.safe_dump(data=response, indent=4))
+            console.print(yaml.safe_dump(data=response, indent=4))
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`poetry.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (19)
- `.github/workflows/ci.yml` (11 hunks)
- `.github/workflows/labels.yml` (1 hunks)
- `.github/workflows/publish-pypi.yml` (1 hunks)
- `.github/workflows/release.yml` (1 hunks)
- `.github/workflows/repository-dispatch.yml` (1 hunks)
- `.github/workflows/sync-docs.yml` (1 hunks)
- `.github/workflows/update-submodule.yml` (1 hunks)
- `changelog/466.added.md` (1 hunks)
- `docs/docs/infrahubctl/infrahubctl-repository.mdx` (2 hunks)
- `infrahub_sdk/ctl/repository.py` (2 hunks)
- `infrahub_sdk/ctl/schema.py` (6 hunks)
- `infrahub_sdk/pytest_plugin/items/graphql_query.py` (1 hunks)
- `infrahub_sdk/schema/repository.py` (1 hunks)
- `infrahub_sdk/testing/docker.py` (1 hunks)
- `infrahub_sdk/utils.py` (2 hunks)
- `pyproject.toml` (3 hunks)
- `tests/integration/test_infrahub_client.py` (1 hunks)
- `tests/unit/ctl/test_repository_app.py` (2 hunks)
- `tests/unit/sdk/test_topological_sort.py` (1 hunks)
🧰 Additional context used
📓 Path-based instructions (6)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
When implementing Infrahub checks, subclass InfrahubCheck and override validate(data); do not implement or rely on a check() method
Files:
- tests/unit/sdk/test_topological_sort.py
- infrahub_sdk/pytest_plugin/items/graphql_query.py
- infrahub_sdk/schema/repository.py
- infrahub_sdk/testing/docker.py
- tests/integration/test_infrahub_client.py
- infrahub_sdk/utils.py
- tests/unit/ctl/test_repository_app.py
- infrahub_sdk/ctl/schema.py
- infrahub_sdk/ctl/repository.py
tests/unit/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place and write unit tests under tests/unit/ (isolated component tests)
Files:
- tests/unit/sdk/test_topological_sort.py
- tests/unit/ctl/test_repository_app.py
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Use the custom pytest plugin (infrahub_sdk.pytest_plugin) fixtures for clients, configuration, and Infrahub-specific testing
Files:
- tests/unit/sdk/test_topological_sort.py
- tests/integration/test_infrahub_client.py
- tests/unit/ctl/test_repository_app.py
docs/**/*.{md,mdx}
📄 CodeRabbit inference engine (CLAUDE.md)
docs/**/*.{md,mdx}: Follow the Diataxis framework and classify docs as Tutorials, How-to guides, Explanation, or Reference
Structure How-to guides with required frontmatter and sections: introduction, prerequisites, step-by-step steps, validation, related resources
Structure Topics (Explanation) docs with introduction, concepts & definitions, background & context, architecture & design, connections, further reading
Use a professional, concise, informative tone with consistent structure across documents
Use proper language tags on all code blocks
Include both async and sync examples where applicable using the Tabs component
Add validation steps to guides to confirm success and expected outputs
Use callouts for warnings, tips, and important notes
Define new terms on first use and use domain-relevant terminology consistent with Infrahub’s model/UI
Conform to markdown style rules in .markdownlint.yaml and Vale styles in .vale/styles/
Files:
docs/docs/infrahubctl/infrahubctl-repository.mdx
tests/integration/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Place and write integration tests under tests/integration/ (tests against real Infrahub instances)
Files:
tests/integration/test_infrahub_client.py
infrahub_sdk/ctl/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
infrahub_sdk/ctl/**/*.py: Build CLI commands with Typer
Organize and keep all CLI commands within infrahub_sdk/ctl/
Files:
- infrahub_sdk/ctl/schema.py
- infrahub_sdk/ctl/repository.py
🧠 Learnings (3)
📚 Learning: 2025-08-24T13:35:12.974Z
Learnt from: CR
PR: opsmill/infrahub-sdk-python#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T13:35:12.974Z
Learning: Applies to tests/**/*.py : Use the custom pytest plugin (infrahub_sdk.pytest_plugin) fixtures for clients, configuration, and Infrahub-specific testing
Applied to files:
tests/unit/ctl/test_repository_app.py
📚 Learning: 2025-08-24T13:35:12.974Z
Learnt from: CR
PR: opsmill/infrahub-sdk-python#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T13:35:12.974Z
Learning: Applies to infrahub_sdk/ctl/**/*.py : Organize and keep all CLI commands within infrahub_sdk/ctl/
Applied to files:
pyproject.toml
📚 Learning: 2025-08-24T13:35:12.974Z
Learnt from: CR
PR: opsmill/infrahub-sdk-python#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-24T13:35:12.974Z
Learning: Applies to infrahub_sdk/ctl/**/*.py : Build CLI commands with Typer
Applied to files:
pyproject.toml
🧬 Code graph analysis (4)
tests/unit/sdk/test_topological_sort.py (1)
infrahub_sdk/topological_sort.py (1)
topological_sort(20-40)
infrahub_sdk/utils.py (2)
infrahub_sdk/exceptions.py (1)
JsonDecodeError (13-20)

infrahub_sdk/repository.py (1)
GitRepoManager(9-33)
infrahub_sdk/ctl/schema.py (2)
infrahub_sdk/schema/__init__.py (5)
validate (101-102)
load (287-299)
load (712-724)
check (321-335)
check (746-760)

infrahub_sdk/yaml.py (3)
data (147-150)
payload (168-169)
SchemaFile (166-169)
infrahub_sdk/ctl/repository.py (1)
infrahub_sdk/async_typer.py (1)
command(29-31)
🪛 LanguageTool
docs/docs/infrahubctl/infrahubctl-repository.mdx
[style] ~67-~67: To form a complete sentence, be sure to include a subject or ‘there’.
Context: ...Template to use for the new repository. Can be a local path or a git repository URL...
(MISSING_IT_THERE)
[grammar] ~68-~68: There might be a mistake here.
Context: ...L file containing answers to CLI prompt. * --vcs-ref TEXT: VCS reference to use for the template....
(QB_NEW_EN)
[grammar] ~70-~70: There might be a mistake here.
Context: ...thout verification. [default: no-trust] * --config-file TEXT: [env var: INFRAHUBCTL_CONFIG; default:...
(QB_NEW_EN)
[grammar] ~71-~71: There might be a mistake here.
Context: ...UBCTL_CONFIG; default: infrahubctl.toml] * --help: Show this message and exit. ## `infra...
(QB_NEW_EN)
🔇 Additional comments (16)
infrahub_sdk/pytest_plugin/items/graphql_query.py (1)
19-20: Wording/terminology improvement looks good.

Capitalization and phrasing are clearer: “Infrahub repository configuration” is consistent and more formal.
infrahub_sdk/testing/docker.py (2)
16-17: Docstring capitalization fix looks good.

Consistent capitalization of the Infrahub proper noun. No functional impact.
36-48: Fixtures read well.

Client fixtures are straightforward and correctly typed. The `# noqa: S106` is appropriate for hardcoded test credentials.

.github/workflows/labels.yml (1)
20-20: Checkout v5 bump looks good.

Pure version bump to actions/checkout@v5. No behavior changes expected for this job.
infrahub_sdk/schema/repository.py (1)
27-27: Consistent capitalization of “Infrahub”.

Docstring capitalization fix aligns with branding and existing docs. No functional impact.
.github/workflows/publish-pypi.yml (2)
53-53: Checkout v5 upgrade looks good.

Moving to actions/checkout@v5 is consistent with the rest of the PR and should be safe with `submodules: true`.
57-66: I've provided a script to confirm:
- How Poetry is installed (e.g. via `install-poetry` or `setup-poetry`)
- Whether `virtualenvs-in-project` is explicitly set
- That the “Install Dependencies” step runs from the repository root (no `working-directory` override)

Once we know those details, we can ensure the cache path aligns with where Poetry actually creates the
.venv.infrahub_sdk/utils.py (1)
98-98: Good: JsonDecodeError now stores URL as str (consistent with exception signature).

This aligns with
JsonDecodeError(url: str | None)and avoids leaking transport-layer URL types via exceptions.docs/docs/infrahubctl/infrahubctl-repository.mdx (1)
22-22: Command list update looks good.

Adding
initto the command list is consistent with the new CLI and improves discoverability.pyproject.toml (2)
48-48: Add copier as optional dep: double-check Python compatibility matrix.
copier = "^9.8.0"is reasonable. Please verify Copier’s supported Python versions cover^3.9,<3.14to avoid resolver conflicts for users on 3.9 and 3.13.If CI installs only specific extras in different jobs, ensure the job running CLI tests installs the
ctlextra.
73-86: Confirmed CI workflows already install the `ctl` and all extras as needed.
Theci.ymlpipeline uses
poetry install --no-interaction --no-ansi --extras ctlbefore running the CLI/docs‐validation steps (lines 163–168), andpoetry install --all-extrasfor both the unit‐test (lines 235–238) and integration‐test jobs (lines 286–289).No jobs install only the
testsextras, so the newly addedcopierdependency will always be present in CI. No changes required.tests/unit/ctl/test_repository_app.py (1)
3-4: Imports for tempfile/Path/yaml are appropriate for the new tests.No issues spotted.
Also applies to: 8-8
infrahub_sdk/ctl/schema.py (4)
39-39: Switching to payload is correct.

Validating `schema_file.payload` aligns with the new `SchemaFile` API (payload proxies content). Good change.
51-56: Graceful fallback when detail is absent looks good.

Early return after handling non-detailed errors keeps output clean.
90-91: Iterating errors list is fine.

Switching to direct indexing of `response["errors"]` is OK given the API always returns a list here.
125-126: Using payload for load is correct.

Passing `schemas=[item.payload ...]` matches the updated model and server contract.
## `infrahubctl repository init`

Initialize a new Infrahub repository.

**Usage**:

```console
$ infrahubctl repository init [OPTIONS] DIRECTORY
```

**Arguments**:

* `DIRECTORY`: Directory path for the new project. [required]

**Options**:

* `--template TEXT`: Template to use for the new repository. Can be a local path or a git repository URL. [default: https://github.com/opsmill/infrahub-template.git]
* `--data PATH`: Path to YAML file containing answers to CLI prompt.
* `--vcs-ref TEXT`: VCS reference to use for the template. Defaults to HEAD. [default: HEAD]
* `--trust / --no-trust`: Trust the template repository. If set, the template will be cloned without verification. [default: no-trust]
* `--config-file TEXT`: [env var: INFRAHUBCTL_CONFIG; default: infrahubctl.toml]
* `--help`: Show this message and exit.
Fix grammar/style in the new repository init section; slight wording polish.
Several bullets read as fragments and trip style checks. Tightening phrasing improves clarity.
Apply the following diff within this block:
 ## `infrahubctl repository init`

 Initialize a new Infrahub repository.

 **Usage**:

 ```console
 $ infrahubctl repository init [OPTIONS] DIRECTORY
 ```

 **Arguments**:

-* `DIRECTORY`: Directory path for the new project. [required]
+* `DIRECTORY`: Directory path for the new project. [required]

 **Options**:

-* `--template TEXT`: Template to use for the new repository. Can be a local path or a git repository URL. [default: https://github.com/opsmill/infrahub-template.git]
-* `--data PATH`: Path to YAML file containing answers to CLI prompt.
+* `--template TEXT`: Template to use for the new repository; can be a local path or a Git repository URL. [default: https://github.com/opsmill/infrahub-template.git]
+* `--data PATH`: Path to a YAML file containing answers to the CLI prompts.
 * `--vcs-ref TEXT`: VCS reference to use for the template. Defaults to HEAD. [default: HEAD]
-* `--trust / --no-trust`: Trust the template repository. If set, the template will be cloned without verification. [default: no-trust]
+* `--trust / --no-trust`: Trust the template repository. If enabled, the template will be used without additional verification. [default: no-trust]
 * `--config-file TEXT`: [env var: INFRAHUBCTL_CONFIG; default: infrahubctl.toml]
 * `--help`: Show this message and exit.
Optional: add a short Validation subsection here showing expected files (e.g., `.copier-answers.yml`, `pyproject.toml`) after a successful run.
🤖 Prompt for AI Agents
In docs/docs/infrahubctl/infrahubctl-repository.mdx around lines 51 to 73,
polish wording and fix grammar in the infrahubctl repository init section:
change the --template description to "Template to use for the new repository;
can be a local path or a Git repository URL.", change --data to "Path to a
YAML file containing answers to the CLI prompts.", and update --trust / --no-trust to "Trust the template repository. If enabled, the template will be
used without additional verification." Keep the rest of the block intact and
optionally add a short "Validation" subsection listing expected files like
.copier-answers.yml and pyproject.toml after a successful run.
import typer
import yaml
from copier import run_copy
Avoid hard dependency on optional package "copier" (move import inside the command).
copier is added as an optional dependency in this PR. Importing it at module import time will break all repository subcommands (e.g., add/list) for users who installed the base SDK without extras. Defer the import to the init command and emit a helpful message if missing.
Apply this diff:
-from copier import run_copy

And see the follow-up diff in the init() block that adds a lazy import.
Committable suggestion skipped: line range outside the PR's diff.
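For reference, a minimal sketch of the deferred-import pattern described above; the helper name and message wording are illustrative, not taken from the PR:

```python
import typer


def _load_run_copy():
    """Import copier lazily so other repository subcommands work without the optional extra."""
    try:
        from copier import run_copy  # deferred import of the optional dependency
    except ImportError as exc:
        typer.echo(
            "The 'copier' package is required for 'infrahubctl repository init'. "
            "Install it with: pip install 'infrahub-sdk[ctl]'",
            err=True,
        )
        raise typer.Exit(code=1) from exc
    return run_copy
```

The `init` command would call `_load_run_copy()` at the top of its body, keeping the module importable when copier is absent.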
🤖 Prompt for AI Agents
In infrahub_sdk/ctl/repository.py around line 9, the top-level import "from
copier import run_copy" creates a hard dependency on the optional "copier"
package; move this import into the init() command so other repository
subcommands don't fail when "copier" isn't installed. Edit init() to perform a
lazy import using try/except ImportError, and if ImportError is raised raise a
clear click/CLI error (or print a helpful message) instructing the user to
install the optional extras (e.g., infrahub-sdk[copier]) or to pip install
copier; ensure no other module-level references to run_copy remain so other
commands work without copier.
    config_data = None
    if data:
        try:
            with Path.open(data, encoding="utf-8") as file:
                config_data = yaml.safe_load(file)
            typer.echo(f"Loaded config: {config_data}")
        except Exception as exc:
            typer.echo(f"Error loading YAML file: {exc}", err=True)
            raise typer.Exit(code=1)
YAML handling: avoid leaking secrets; validate structure; use idiomatic Path.open.
- Printing the fully loaded config can leak credentials or tokens from answers files.
- Ensure the YAML decodes into a mapping (dict) as expected by copier.
- Prefer `data.open()` over `Path.open(data, ...)` for clarity.
Apply this diff:
- config_data = None
- if data:
- try:
- with Path.open(data, encoding="utf-8") as file:
- config_data = yaml.safe_load(file)
- typer.echo(f"Loaded config: {config_data}")
- except Exception as exc:
- typer.echo(f"Error loading YAML file: {exc}", err=True)
- raise typer.Exit(code=1)
+ config_data = None
+ if data:
+ try:
+ with data.open("r", encoding="utf-8") as file:
+ config_data = yaml.safe_load(file) or {}
+ if not isinstance(config_data, dict):
+ raise typer.BadParameter("Expected a mapping in the YAML answers file (key: value).")
+ except (FileNotFoundError, yaml.YAMLError, typer.BadParameter) as exc:
+ typer.echo(f"Invalid YAML file: {exc}", err=True)
+ raise typer.Exit(code=1)
🤖 Prompt for AI Agents
In infrahub_sdk/ctl/repository.py around lines 192 to 201, avoid printing the
entire loaded YAML (which may leak secrets), use the idiomatic Path.open() call
on the Path object (data.open(...)), and validate that yaml.safe_load returns a
mapping: open the file with data.open(encoding="utf-8"), call yaml.safe_load,
check isinstance(config_data, dict) and raise typer.Exit(1) with a concise error
if not a mapping, and remove or replace the typer.echo(f"Loaded config:
{config_data}") with a non-sensitive acknowledgment (e.g., a single
informational message without contents) while keeping the existing exception
handling to log the error and exit.
| await asyncio.to_thread( | ||
| run_copy, | ||
| template_source, | ||
| str(directory), | ||
| data=config_data, | ||
| vcs_ref=vcs_ref, | ||
| unsafe=trust, | ||
| ) | ||
| except Exception as e: |
🛠️ Refactor suggestion
Lazy import copier and handle copier-specific errors; keep the event loop responsive.
Perform a local import with clear guidance if copier is missing, and handle its specific exceptions for cleaner UX.
Apply this diff:
- try:
- await asyncio.to_thread(
- run_copy,
- template_source,
- str(directory),
- data=config_data,
- vcs_ref=vcs_ref,
- unsafe=trust,
- )
- except Exception as e:
- typer.echo(f"Error running copier: {e}", err=True)
- raise typer.Exit(code=1)
+ # Lazy import to keep 'copier' optional for users not using 'repository init'
+ try:
+ from copier import run_copy # type: ignore
+ try:
+ from copier.errors import UserMessageError as CopierUserMessageError # type: ignore
+ except Exception: # older copier
+ CopierUserMessageError = Exception # type: ignore
+ except Exception:
+ typer.echo(
+ "The 'copier' package is required for 'infrahubctl repository init'. "
+ "Install with: pip install 'infrahub-sdk[ctl]' or pip install copier",
+ err=True,
+ )
+ raise typer.Exit(code=1)
+
+ try:
+ await asyncio.to_thread(
+ run_copy,
+ template_source,
+ str(directory),
+ data=config_data,
+ vcs_ref=vcs_ref,
+ unsafe=trust,
+ )
+ except CopierUserMessageError as exc:
+ typer.echo(f"Template error: {exc}", err=True)
+ raise typer.Exit(code=1)
+ except Exception as exc:
+ typer.echo(f"Error running copier: {exc}", err=True)
+ raise typer.Exit(code=1)
🤖 Prompt for AI Agents
In infrahub_sdk/ctl/repository.py around lines 208 to 216, do a local import of
copier right before calling run_copy so we fail fast with a clear message if
copier is not installed, and catch copier-specific exceptions to provide a
cleaner error to the user: add "try: import copier except ModuleNotFoundError:
raise RuntimeError('copier is required for this command; please install package
\"copier\"')", then wrap the asyncio.to_thread call in a try/except that catches
copier.CopierException (or copier.CopierError / the public base exception from
copier) and re-raises or raises a RuntimeError with the copier error details,
leaving other exceptions unchanged so the event loop remains responsive.
def get_node(schemas_data: list[SchemaFile], schema_index: int, node_index: int) -> dict | None:
    if schema_index < len(schemas_data) and node_index < len(schemas_data[schema_index].payload["nodes"]):
        return schemas_data[schema_index].payload["nodes"][node_index]
    return None
🛠️ Refactor suggestion
Guard against missing nodes key and out-of-range indices.
Using payload["nodes"] can raise a KeyError for malformed payloads and your bounds check doesn't cover that. Prefer .get("nodes", []).
Apply this diff:
-def get_node(schemas_data: list[SchemaFile], schema_index: int, node_index: int) -> dict | None:
- if schema_index < len(schemas_data) and node_index < len(schemas_data[schema_index].payload["nodes"]):
- return schemas_data[schema_index].payload["nodes"][node_index]
- return None
+def get_node(schemas_data: list[SchemaFile], schema_index: int, node_index: int) -> dict | None:
+ if schema_index >= len(schemas_data):
+ return None
+ payload = schemas_data[schema_index].payload or {}
+ nodes = payload.get("nodes", [])
+ if isinstance(nodes, list) and 0 <= node_index < len(nodes):
+ return nodes[node_index]
+ return None
🤖 Prompt for AI Agents
In infrahub_sdk/ctl/schema.py around lines 100 to 103, the function accesses
payload["nodes"] which can raise KeyError for malformed payloads and the current
length checks don't guard against missing or non-list nodes; update the function
to use payload.get("nodes", []) (or retrieve and validate nodes into a local
variable), verify nodes is a sequence, and then check indices against its length
before returning nodes[node_index], otherwise return None.
Summary by CodeRabbit