IFC-1886: Paginated branch graphql query #7418
Conversation
Walkthrough
Adds paginated branch support to the GraphQL API: new types InfrahubBranch, InfrahubBranchEdge, InfrahubBranchType, StatusField and several value-field wrappers; a new query field InfrahubBranch exposed via InfrahubBranchQueryList with a resolver enforcing limit/offset validation. Types and query symbols are exported from modules and wired into InfrahubBaseQuery. The backend query layer gains BranchNodeGetListQuery (adding a raw_filter to exclude DELETING branches) and Branch.get_list_count using that query. GraphQL type mapping logic is updated, unit tests for paginated branch queries are added, and the pre-commit EOF-fixer exclusion now includes schema.graphql.

Pre-merge checks and finishing touches: ✅ Passed checks (2 passed)
Actionable comments posted: 4
🧹 Nitpick comments (1)
backend/tests/unit/graphql/queries/test_branch.py (1)
152-228: Add docstring and expand test coverage.
The test method is missing a docstring, which violates the coding guidelines. Additionally, the test only covers the happy-path scenario. Consider adding test cases for edge cases to ensure the pagination implementation is robust.
As per coding guidelines, add a docstring:
```diff
 async def test_paginated_branch_query(
     self,
     db: InfrahubDatabase,
     default_branch: Branch,
     register_core_models_schema,
     session_admin,
     client,
     service,
 ):
+    """Test paginated branch query returns correct pagination metadata.
+
+    Creates 10 test branches and verifies that querying page 2 with limit 5
+    returns the correct page metadata and 5 branches.
+    """
     for i in range(10):
```

Additionally, consider adding test cases for:
- Page 1 to verify first page behavior
- Last page with fewer items than limit
- Page beyond available data (should return empty list)
- Invalid pagination parameters (page=0, negative limit)
- Default pagination parameters when not specified
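These edge cases mostly reduce to offset/limit arithmetic. A standalone sketch (a hypothetical `paginate` helper, not the project's API) of what each case should yield:

```python
def paginate(items: list[str], page: int, limit: int) -> list[str]:
    """Return the slice of items for a 1-indexed page, rejecting invalid parameters."""
    if page < 1 or limit < 1:
        raise ValueError("page and limit must be >= 1")
    offset = (page - 1) * limit
    return items[offset : offset + limit]


branches = [f"branch-{i}" for i in range(10)]
assert paginate(branches, page=1, limit=5) == branches[:5]  # first page
assert len(paginate(branches, page=3, limit=4)) == 2        # last page, fewer items than limit
assert paginate(branches, page=5, limit=5) == []            # page beyond available data
```

Invalid parameters (page=0, negative limit) should raise rather than silently return data, which is what the limit/offset validation in the resolver is meant to guarantee.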
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- backend/infrahub/graphql/queries/__init__.py (2 hunks)
- backend/infrahub/graphql/queries/branch.py (2 hunks)
- backend/infrahub/graphql/schema.py (2 hunks)
- backend/infrahub/graphql/types/__init__.py (2 hunks)
- backend/infrahub/graphql/types/branch.py (2 hunks)
- backend/tests/unit/graphql/queries/test_branch.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
backend/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Run backend tests with pytest or via invoke tasks
Files:
- backend/infrahub/graphql/queries/__init__.py
- backend/tests/unit/graphql/queries/test_branch.py
- backend/infrahub/graphql/types/branch.py
- backend/infrahub/graphql/types/__init__.py
- backend/infrahub/graphql/schema.py
- backend/infrahub/graphql/queries/branch.py
**/*.py
📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)
**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful
Use ruff and mypy to validate and lint Python files
**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking
**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy
Files:
- backend/infrahub/graphql/queries/__init__.py
- backend/tests/unit/graphql/queries/test_branch.py
- backend/infrahub/graphql/types/branch.py
- backend/infrahub/graphql/types/__init__.py
- backend/infrahub/graphql/schema.py
- backend/infrahub/graphql/queries/branch.py
backend/tests/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Place backend tests in backend/tests/
Files:
backend/tests/unit/graphql/queries/test_branch.py
🧬 Code graph analysis (3)
backend/tests/unit/graphql/queries/test_branch.py (6)
- backend/tests/conftest.py (4): db (122-138), default_branch (161-162), register_core_models_schema (217-218), branch (1239-1243)
- backend/tests/helpers/test_app.py (3): default_branch (54-61), client (146-173), service (112-113)
- backend/tests/unit/conftest.py (1): session_admin (2679-2681)
- backend/infrahub/services/__init__.py (1): client (154-158)
- backend/infrahub/graphql/initialization.py (1): prepare_graphql_params (79-125)
- backend/tests/helpers/graphql.py (1): graphql (26-61)
backend/infrahub/graphql/types/__init__.py (1)
- backend/infrahub/graphql/types/branch.py (2): BranchType (16-46), InfrahubBranchType (49-52)
backend/infrahub/graphql/queries/branch.py (2)
- backend/infrahub/graphql/field_extractor.py (1): extract_graphql_fields (67-69)
- backend/infrahub/graphql/types/branch.py (3): BranchType (16-46), InfrahubBranchType (49-52), get_list (34-46)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: E2E-testing-version-upgrade / From 1.3.0
- GitHub Check: validate-generated-documentation
- GitHub Check: E2E-testing-playwright
- GitHub Check: graphql-schema
- GitHub Check: backend-docker-integration
- GitHub Check: backend-tests-functional
- GitHub Check: backend-tests-unit
- GitHub Check: backend-tests-integration
- GitHub Check: backend-benchmark
CodSpeed Performance Report: merging #7418 will not alter performance.
Actionable comments posted: 4
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- backend/infrahub/core/branch/models.py (1 hunks)
- backend/infrahub/core/node/standard.py (1 hunks)
- backend/infrahub/core/query/__init__.py (2 hunks)
- backend/infrahub/core/query/standard_node.py (1 hunks)
- backend/infrahub/graphql/queries/branch.py (2 hunks)
- backend/infrahub/graphql/types/branch.py (2 hunks)
- backend/tests/unit/graphql/queries/test_branch.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/tests/unit/graphql/queries/test_branch.py
🧰 Additional context used
📓 Path-based instructions (2)
backend/**/* and **/*.py — same guidelines as in the first review.
Files:
- backend/infrahub/core/node/standard.py
- backend/infrahub/core/query/standard_node.py
- backend/infrahub/core/branch/models.py
- backend/infrahub/graphql/types/branch.py
- backend/infrahub/graphql/queries/branch.py
- backend/infrahub/core/query/__init__.py
🧬 Code graph analysis (4)
backend/infrahub/core/node/standard.py (4)
- backend/infrahub/core/branch/models.py (1): get_list_and_count (169-178)
- backend/infrahub/graphql/types/branch.py (1): get_list_and_count (58-70)
- backend/infrahub/core/query/__init__.py (5): Query (339-697), init (393-406), execute (530-563), get (222-223), count (588-605)
- backend/infrahub/core/query/standard_node.py (1): StandardNodeGetListQuery (132-171)
backend/infrahub/core/branch/models.py (3)
- backend/infrahub/core/node/standard.py (1): get_list_and_count (232-245)
- backend/infrahub/graphql/types/branch.py (1): get_list_and_count (58-70)
- backend/infrahub/core/branch/enums.py (1): BranchStatus (4-8)
backend/infrahub/graphql/types/branch.py (4)
- backend/infrahub/core/branch/models.py (2): get_list_and_count (169-178), Branch (29-544)
- backend/infrahub/core/node/standard.py (2): get_list_and_count (232-245), to_graphql (71-86)
- backend/infrahub/graphql/initialization.py (1): GraphqlContext (34-76)
- backend/infrahub/database/__init__.py (1): start_session (222-239)
backend/infrahub/graphql/queries/branch.py (3)
- backend/infrahub/graphql/field_extractor.py (1): extract_graphql_fields (67-69)
- backend/infrahub/graphql/types/branch.py (3): BranchType (16-46), InfrahubBranchType (53-70), get_list_and_count (58-70)
- backend/infrahub/core/branch/models.py (1): get_list_and_count (169-178)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: graphql-schema
- GitHub Check: validate-generated-documentation
- GitHub Check: backend-tests-unit
- GitHub Check: javascript-lint
🔇 Additional comments (2)
backend/infrahub/core/node/standard.py (1)
231-245: LGTM! Clean pagination implementation.
The get_list_and_count method correctly mirrors get_list while adding count information. The count reflects total matching records (not just the page) since Query.count() excludes LIMIT/OFFSET, which is the standard pagination pattern.
backend/infrahub/core/branch/models.py (1)
168-178: LGTM! Consistent filtering of DELETING branches.
The method correctly filters out branches with DELETING status, maintaining consistency with the existing get_list method (lines 163-166).
```python
async def infrahub_branch_resolver(
    root: dict,  # noqa: ARG001
    info: GraphQLResolveInfo,
    **kwargs: Any,
) -> dict[str, Any]:
    limit = kwargs.pop("limit", 100)
    offset = kwargs.pop("offset", 0)
    fields = extract_graphql_fields(info)
    branches, count = await InfrahubBranchType.get_list_and_count(
        fields=fields.get("edges", {}).get("node", {}),
        graphql_context=info.context,
        limit=limit,
        offset=offset,
        **kwargs,
    )
    return {"count": count, "edges": {"node": branches}}
```
Non-standard pagination structure.
The return structure {"count": count, "edges": {"node": branches}} doesn't follow standard GraphQL pagination patterns. Standard Relay cursor connections use:
```graphql
type Connection {
  edges: [Edge!]!
  pageInfo: PageInfo!
}

type Edge {
  node: Node!
  cursor: String!
}
```

Your implementation has edges as a single object containing node as an array, rather than edges as an array of edge objects. This makes the InfrahubBranchEdge type redundant and could confuse API consumers familiar with standard pagination.
Consider refactoring to a simpler, clearer structure:

```diff
 async def infrahub_branch_resolver(
     root: dict,  # noqa: ARG001
     info: GraphQLResolveInfo,
     **kwargs: Any,
 ) -> dict[str, Any]:
     limit = kwargs.pop("limit", 100)
     offset = kwargs.pop("offset", 0)
     fields = extract_graphql_fields(info)
     branches, count = await InfrahubBranchType.get_list_and_count(
-        fields=fields.get("edges", {}).get("node", {}),
+        fields=fields.get("branches", {}),
         graphql_context=info.context,
         limit=limit,
         offset=offset,
         **kwargs,
     )
-    return {"count": count, "edges": {"node": branches}}
+    return {"count": count, "branches": branches}
```

Then update InfrahubBranchType (in types/branch.py) to:
```python
class InfrahubBranchType(InfrahubObjectType):
    count = Field(Int, required=True)
    branches = Field(List(of_type=NonNull(BranchType)), required=True)
```

This is simpler and more intuitive than the current edge wrapper.
🤖 Prompt for AI Agents
In backend/infrahub/graphql/queries/branch.py around lines 33 to 48, the
resolver returns a non-standard pagination shape {"count": count, "edges":
{"node": branches}}; change it to return a simple, conventional shape like
{"count": count, "branches": branches} and update any callers to expect
"branches" instead of edges.node. Also update the GraphQL type in
types/branch.py to expose count: Int! and branches: [BranchType!]! (replace the
InfrahubBranchEdge wrapper) and adjust InfrahubBranchType.get_list_and_count
usage/field extraction if needed so fields map to the BranchType nodes directly.
Ensure tests and schema refs are updated to the new field name.
I think the robot is right about this. The structure should be:

```
edges: [
  {
    node: {
      name: "branch-2", ...
    }
  }, ...
]
```
you'll need to update the test too
@ajtmccarty You're absolutely right! The structure needs to follow the standard Relay pattern. Here's the corrected resolver:
```python
async def infrahub_branch_resolver(
    root: dict,  # noqa: ARG001
    info: GraphQLResolveInfo,
    **kwargs: Any,
) -> dict[str, Any]:
    limit = kwargs.pop("limit", 100)
    offset = kwargs.pop("offset", 0)
    fields = extract_graphql_fields(info)
    branches, count = await InfrahubBranchType.get_list_and_count(
        fields=fields.get("edges", {}).get("node", {}),
        graphql_context=info.context,
        limit=limit,
        offset=offset,
        **kwargs,
    )
    # Transform branches into proper edge structure
    edges = [{"node": branch} for branch in branches]
    return {"count": count, "edges": edges}
```

Additionally, you'll need to fix the InfrahubBranchEdge type in backend/infrahub/graphql/types/branch.py. The node field should be a single BranchType, not a List:
```python
class InfrahubBranchEdge(InfrahubObjectType):
    node = Field(BranchType, required=True)  # Single BranchType, not List
```

And update InfrahubBranchType:

```python
class InfrahubBranchType(InfrahubObjectType):
    count = Field(Int, required=True)
    edges = Field(List(of_type=NonNull(InfrahubBranchEdge)), required=True)  # List of edges
```

The test in backend/tests/unit/graphql/queries/test_branch.py will also need to be updated to expect edges as an array and access each edge's node field accordingly.
@ajtmccarty we are using this because we want to maintain a consistent response like some of the other queries i.e.
```graphql
query MyQuery {
  BuiltinTag {
    count
    edges {
      node {
        name {
          value
        }
        description {
          value
        }
      }
    }
  }
}
```
cc @ogenstad
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
I think the confusing part here is that CodeRabbit spots an issue and then hallucinates a solution that is incorrect. What we want is that "edges" should be a list/array and each "node" should be an object as an entry in that array. A potential area of confusion is that you don't see this within the GraphQL query, which is correct as per the above; however, the response might not be. What I'd do is create an environment with a few BuiltinTag objects, issue the above query, observe what the response looks like, and then see if the structure matches what we now get from the InfrahubBranch query.
```python
    name: str | None = None,
    **kwargs: Any,
) -> tuple[list[Self], int]:
    kwargs["raw_filter"] = f"n.status <> '{BranchStatus.DELETING.value}'"
```
It'd be better to use a named parameter for raw_filter instead of kwargs.
```python
async def infrahub_branch_resolver(
    root: dict,  # noqa: ARG001
    info: GraphQLResolveInfo,
    **kwargs: Any,
) -> dict[str, Any]:
    limit = kwargs.pop("limit", 100)
    offset = kwargs.pop("offset", 0)
    fields = extract_graphql_fields(info)
    branches, count = await InfrahubBranchType.get_list_and_count(
        fields=fields.get("edges", {}).get("node", {}),
        graphql_context=info.context,
        limit=limit,
        offset=offset,
        **kwargs,
    )
    return {"count": count, "edges": {"node": branches}}
```
I think the robot is right about this. The structure should be:

```
edges: [
  {
    node: {
      name: "branch-2", ...
    }
  }, ...
]
```
you'll need to update the test too
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/infrahub/core/branch/models.py (1)
156-169: Fix incorrect type hint for kwargs.
The type hint **kwargs: dict[str, Any] is incorrect. The **kwargs syntax already indicates a dictionary of keyword arguments, so the type hint should be **kwargs: Any to describe the types of the individual values. Apply this diff:

```diff
 async def get_list(
     cls,
     db: InfrahubDatabase,
     limit: int = 1000,
     ids: list[str] | None = None,
     name: str | None = None,
-    **kwargs: dict[str, Any],
+    **kwargs: Any,
 ) -> list[Self]:
     query: Query = await BranchNodeGetListQuery.init(
         db=db, node_class=cls, ids=ids, node_name=name, limit=limit, **kwargs
     )
     await query.execute(db=db)
     return [cls.from_db(result.get("n")) for result in query.get_results()]

 @classmethod
 async def get_list_count(
     cls,
     db: InfrahubDatabase,
     limit: int = 1000,
     ids: list[str] | None = None,
     name: str | None = None,
-    **kwargs: dict[str, Any],
+    **kwargs: Any,
 ) -> int:
     query: Query = await BranchNodeGetListQuery.init(
         db=db, node_class=cls, ids=ids, node_name=name, limit=limit, **kwargs
     )
     return await query.count(db=db)
```

Also applies to: 171-183
♻️ Duplicate comments (2)
backend/infrahub/graphql/queries/branch.py (2)
40-44: Critical: Incorrect Relay pagination structure for edges.
The resolver returns {"count": count, "edges": {"node": branches}} where edges is a single object. According to the standard Relay pagination pattern (confirmed by past review comments), edges should be an array of edge objects, each containing a node field: [{"node": branch1}, {"node": branch2}, ...]. Apply this diff:

```diff
 fields = extract_graphql_fields(info)
 branches = await BranchType.get_list(
     graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
 )
 count = await InfrahubBranchType.get_list_count(graphql_context=info.context)
-return {"count": count, "edges": {"node": branches}}
+edges = [{"node": branch} for branch in branches]
+return {"count": count, "edges": edges}
```

Additionally, update the field extraction to handle edges as an array:

```diff
 fields = extract_graphql_fields(info)
+# Extract node fields from edges array
+edge_fields = fields.get("edges", {})
+if isinstance(edge_fields, dict):
+    node_fields = edge_fields.get("node", {})
+else:
+    node_fields = {}
 branches = await BranchType.get_list(
-    graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
+    graphql_context=info.context, fields=node_fields, limit=limit, offset=offset
 )
```
33-44: Missing docstring for resolver function.
The infrahub_branch_resolver function lacks a docstring, which violates the coding guidelines requiring Google-style docstrings for all functions. Add a docstring following the Google style format:

```diff
 async def infrahub_branch_resolver(
     root: dict,  # noqa: ARG001
     info: GraphQLResolveInfo,
     limit: int | None = None,
     offset: int | None = None,
 ) -> dict[str, Any]:
+    """Resolve paginated branch query.
+
+    Retrieves a paginated list of branches with total count.
+
+    Args:
+        root: GraphQL root value (unused)
+        info: GraphQL resolve info containing context and field information
+        limit: Maximum number of branches to return
+        offset: Number of branches to skip for pagination
+
+    Returns:
+        Dictionary with count (total branches) and edges (list of branch nodes)
+    """
     fields = extract_graphql_fields(info)
```

Based on coding guidelines.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- backend/infrahub/core/branch/models.py (2 hunks)
- backend/infrahub/core/query/__init__.py (1 hunks)
- backend/infrahub/core/query/branch.py (2 hunks)
- backend/infrahub/core/query/standard_node.py (2 hunks)
- backend/infrahub/graphql/queries/branch.py (2 hunks)
- backend/infrahub/graphql/types/branch.py (2 hunks)
- backend/tests/unit/graphql/queries/test_branch.py (4 hunks)
- schema/schema.graphql (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- backend/infrahub/core/query/standard_node.py
- backend/infrahub/core/query/__init__.py
- backend/infrahub/graphql/types/branch.py
🧰 Additional context used
📓 Path-based instructions (3)
backend/tests/**/*, backend/**/*, and **/*.py — same guidelines as in the first review.
Files:
- backend/tests/unit/graphql/queries/test_branch.py
- backend/infrahub/graphql/queries/branch.py
- backend/infrahub/core/branch/models.py
- backend/infrahub/core/query/branch.py
🧬 Code graph analysis (4)
backend/tests/unit/graphql/queries/test_branch.py (2)
- backend/infrahub/graphql/initialization.py (1): prepare_graphql_params (79-125)
- backend/tests/helpers/graphql.py (1): graphql (26-61)
backend/infrahub/graphql/queries/branch.py (3)
- backend/infrahub/graphql/field_extractor.py (1): extract_graphql_fields (67-69)
- backend/infrahub/graphql/types/branch.py (4): BranchType (16-46), InfrahubBranchType (53-60), get_list (34-46), get_list_count (58-60)
- backend/infrahub/core/branch/models.py (2): get_list (156-169), get_list_count (172-183)
backend/infrahub/core/branch/models.py (4)
- backend/infrahub/core/query/__init__.py (7): Query (339-695), QueryType (140-142), init (391-404), execute (528-561), get (222-223), get_results (619-627), count (586-603)
- backend/infrahub/core/query/branch.py (1): BranchNodeGetListQuery (153-154)
- backend/infrahub/core/node/standard.py (2): from_db (152-180), get (131-138)
- backend/infrahub/graphql/types/branch.py (1): get_list_count (58-60)
backend/infrahub/core/query/branch.py (2)
- backend/infrahub/core/branch/enums.py (1): BranchStatus (4-8)
- backend/infrahub/core/query/standard_node.py (1): StandardNodeGetListQuery (132-172)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: E2E-testing-version-upgrade / From 1.3.0
- GitHub Check: backend-benchmark
- GitHub Check: E2E-testing-playwright
- GitHub Check: validate-generated-documentation
- GitHub Check: graphql-schema
- GitHub Check: backend-tests-unit
- GitHub Check: backend-docker-integration
🔇 Additional comments (3)
backend/infrahub/core/query/branch.py (1)
6-6: LGTM! Required imports for the new query class.
The imports are necessary for the new BranchNodeGetListQuery class implementation.
Also applies to: 9-9
schema/schema.graphql (1)
schema/schema.graphql (1)
10410-10411: Query definition is well-structured pending the schema fix above.
The InfrahubBranch query parameters (ids, limit, name, offset) and return type align with pagination expectations. Ensure the type schema fix is applied so the query returns the correct structure.
backend/tests/unit/graphql/queries/test_branch.py (1)
63-63: LGTM: Removal of include_subscription parameter aligns with guidance.
The removal of include_subscription=False from these prepare_graphql_params calls aligns with the guidance to avoid this parameter in new code.
Also applies to: 108-108, 132-132
```python
class BranchNodeGetListQuery(StandardNodeGetListQuery):
    raw_filter = f"n.status <> '{BranchStatus.DELETING.value}'"
```
🛠️ Refactor suggestion | 🟠 Major
Add docstring and type annotation.
The class is missing a docstring and type annotation, which are required by the coding guidelines.
Apply this diff to add the missing docstring and type annotation:
```diff
 class BranchNodeGetListQuery(StandardNodeGetListQuery):
+    """Query to retrieve a list of branches, excluding those with DELETING status.
+
+    This query extends StandardNodeGetListQuery with a filter to exclude branches
+    that are currently being deleted from the results.
+    """
+
-    raw_filter = f"n.status <> '{BranchStatus.DELETING.value}'"
+    raw_filter: str = f"n.status <> '{BranchStatus.DELETING.value}'"
```

As per coding guidelines.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
class BranchNodeGetListQuery(StandardNodeGetListQuery):
    """Query to retrieve a list of branches, excluding those with DELETING status.

    This query extends StandardNodeGetListQuery with a filter to exclude branches
    that are currently being deleted from the results.
    """

    raw_filter: str = f"n.status <> '{BranchStatus.DELETING.value}'"
```
🤖 Prompt for AI Agents
In backend/infrahub/core/query/branch.py around lines 153-154, the class
BranchNodeGetListQuery is missing a docstring and a type annotation for its
raw_filter attribute; add a concise class docstring that explains this Query
filters out deleting Branch nodes and add an explicit type annotation on
raw_filter (raw_filter: str = f"n.status <> '{BranchStatus.DELETING.value}'") to
satisfy the coding guidelines.
```python
assert all_branches.data["InfrahubBranch"]["edges"]["node"].sort(
    key=operator.itemgetter("name")
) == expected_branches.sort(key=operator.itemgetter("name"))
```
Critical: Incorrect edges data structure access.
The test accesses edges["node"] as if edges is a single object, but according to the GraphQL Relay pagination pattern (confirmed in past review comments), edges should be an array of edge objects, each containing a node field.
Apply this diff to fix the assertion:
```diff
-assert all_branches.data["InfrahubBranch"]["edges"]["node"].sort(
-    key=operator.itemgetter("name")
-) == expected_branches.sort(key=operator.itemgetter("name"))
+edges = all_branches.data["InfrahubBranch"]["edges"]
+actual_branches = [edge["node"] for edge in edges]
+assert sorted(actual_branches, key=operator.itemgetter("name")) == sorted(
+    expected_branches, key=operator.itemgetter("name")
+)
```

Note: This also fixes the comparison issue where .sort() returns None rather than the sorted list.
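A quick illustration of why the original assertion passes vacuously: list.sort() sorts in place and returns None, so both sides of the comparison are None regardless of the data.

```python
import operator

actual = [{"name": "b2"}, {"name": "b1"}]
expected = [{"name": "b3"}]  # deliberately different data

# Both .sort() calls return None, so this comparison is True even though
# the two lists clearly hold different branches.
assert actual.sort(key=operator.itemgetter("name")) == expected.sort(key=operator.itemgetter("name"))

# sorted() returns a new list, so the comparison actually inspects the data.
assert sorted(actual, key=operator.itemgetter("name")) != sorted(expected, key=operator.itemgetter("name"))
```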
🤖 Prompt for AI Agents
In backend/tests/unit/graphql/queries/test_branch.py around lines 228 to 230,
the assertion incorrectly treats edges as a single dict and uses .sort() (which
returns None); extract the list of nodes from edges (e.g. map or list
comprehension to get each edge["node"]) for both actual and expected data if
needed, then use sorted(..., key=operator.itemgetter("name")) on each list and
compare the two sorted lists.
```python
    graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
)
count = await InfrahubBranchType.get_list_count(graphql_context=info.context)
return {"count": count, "edges": {"node": branches}}
```
I think what Aaron meant earlier regarding the separation of count and edges is that users can send in queries that request different information:
In this case we'd expect to return both count and edges:
```graphql
query MyQuery {
  InfrahubBranch {
    count
    edges {
      node {
        name
      }
    }
  }
}
```

If we however have a query like the following one, we don't need to send an additional query for the "count" since the request doesn't expect to have this information:
```graphql
query MyQuery {
  InfrahubBranch {
    edges {
      node {
        name
      }
    }
  }
}
```

Finally it could be that someone just wants to count the branches, and in this case we don't need to query the database for the details of each branch.
```graphql
query MyQuery {
  InfrahubBranch {
    count
  }
}
```

One thing that I noticed is that we have different filters for BranchType.get_list compared to InfrahubBranchType.get_list_count, where we filter out the -global- branch with .get_list() but not with get_list_count(). So the count returns an extra number which is never seen.
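A minimal sketch of the idea behind these three cases (hypothetical resolver and helper names, not the PR's actual code): only run the count query when `count` was selected, and only fetch nodes when `edges` was selected.

```python
import asyncio

FAKE_BRANCHES = [{"name": f"branch-{i}"} for i in range(4)]  # hypothetical data store


async def get_list_count() -> int:
    return len(FAKE_BRANCHES)


async def get_list(limit: int, offset: int) -> list[dict]:
    return FAKE_BRANCHES[offset : offset + limit]


async def resolve_infrahub_branch(fields: dict, limit: int = 10, offset: int = 0) -> dict:
    """Resolve only the parts of the response the query actually selected."""
    result: dict = {}
    if "count" in fields:
        result["count"] = await get_list_count()
    if "edges" in fields:
        branches = await get_list(limit=limit, offset=offset)
        result["edges"] = [{"node": branch} for branch in branches]
    return result
```

With `fields={"count": None}` only the count query runs; with only `edges` selected, only the node fetch runs.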
A comment regarding the structure. We'd expect "edges" to be an array, instead of having "node" be an array. I.e. "edges" should be an array and "node" should be an object.
```python
return {"count": count, "edges": {"node": branches}}
```

On the topic of structure, I think we should align the output so that it matches the other nodes, i.e. move further away from what we had in the original Branch query. I think a query would instead look like this:
```graphql
query MyQuery {
  InfrahubBranch {
    edges {
      node {
        name {
          value # <-- This is the new part we don't currently have.
        }
      }
    }
  }
}
```
```python
assert all_branches.errors is None
assert all_branches.data
assert all_branches.data["InfrahubBranch"]["count"] == 13
```
This highlights the issue I mentioned above with the count. We start out with the default branch and then create 10 branches, yet the count comes back as 13. Probably the -global- branch is part of the count, and also one other branch. I'm not sure if this is due to the testing environment and we'd have some duplicate of the default branch. It would be good to determine where the extra branch that isn't the global branch comes from.
```python
        for i in range(10)
    ],
]
assert all_branches.data["InfrahubBranch"]["edges"]["node"].sort(
```
As CodeRabbit is mentioning, all_branches.data["InfrahubBranch"]["edges"]["node"] should not be a valid field, as all_branches.data["InfrahubBranch"]["edges"] should be an array.
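To make the expected shape concrete (plain dicts with hypothetical data, not the PR's code): edges is the list, and each edge wraps exactly one node.

```python
branches = [{"name": "main"}, {"name": "branch3"}]

# Relay-style shape: "edges" is a list of {"node": ...} objects,
# so each edge holds a single node rather than "node" holding a list.
payload = {"count": len(branches), "edges": [{"node": branch} for branch in branches]}

# Consumers iterate over the edges and read each edge's single node.
names = [edge["node"]["name"] for edge in payload["edges"]]
```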
Actionable comments posted: 2
♻️ Duplicate comments (1)
backend/infrahub/graphql/types/branch.py (1)
54-56: Add class docstring.

The InfrahubBranchType class is missing a docstring, which violates the coding guidelines requiring Google-style docstrings for all Python classes. As per coding guidelines, apply this diff:

```diff
 class InfrahubBranchType(InfrahubObjectType):
+    """Paginated response type for branch queries.
+
+    Provides edges-based pagination with total count for branch listings.
+    The count excludes the global branch.
+    """
     count = Field(Int, description="Total number of items")
     edges = Field(List(of_type=NonNull(InfrahubBranchEdge)))
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- backend/infrahub/graphql/queries/branch.py (2 hunks)
- backend/infrahub/graphql/types/branch.py (2 hunks)
- backend/tests/unit/graphql/queries/test_branch.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- backend/tests/unit/graphql/queries/test_branch.py
- backend/infrahub/graphql/queries/branch.py
🧰 Additional context used
📓 Path-based instructions (2)
backend/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Run backend tests with pytest or via invoke tasks
Files:
backend/infrahub/graphql/types/branch.py
**/*.py
📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)
**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful
Use ruff and mypy to validate and lint Python files
**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking
**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy
Files:
backend/infrahub/graphql/types/branch.py
🧬 Code graph analysis (1)
backend/infrahub/graphql/types/branch.py (5)
backend/infrahub/core/branch/models.py (3): Branch (31-550), get_list_count (173-184), get_by_name (137-154)
backend/infrahub/exceptions.py (1): BranchNotFoundError (194-200)
backend/infrahub/core/query/__init__.py (1): count (586-603)
backend/infrahub/graphql/initialization.py (1): GraphqlContext (34-76)
backend/infrahub/database/__init__.py (1): start_session (222-239)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: validate-generated-documentation
- GitHub Check: backend-tests-unit
🔇 Additional comments (1)
backend/infrahub/graphql/types/branch.py (1)
5-5: LGTM! Imports are appropriate.

The added imports (Int, List, NonNull from graphene, and BranchNotFoundError) are correctly placed and necessary for the new pagination types. Also applies to: 10-10
```python
class InfrahubBranchEdge(InfrahubObjectType):
    node = Field(BranchType, required=True)
```
🛠️ Refactor suggestion | 🟠 Major
Add class docstring.
The InfrahubBranchEdge class is missing a docstring, which violates the coding guidelines requiring Google-style docstrings for all Python classes.
As per coding guidelines, apply this diff:
```diff
 class InfrahubBranchEdge(InfrahubObjectType):
+    """Edge type for paginated branch queries.
+
+    Wraps a BranchType node in the edges/nodes pagination pattern.
+    """
     node = Field(BranchType, required=True)
```

🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 50 to 51, the
InfrahubBranchEdge class is missing a Google-style class docstring; add a
concise docstring immediately below the class definition describing the purpose
of the class (an Edge wrapper for BranchType in GraphQL connections) and
document its attributes (e.g., node: BranchType — the branch node at the edge),
following the project’s Google-style docstring format and keeping wording brief
and descriptive.
Actionable comments posted: 0
♻️ Duplicate comments (2)
backend/infrahub/graphql/types/branch.py (2)
50-51: Add class docstring for InfrahubBranchEdge.
Missing Google-style docstring per coding guidelines. Keep it concise. Apply this diff:
```diff
 class InfrahubBranchEdge(InfrahubObjectType):
+    """Edge wrapper for BranchType.
+
+    Attributes:
+        node: The branch node at this edge.
+    """
     node = Field(BranchType, required=True)
```

Based on coding guidelines.
58-66: Count misreported when filters are used; add docstring and respect filters.
Current logic subtracts 1 whenever the global branch exists, even if a name/ids filter excludes it. This undercounts. Also missing docstring. Apply this minimal, safe fix (handles name; guards against negatives) and add a Google-style docstring:

```diff
 @classmethod
 async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
-    async with graphql_context.db.start_session(read_only=True) as db:
-        count = await Branch.get_list_count(db=db, **kwargs)
-        try:
-            await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db)
-            return count - 1
-        except BranchNotFoundError:
-            return count
+    """Return filtered branch count excluding the global branch.
+
+    Args:
+        graphql_context: GraphQL context containing database connection.
+        **kwargs: Filters forwarded to Branch.get_list_count (e.g., ids, name).
+
+    Returns:
+        Count of branches matching filters, excluding GLOBAL_BRANCH_NAME.
+    """
+    async with graphql_context.db.start_session(read_only=True) as db:
+        count = await Branch.get_list_count(db=db, **kwargs)
+        if count == 0:
+            return 0
+        # If a name filter is provided and it's not the global branch, do not adjust.
+        name = kwargs.get("name")
+        if name is not None and name != GLOBAL_BRANCH_NAME:
+            return count
+        try:
+            await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db)
+            return max(count - 1, 0)
+        except BranchNotFoundError:
+            return count
```

If
name; guards against negatives) and add a Google-style docstring:@classmethod async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int: - async with graphql_context.db.start_session(read_only=True) as db: - count = await Branch.get_list_count(db=db, **kwargs) - try: - await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db) - return count - 1 - except BranchNotFoundError: - return count + """Return filtered branch count excluding the global branch. + + Args: + graphql_context: GraphQL context containing database connection. + **kwargs: Filters forwarded to Branch.get_list_count (e.g., ids, name). + + Returns: + Count of branches matching filters, excluding GLOBAL_BRANCH_NAME. + """ + async with graphql_context.db.start_session(read_only=True) as db: + count = await Branch.get_list_count(db=db, **kwargs) + if count == 0: + return 0 + # If a name filter is provided and it's not the global branch, do not adjust. + name = kwargs.get("name") + if name is not None and name != GLOBAL_BRANCH_NAME: + return count + try: + await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db) + return max(count - 1, 0) + except BranchNotFoundError: + return countIf
idsfiltering is used, adjust only when the global branch id is included. Please confirm which attribute represents the GraphQLid(e.g.,uuidvsid) so we can add that branch safely.Run this to locate callers and confirm whether
ids/namefilters are passed toget_list_count:#!/bin/bash # Find usages and inspect how filters are forwarded. rg -nP -C3 'get_list_count\s*\(' backend | sed -n '1,200p' rg -nP -C3 'InfrahubBranchType\s*\.\s*get_list_count' backend rg -nP -C3 'InfrahubBranchQueryList|InfrahubBranch' backend/infrahub/graphql/queries
🧹 Nitpick comments (1)
backend/infrahub/graphql/types/branch.py (1)
54-57: Docstring + field declarations: make required and use Int/List directly.
Add class docstring; make edges/count required; preferInt()/List()overField(Int, ...)for scalars; simplify NonNull wrapping.Apply this diff:
-class InfrahubBranchType(InfrahubObjectType): - count = Field(Int, description="Total number of items") - edges = Field(NonNull(List(of_type=NonNull(InfrahubBranchEdge)))) +class InfrahubBranchType(InfrahubObjectType): + """Paginated branch list response. + + Attributes: + count: Total number of items matching the filters (excludes global). + edges: The paginated list of branch edges. + """ + count = Int(required=True, description="Total number of items matching the filters (excludes global)") + edges = List(NonNull(InfrahubBranchEdge), required=True, description="Edges for the current page")Optional: consider renaming
count→total_countto align with common GraphQL conventions; if done, update schema/resolvers accordingly. Based on coding guidelines.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- backend/infrahub/graphql/queries/branch.py (2 hunks)
- backend/infrahub/graphql/types/branch.py (2 hunks)
- schema/schema.graphql (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/infrahub/graphql/queries/branch.py
🧰 Additional context used
🧬 Code graph analysis (1)
backend/infrahub/graphql/types/branch.py (4)
backend/infrahub/core/branch/models.py (3)
Branch(31-550)get_list_count(173-184)get_by_name(137-154)backend/infrahub/exceptions.py (1)
BranchNotFoundError(194-200)backend/infrahub/graphql/initialization.py (1)
GraphqlContext(34-76)backend/infrahub/database/__init__.py (1)
start_session(222-239)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: E2E-testing-version-upgrade / From 1.3.0
- GitHub Check: backend-benchmark
- GitHub Check: E2E-testing-playwright
- GitHub Check: validate-generated-documentation
- GitHub Check: graphql-schema
- GitHub Check: backend-tests-functional
- GitHub Check: backend-docker-integration
- GitHub Check: backend-tests-integration
- GitHub Check: backend-tests-unit
- GitHub Check: Cloudflare Pages
🔇 Additional comments (2)
schema/schema.graphql (2)
6883-6891: ✅ Previous critical issue resolved: node field is now singular.

The node field in InfrahubBranchEdge is now correctly declared as node: Branch! (singular), fixing the relay-style pagination cardinality issue flagged in the previous review. The schema now properly structures pagination with:

- InfrahubBranchType.edges: [InfrahubBranchEdge!]! (an array of edges)
- Each InfrahubBranchEdge.node: Branch! (a single node per edge)

This correctly implements offset/limit pagination semantics.
10410-10411: Query field definition looks good.

The new InfrahubBranch query with pagination (offset, limit) and filtering (ids, name) parameters returns the correct InfrahubBranchType! structure.
Actionable comments posted: 3
🧹 Nitpick comments (5)
backend/infrahub/graphql/types/__init__.py (1)
24-24: Export InfrahubBranchEdge to keep the public API consistent. You expose TaskLogEdge here; InfrahubBranchEdge should be exported similarly for consumers importing from types.
Apply this diff:
```diff
-from .branch import BranchType, InfrahubBranch, InfrahubBranchType
+from .branch import BranchType, InfrahubBranch, InfrahubBranchType, InfrahubBranchEdge
@@
     "IPNetworkType",
+    "InfrahubBranchEdge",
     "InfrahubBranch",
     "InfrahubBranchType",
```

Also applies to: 44-45
backend/infrahub/graphql/types/branch.py (4)
34-36: Add a short docstring for _map_fields_to_graphql. Document the purpose (filter out GLOBAL and map selected fields) and expected shapes of inputs/outputs.
As per coding guidelines.
54-64: Docstrings missing for new GraphQL types. Please add Google-style class docstrings for RequiredStringValueField, NonRequiredStringValueField, NonRequiredBooleanValueField, and InfrahubBranch to explain the value-wrapper pattern and field mapping logic.
As per coding guidelines.
Also applies to: 66-101
103-110: Add class docstring to InfrahubBranchEdge. Briefly describe the edge wrapper and its node semantics.
As per coding guidelines.
111-119: Add docstring to get_list_count. Explain that it returns the count excluding GLOBAL_BRANCH_NAME and why the existence check is needed.
As per coding guidelines.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- backend/infrahub/graphql/queries/branch.py (2 hunks)
- backend/infrahub/graphql/types/__init__.py (2 hunks)
- backend/infrahub/graphql/types/branch.py (3 hunks)
- backend/tests/unit/graphql/queries/test_branch.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
backend/tests/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Place backend tests in
backend/tests/
Files:
backend/tests/unit/graphql/queries/test_branch.py
backend/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Run backend tests with pytest or via invoke tasks
Files:
- backend/tests/unit/graphql/queries/test_branch.py
- backend/infrahub/graphql/types/__init__.py
- backend/infrahub/graphql/queries/branch.py
- backend/infrahub/graphql/types/branch.py
**/*.py
📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)
Files:
- backend/tests/unit/graphql/queries/test_branch.py
- backend/infrahub/graphql/types/__init__.py
- backend/infrahub/graphql/queries/branch.py
- backend/infrahub/graphql/types/branch.py
🧬 Code graph analysis (4)
backend/tests/unit/graphql/queries/test_branch.py (4)
backend/infrahub/graphql/initialization.py (1)
prepare_graphql_params(79-125)backend/tests/helpers/test_app.py (3)
default_branch(54-61)service(112-113)client(146-173)backend/tests/unit/conftest.py (1)
session_admin(2679-2681)backend/tests/helpers/graphql.py (1)
graphql(26-61)
backend/infrahub/graphql/types/__init__.py (1)
backend/infrahub/graphql/types/branch.py (3)
BranchType(17-51)InfrahubBranch(66-100)InfrahubBranchType(107-119)
backend/infrahub/graphql/queries/branch.py (3)
backend/infrahub/graphql/field_extractor.py (1)
extract_graphql_fields(67-69)backend/infrahub/graphql/types/branch.py (5)
BranchType(17-51)InfrahubBranch(66-100)InfrahubBranchType(107-119)get_list(39-51)get_list_count(112-119)backend/infrahub/core/branch/models.py (2)
get_list(157-170)get_list_count(173-184)
backend/infrahub/graphql/types/branch.py (5)
backend/infrahub/core/branch/models.py (3)
Branch(31-550)get_list_count(173-184)get_by_name(137-154)backend/infrahub/exceptions.py (1)
BranchNotFoundError(194-200)backend/infrahub/core/node/standard.py (2)
to_graphql(71-86)get(131-138)backend/infrahub/database/__init__.py (2)
get(84-88)start_session(222-239)backend/infrahub/graphql/initialization.py (1)
GraphqlContext(34-76)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cloudflare Pages
```python
gql_params = await prepare_graphql_params(db=db, branch=default_branch, service=service)
all_branches = await graphql(
    schema=gql_params.schema,
    source=query,
    context_value=gql_params.context,
    root_value=None,
    variable_values={},
)
assert all_branches.errors is None
assert all_branches.data
assert all_branches.data["InfrahubBranch"]["count"] == 12  # 10 created here + 1 created above + main branch

expected_branches = [
    {
        "description": {"value": "Default Branch"},
        "name": {"value": "main"},
    },
    {
        "description": {"value": "my description"},
        "name": {"value": "branch3"},
    },
    *[
        {
            "description": {"value": f"sample description {i}"},
            "name": {"value": f"sample-branch-{i}"},
        }
        for i in range(10)
    ],
]
```
🛠️ Refactor suggestion | 🟠 Major
Fix count and expected set; avoid cross-test state assumptions.
- Count should exclude the global branch and only include branches created in this test + main. That’s 11, not 12.
- expected_branches includes branch3 which is created in a different test; fixtures reset DB, so it won’t exist here.
- Comparing a paginated page to the full set is incorrect.
Refactor to derive ordering from the API, then compare the correct slice:
```diff
-    assert all_branches.data["InfrahubBranch"]["count"] == 12  # 10 created here + 1 created above + main branch
-
-    expected_branches = [
-        {
-            "description": {"value": "Default Branch"},
-            "name": {"value": "main"},
-        },
-        {
-            "description": {"value": "my description"},
-            "name": {"value": "branch3"},
-        },
-        *[
-            {
-                "description": {"value": f"sample description {i}"},
-                "name": {"value": f"sample-branch-{i}"},
-            }
-            for i in range(10)
-        ],
-    ]
-    all_branches_data_only = [branch.get("node") for branch in all_branches.data["InfrahubBranch"]["edges"]]
-    assert all_branches_data_only.sort(key=lambda x: x["name"]["value"]) == expected_branches.sort(
-        key=lambda x: x["name"]["value"]
-    )
+    # Verify total count (10 created here + main), global branch is excluded by the API.
+    assert all_branches.data["InfrahubBranch"]["count"] == 11
+
+    # Establish server ordering with a baseline query, then validate the paginated slice.
+    baseline_query = """
+    query {
+        InfrahubBranch(offset: 0, limit: 1000) {
+            edges {
+                node {
+                    name { value }
+                    description { value }
+                }
+            }
+        }
+    }
+    """
+    gql_params = await prepare_graphql_params(db=db, branch=default_branch, service=service)
+    baseline = await graphql(schema=gql_params.schema, source=baseline_query, context_value=gql_params.context)
+    assert baseline.errors is None
+    all_nodes = [e["node"] for e in baseline.data["InfrahubBranch"]["edges"]]
+
+    # Current page: offset=2, limit=5
+    page_nodes = [e["node"] for e in all_branches.data["InfrahubBranch"]["edges"]]
+    assert len(page_nodes) == 5
+    assert page_nodes == all_nodes[2:7]
```
```python
all_branches_data_only = [branch.get("node") for branch in all_branches.data["InfrahubBranch"]["edges"]]
assert all_branches_data_only.sort(key=lambda x: x["name"]["value"]) == expected_branches.sort(
    key=lambda x: x["name"]["value"]
)
```
Replace list.sort() with sorted; current assertion always compares None to None.
list.sort() returns None. Use sorted(...) or sort in place and compare the lists. The refactor above removes this pitfall and validates the actual page slice deterministically.
🤖 Prompt for AI Agents
In backend/tests/unit/graphql/queries/test_branch.py around lines 232 to 235,
the test uses list.sort() inside the assertion which returns None so the
assertion compares None to None; replace list.sort() with sorted(...) (or call
.sort() on each list before the assertion) so you compare the actual sorted
lists deterministically. Ensure both all_branches_data_only and
expected_branches are sorted by the same key (lambda x: x["name"]["value"]) and
then assert equality.
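The pitfall is easy to demonstrate in isolation (standalone example, not the test's actual data):

```python
import operator

expected = [{"name": "b"}, {"name": "a"}]
actual = [{"name": "a"}, {"name": "b"}]

# list.sort() sorts in place and returns None, so this comparison is
# None == None and passes no matter what the lists contain:
assert actual.sort(key=operator.itemgetter("name")) == expected.sort(key=operator.itemgetter("name"))

# sorted() returns a new list, so this comparison actually checks the data:
assert sorted(actual, key=operator.itemgetter("name")) == sorted(expected, key=operator.itemgetter("name"))
```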
Actionable comments posted: 5
♻️ Duplicate comments (3)
backend/infrahub/graphql/types/branch.py (3)
118-126: Add method docstring.

The get_list_count method is missing a docstring, which violates the coding guidelines. While a past review comment indicated this was addressed in commit 32c72ad, the docstring is still missing in the current code. As per coding guidelines, apply this diff:

```diff
 @classmethod
 async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
+    """Get the total count of branches excluding the global branch.
+
+    Args:
+        graphql_context: The GraphQL context containing database connection.
+        **kwargs: Additional filter parameters passed to Branch.get_list_count.
+
+    Returns:
+        The count of branches, excluding GLOBAL_BRANCH_NAME if it exists.
+    """
     async with graphql_context.db.start_session(read_only=True) as db:
         count = await Branch.get_list_count(db=db, **kwargs)
         try:
             await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db)
             return count - 1
         except BranchNotFoundError:
             return count
```
110-111: Add class docstring.

The InfrahubBranchEdge class is missing a docstring, which violates the coding guidelines. This was previously flagged but the docstring appears to still be missing in the current code. As per coding guidelines, apply this diff:

```diff
 class InfrahubBranchEdge(InfrahubObjectType):
+    """Edge type for paginated branch queries.
+
+    Wraps an InfrahubBranch node in the edges/nodes pagination pattern.
+    """
     node = Field(InfrahubBranch, required=True)
```
114-117: Add class docstring.

The InfrahubBranchType class is missing a docstring, which violates the coding guidelines. This was previously flagged but the docstring appears to still be missing in the current code. As per coding guidelines, apply this diff:

```diff
 class InfrahubBranchType(InfrahubObjectType):
+    """Paginated response type for branch queries.
+
+    Provides pagination metadata (count) and a list of branch edges following
+    the GraphQL connection pattern.
+    """
     count = Field(Int, description="Total number of items")
     edges = Field(NonNull(List(of_type=NonNull(InfrahubBranchEdge))))
```
🧹 Nitpick comments (2)
schema/schema.graphql (2)
10789-10790: Query field design for paginated branch retrieval.

The InfrahubBranch query accepts ids, limit, name, and offset parameters. Consider documenting the behavior when conflicting filters are provided (e.g., when both ids and offset/limit are specified). Also, consider adding a documented maximum limit or validation constraint to prevent unbounded queries. Would you like me to check how this is handled in the resolver and suggest updates to the schema documentation if needed?
9406-9412: Value wrapper types introduced.

Four new wrapper types are added: NonRequiredBooleanValueField, NonRequiredStringValueField, RequiredStringValueField, and StatusField. Ensure this pattern is consistent with existing similar types in the schema (e.g., check if other query results use similar wrapper patterns). If this is a new convention, consider documenting it for future maintainers. Also applies to: 10938-10940, 11035-11037
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
backend/infrahub/graphql/types/branch.py(3 hunks)backend/tests/unit/graphql/queries/test_branch.py(1 hunks)schema/schema.graphql(6 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/tests/unit/graphql/queries/test_branch.py
🧰 Additional context used
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: E2E-testing-version-upgrade / From 1.3.0
- GitHub Check: backend-benchmark
- GitHub Check: E2E-testing-playwright
- GitHub Check: backend-tests-unit
- GitHub Check: backend-docker-integration
🔇 Additional comments (2)
schema/schema.graphql (2)
7258-7266: Standard relay-style pagination structure.

The InfrahubBranchEdge and InfrahubBranchType follow proper GraphQL pagination semantics: each edge contains a single node (not an array), and the collection is on the parent's edges field. The cardinality is correct.
11173-11173: Minor directive formatting change.

Line 11173 adjusts the @expand directive definition. This appears to be a cosmetic change with no semantic impact.
@staticmethod
async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
    return [await obj.to_graphql(fields=fields) for obj in objs if obj.name != GLOBAL_BRANCH_NAME]
🛠️ Refactor suggestion | 🟠 Major
Add method docstring.
The _map_fields_to_graphql static method is missing a docstring, which violates the coding guidelines requiring Google-style docstrings for all Python methods.
As per coding guidelines, apply this diff:
@staticmethod
async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
+ """Map Branch objects to GraphQL field dictionaries, excluding the global branch.
+
+ Args:
+ objs: List of Branch objects to map.
+ fields: Dictionary of field names to include in the output.
+
+ Returns:
+ List of dictionaries containing GraphQL field data for each branch.
+ """
return [await obj.to_graphql(fields=fields) for obj in objs if obj.name != GLOBAL_BRANCH_NAME]🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 36 to 38, the async
static method _map_fields_to_graphql is missing a Google-style docstring; add a
docstring above the method that briefly describes its purpose (map a list of
Branch objects to GraphQL dicts while excluding branches named
GLOBAL_BRANCH_NAME), documents parameters (objs: list[Branch], fields: dict) and
their types, notes that the method is asynchronous and returns list[dict[str,
Any]], and describes the return value and behavior (awaits each obj.to_graphql
and filters out GLOBAL_BRANCH_NAME).
class RequiredStringValueField(InfrahubObjectType):
    value = String(required=True)

class NonRequiredStringValueField(InfrahubObjectType):
    value = String(required=False)

class NonRequiredBooleanValueField(InfrahubObjectType):
    value = Boolean(required=False)

class StatusField(InfrahubObjectType):
    value = InfrahubBranchStatus(required=True)
🛠️ Refactor suggestion | 🟠 Major
Add docstrings and clarify the purpose of value wrapper classes.
All four value wrapper classes (RequiredStringValueField, NonRequiredStringValueField, NonRequiredBooleanValueField, StatusField) are missing docstrings, which violates the coding guidelines. Additionally, the purpose of these wrapper types is not immediately clear—they add a layer of indirection without explanation.
As per coding guidelines, add docstrings to each class explaining their purpose:
class RequiredStringValueField(InfrahubObjectType):
+ """Wrapper type for required string values in GraphQL responses.
+
+ Provides a consistent structure for string fields that must always have a value.
+ """
value = String(required=True)
class NonRequiredStringValueField(InfrahubObjectType):
+ """Wrapper type for optional string values in GraphQL responses.
+
+ Provides a consistent structure for string fields that may be null.
+ """
value = String(required=False)
class NonRequiredBooleanValueField(InfrahubObjectType):
+ """Wrapper type for optional boolean values in GraphQL responses.
+
+ Provides a consistent structure for boolean fields that may be null.
+ """
value = Boolean(required=False)
class StatusField(InfrahubObjectType):
+ """Wrapper type for branch status enum values in GraphQL responses.
+
+ Provides a consistent structure for status fields.
+ """
value = InfrahubBranchStatus(required=True)🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 56 to 70, the four
small wrapper classes lack docstrings and do not explain their purpose; add
brief docstrings to each class (RequiredStringValueField,
NonRequiredStringValueField, NonRequiredBooleanValueField, StatusField) that
state they are simple GraphQL value-wrapper types used to standardize field
payloads/typing (e.g., wrapping a single value for consistency across
mutations/queries), note whether the inner value is required or optional and the
expected type (String/Boolean/InfrahubBranchStatus), and include one-line
examples or usage intent if helpful — keep them concise and follow project
docstring style.
@staticmethod
async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
    field_keys = fields.keys()
    result: list[dict[str, Any]] = []
    for obj in objs:
        if obj.name == GLOBAL_BRANCH_NAME:
            continue
        data = {}
        for field in field_keys:
            value = getattr(obj, field, None)
            if isinstance(fields.get(field), dict):
                data[field] = {"value": value}
            else:
                data[field] = value
        result.append(data)
    return result
🛠️ Refactor suggestion | 🟠 Major
Add method docstring and clarify field wrapping logic.
The _map_fields_to_graphql method is missing a docstring, which violates the coding guidelines. The field wrapping logic (lines 102-105) is also complex and not immediately clear—it wraps values in {"value": ...} dictionaries based on whether fields.get(field) is a dict, but the reasoning isn't documented.
As per coding guidelines, apply this diff:
@staticmethod
async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
+ """Map Branch objects to GraphQL field dictionaries with wrapped field values.
+
+ Fields that have nested field selections in the GraphQL query (indicated by
+ dict values in the fields parameter) are wrapped in {"value": ...} structures
+ to match the value wrapper types. Other fields are returned as-is.
+
+ Args:
+ objs: List of Branch objects to map.
+ fields: Dictionary mapping field names to their selection sets.
+
+ Returns:
+ List of dictionaries with field data, excluding GLOBAL_BRANCH_NAME.
+ """
field_keys = fields.keys()
result: list[dict[str, Any]] = []
for obj in objs:
if obj.name == GLOBAL_BRANCH_NAME:
continue
data = {}
for field in field_keys:
value = getattr(obj, field, None)
+ # Wrap value if the field has nested selections (value wrapper types)
if isinstance(fields.get(field), dict):
data[field] = {"value": value}
else:
data[field] = value
result.append(data)
return result🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 92 to 107, add a
concise docstring to _map_fields_to_graphql describing the method’s purpose,
parameters (objs: list[Branch], fields: dict) and return type (list[dict[str,
Any]]), and clarify the field-wrapping logic: when fields.get(field) is a dict
it indicates GraphQL requested subfields so the code should wrap the raw
attribute value in {"value": <attr>} to match the expected nested GraphQL shape;
update the docstring to explain this behavior and its rationale so the wrapping
on lines 102-105 is clear to readers and maintainers.
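For reference, the wrapping rule the comment asks to document can be sketched in isolation (plain dicts as a simplified stand-in for the actual method, not the PR's exact code):

```python
from typing import Any

# Simplified stand-in for the mapping logic: a field whose GraphQL
# selection is itself a dict (the client asked for subfields such as
# `name { value }`) gets wrapped in {"value": ...}; scalar selections
# pass through unchanged.
def map_branch_fields(branch: dict, fields: dict) -> dict[str, Any]:
    data: dict[str, Any] = {}
    for field, selection in fields.items():
        value = branch.get(field)
        if isinstance(selection, dict):
            data[field] = {"value": value}
        else:
            data[field] = value
    return data

mapped = map_branch_fields(
    {"name": "branch2", "graph_version": 18},
    {"name": {"value": None}, "graph_version": None},
)
```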
Actionable comments posted: 1
♻️ Duplicate comments (1)
schema/schema.graphql (1)
7243-7256: `created_at` field should use `NonRequiredStringValueField` for consistency. This issue was flagged in a previous review but remains unresolved. The `created_at` field (line 7246) uses a plain `String` type, while all other optional fields in the `InfrahubBranch` type use wrapper types (`NonRequiredStringValueField`, `NonRequiredBooleanValueField`). Update `created_at` to match the established pattern. Apply this diff:

 type InfrahubBranch {
   branched_from: NonRequiredStringValueField
-  created_at: String
+  created_at: NonRequiredStringValueField
   description: NonRequiredStringValueField
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
.pre-commit-config.yaml (1 hunks)
schema/schema.graphql (5 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
.pre-commit-config.{yaml,yml}
📄 CodeRabbit inference engine (.github/instructions/tooling.instructions.md)
Don't use Git precommit
Files:
.pre-commit-config.yaml
🔇 Additional comments (2)
.pre-commit-config.yaml (1)
13-13: Clarify the purpose of excluding `schema/schema.graphql` from `end-of-file-fixer`. The original concern about violating coding guidelines is incorrect—precommit is already established in this codebase. Adding an exclude pattern to an existing hook is not adopting precommit; it's a configuration adjustment to existing infrastructure.
However, the substantive question remains: Why does this file need exclusion? The schema file appears to be version-controlled and manually maintained (not auto-generated). Before approving this pattern, confirm:
- Is `schema/schema.graphql` auto-generated (and if so, by what process)?
- Does it have special formatting requirements that conflict with `end-of-file-fixer`?
- Is there a reason it cannot conform to standard EOF validation?
If the exclusion is necessary, document the rationale in a comment. If it's a temporary workaround, consider the alternatives mentioned in the original review.
schema/schema.graphql (1)
7258-7266: Pagination structure is now correctly defined. The `InfrahubBranchEdge.node` field is correctly singular (`node: InfrahubBranch!`), and the `InfrahubBranchType` properly exposes `count` and `edges` for standard offset/limit pagination semantics. This fixes the earlier cardinality issue from the previous review.
schema/schema.graphql
Outdated
| """Retrieve paginated information about active branches.""" | ||
| InfrahubBranch(ids: [ID!], limit: Int, name: String, offset: Int): InfrahubBranchType! |
🧩 Analysis chain
Query field descriptor and resolver alignment.
The new InfrahubBranch query accepts ids, limit, name, and offset parameters for flexible pagination and filtering. Verify that the backend resolver (infrahub_branch_resolver per the summary) correctly handles these parameters, particularly the ids filter which allows querying specific branches by identifier.
Run the following script to verify resolver implementation:
🏁 Script executed:
#!/bin/bash
# Search for the infrahub_branch_resolver implementation
rg -n 'infrahub_branch_resolver' --type=py -A 10
Length of output: 1153
I need to examine the complete resolver implementation and verify parameter handling.
🏁 Script executed:
#!/bin/bash
# Get the full infrahub_branch_resolver implementation
cat backend/infrahub/graphql/queries/branch.py | sed -n '33,57p'
Length of output: 885
🏁 Script executed:
#!/bin/bash
# Search for the GraphQL field descriptor definition for InfrahubBranch
rg -n 'InfrahubBranch' backend/infrahub/graphql/queries/branch.py -A 5 -B 5
Length of output: 1162
🏁 Script executed:
#!/bin/bash
# Search for InfrahubBranch.get_list to see if it handles ids/name
rg -n 'def get_list' backend/infrahub --type=py -A 15 | grep -A 15 'InfrahubBranch\|class InfrahubBranch'
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for the GraphQL field definition in schema generation or resolvers
rg -n 'ids.*limit.*name.*offset' backend/infrahub/graphql --type=py -C 3
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look at the complete branch.py file structure
cat backend/infrahub/graphql/queries/branch.py
Length of output: 1782
🏁 Script executed:
#!/bin/bash
# Search for where InfrahubBranch query field is defined with parameters
rg -n 'InfrahubBranch.*ids.*limit.*name.*offset' backend/infrahub --type=py
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for graphql schema generation that might define these parameters
rg -n 'ids.*limit.*name' backend/infrahub/graphql --type=py -B 5 -A 5
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Find where the GraphQL schema fields are defined or generated
fd -e py 'schema' backend/infrahub/graphql | head -20
Length of output: 119
🏁 Script executed:
#!/bin/bash
# Look for the actual schema file or mutation/query definitions
rg -n 'class.*Query\|def.*query' backend/infrahub/graphql --type=py -A 10 | head -60
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check if there's a schema generation file or mutation definitions
find backend/infrahub/graphql -name '*.py' -type f | xargs grep -l 'InfrahubBranch' | head -10
Length of output: 246
🏁 Script executed:
#!/bin/bash
# Look at the schema.graphql file directly around the InfrahubBranch definition
rg -n 'InfrahubBranch' schema/schema.graphql -B 5 -A 5
Length of output: 1886
🏁 Script executed:
#!/bin/bash
# Check if the resolver accepts **kwargs to handle additional parameters
rg -n 'infrahub_branch_resolver' backend/infrahub/graphql/queries/branch.py -A 20
Length of output: 893
🏁 Script executed:
#!/bin/bash
# Search for the specific line where the resolver is registered
rg -n 'resolver=infrahub_branch_resolver' backend/infrahub/graphql/queries/branch.py -B 10 -A 5
Length of output: 337
🏁 Script executed:
#!/bin/bash
# Find the field descriptor definition to see all parameters
rg -n 'class InfrahubBranch\|Field.*InfrahubBranch\|query_field' backend/infrahub/graphql --type=py -A 20 | head -80
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look for name parameter handling in the resolver or nearby
rg -n 'name' backend/infrahub/graphql/queries/branch.py -C 3
Length of output: 476
🏁 Script executed:
#!/bin/bash
# Check if there are multiple resolver definitions or overloads
rg -n 'def.*branch.*resolver\|@.*resolver' backend/infrahub/graphql/queries/branch.py
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for the field descriptor in the schema generation or graphene code
rg -n 'GraphQLField.*InfrahubBranch\|InfrahubBranchType' backend/infrahub --type=py -B 2 -A 8 | head -100
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check if parameters are dynamically extracted from the GraphQL query
rg -n 'extract_graphql_fields\|extract.*param' backend/infrahub/graphql --type=py -C 2 | head -60
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look at the complete resolver with more context
sed -n '30,70p' backend/infrahub/graphql/queries/branch.py
Length of output: 949
🏁 Script executed:
#!/bin/bash
# Find schema field definitions - look for where the query fields are built
rg -n 'define_query_field\|query_field_\|InfrahubBranch.*field' backend/infrahub --type=py -C 3 | head -100
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check if ids parameter handling is in a wrapper or middleware
rg -n 'ids' backend/infrahub/graphql/queries/branch.py -B 5 -A 5
Length of output: 655
🏁 Script executed:
#!/bin/bash
# Look for where the parameters are defined in the GraphQL schema
rg -n '^\s*ids:.*ID' schema/schema.graphql -B 5 -A 2
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check around line 10789-10790 in the schema file
sed -n '10785,10795p' schema/schema.graphql
Length of output: 946
🏁 Script executed:
#!/bin/bash
# Search more broadly for the InfrahubBranch field and its parameters
sed -n '10780,10800p' schema/schema.graphql
Length of output: 5632
🏁 Script executed:
#!/bin/bash
# Look at the InfrahubBranchType definition
rg -n 'type InfrahubBranchType' schema/schema.graphql -A 20
Length of output: 598
🏁 Script executed:
#!/bin/bash
# Check if there's a resolver parameter mismatch documented
rg -n 'ids.*name' backend/infrahub/graphql/queries/branch.py
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look at what parameters InfrahubBranch.get_list accepts
fd -name '*.py' -path '*/infrahub/*' | xargs grep -n 'class InfrahubBranch' | head -5
Length of output: 230
🏁 Script executed:
#!/bin/bash
# Find and examine the InfrahubBranch class
rg -n 'class InfrahubBranch' backend/infrahub --type=py -A 30 | head -50
Length of output: 4177
🏁 Script executed:
#!/bin/bash
# Look for get_list method in InfrahubBranch
rg -n 'def get_list' backend/infrahub --type=py -B 2 -A 20 | grep -B 2 -A 20 'InfrahubBranch\|class.*Branch'
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check the actual GraphQL field definition more carefully
python3 << 'EOF'
import re
# Read schema.graphql
with open('schema/schema.graphql', 'r') as f:
    content = f.read()
# Find the line with InfrahubBranch query
lines = content.split('\n')
for i, line in enumerate(lines):
    if 'InfrahubBranch' in line and 'Retrieve paginated' in lines[i-1]:
        # Print context around this line
        start = max(0, i-5)
        end = min(len(lines), i+10)
        for j in range(start, end):
            print(f"{j+1}: {lines[j]}")
        break
EOF
Length of output: 1216
Resolver missing ids and name parameter handling.
The InfrahubBranch query field declares ids and name parameters, but the infrahub_branch_resolver function (backend/infrahub/graphql/queries/branch.py:33-48) only accepts limit and offset. These parameters are not forwarded to InfrahubBranch.get_list().
Add the missing parameters to the resolver signature and pass them through:
async def infrahub_branch_resolver(
root: dict,
info: GraphQLResolveInfo,
ids: list[str] | None = None,
name: str | None = None,
limit: int | None = None,
offset: int | None = None,
) -> dict[str, Any]:
# ... update get_list call to include ids and nameAlternatively, adopt the **kwargs pattern used in branch_resolver (line 16) to handle all parameters.
🤖 Prompt for AI Agents
In schema/schema.graphql around lines 10789-10790, the InfrahubBranch query
declares ids and name args but the resolver
backend/infrahub/graphql/queries/branch.py (lines ~33-48) only accepts limit and
offset and does not forward ids/name to InfrahubBranch.get_list(); update the
resolver signature to accept ids: list[str] | None and name: str | None (or
switch to **kwargs like branch_resolver at line 16) and pass these parameters
through to InfrahubBranch.get_list(ids=ids, name=name, limit=limit,
offset=offset) so the query arguments are respected.
@solababs, I think we're close to the first iteration of this query now. I see that there's a slight issue with how the branch IDs are presented if you start an environment and compare the responses to these two queries:

query MyQuery {
InfrahubBranch {
count
edges {
node {
id
name {
value
}
}
}
}
Branch {
id
name
}
}
I think you're returning the internal Neo4j ID of these nodes which is typically different from the ID we present to the external world.
InfrahubBranchQueryList = Field(
    InfrahubBranchType,
    ids=List(of_type=NonNull(ID)),
    name=String(),
These parameters don't seem to work now.
query MyQuery {
InfrahubBranch(name: "branch2") {
count
edges {
node {
id
name {
value
}
}
}
}
}
I.e. `infrahub_branch_resolver()` doesn't accept the parameters ids or name, so the above query would fail. I think perhaps we can just remove both of them in this first iteration and then consider which filter options we want and need. We will want to add some of them for sure. But it could potentially be quite verbose if we start to add all of the variations that we dynamically generate for other node types, so I think that we can have a discussion around that after this PR is merged.
I.e. if we were to follow the current approach we'd have:
- name__value
- name__values
- description__value
- description__values
- etc for all attributes.
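To make the verbosity concrete, here is a rough sketch of how many filter arguments the existing per-attribute convention would generate (the attribute list is illustrative, not exhaustive):

```python
# Illustrative only: generate the `<attr>__value` / `<attr>__values`
# argument names that the dynamic filter convention would produce.
def filter_argument_names(attributes: list[str]) -> list[str]:
    names: list[str] = []
    for attr in attributes:
        names.append(f"{attr}__value")   # exact-match variant
        names.append(f"{attr}__values")  # match-any-of variant
    return names

args = filter_argument_names(["name", "description"])
```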
…github.com/opsmill/infrahub into sb-20251013-infrahub-branch-query-ifc-1886
Actionable comments posted: 1
♻️ Duplicate comments (6)
backend/infrahub/graphql/types/branch.py (6)
36-38: Add method docstring. This method still lacks a docstring as flagged in previous reviews. Per coding guidelines, all Python methods require Google-style docstrings.
56-70: Add docstrings to value wrapper classes. All four value wrapper classes remain undocumented despite previous review feedback. The purpose and usage pattern of these wrapper types is unclear without docstrings.
72-91: Add class docstring. The `InfrahubBranch` class lacks a docstring despite previous review feedback. Per coding guidelines, document the class purpose and explain why certain fields remain unwrapped while others use value wrapper types.
92-110: Add method docstring. This method lacks a docstring as flagged in previous reviews. Document the mapping logic, especially the value-wrapping behavior on lines 105-108 and the ID-to-UUID mapping on line 102.
113-114: Add class docstring. The `InfrahubBranchEdge` class lacks a docstring despite previous review feedback. Per coding guidelines, document that this is an edge wrapper for the GraphQL connection pattern.
117-120: Add class docstring. The `InfrahubBranchType` class lacks a docstring despite previous review feedback. Per coding guidelines, document this as a paginated response type with metadata and edges.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
backend/infrahub/graphql/types/branch.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
backend/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Run backend tests with `pytest` or via `invoke` tasks
Files:
backend/infrahub/graphql/types/branch.py
**/*.py
📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)
**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful
Use ruff and mypy to validate and lint Python files
**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with `async def`
Use `await` for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking
**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with `async def`
Await asynchronous calls with `await`
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy
Files:
backend/infrahub/graphql/types/branch.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: ogenstad
PR: opsmill/infrahub#7447
File: schema/schema.graphql:324-329
Timestamp: 2025-10-30T12:43:45.629Z
Learning: In the GraphQL schema (schema/schema.graphql), the BranchStatus enum includes NEED_UPGRADE_REBASE. Branches with NEED_UPGRADE_REBASE status must support conflict resolution capabilities.
📚 Learning: 2025-10-30T12:43:45.629Z
Learnt from: ogenstad
PR: opsmill/infrahub#7447
File: schema/schema.graphql:324-329
Timestamp: 2025-10-30T12:43:45.629Z
Learning: In the GraphQL schema (schema/schema.graphql), the BranchStatus enum includes NEED_UPGRADE_REBASE. Branches with NEED_UPGRADE_REBASE status must support conflict resolution capabilities.
Applied to files:
backend/infrahub/graphql/types/branch.py
📚 Learning: 2025-07-22T08:13:56.198Z
Learnt from: CR
PR: opsmill/infrahub#0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-07-22T08:13:56.198Z
Learning: Documentation generated for Infrahub must reflect its novel approach, providing clarity around new concepts and demonstrating integration with familiar patterns from existing tools like Git, infrastructure-as-code, and CI/CD pipelines
Applied to files:
backend/infrahub/graphql/types/branch.py
🧬 Code graph analysis (1)
backend/infrahub/graphql/types/branch.py (3)
backend/infrahub/core/branch/models.py (3)
Branch (31-550), get_list_count (173-184), get_by_name (137-154)
backend/infrahub/exceptions.py (1)
BranchNotFoundError (194-200)
backend/infrahub/core/node/standard.py (2)
to_graphql (71-86), get (131-138)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: backend-tests-integration
- GitHub Check: validate-generated-documentation
- GitHub Check: backend-tests-functional
- GitHub Check: backend-tests-unit
- GitHub Check: backend-docker-integration
🔇 Additional comments (1)
backend/infrahub/graphql/types/branch.py (1)
101-103: The review comment concern is unfounded; the code correctly uses the external UUID. The Branch model inherits from StandardNode, which populates the `uuid` attribute from `node["uuid"]` (node properties), not from the Neo4j internal node ID. In `backend/infrahub/core/node/standard.py` lines 113-114, the initialization is:

self.id = node.element_id
self.uuid = UUID(node["uuid"])

Line 102 of the GraphQL file correctly maps `"id"` to `obj.uuid`, which is the external UUID. The PR concern about returning internal Neo4j node IDs does not apply here—`obj.uuid` is already the external UUID from node properties.
@classmethod
async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
    async with graphql_context.db.start_session(read_only=True) as db:
        count = await Branch.get_list_count(db=db, **kwargs)
        try:
            await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db)
            return count - 1
        except BranchNotFoundError:
            return count
🛠️ Refactor suggestion | 🟠 Major
Add method docstring (regression).
This method's docstring was marked as addressed in a previous commit (32c72ad) but is missing in the current code. Per coding guidelines, add a Google-style docstring documenting the purpose, parameters, and return value.
Apply this diff:
@classmethod
async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
+ """Get the total count of branches excluding the global branch.
+
+ Args:
+ graphql_context: The GraphQL context containing database connection.
+ **kwargs: Additional filter parameters passed to Branch.get_list_count.
+
+ Returns:
+ The count of branches, excluding GLOBAL_BRANCH_NAME if it exists.
+ """
async with graphql_context.db.start_session(read_only=True) as db:🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 121 to 129, the async
classmethod get_list_count is missing its Google-style docstring; add a
docstring directly under the method definition that briefly describes what the
method does (returns the number of Branch records accessible via the GraphQL
context, excluding the GLOBAL_BRANCH_NAME when present), documents parameters
(graphql_context: GraphqlContext, **kwargs: Any) and their roles, and specifies
the return type (int) and behavior (subtracts one if GLOBAL_BRANCH_NAME exists,
otherwise returns full count); keep wording concise and follow Google docstring
format for purpose, Args, and Returns.
description = Field(NonRequiredStringValueField, required=False)
origin_branch = Field(NonRequiredStringValueField, required=False)
branched_from = Field(NonRequiredStringValueField, required=False)
status = Field(StatusField, required=True)
Here we override status from the parent BranchType class but we don't override graph_version
So a query ends up being:
query MyQuery {
InfrahubBranch {
count
edges {
node {
graph_version
status {
value
}
name {
value
}
}
}
}
root: dict,  # noqa: ARG001
info: GraphQLResolveInfo,
limit: int | None = None,
offset: int | None = None,
As CodeRabbit suggests it would be good to catch any negative values for limit or offset and if present raise a ValidationError back to the user.
Actionable comments posted: 3
♻️ Duplicate comments (7)
backend/tests/unit/graphql/queries/test_branch.py (1)
154-244: Critical: Fix test isolation, count assertion, and sorting logic. This test has several issues that need to be addressed:
Test isolation violation (line 222): The comment assumes `branch3` from the previous test exists, but tests should be isolated. With proper fixture isolation, only the 10 branches created here plus `main` should exist, giving a count of 11, not 12.
Cross-test contamination (lines 224-240): `expected_branches` includes `branch3`, which is created in a different test and should not be assumed to exist.
Broken assertion (lines 242-244): The `.sort()` method returns `None`, so this assertion always compares `None == None` and passes even when the data is wrong. Use `sorted()` instead.
Incorrect comparison: The query requests `offset=2, limit=5` (5 items starting at index 2), but the assertion compares against all branches. You should either:
- Remove offset/limit from the query and compare all branches, OR
- Verify that exactly 5 branches are returned and validate they match the expected slice
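For reference, the reason the flagged assertion can never fail is that `list.sort()` sorts in place and returns `None`, while `sorted()` returns a new list:

```python
branches = [{"name": {"value": "b"}}, {"name": {"value": "a"}}]
expected = [{"name": {"value": "c"}}]  # deliberately different data

# Broken pattern: both .sort() calls return None, so this compares
# None == None and is True even though the lists differ.
always_true = branches.sort(key=lambda x: x["name"]["value"]) == expected.sort(
    key=lambda x: x["name"]["value"]
)

# Correct pattern: compare the new lists returned by sorted().
really_equal = sorted(branches, key=lambda x: x["name"]["value"]) == sorted(
    expected, key=lambda x: x["name"]["value"]
)
```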
Apply this diff to fix the issues:
-        assert all_branches.data["InfrahubBranch"]["count"] == 12  # 10 created here + 1 created above + main branch
+        # Count should be 11: 10 created in this test + main (global branch excluded)
+        assert all_branches.data["InfrahubBranch"]["count"] == 11
         expected_branches = [
             {
                 "description": {"value": "Default Branch"},
                 "name": {"value": "main"},
             },
-            {
-                "description": {"value": "my description"},
-                "name": {"value": "branch3"},
-            },
             *[
                 {
                     "description": {"value": f"sample description {i}"},
                     "name": {"value": f"sample-branch-{i}"},
                 }
                 for i in range(10)
             ],
         ]
+        # Sort both lists and compare
         all_branches_data_only = [branch.get("node") for branch in all_branches.data["InfrahubBranch"]["edges"]]
-        assert all_branches_data_only.sort(key=lambda x: x["name"]["value"]) == expected_branches.sort(
-            key=lambda x: x["name"]["value"]
-        )
+        actual_sorted = sorted(all_branches_data_only, key=lambda x: x["name"]["value"])
+        expected_sorted = sorted(expected_branches, key=lambda x: x["name"]["value"])
+
+        # Since we queried with offset=2, limit=5, we should only get 5 branches
+        assert len(actual_sorted) == 5
+        # Verify they match the expected slice from the full sorted list
+        assert actual_sorted == expected_sorted[2:7]

schema/schema.graphql (1)
7245-7259: Wrap `created_at` as `NonRequiredStringValueField` for consistency. The `created_at` field (line 7248) uses plain `String`, while all other optional fields in this type use wrapper types (`NonRequiredStringValueField`, `NonRequiredBooleanValueField`, `NonRequiredIntValueField`). Update `created_at` to match the established pattern and maintain internal consistency across optional fields.

 type InfrahubBranch {
   branched_from: NonRequiredStringValueField
-  created_at: String
+  created_at: NonRequiredStringValueField
   description: NonRequiredStringValueField

backend/infrahub/graphql/types/branch.py (5)
37-39: Add method docstring (regression).

This method's docstring was previously requested but remains missing. Per coding guidelines, all Python methods require Google-style docstrings.

Apply this diff:

     @staticmethod
     async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
+        """Map Branch objects to GraphQL field dictionaries, excluding the global branch.
+
+        Args:
+            objs: List of Branch objects to map.
+            fields: Dictionary of field names to include in the output.
+
+        Returns:
+            List of dictionaries containing GraphQL field data for each branch.
+        """
         return [await obj.to_graphql(fields=fields) for obj in objs if obj.name != GLOBAL_BRANCH_NAME]

Based on coding guidelines.
57-75: Add docstrings to value wrapper classes.

All five value wrapper classes are missing docstrings, violating coding guidelines. These classes establish a pattern for wrapping field values in GraphQL responses but their purpose and usage are undocumented.

Apply this diff:

 class RequiredStringValueField(InfrahubObjectType):
+    """Wrapper type for required string values in GraphQL responses.
+
+    Provides a consistent value-field structure for non-null string attributes.
+    """
     value = String(required=True)

 class NonRequiredStringValueField(InfrahubObjectType):
+    """Wrapper type for optional string values in GraphQL responses.
+
+    Provides a consistent value-field structure for nullable string attributes.
+    """
     value = String(required=False)

 class NonRequiredIntValueField(InfrahubObjectType):
+    """Wrapper type for optional integer values in GraphQL responses.
+
+    Provides a consistent value-field structure for nullable integer attributes.
+    """
     value = Int(required=False)

 class NonRequiredBooleanValueField(InfrahubObjectType):
+    """Wrapper type for optional boolean values in GraphQL responses.
+
+    Provides a consistent value-field structure for nullable boolean attributes.
+    """
     value = Boolean(required=False)

 class StatusField(InfrahubObjectType):
+    """Wrapper type for branch status enum values in GraphQL responses.
+
+    Provides a consistent value-field structure for status attributes.
+    """
     value = InfrahubBranchStatus(required=True)

Based on coding guidelines.
125-132: Add method docstring (regression).

This method's docstring was previously added in commit 32c72ad but is now missing again. Per coding guidelines, all Python methods require Google-style docstrings.

Apply this diff:

     @classmethod
     async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
+        """Get the total count of branches excluding the global branch.
+
+        Args:
+            graphql_context: The GraphQL context containing database connection.
+            **kwargs: Additional filter parameters passed to Branch.get_list_count.
+
+        Returns:
+            The count of branches, excluding GLOBAL_BRANCH_NAME if it exists.
+        """
         async with graphql_context.db.start_session(read_only=True) as db:

Based on coding guidelines.
116-118: Add class docstring.

The InfrahubBranchEdge class is missing a docstring, violating coding guidelines.

Apply this diff:

 class InfrahubBranchEdge(InfrahubObjectType):
+    """Edge type for paginated branch queries.
+
+    Wraps an InfrahubBranch node in the GraphQL edges/nodes pagination pattern.
+    """
     node = Field(InfrahubBranch, required=True)

Based on coding guidelines.
95-113: Verified how InfrahubBranchType.edges is populated.

If a resolver method is not defined for a Field attribute, Graphene supplies a default resolver. If the parent value object is a dictionary, the resolver looks for a dictionary key matching the field name.

The method _map_fields_to_graphql is being called (line 54) and returns dictionaries with UUID mappings (lines 104-105), so Graphene's default resolver will correctly extract the dictionary keys. The missing docstring, however, is a clear violation of the coding guidelines.
Add missing docstring.
This method violates coding guidelines by lacking a docstring. Add the following:
     @staticmethod
     async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
+        """Map Branch objects to GraphQL dictionaries with field wrapping.
+
+        Converts Branch objects into dictionaries for GraphQL consumption, mapping the
+        id field to obj.uuid and wrapping nested field values in {"value": ...} structures
+        for value wrapper types. Filters out the global branch.
+
+        Args:
+            objs: List of Branch objects to map.
+            fields: Dictionary mapping field names to their selection sets.
+
+        Returns:
+            List of dictionaries with mapped fields, excluding GLOBAL_BRANCH_NAME.
+        """
         field_keys = fields.keys()

The UUID mapping at lines 104-105 is correct and will be properly resolved by Graphene's default resolver when it processes the returned dictionaries.
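The dict-based default resolution described above can be sketched in plain Python. This is a simplified stand-in for Graphene's actual resolver, with hypothetical names, meant only to show why returning dictionaries from _map_fields_to_graphql works:

```python
def default_field_resolver(attname, default_value, root, **args):
    # Simplified dict-or-attr default resolution: when the parent value is a
    # dict (as the mapping helper returns), the field is looked up as a
    # dictionary key; otherwise getattr is used on the object.
    if isinstance(root, dict):
        return root.get(attname, default_value)
    return getattr(root, attname, default_value)


# A mapped branch dict shaped like the ones the mapping helper returns
branch_dict = {"id": "some-uuid", "name": {"value": "main"}}
print(default_field_resolver("id", None, branch_dict))
print(default_field_resolver("name", None, branch_dict))
```

Because each nested field is itself a dict with a "value" key, the same default resolution applies one level down when the wrapper types resolve their value field.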
🧹 Nitpick comments (1)
backend/tests/unit/graphql/queries/test_branch.py (1)
11-16: Consider improving test clarity.

The test logic is correct: it ensures that InfrahubBranch fields have been wrapped (with value objects like RequiredStringValueField) rather than using the raw field types from BranchType. However, the error message and test name could be clearer about the intent.

Consider this minor refactor for clarity:

 def test_check_branch_type_and_infrahub_branch_has_equal_fields():
+    """Ensure InfrahubBranch wraps fields with value objects, not raw types."""
     for field_name, field_value in BranchType._meta.fields.items():
         if field_name in ["id", "created_at"]:
             continue
         if InfrahubBranch._meta.fields[field_name] == field_value:
-            raise Exception(f"'{field_name}' is not updated in InfrahubBranch")
+            raise AssertionError(
+                f"'{field_name}' in InfrahubBranch uses the same type as BranchType. "
+                f"Expected a wrapped value field, got {field_value}"
+            )
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
backend/infrahub/graphql/queries/branch.py (2 hunks)
backend/infrahub/graphql/types/branch.py (3 hunks)
backend/tests/unit/graphql/queries/test_branch.py (2 hunks)
schema/schema.graphql (5 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
backend/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Run backend tests with
pytest or via invoke tasks
Files:
backend/infrahub/graphql/queries/branch.py
backend/tests/unit/graphql/queries/test_branch.py
backend/infrahub/graphql/types/branch.py
**/*.py
📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)
**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful
Use ruff and mypy to validate and lint Python files
**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking
**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy
Files:
backend/infrahub/graphql/queries/branch.py
backend/tests/unit/graphql/queries/test_branch.py
backend/infrahub/graphql/types/branch.py
backend/tests/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Place backend tests in
backend/tests/
Files:
backend/tests/unit/graphql/queries/test_branch.py
🧠 Learnings (4)
📓 Common learnings
Learnt from: ogenstad
Repo: opsmill/infrahub PR: 7447
File: schema/schema.graphql:324-329
Timestamp: 2025-10-30T12:43:45.663Z
Learning: In the GraphQL schema (schema/schema.graphql), the BranchStatus enum includes NEED_UPGRADE_REBASE. Branches with NEED_UPGRADE_REBASE status must support conflict resolution capabilities.
📚 Learning: 2025-10-30T12:43:45.663Z
Learnt from: ogenstad
Repo: opsmill/infrahub PR: 7447
File: schema/schema.graphql:324-329
Timestamp: 2025-10-30T12:43:45.663Z
Learning: In the GraphQL schema (schema/schema.graphql), the BranchStatus enum includes NEED_UPGRADE_REBASE. Branches with NEED_UPGRADE_REBASE status must support conflict resolution capabilities.
Applied to files:
schema/schema.graphqlbackend/infrahub/graphql/queries/branch.pybackend/infrahub/graphql/types/branch.py
📚 Learning: 2025-07-22T08:13:56.198Z
Learnt from: CR
Repo: opsmill/infrahub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-07-22T08:13:56.198Z
Learning: Infrahub offers a central hub to manage the data, templates and playbooks that powers your infrastructure by combining the version control and branch management capabilities similar to Git with the flexible data model and UI of a graph database
Applied to files:
schema/schema.graphql
📚 Learning: 2025-07-22T08:13:56.198Z
Learnt from: CR
Repo: opsmill/infrahub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-07-22T08:13:56.198Z
Learning: Documentation generated for Infrahub must reflect its novel approach, providing clarity around new concepts and demonstrating integration with familiar patterns from existing tools like Git, infrastructure-as-code, and CI/CD pipelines
Applied to files:
backend/infrahub/graphql/types/branch.py
🧬 Code graph analysis (3)
backend/infrahub/graphql/queries/branch.py (3)
backend/infrahub/exceptions.py (1)
ValidationError (319-341)
backend/infrahub/graphql/field_extractor.py (1)
extract_graphql_fields (67-69)
backend/infrahub/graphql/types/branch.py (5)
BranchType (18-54), InfrahubBranch (77-113), InfrahubBranchType (120-132), get_list (42-54), get_list_count (125-132)
backend/tests/unit/graphql/queries/test_branch.py (3)
backend/tests/helpers/graphql.py (1)
graphql (26-61)
backend/infrahub/graphql/types/branch.py (2)
BranchType (18-54), InfrahubBranch (77-113)
backend/infrahub/graphql/initialization.py (1)
prepare_graphql_params (79-125)
backend/infrahub/graphql/types/branch.py (3)
backend/infrahub/core/branch/models.py (4)
Branch (30-554), has_schema_changes (113-124), get_list_count (173-184), get_by_name (137-154)
backend/infrahub/exceptions.py (1)
BranchNotFoundError (194-200)
backend/infrahub/core/node/standard.py (2)
to_graphql (71-86), get (131-138)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: validate-generated-documentation
- GitHub Check: backend-tests-unit
- GitHub Check: backend-tests-integration
🔇 Additional comments (7)
backend/infrahub/graphql/queries/branch.py (1)
57-64: LGTM! The field definition is well-structured with appropriate parameters and description. The resolver binding is correct, and the type matches the resolver's return structure.
backend/tests/unit/graphql/queries/test_branch.py (1)
246-302: LGTM! This test properly validates the error handling for invalid pagination parameters. It correctly checks that:
- Negative offset values are rejected with the appropriate error message
- Zero or negative limit values are rejected with the appropriate error message
The test structure is clear and the assertions are specific.
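The guard logic these tests exercise can be sketched as a standalone helper. This is an illustrative stand-in (using ValueError in place of Infrahub's ValidationError, and a hypothetical function name), not the PR's actual code:

```python
def validate_pagination(limit=None, offset=None):
    # Same checks the resolver applies before querying:
    # limit, when provided, must be >= 1; offset, when provided, must be >= 0.
    if isinstance(limit, int) and limit < 1:
        raise ValueError("limit must be >= 1")
    if isinstance(offset, int) and offset < 0:
        raise ValueError("offset must be >= 0")
    return limit, offset


# Valid combinations pass through unchanged
print(validate_pagination(limit=5, offset=2))  # (5, 2)

# Invalid values raise with the messages the tests assert on
for kwargs in ({"offset": -1}, {"limit": 0}):
    try:
        validate_pagination(**kwargs)
    except ValueError as exc:
        print(exc)
```

Note that None values pass through untouched, which is why both checks use isinstance rather than a plain truthiness test.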
schema/schema.graphql (5)
9409-9419: Wrapper types look good. The new value-wrapper types follow the established pattern for nullable scalar wrapping.
10945-10947: RequiredStringValueField type definition looks good. Properly structured wrapper for required string values, consistent with the pattern.
11042-11044: StatusField type properly wraps branch status values. The type correctly wraps BranchStatus enum values. Per retrieved learnings, BranchStatus includes states like NEED_UPGRADE_REBASE which require conflict resolution capabilities.
7261-7269: Pagination types properly structured. InfrahubBranchEdge and InfrahubBranchType follow standard relay-style pagination patterns correctly. The singular node field and edges array are properly configured.
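The edges/count shape these types describe can be sketched in plain Python, with an in-memory list standing in for the database query (hypothetical helper name, not the PR's implementation):

```python
def paginate(items, offset=0, limit=None):
    # Shape a response like InfrahubBranchType: a total count plus an
    # edges list whose entries each wrap one node, sliced by offset/limit.
    end = offset + limit if limit is not None else None
    return {
        "count": len(items),
        "edges": [{"node": item} for item in items[offset:end]],
    }


branches = [{"name": {"value": f"branch-{i}"}} for i in range(10)]
page = paginate(branches, offset=2, limit=5)
print(page["count"])  # 10
print([edge["node"]["name"]["value"] for edge in page["edges"]])
```

Note that count reflects the full collection, not the returned window, which matches how the test asserts count alongside a 5-item page.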
10796-10797: Verification inconclusive: both ID resolution code paths use obj.uuid; manual verification needed.

Both the direct Branch query and the paginated InfrahubBranch query ultimately resolve the id field using obj.uuid:

- Direct query: BranchType inherits to_graphql() from StandardNode, which sets response["id"] = self.uuid (line 72 in backend/infrahub/core/node/standard.py)
- Paginated query: InfrahubBranch._map_fields_to_graphql() explicitly sets data["id"] = obj.uuid (line 105 in backend/infrahub/graphql/types/branch.py)

Since both code paths use uuid, they should return consistent IDs if uuid is the external/public ID. However, the original review concern suggests the paginated query was returning Neo4j internal node IDs. This indicates one of two possibilities:

1. obj.uuid is actually the Neo4j internal ID (not the external ID), so the issue exists at the data model level, not the resolver level.
2. The _map_fields_to_graphql() implementation in this PR fixes the issue: the direct query may have been using a different ID source before, and this PR aligns both queries to use the correct external ID.

Verify:

- The uuid property on the Branch model is the external/public ID, not Neo4j's internal node ID.
- The _map_fields_to_graphql() method is actually invoked during paginated query execution.
- Both queries now return identical ID values for the same branch.
async def infrahub_branch_resolver(
    root: dict,  # noqa: ARG001
    info: GraphQLResolveInfo,
    limit: int | None = None,
    offset: int | None = None,
) -> dict[str, Any]:
    if isinstance(limit, int) and limit < 1:
        raise ValidationError("limit must be >= 1")
    if isinstance(offset, int) and offset < 0:
        raise ValidationError("offset must be >= 0")

    fields = extract_graphql_fields(info)
    result: dict[str, Any] = {}
    if "edges" in fields:
        branches = await InfrahubBranch.get_list(
            graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
        )
        result["edges"] = [{"node": branch} for branch in branches]
    if "count" in fields:
        result["count"] = await InfrahubBranchType.get_list_count(graphql_context=info.context)
    return result
🛠️ Refactor suggestion | 🟠 Major
Add docstring and avoid passing None kwargs to get_list.
This function is missing a docstring, which violates the coding guidelines. Additionally, when limit or offset are None (not provided in the GraphQL query), they are still explicitly passed to InfrahubBranch.get_list, which may override the model's default behavior. It's better to only include these parameters in the kwargs when they are actually provided.
As per coding guidelines, apply this diff:
 async def infrahub_branch_resolver(
     root: dict,  # noqa: ARG001
     info: GraphQLResolveInfo,
     limit: int | None = None,
     offset: int | None = None,
 ) -> dict[str, Any]:
+    """Resolve paginated InfrahubBranch query.
+
+    Retrieves a paginated list of branches with metadata about total count
+    and branch details wrapped in an edges structure.
+
+    Args:
+        root: GraphQL root value (unused)
+        info: GraphQL resolve info containing context and field selection
+        limit: Maximum number of items to return (must be >= 1 if provided)
+        offset: Number of items to skip (must be >= 0 if provided)
+
+    Returns:
+        Dictionary containing requested fields (edges and/or count)
+
+    Raises:
+        ValidationError: If limit < 1 or offset < 0
+    """
     if isinstance(limit, int) and limit < 1:
         raise ValidationError("limit must be >= 1")
     if isinstance(offset, int) and offset < 0:
         raise ValidationError("offset must be >= 0")
     fields = extract_graphql_fields(info)
     result: dict[str, Any] = {}
+
+    # Build kwargs, only including limit/offset if provided
+    query_kwargs: dict[str, Any] = {}
+    if limit is not None:
+        query_kwargs["limit"] = limit
+    if offset is not None:
+        query_kwargs["offset"] = offset
+
     if "edges" in fields:
         branches = await InfrahubBranch.get_list(
-            graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
+            graphql_context=info.context,
+            fields=fields.get("edges", {}).get("node", {}),
+            **query_kwargs,
         )
         result["edges"] = [{"node": branch} for branch in branches]
     if "count" in fields:
         result["count"] = await InfrahubBranchType.get_list_count(graphql_context=info.context)
     return result
return result🤖 Prompt for AI Agents
In backend/infrahub/graphql/queries/branch.py around lines 34 to 54, add a
concise docstring describing the resolver's purpose/params/return and update the
call to InfrahubBranch.get_list to avoid passing limit/offset when they are
None: build a kwargs dict, conditionally set "limit" and "offset" only if they
are not None, and pass that dict (plus graphql_context and fields) so model
defaults are preserved; keep validation logic unchanged and ensure the docstring
follows project style.
class InfrahubBranch(BranchType):
    name = Field(RequiredStringValueField, required=True)
    description = Field(NonRequiredStringValueField, required=False)
    origin_branch = Field(NonRequiredStringValueField, required=False)
    branched_from = Field(NonRequiredStringValueField, required=False)
    graph_version = Field(NonRequiredIntValueField, required=False)
    status = Field(StatusField, required=True)
    sync_with_git = Field(NonRequiredBooleanValueField, required=False)
    is_default = Field(NonRequiredBooleanValueField, required=False)
    is_isolated = Field(
        NonRequiredBooleanValueField, required=False, deprecation_reason="non isolated mode is not supported anymore"
    )
    has_schema_changes = Field(NonRequiredBooleanValueField, required=False)

    class Meta:
        description = "InfrahubBranch"
        name = "InfrahubBranch"
🛠️ Refactor suggestion | 🟠 Major
Add class docstring and address created_at field inconsistency.
The InfrahubBranch class is missing a docstring, violating coding guidelines. Additionally, there's a field definition inconsistency: id and created_at remain as plain String fields inherited from BranchType, while all other fields are wrapped with Field() and value wrapper types.
The id field is handled specially in _map_fields_to_graphql (line 104-105), which justifies keeping it unwrapped. However, created_at receives no special handling (lines 107-111) yet remains unwrapped, breaking the value-field pattern.
Apply this diff to add docstring and wrap created_at:
 class InfrahubBranch(BranchType):
+    """Extended branch type with value-wrapped fields for paginated GraphQL queries.
+
+    Most fields are wrapped in value objects to support consistent field resolution
+    in paginated edge/node responses. The id field remains unwrapped as it receives
+    special handling during mapping.
+    """
     id = String(required=True)
+    created_at = Field(NonRequiredStringValueField, required=False)
     name = Field(RequiredStringValueField, required=True)

Based on coding guidelines.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 77 to 94, add a
one-line class docstring to InfrahubBranch and change the created_at declaration
to use the value-wrapper pattern (e.g., wrap it with
Field(NonRequiredDateTimeValueField, required=False)) so it matches the other
wrapped fields while leaving id unwrapped as handled elsewhere; also add or
adjust the import for the DateTime value-field type if it's not already
imported.
class InfrahubBranchType(InfrahubObjectType):
    count = Field(Int, description="Total number of items")
    edges = Field(NonNull(List(of_type=NonNull(InfrahubBranchEdge))))
🛠️ Refactor suggestion | 🟠 Major
Add class docstring and consider marking fields as required.
The InfrahubBranchType class is missing a docstring, violating coding guidelines. Additionally, pagination response fields like count and edges should typically be marked as required=True since they should always be present in the response.
Apply this diff:
 class InfrahubBranchType(InfrahubObjectType):
+    """Paginated response type for branch queries.
+
+    Provides pagination metadata (count) alongside the list of branches (edges).
+    """
     count = Field(Int, description="Total number of items", required=True)
     edges = Field(NonNull(List(of_type=NonNull(InfrahubBranchEdge))), required=True)

Note: Based on @ajtmccarty's comment, consider optimizing to only fetch count when explicitly requested rather than always computing it.
Based on coding guidelines.
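The lazy-count idea in that note can be sketched with plain callables standing in for the async database calls (hypothetical names, not Infrahub's API):

```python
def resolve_paginated(requested_fields, fetch_edges, fetch_count):
    # Build the response lazily: each top-level field is computed only if it
    # appears in the GraphQL selection set, so an edges-only query never pays
    # for the count aggregation, and a count-only query never loads nodes.
    result = {}
    if "edges" in requested_fields:
        result["edges"] = [{"node": node} for node in fetch_edges()]
    if "count" in requested_fields:
        result["count"] = fetch_count()
    return result


count_calls = []

def fetch_count():
    count_calls.append(1)
    return 1

edges_only = resolve_paginated({"edges"}, fetch_edges=lambda: [{"name": "main"}], fetch_count=fetch_count)
print(edges_only)
print(count_calls)  # [] (the count query was never executed)
```

The same conditional structure already exists in the resolver's "count" in fields check; the optimization is simply making sure the count query is not issued outside that branch.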
🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 120 to 123, add a class
docstring to InfrahubBranchType describing its purpose (pagination wrapper for
Branch objects) and mark the GraphQL fields as required by setting required=True
(e.g., count and edges should be non-null/required), and update any type
declarations accordingly; also wire/count computation to be fetched lazily (only
compute count when requested) or add a TODO referencing @ajtmccarty to optimize
count retrieval instead of always computing it.
Actionable comments posted: 0
♻️ Duplicate comments (2)
backend/tests/unit/graphql/queries/test_branch.py (2)
223-223: Fix incorrect count assertion.

The count should be 11, not 12. The test creates 10 branches here plus the main branch (the global branch is excluded by the API). The branch3 created in test_branch_query does not persist to this test due to fixture isolation.

Apply this diff:

-    assert all_branches.data["InfrahubBranch"]["count"] == 12  # 10 created here + 1 created above + main branch
+    assert all_branches.data["InfrahubBranch"]["count"] == 11  # 10 created here + main branch
225-245: Critical: Fix sort, pagination logic, and add ID verification.

Multiple critical issues:

1. .sort() returns None: the assertion compares None == None and always passes, masking test failures.
2. Cross-test pollution: branch3 is created in a different test and won't exist here due to fixture isolation.
3. Pagination logic error: comparing the full expected_branches list against a paginated slice (offset=2, limit=5) is incorrect.
4. Missing ID verification: the test doesn't verify IDs, which is the core concern raised in the PR objectives about internal Neo4j IDs vs. public IDs being returned.
Apply this diff to fix all issues and add ID verification:
     expected_branches = [
         {
             "description": {"value": "Default Branch"},
             "name": {"value": "main"},
         },
-        {
-            "description": {"value": "my description"},
-            "name": {"value": "branch3"},
-        },
         *[
             {
                 "description": {"value": f"sample description {i}"},
                 "name": {"value": f"sample-branch-{i}"},
             }
             for i in range(10)
         ],
     ]
-    all_branches_data_only = [branch.get("node") for branch in all_branches.data["InfrahubBranch"]["edges"]]
-    assert all_branches_data_only.sort(key=lambda x: x["name"]["value"]) == expected_branches.sort(
-        key=lambda x: x["name"]["value"]
-    )
+
+    # Sort both lists and compare
+    actual_nodes = [edge["node"] for edge in all_branches.data["InfrahubBranch"]["edges"]]
+    sorted_actual = sorted(actual_nodes, key=lambda x: x["name"]["value"])
+    sorted_expected = sorted(expected_branches, key=lambda x: x["name"]["value"])
+
+    # Verify we got the correct slice (offset=2, limit=5)
+    assert len(sorted_actual) == 5
+    assert sorted_actual == sorted_expected[2:7]

Additionally, add ID verification to address the PR concern:
query = """ query { InfrahubBranch(offset: 2, limit: 5) { count edges { node { + id name { value } description { value } } } } } """Then verify IDs are UUIDs, not internal Neo4j node IDs:
# After the sorted comparison, verify IDs are valid UUIDs
import uuid

for node in actual_nodes:
    try:
        uuid.UUID(node["id"])
    except (ValueError, KeyError):
        raise AssertionError(f"Invalid or missing UUID for node: {node}")
🧹 Nitpick comments (1)
backend/tests/unit/graphql/queries/test_branch.py (1)
11-18: Test logic is sound but consider a clearer assertion message.

The test correctly validates that InfrahubBranch fields are wrapped with value-field types (e.g., RequiredStringValueField) rather than using the raw field types from BranchType. However, the error message and test name could be clearer about the intent.

Consider this diff for clarity:

-        if InfrahubBranch._meta.fields[field_name] == field_value:
-            raise Exception(f"'{field_name}' is not updated in InfrahubBranch")
+        if InfrahubBranch._meta.fields[field_name] == field_value:
+            raise AssertionError(
+                f"Field '{field_name}' in InfrahubBranch must be wrapped with a value-field type, "
+                f"but it has the same definition as BranchType"
+            )
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
backend/tests/unit/graphql/queries/test_branch.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
backend/tests/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Place backend tests in
backend/tests/
Files:
backend/tests/unit/graphql/queries/test_branch.py
backend/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Run backend tests with
pytest or via invoke tasks
Files:
backend/tests/unit/graphql/queries/test_branch.py
**/*.py
📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)
**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful
Use ruff and mypy to validate and lint Python files
**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking
**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy
Files:
backend/tests/unit/graphql/queries/test_branch.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: ogenstad
Repo: opsmill/infrahub PR: 7447
File: schema/schema.graphql:324-329
Timestamp: 2025-10-30T12:43:45.663Z
Learning: In the GraphQL schema (schema/schema.graphql), the BranchStatus enum includes NEED_UPGRADE_REBASE. Branches with NEED_UPGRADE_REBASE status must support conflict resolution capabilities.
🧬 Code graph analysis (1)
backend/tests/unit/graphql/queries/test_branch.py (3)
backend/tests/helpers/graphql.py (1)
graphql (26-61)
backend/infrahub/graphql/types/branch.py (2)
BranchType (18-54), InfrahubBranch (77-113)
backend/infrahub/graphql/initialization.py (1)
prepare_graphql_params (79-125)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cloudflare Pages
🔇 Additional comments (3)
backend/tests/unit/graphql/queries/test_branch.py (3)
6-6: LGTM: Import additions support new pagination tests. The import of BranchType and InfrahubBranch is necessary for the new field mapping validation and pagination tests.
155-194: LGTM: Branch creation loop is correct. The loop correctly creates 10 branches using BranchCreate mutations with proper assertions for each result.
247-303: LGTM: Error validation tests are correct. The test correctly validates that:
- Invalid offset (-1) returns the error "offset must be >= 0"
- Invalid limit (0) returns the error "limit must be >= 1"
Both error conditions are properly asserted.
Summary by CodeRabbit
New Features
Bug Fixes / Data
Tests
Chores