IFC-1886: Paginated branch graphql query #7418
base: develop
Conversation
Walkthrough

Adds paginated branch support to the GraphQL API. Introduces InfrahubBranch, InfrahubBranchEdge, InfrahubBranchType, status and value-field wrapper types, and a new query field InfrahubBranch (InfrahubBranchQueryList) with a resolver exposed on InfrahubBaseQuery. Reworks branch listing to use BranchNodeGetListQuery (which excludes DELETING via a raw_filter) and adds Branch.get_list_count. Updates the GraphQL schema, exports, unit tests for pagination, and the pre-commit configuration.
Actionable comments posted: 4
🧹 Nitpick comments (1)
backend/tests/unit/graphql/queries/test_branch.py (1)
152-228: Add docstring and expand test coverage. The test method is missing a docstring, which violates the coding guidelines. Additionally, the test only covers the happy path scenario. Consider adding test cases for edge cases to ensure robustness of the pagination implementation.
As per coding guidelines, add a docstring:

  async def test_paginated_branch_query(
      self,
      db: InfrahubDatabase,
      default_branch: Branch,
      register_core_models_schema,
      session_admin,
      client,
      service,
  ):
+     """Test paginated branch query returns correct pagination metadata.
+
+     Creates 10 test branches and verifies that querying page 2 with limit 5
+     returns the correct page metadata and 5 branches.
+     """
      for i in range(10):

Additionally, consider adding test cases for:
- Page 1 to verify first page behavior
- Last page with fewer items than limit
- Page beyond available data (should return empty list)
- Invalid pagination parameters (page=0, negative limit)
- Default pagination parameters when not specified
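The slicing semantics these edge cases should pin down can be sketched with a tiny helper; `paginate` below is a hypothetical stand-in for illustration, not part of the Infrahub codebase:

```python
def paginate(items: list, limit: int, offset: int) -> tuple[list, int]:
    """Return one page of items plus the total count (computed before slicing)."""
    if limit < 1:
        raise ValueError("limit must be at least 1")
    if offset < 0:
        raise ValueError("offset must be non-negative")
    return items[offset:offset + limit], len(items)


branches = [f"branch-{i}" for i in range(10)]

# Page 2 with limit 5 maps to offset 5 and returns the last 5 branches.
page, count = paginate(branches, limit=5, offset=5)
assert page == [f"branch-{i}" for i in range(5, 10)]
assert count == 10  # count reflects all matches, not just the page

# An offset past the end yields an empty page but the same total count.
page, count = paginate(branches, limit=5, offset=20)
assert page == []
assert count == 10
```

Each bullet above corresponds to one branch of this helper's behavior, which is what makes the cases worth asserting individually.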
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
backend/infrahub/graphql/queries/__init__.py (2 hunks)
backend/infrahub/graphql/queries/branch.py (2 hunks)
backend/infrahub/graphql/schema.py (2 hunks)
backend/infrahub/graphql/types/__init__.py (2 hunks)
backend/infrahub/graphql/types/branch.py (2 hunks)
backend/tests/unit/graphql/queries/test_branch.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
backend/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Run backend tests with pytest or via invoke tasks
Files:
backend/infrahub/graphql/queries/__init__.py
backend/tests/unit/graphql/queries/test_branch.py
backend/infrahub/graphql/types/branch.py
backend/infrahub/graphql/types/__init__.py
backend/infrahub/graphql/schema.py
backend/infrahub/graphql/queries/branch.py
**/*.py
📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)
**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful
Use ruff and mypy to validate and lint Python files
**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking
**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy
Files:
backend/infrahub/graphql/queries/__init__.py
backend/tests/unit/graphql/queries/test_branch.py
backend/infrahub/graphql/types/branch.py
backend/infrahub/graphql/types/__init__.py
backend/infrahub/graphql/schema.py
backend/infrahub/graphql/queries/branch.py
backend/tests/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Place backend tests in
backend/tests/
Files:
backend/tests/unit/graphql/queries/test_branch.py
🧬 Code graph analysis (3)
backend/tests/unit/graphql/queries/test_branch.py (6)
- backend/tests/conftest.py (4): db (122-138), default_branch (161-162), register_core_models_schema (217-218), branch (1239-1243)
- backend/tests/helpers/test_app.py (3): default_branch (54-61), client (146-173), service (112-113)
- backend/tests/unit/conftest.py (1): session_admin (2679-2681)
- backend/infrahub/services/__init__.py (1): client (154-158)
- backend/infrahub/graphql/initialization.py (1): prepare_graphql_params (79-125)
- backend/tests/helpers/graphql.py (1): graphql (26-61)
backend/infrahub/graphql/types/__init__.py (1)
- backend/infrahub/graphql/types/branch.py (2): BranchType (16-46), InfrahubBranchType (49-52)
backend/infrahub/graphql/queries/branch.py (2)
- backend/infrahub/graphql/field_extractor.py (1): extract_graphql_fields (67-69)
- backend/infrahub/graphql/types/branch.py (3): BranchType (16-46), InfrahubBranchType (49-52), get_list (34-46)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: E2E-testing-version-upgrade / From 1.3.0
- GitHub Check: validate-generated-documentation
- GitHub Check: E2E-testing-playwright
- GitHub Check: graphql-schema
- GitHub Check: backend-docker-integration
- GitHub Check: backend-tests-functional
- GitHub Check: backend-tests-unit
- GitHub Check: backend-tests-integration
- GitHub Check: backend-benchmark
CodSpeed Performance Report: Merging #7418 will not alter performance.
Actionable comments posted: 4
📒 Files selected for processing (7)
backend/infrahub/core/branch/models.py (1 hunks)
backend/infrahub/core/node/standard.py (1 hunks)
backend/infrahub/core/query/__init__.py (2 hunks)
backend/infrahub/core/query/standard_node.py (1 hunks)
backend/infrahub/graphql/queries/branch.py (2 hunks)
backend/infrahub/graphql/types/branch.py (2 hunks)
backend/tests/unit/graphql/queries/test_branch.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/tests/unit/graphql/queries/test_branch.py
🧬 Code graph analysis (4)
backend/infrahub/core/node/standard.py (4)
- backend/infrahub/core/branch/models.py (1): get_list_and_count (169-178)
- backend/infrahub/graphql/types/branch.py (1): get_list_and_count (58-70)
- backend/infrahub/core/query/__init__.py (5): Query (339-697), init (393-406), execute (530-563), get (222-223), count (588-605)
- backend/infrahub/core/query/standard_node.py (1): StandardNodeGetListQuery (132-171)
backend/infrahub/core/branch/models.py (3)
- backend/infrahub/core/node/standard.py (1): get_list_and_count (232-245)
- backend/infrahub/graphql/types/branch.py (1): get_list_and_count (58-70)
- backend/infrahub/core/branch/enums.py (1): BranchStatus (4-8)
backend/infrahub/graphql/types/branch.py (4)
- backend/infrahub/core/branch/models.py (2): get_list_and_count (169-178), Branch (29-544)
- backend/infrahub/core/node/standard.py (2): get_list_and_count (232-245), to_graphql (71-86)
- backend/infrahub/graphql/initialization.py (1): GraphqlContext (34-76)
- backend/infrahub/database/__init__.py (1): start_session (222-239)
backend/infrahub/graphql/queries/branch.py (3)
- backend/infrahub/graphql/field_extractor.py (1): extract_graphql_fields (67-69)
- backend/infrahub/graphql/types/branch.py (3): BranchType (16-46), InfrahubBranchType (53-70), get_list_and_count (58-70)
- backend/infrahub/core/branch/models.py (1): get_list_and_count (169-178)
🔇 Additional comments (2)
backend/infrahub/core/node/standard.py (1)
231-245: LGTM! Clean pagination implementation. The get_list_and_count method correctly mirrors get_list while adding count information. The count reflects total matching records (not just the page) since Query.count() excludes LIMIT/OFFSET, which is the standard pagination pattern.
backend/infrahub/core/branch/models.py (1)
168-178: LGTM! Consistent filtering of DELETING branches. The method correctly filters out branches with DELETING status, maintaining consistency with the existing get_list method (lines 163-166).
async def infrahub_branch_resolver(
    root: dict,  # noqa: ARG001
    info: GraphQLResolveInfo,
    **kwargs: Any,
) -> dict[str, Any]:
    limit = kwargs.pop("limit", 100)
    offset = kwargs.pop("offset", 0)
    fields = extract_graphql_fields(info)
    branches, count = await InfrahubBranchType.get_list_and_count(
        fields=fields.get("edges", {}).get("node", {}),
        graphql_context=info.context,
        limit=limit,
        offset=offset,
        **kwargs,
    )
    return {"count": count, "edges": {"node": branches}}
Non-standard pagination structure.
The return structure {"count": count, "edges": {"node": branches}} doesn't follow standard GraphQL pagination patterns. Standard Relay cursor connections use:
type Connection {
edges: [Edge!]!
pageInfo: PageInfo!
}
type Edge {
node: Node!
cursor: String!
}

Your implementation has edges as a single object containing node as an array, rather than edges as an array of edge objects. This makes the InfrahubBranchEdge type redundant and could confuse API consumers familiar with standard pagination.
Consider refactoring to a simpler, clearer structure:
async def infrahub_branch_resolver(
root: dict, # noqa: ARG001
info: GraphQLResolveInfo,
**kwargs: Any,
) -> dict[str, Any]:
limit = kwargs.pop("limit", 100)
offset = kwargs.pop("offset", 0)
fields = extract_graphql_fields(info)
branches, count = await InfrahubBranchType.get_list_and_count(
- fields=fields.get("edges", {}).get("node", {}),
+ fields=fields.get("branches", {}),
graphql_context=info.context,
limit=limit,
offset=offset,
**kwargs,
)
- return {"count": count, "edges": {"node": branches}}
+ return {"count": count, "branches": branches}

Then update InfrahubBranchType (in types/branch.py) to:
class InfrahubBranchType(InfrahubObjectType):
count = Field(Int, required=True)
branches = Field(List(of_type=NonNull(BranchType)), required=True)This is simpler and more intuitive than the current edge wrapper.
🤖 Prompt for AI Agents
In backend/infrahub/graphql/queries/branch.py around lines 33 to 48, the
resolver returns a non-standard pagination shape {"count": count, "edges":
{"node": branches}}; change it to return a simple, conventional shape like
{"count": count, "branches": branches} and update any callers to expect
"branches" instead of edges.node. Also update the GraphQL type in
types/branch.py to expose count: Int! and branches: [BranchType!]! (replace the
InfrahubBranchEdge wrapper) and adjust InfrahubBranchType.get_list_and_count
usage/field extraction if needed so fields map to the BranchType nodes directly.
Ensure tests and schema refs are updated to the new field name.
I think the robot is right about this. the structure should be
edges: [
{
node: {
name: "branch-2", ...
}
}, ...
]
you'll need to update the test too
@ajtmccarty You're absolutely right! The structure needs to follow the standard Relay pattern. Here's the corrected resolver:
async def infrahub_branch_resolver(
root: dict, # noqa: ARG001
info: GraphQLResolveInfo,
**kwargs: Any,
) -> dict[str, Any]:
limit = kwargs.pop("limit", 100)
offset = kwargs.pop("offset", 0)
fields = extract_graphql_fields(info)
branches, count = await InfrahubBranchType.get_list_and_count(
fields=fields.get("edges", {}).get("node", {}),
graphql_context=info.context,
limit=limit,
offset=offset,
**kwargs,
)
# Transform branches into proper edge structure
edges = [{"node": branch} for branch in branches]
    return {"count": count, "edges": edges}

Additionally, you'll need to fix the InfrahubBranchEdge type in backend/infrahub/graphql/types/branch.py. The node field should be a single BranchType, not a List:
class InfrahubBranchEdge(InfrahubObjectType):
    node = Field(BranchType, required=True)  # Single BranchType, not List

And update InfrahubBranchType:
class InfrahubBranchType(InfrahubObjectType):
count = Field(Int, required=True)
    edges = Field(List(of_type=NonNull(InfrahubBranchEdge)), required=True)  # List of edges

The test in backend/tests/unit/graphql/queries/test_branch.py will also need to be updated to expect edges as an array and access each edge's node field accordingly.
@ajtmccarty we are using this because we want to maintain a consistent response like some of the other queries i.e.
query MyQuery {
BuiltinTag {
count
edges {
node {
name {
value
}
description {
value
}
}
}
}
}
cc @ogenstad
I think the confusing part here is that CodeRabbit spots an issue and then hallucinates a solution that is incorrect. What we want is that "edges" should be a list/array and each "node" should be an object as an entry in that array. A potential area of confusion is that you don't see this within the GraphQL query, which is correct as per the above; however, I think the response might not be. What I'd do is create an environment with a few BuiltinTag objects, issue the above query, observe what the response looks like, and then see if the structure matches what we now get from the InfrahubBranch query.
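As a plain-data sketch of the two shapes under discussion (dicts stand in for serialized GraphQL responses; nothing here uses the real API):

```python
branches = [{"name": "branch-1"}, {"name": "branch-2"}]

# Shape the resolver currently returns: "edges" is a single object whose
# "node" key holds the whole list.
current_shape = {"count": 2, "edges": {"node": branches}}

# Relay-style shape used by queries like BuiltinTag: "edges" is a list,
# and each entry wraps exactly one node.
relay_shape = {"count": 2, "edges": [{"node": b} for b in branches]}

assert isinstance(current_shape["edges"], dict)
assert isinstance(relay_shape["edges"], list)
assert [e["node"]["name"] for e in relay_shape["edges"]] == ["branch-1", "branch-2"]
```

Comparing a real BuiltinTag response against these two dicts makes it immediately obvious which convention the new query follows.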
    name: str | None = None,
    **kwargs: Any,
) -> tuple[list[Self], int]:
    kwargs["raw_filter"] = f"n.status <> '{BranchStatus.DELETING.value}'"
It'd be better to use a named parameter for raw_filter instead of kwargs.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/infrahub/core/branch/models.py (1)
156-169: Fix incorrect type hint for kwargs. The type hint **kwargs: dict[str, Any] is incorrect. The **kwargs syntax already indicates a dictionary of keyword arguments, so the type hint should be **kwargs: Any to indicate the types of individual values. Apply this diff:

  async def get_list(
      cls,
      db: InfrahubDatabase,
      limit: int = 1000,
      ids: list[str] | None = None,
      name: str | None = None,
-     **kwargs: dict[str, Any],
+     **kwargs: Any,
  ) -> list[Self]:
      query: Query = await BranchNodeGetListQuery.init(
          db=db, node_class=cls, ids=ids, node_name=name, limit=limit, **kwargs
      )
      await query.execute(db=db)
      return [cls.from_db(result.get("n")) for result in query.get_results()]

  @classmethod
  async def get_list_count(
      cls,
      db: InfrahubDatabase,
      limit: int = 1000,
      ids: list[str] | None = None,
      name: str | None = None,
-     **kwargs: dict[str, Any],
+     **kwargs: Any,
  ) -> int:
      query: Query = await BranchNodeGetListQuery.init(
          db=db, node_class=cls, ids=ids, node_name=name, limit=limit, **kwargs
      )
      return await query.count(db=db)

Also applies to: 171-183
♻️ Duplicate comments (2)
backend/infrahub/graphql/queries/branch.py (2)
40-44: Critical: Incorrect Relay pagination structure for edges. The resolver returns {"count": count, "edges": {"node": branches}} where edges is a single object. According to the standard Relay pagination pattern (confirmed by past review comments), edges should be an array of edge objects, each containing a node field: [{"node": branch1}, {"node": branch2}, ...]. Apply this diff:

  fields = extract_graphql_fields(info)
  branches = await BranchType.get_list(
      graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
  )
  count = await InfrahubBranchType.get_list_count(graphql_context=info.context)
- return {"count": count, "edges": {"node": branches}}
+ edges = [{"node": branch} for branch in branches]
+ return {"count": count, "edges": edges}

Additionally, update the field extraction to handle edges as an array:

  fields = extract_graphql_fields(info)
+ # Extract node fields from edges array
+ edge_fields = fields.get("edges", {})
+ if isinstance(edge_fields, dict):
+     node_fields = edge_fields.get("node", {})
+ else:
+     node_fields = {}
  branches = await BranchType.get_list(
-     graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
+     graphql_context=info.context, fields=node_fields, limit=limit, offset=offset
  )
33-44: Missing docstring for resolver function. The infrahub_branch_resolver function lacks a docstring, which violates the coding guidelines requiring Google-style docstrings for all functions. Add a docstring following the Google-style format:

  async def infrahub_branch_resolver(
      root: dict,  # noqa: ARG001
      info: GraphQLResolveInfo,
      limit: int | None = None,
      offset: int | None = None,
  ) -> dict[str, Any]:
+     """Resolve paginated branch query.
+
+     Retrieves a paginated list of branches with total count.
+
+     Args:
+         root: GraphQL root value (unused)
+         info: GraphQL resolve info containing context and field information
+         limit: Maximum number of branches to return
+         offset: Number of branches to skip for pagination
+
+     Returns:
+         Dictionary with count (total branches) and edges (list of branch nodes)
+     """
      fields = extract_graphql_fields(info)

Based on coding guidelines.
📒 Files selected for processing (8)
backend/infrahub/core/branch/models.py (2 hunks)
backend/infrahub/core/query/__init__.py (1 hunks)
backend/infrahub/core/query/branch.py (2 hunks)
backend/infrahub/core/query/standard_node.py (2 hunks)
backend/infrahub/graphql/queries/branch.py (2 hunks)
backend/infrahub/graphql/types/branch.py (2 hunks)
backend/tests/unit/graphql/queries/test_branch.py (4 hunks)
schema/schema.graphql (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- backend/infrahub/core/query/standard_node.py
- backend/infrahub/core/query/__init__.py
- backend/infrahub/graphql/types/branch.py
🧬 Code graph analysis (4)
backend/tests/unit/graphql/queries/test_branch.py (2)
- backend/infrahub/graphql/initialization.py (1): prepare_graphql_params (79-125)
- backend/tests/helpers/graphql.py (1): graphql (26-61)
backend/infrahub/graphql/queries/branch.py (3)
- backend/infrahub/graphql/field_extractor.py (1): extract_graphql_fields (67-69)
- backend/infrahub/graphql/types/branch.py (4): BranchType (16-46), InfrahubBranchType (53-60), get_list (34-46), get_list_count (58-60)
- backend/infrahub/core/branch/models.py (2): get_list (156-169), get_list_count (172-183)
backend/infrahub/core/branch/models.py (4)
- backend/infrahub/core/query/__init__.py (7): Query (339-695), QueryType (140-142), init (391-404), execute (528-561), get (222-223), get_results (619-627), count (586-603)
- backend/infrahub/core/query/branch.py (1): BranchNodeGetListQuery (153-154)
- backend/infrahub/core/node/standard.py (2): from_db (152-180), get (131-138)
- backend/infrahub/graphql/types/branch.py (1): get_list_count (58-60)
backend/infrahub/core/query/branch.py (2)
- backend/infrahub/core/branch/enums.py (1): BranchStatus (4-8)
- backend/infrahub/core/query/standard_node.py (1): StandardNodeGetListQuery (132-172)
🔇 Additional comments (3)
backend/infrahub/core/query/branch.py (1)
6-6: LGTM! Required imports for the new query class. The imports are necessary for the new BranchNodeGetListQuery class implementation. Also applies to: 9-9
schema/schema.graphql (1)
10410-10411: Query definition is well-structured pending the schema fix above. The InfrahubBranch query parameters (ids, limit, name, offset) and return type align with pagination expectations. Ensure the type schema fix is applied so the query returns the correct structure.
backend/tests/unit/graphql/queries/test_branch.py (1)
63-63: LGTM: Removal of include_subscription parameter aligns with guidance. The removal of include_subscription=False from these prepare_graphql_params calls aligns with the guidance to avoid this parameter in new code. Also applies to: 108-108, 132-132
class BranchNodeGetListQuery(StandardNodeGetListQuery):
    raw_filter = f"n.status <> '{BranchStatus.DELETING.value}'"
🛠️ Refactor suggestion | 🟠 Major
Add docstring and type annotation.
The class is missing a docstring and type annotation, which are required by the coding guidelines.
Apply this diff to add the missing docstring and type annotation:
class BranchNodeGetListQuery(StandardNodeGetListQuery):
+ """Query to retrieve a list of branches, excluding those with DELETING status.
+
+ This query extends StandardNodeGetListQuery with a filter to exclude branches
+ that are currently being deleted from the results.
+ """
+
- raw_filter = f"n.status <> '{BranchStatus.DELETING.value}'"
+ raw_filter: str = f"n.status <> '{BranchStatus.DELETING.value}'"

As per coding guidelines.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
class BranchNodeGetListQuery(StandardNodeGetListQuery):
    """Query to retrieve a list of branches, excluding those with DELETING status.

    This query extends StandardNodeGetListQuery with a filter to exclude branches
    that are currently being deleted from the results.
    """

    raw_filter: str = f"n.status <> '{BranchStatus.DELETING.value}'"
🤖 Prompt for AI Agents
In backend/infrahub/core/query/branch.py around lines 153-154, the class
BranchNodeGetListQuery is missing a docstring and a type annotation for its
raw_filter attribute; add a concise class docstring that explains this Query
filters out deleting Branch nodes and add an explicit type annotation on
raw_filter (raw_filter: str = f"n.status <> '{BranchStatus.DELETING.value}'") to
satisfy the coding guidelines.
async def infrahub_branch_resolver(
    root: dict,  # noqa: ARG001
    info: GraphQLResolveInfo,
    limit: int | None = None,
    offset: int | None = None,
) -> dict[str, Any]:
Add validation for pagination parameters.
The limit and offset parameters lack validation, which could lead to issues with negative values or excessively large limits. Per past review feedback, parameter defaults were removed from the field definition, but the resolver signature still has None defaults and no validation.
Apply this diff to add validation:
```diff
 async def infrahub_branch_resolver(
     root: dict,  # noqa: ARG001
     info: GraphQLResolveInfo,
     limit: int | None = None,
     offset: int | None = None,
 ) -> dict[str, Any]:
+    """...""" # Add docstring as per previous comment
+
+    # Validate and set defaults for pagination parameters
+    if limit is None:
+        limit = 100
+    if offset is None:
+        offset = 0
+
+    if limit < 1:
+        raise ValueError("limit must be at least 1")
+    if offset < 0:
+        raise ValueError("offset must be non-negative")
+
     fields = extract_graphql_fields(info)
```

🤖 Prompt for AI Agents
In backend/infrahub/graphql/queries/branch.py around lines 33 to 38, the
resolver accepts limit and offset but does not validate them; add input
validation that enforces offset >= 0, limit >= 0 and limit <= MAX_LIMIT (choose
a sensible cap like 100 or use a configured constant), treat None as not
provided (or apply a safe default if your schema requires one), and raise a
GraphQLError with a clear message for invalid values. Ensure the validation runs
at the start of infrahub_branch_resolver and that any error raised is
descriptive (e.g., "limit must be between 0 and 100" or "offset must be >= 0"),
avoiding silent truncation or database queries with bad parameters.
```python
assert all_branches.data["InfrahubBranch"]["edges"]["node"].sort(
    key=operator.itemgetter("name")
) == expected_branches.sort(key=operator.itemgetter("name"))
```
Critical: Incorrect edges data structure access.
The test accesses edges["node"] as if edges is a single object, but according to the GraphQL Relay pagination pattern (confirmed in past review comments), edges should be an array of edge objects, each containing a node field.
Apply this diff to fix the assertion:
```diff
-    assert all_branches.data["InfrahubBranch"]["edges"]["node"].sort(
-        key=operator.itemgetter("name")
-    ) == expected_branches.sort(key=operator.itemgetter("name"))
+    edges = all_branches.data["InfrahubBranch"]["edges"]
+    actual_branches = [edge["node"] for edge in edges]
+    assert sorted(actual_branches, key=operator.itemgetter("name")) == sorted(
+        expected_branches, key=operator.itemgetter("name")
+    )
```

Note: This also fixes the comparison issue where .sort() returns None rather than the sorted list.
🤖 Prompt for AI Agents
In backend/tests/unit/graphql/queries/test_branch.py around lines 228 to 230,
the assertion incorrectly treats edges as a single dict and uses .sort() (which
returns None); extract the list of nodes from edges (e.g. map or list
comprehension to get each edge["node"]) for both actual and expected data if
needed, then use sorted(..., key=operator.itemgetter("name")) on each list and
compare the two sorted lists.
```python
    graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
)
count = await InfrahubBranchType.get_list_count(graphql_context=info.context)
return {"count": count, "edges": {"node": branches}}
```
I think what Aaron meant earlier regarding the separation of count and edges is that users can send in queries that request different information:
In this case we'd expect to return both edges and branches:
```graphql
query MyQuery {
  InfrahubBranch {
    count
    edges {
      node {
        name
      }
    }
  }
}
```

If we however have a query like the following one, we don't need to send an additional query for the "count", since the request doesn't expect to have this information:
```graphql
query MyQuery {
  InfrahubBranch {
    edges {
      node {
        name
      }
    }
  }
}
```

Finally, it could be that someone just wants to count the branches, and in this case we don't need to query the database for the details of each branch:

```graphql
query MyQuery {
  InfrahubBranch {
    count
  }
}
```

One thing that I noticed is that we have different filters for BranchType.get_list compared to InfrahubBranchType.get_list_count, where we filter out the -global- branch with .get_list() but not with get_list_count(). So the count returns an extra number which is never seen.
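The three selection patterns above suggest a resolver that only runs the queries the selection set actually asks for. A minimal sketch, with the fetcher helpers standing in for the real database calls:

```python
import asyncio


async def fetch_branches() -> list[dict]:
    # Stand-in for the database query returning branch records
    return [{"name": "main"}, {"name": "branch3"}]


async def fetch_branch_count() -> int:
    # Stand-in for the dedicated count query
    return 2


async def resolve_infrahub_branch(requested_fields: set[str]) -> dict:
    """Only touch the database for the parts of the selection that were requested."""
    result: dict = {}
    if "edges" in requested_fields:
        branches = await fetch_branches()
        # "edges" is an array; each edge wraps a single "node"
        result["edges"] = [{"node": branch} for branch in branches]
    if "count" in requested_fields:
        result["count"] = await fetch_branch_count()
    return result


print(asyncio.run(resolve_infrahub_branch({"count"})))  # no branch details fetched
```

A count-only selection thus skips the per-branch query entirely, and an edges-only selection skips the count query.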
A comment regarding the structure. We'd expect "edges" to be an array, instead of having "node" be an array. I.e. "edges" should be an array and "node" should be an object.
```python
return {"count": count, "edges": {"node": branches}}
```

On the topic of structure, I think we should align the output so that it matches the other nodes, moving away from what we had in the original Branch query. A query would instead look like this:
```graphql
query MyQuery {
  InfrahubBranch {
    edges {
      node {
        name {
          value # <-- This is the new part we don't currently have.
        }
      }
    }
  }
}
```
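The value-wrapped shape can be produced from a flat branch record with a small mapping step; a sketch (the field names are illustrative, not the actual Branch attributes):

```python
def wrap_branch_fields(branch: dict) -> dict:
    """Map a flat branch record to the value-wrapped GraphQL shape."""
    return {field: {"value": value} for field, value in branch.items()}


wrapped = wrap_branch_fields({"name": "main", "sync_with_git": True})
print(wrapped["name"])  # {'value': 'main'}
```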
| assert all_branches.errors is None | ||
| assert all_branches.data | ||
| assert all_branches.data["InfrahubBranch"]["count"] == 13 |
This highlights the issue I mentioned above with the count. When we start out with the default branch and then create 10 branches, the -global- branch is probably part of the count, along with one other branch. I'm not sure if this is due to the testing environment and we'd have some duplicate of the default branch. It would be good to determine where the extra branch that isn't the global branch comes from.
```python
        for i in range(10)
    ],
]
assert all_branches.data["InfrahubBranch"]["edges"]["node"].sort(
```
As CodeRabbit is mentioning, all_branches.data["InfrahubBranch"]["edges"]["node"] should not be a valid field, as all_branches.data["InfrahubBranch"]["edges"] should be an array.
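In other words, a client consumes the edges array by iterating it and reading each edge's single node, for example:

```python
edges = [
    {"node": {"name": "main"}},
    {"node": {"name": "branch3"}},
]

# Each edge wraps exactly one node; collect them with a comprehension.
nodes = [edge["node"] for edge in edges]
print([node["name"] for node in nodes])  # ['main', 'branch3']
```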
Actionable comments posted: 2
♻️ Duplicate comments (1)
backend/infrahub/graphql/types/branch.py (1)
54-56: Add class docstring.

The InfrahubBranchType class is missing a docstring, which violates the coding guidelines requiring Google-style docstrings for all Python classes. As per coding guidelines, apply this diff:

```diff
 class InfrahubBranchType(InfrahubObjectType):
+    """Paginated response type for branch queries.
+
+    Provides edges-based pagination with total count for branch listings.
+    The count excludes the global branch.
+    """
     count = Field(Int, description="Total number of items")
     edges = Field(List(of_type=NonNull(InfrahubBranchEdge)))
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
backend/infrahub/graphql/queries/branch.py(2 hunks)backend/infrahub/graphql/types/branch.py(2 hunks)backend/tests/unit/graphql/queries/test_branch.py(4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- backend/tests/unit/graphql/queries/test_branch.py
- backend/infrahub/graphql/queries/branch.py
🧰 Additional context used
📓 Path-based instructions (2)
backend/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Run backend tests with `pytest` or via `invoke` tasks
Files:
backend/infrahub/graphql/types/branch.py
**/*.py
📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)
**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful
Use ruff and mypy to validate and lint Python files
**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with `async def`
Use `await` for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking
**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with `async def`
Await asynchronous calls with `await`
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy
Files:
backend/infrahub/graphql/types/branch.py
🧬 Code graph analysis (1)
backend/infrahub/graphql/types/branch.py (5)
backend/infrahub/core/branch/models.py (3)
Branch(31-550)get_list_count(173-184)get_by_name(137-154)backend/infrahub/exceptions.py (1)
BranchNotFoundError(194-200)backend/infrahub/core/query/__init__.py (1)
count(586-603)backend/infrahub/graphql/initialization.py (1)
GraphqlContext(34-76)backend/infrahub/database/__init__.py (1)
start_session(222-239)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: validate-generated-documentation
- GitHub Check: backend-tests-unit
🔇 Additional comments (1)
backend/infrahub/graphql/types/branch.py (1)
5-5: LGTM! Imports are appropriate.

The added imports (Int, List, NonNull from graphene, and BranchNotFoundError) are correctly placed and necessary for the new pagination types. Also applies to: 10-10.
```python
class InfrahubBranchEdge(InfrahubObjectType):
    node = Field(BranchType, required=True)
```
🛠️ Refactor suggestion | 🟠 Major
Add class docstring.
The InfrahubBranchEdge class is missing a docstring, which violates the coding guidelines requiring Google-style docstrings for all Python classes.
As per coding guidelines, apply this diff:
class InfrahubBranchEdge(InfrahubObjectType):
+ """Edge type for paginated branch queries.
+
+ Wraps a BranchType node in the edges/nodes pagination pattern.
+ """
node = Field(BranchType, required=True)🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 50 to 51, the
InfrahubBranchEdge class is missing a Google-style class docstring; add a
concise docstring immediately below the class definition describing the purpose
of the class (an Edge wrapper for BranchType in GraphQL connections) and
document its attributes (e.g., node: BranchType — the branch node at the edge),
following the project’s Google-style docstring format and keeping wording brief
and descriptive.
Actionable comments posted: 0
♻️ Duplicate comments (2)
backend/infrahub/graphql/types/branch.py (2)
50-51: Add class docstring for InfrahubBranchEdge.
Missing Google-style docstring per coding guidelines. Keep it concise. Apply this diff:

```diff
 class InfrahubBranchEdge(InfrahubObjectType):
+    """Edge wrapper for BranchType.
+
+    Attributes:
+        node: The branch node at this edge.
+    """
     node = Field(BranchType, required=True)
```

Based on coding guidelines.
58-66: Count misreported when filters are used; add docstring and respect filters.
Current logic subtracts 1 whenever the global branch exists, even if a name/ids filter excludes it. This undercounts. Also missing docstring. Apply this minimal, safe fix (handles name; guards against negatives) and add a Google-style docstring:

```diff
 @classmethod
 async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
-    async with graphql_context.db.start_session(read_only=True) as db:
-        count = await Branch.get_list_count(db=db, **kwargs)
-        try:
-            await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db)
-            return count - 1
-        except BranchNotFoundError:
-            return count
+    """Return filtered branch count excluding the global branch.
+
+    Args:
+        graphql_context: GraphQL context containing database connection.
+        **kwargs: Filters forwarded to Branch.get_list_count (e.g., ids, name).
+
+    Returns:
+        Count of branches matching filters, excluding GLOBAL_BRANCH_NAME.
+    """
+    async with graphql_context.db.start_session(read_only=True) as db:
+        count = await Branch.get_list_count(db=db, **kwargs)
+        if count == 0:
+            return 0
+        # If a name filter is provided and it's not the global branch, do not adjust.
+        name = kwargs.get("name")
+        if name is not None and name != GLOBAL_BRANCH_NAME:
+            return count
+        try:
+            await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db)
+            return max(count - 1, 0)
+        except BranchNotFoundError:
+            return count
```

If ids filtering is used, adjust only when the global branch id is included. Please confirm which attribute represents the GraphQL id (e.g., uuid vs id) so we can add that branch safely.

Run this to locate callers and confirm whether ids/name filters are passed to get_list_count:

```shell
#!/bin/bash
# Find usages and inspect how filters are forwarded.
rg -nP -C3 'get_list_count\s*\(' backend | sed -n '1,200p'
rg -nP -C3 'InfrahubBranchType\s*\.\s*get_list_count' backend
rg -nP -C3 'InfrahubBranchQueryList|InfrahubBranch' backend/infrahub/graphql/queries
```
🧹 Nitpick comments (1)
backend/infrahub/graphql/types/branch.py (1)
54-57: Docstring + field declarations: make required and use Int/List directly.
Add class docstring; make edges/count required; prefer Int()/List() over Field(Int, ...) for scalars; simplify NonNull wrapping. Apply this diff:

```diff
-class InfrahubBranchType(InfrahubObjectType):
-    count = Field(Int, description="Total number of items")
-    edges = Field(NonNull(List(of_type=NonNull(InfrahubBranchEdge))))
+class InfrahubBranchType(InfrahubObjectType):
+    """Paginated branch list response.
+
+    Attributes:
+        count: Total number of items matching the filters (excludes global).
+        edges: The paginated list of branch edges.
+    """
+    count = Int(required=True, description="Total number of items matching the filters (excludes global)")
+    edges = List(NonNull(InfrahubBranchEdge), required=True, description="Edges for the current page")
```

Optional: consider renaming count to total_count to align with common GraphQL conventions; if done, update schema/resolvers accordingly. Based on coding guidelines.
📒 Files selected for processing (3)
backend/infrahub/graphql/queries/branch.py(2 hunks)backend/infrahub/graphql/types/branch.py(2 hunks)schema/schema.graphql(2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/infrahub/graphql/queries/branch.py
🧰 Additional context used
🧬 Code graph analysis (1)
backend/infrahub/graphql/types/branch.py (4)
backend/infrahub/core/branch/models.py (3)
Branch(31-550)get_list_count(173-184)get_by_name(137-154)backend/infrahub/exceptions.py (1)
BranchNotFoundError(194-200)backend/infrahub/graphql/initialization.py (1)
GraphqlContext(34-76)backend/infrahub/database/__init__.py (1)
start_session(222-239)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: E2E-testing-version-upgrade / From 1.3.0
- GitHub Check: backend-benchmark
- GitHub Check: E2E-testing-playwright
- GitHub Check: validate-generated-documentation
- GitHub Check: graphql-schema
- GitHub Check: backend-tests-functional
- GitHub Check: backend-docker-integration
- GitHub Check: backend-tests-integration
- GitHub Check: backend-tests-unit
- GitHub Check: Cloudflare Pages
🔇 Additional comments (2)
schema/schema.graphql (2)
6883-6891: ✅ Previous critical issue resolved — node field is now singular.

The node field in InfrahubBranchEdge is now correctly declared as node: Branch! (singular), fixing the relay-style pagination cardinality issue flagged in the previous review. The schema now properly structures pagination with:

- InfrahubBranchType.edges: [InfrahubBranchEdge!]! — array of edges
- Each InfrahubBranchEdge.node: Branch! — single node per edge

This correctly implements offset/limit pagination semantics.
10410-10411: Query field definition looks good.

The new InfrahubBranch query with pagination (offset, limit) and filtering (ids, name) parameters returns the correct InfrahubBranchType! structure.
Actionable comments posted: 3
🧹 Nitpick comments (5)
backend/infrahub/graphql/types/__init__.py (1)
24-24: Export InfrahubBranchEdge to keep the public API consistent.You expose TaskLogEdge here; InfrahubBranchEdge should be exported similarly for consumers importing from types.
Apply this diff:
-from .branch import BranchType, InfrahubBranch, InfrahubBranchType +from .branch import BranchType, InfrahubBranch, InfrahubBranchType, InfrahubBranchEdge @@ "IPNetworkType", + "InfrahubBranchEdge", "InfrahubBranch", "InfrahubBranchType",Also applies to: 44-45
backend/infrahub/graphql/types/branch.py (4)
34-36: Add a short docstring for _map_fields_to_graphql.Document the purpose (filter out GLOBAL and map selected fields) and expected shapes of inputs/outputs.
As per coding guidelines.
54-64: Docstrings missing for new GraphQL types.Please add Google‑style class docstrings for RequiredStringValueField, NonRequiredStringValueField, NonRequiredBooleanValueField, and InfrahubBranch to explain the value‑wrapper pattern and field mapping logic.
As per coding guidelines.
Also applies to: 66-101
103-110: Add class docstring to InfrahubBranchEdge.Briefly describe the edge wrapper and its node semantics.
As per coding guidelines.
111-119: Add docstring to get_list_count.Explain that it returns the count excluding GLOBAL_BRANCH_NAME and why the existence check is needed.
As per coding guidelines.
📒 Files selected for processing (4)
backend/infrahub/graphql/queries/branch.py(2 hunks)backend/infrahub/graphql/types/__init__.py(2 hunks)backend/infrahub/graphql/types/branch.py(3 hunks)backend/tests/unit/graphql/queries/test_branch.py(4 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
backend/tests/unit/graphql/queries/test_branch.py (4)
backend/infrahub/graphql/initialization.py (1)
prepare_graphql_params(79-125)backend/tests/helpers/test_app.py (3)
default_branch(54-61)service(112-113)client(146-173)backend/tests/unit/conftest.py (1)
session_admin(2679-2681)backend/tests/helpers/graphql.py (1)
graphql(26-61)
backend/infrahub/graphql/types/__init__.py (1)
backend/infrahub/graphql/types/branch.py (3)
BranchType(17-51)InfrahubBranch(66-100)InfrahubBranchType(107-119)
backend/infrahub/graphql/queries/branch.py (3)
backend/infrahub/graphql/field_extractor.py (1)
extract_graphql_fields(67-69)backend/infrahub/graphql/types/branch.py (5)
BranchType(17-51)InfrahubBranch(66-100)InfrahubBranchType(107-119)get_list(39-51)get_list_count(112-119)backend/infrahub/core/branch/models.py (2)
get_list(157-170)get_list_count(173-184)
backend/infrahub/graphql/types/branch.py (5)
backend/infrahub/core/branch/models.py (3)
Branch(31-550)get_list_count(173-184)get_by_name(137-154)backend/infrahub/exceptions.py (1)
BranchNotFoundError(194-200)backend/infrahub/core/node/standard.py (2)
to_graphql(71-86)get(131-138)backend/infrahub/database/__init__.py (2)
get(84-88)start_session(222-239)backend/infrahub/graphql/initialization.py (1)
GraphqlContext(34-76)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cloudflare Pages
```python
async def infrahub_branch_resolver(
    root: dict,  # noqa: ARG001
    info: GraphQLResolveInfo,
    limit: int | None = None,
    offset: int | None = None,
) -> dict[str, Any]:
    fields = extract_graphql_fields(info)
    result: dict[str, Any] = {}
    if "edges" in fields:
        branches = await InfrahubBranch.get_list(
            graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
        )
        result["edges"] = [{"node": branch} for branch in branches]
    if "count" in fields:
        result["count"] = await InfrahubBranchType.get_list_count(graphql_context=info.context)
    return result
```
Resolver ignores ids/name and forwards None pagination; fix signature, validation, and forwarding.
Currently ids/name are accepted by the field but dropped by the resolver; passing limit/offset=None can also override model defaults. Add proper params, validate, and only pass provided args.
As per coding guidelines, add a docstring too. Apply:
-async def infrahub_branch_resolver(
- root: dict, # noqa: ARG001
- info: GraphQLResolveInfo,
- limit: int | None = None,
- offset: int | None = None,
-) -> dict[str, Any]:
- fields = extract_graphql_fields(info)
- result: dict[str, Any] = {}
- if "edges" in fields:
- branches = await InfrahubBranch.get_list(
- graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
- )
- result["edges"] = [{"node": branch} for branch in branches]
- if "count" in fields:
- result["count"] = await InfrahubBranchType.get_list_count(graphql_context=info.context)
- return result
+async def infrahub_branch_resolver(
+ root: dict, # noqa: ARG001
+ info: GraphQLResolveInfo,
+ ids: list[str] | None = None,
+ name: str | None = None,
+ limit: int | None = None,
+ offset: int | None = None,
+) -> dict[str, Any]:
+ """Resolve paginated InfrahubBranch query.
+
+ Args:
+ root: Unused GraphQL root value.
+ info: Resolve info with context and selection set.
+ ids: Optional list of branch IDs to filter.
+ name: Optional branch name filter.
+ limit: Max items to return; must be >= 1 if provided.
+ offset: Number of items to skip; must be >= 0 if provided.
+ Returns:
+ Dict containing requested fields (edges and/or count).
+ """
+ fields = extract_graphql_fields(info)
+ result: dict[str, Any] = {}
+
+ # Normalize pagination; only pass if provided and valid
+ query_kwargs: dict[str, int] = {}
+ if isinstance(limit, int):
+ if limit < 1:
+ raise ValueError("limit must be >= 1")
+ query_kwargs["limit"] = limit
+ if isinstance(offset, int):
+ if offset < 0:
+ raise ValueError("offset must be >= 0")
+ query_kwargs["offset"] = offset
+
+ if "edges" in fields:
+ node_fields = fields.get("edges", {}).get("node", {})
+ branches = await InfrahubBranch.get_list(
+ graphql_context=info.context,
+ fields=node_fields,
+ ids=ids,
+ name=name,
+ **query_kwargs,
+ )
+ result["edges"] = [{"node": branch} for branch in branches]
+
+ if "count" in fields:
+ result["count"] = await InfrahubBranchType.get_list_count(
+ graphql_context=info.context, ids=ids, name=name
+ )
+    return result

As per coding guidelines.
🤖 Prompt for AI Agents
In backend/infrahub/graphql/queries/branch.py around lines 33 to 48, the
resolver currently drops incoming ids/name arguments, and unconditionally
forwards limit/offset even when None (overriding model defaults) and lacks a
docstring; update the function signature to accept optional ids: list[str] |
None and name: str | None (with proper typing), add a short docstring describing
purpose and params, validate inputs (ensure ids is a non-empty list if provided
and name is a non-empty string), build kwargs for InfrahubBranch.get_list and
InfrahubBranchType.get_list_count that include only provided arguments (i.e.,
include limit/offset only when not None, include ids/name when provided), and
pass fields extracted from info for edges as before so the resolver respects and
forwards only explicit arguments to the model calls.
```python
gql_params = await prepare_graphql_params(db=db, branch=default_branch, service=service)
all_branches = await graphql(
    schema=gql_params.schema,
    source=query,
    context_value=gql_params.context,
    root_value=None,
    variable_values={},
)
assert all_branches.errors is None
assert all_branches.data
assert all_branches.data["InfrahubBranch"]["count"] == 12  # 10 created here + 1 created above + main branch

expected_branches = [
    {
        "description": {"value": "Default Branch"},
        "name": {"value": "main"},
    },
    {
        "description": {"value": "my description"},
        "name": {"value": "branch3"},
    },
    *[
        {
            "description": {"value": f"sample description {i}"},
            "name": {"value": f"sample-branch-{i}"},
        }
        for i in range(10)
    ],
]
```
🛠️ Refactor suggestion | 🟠 Major
Fix count and expected set; avoid cross-test state assumptions.
- Count should exclude the global branch and only include branches created in this test + main. That’s 11, not 12.
- expected_branches includes branch3 which is created in a different test; fixtures reset DB, so it won’t exist here.
- Comparing a paginated page to the full set is incorrect.
Refactor to derive ordering from the API, then compare the correct slice:
- assert all_branches.data["InfrahubBranch"]["count"] == 12 # 10 created here + 1 created above + main branch
-
- expected_branches = [
- {
- "description": {"value": "Default Branch"},
- "name": {"value": "main"},
- },
- {
- "description": {"value": "my description"},
- "name": {"value": "branch3"},
- },
- *[
- {
- "description": {"value": f"sample description {i}"},
- "name": {"value": f"sample-branch-{i}"},
- }
- for i in range(10)
- ],
- ]
- all_branches_data_only = [branch.get("node") for branch in all_branches.data["InfrahubBranch"]["edges"]]
- assert all_branches_data_only.sort(key=lambda x: x["name"]["value"]) == expected_branches.sort(
- key=lambda x: x["name"]["value"]
- )
+ # Verify total count (10 created here + main), global branch is excluded by the API.
+ assert all_branches.data["InfrahubBranch"]["count"] == 11
+
+ # Establish server ordering with a baseline query, then validate the paginated slice.
+ baseline_query = """
+ query {
+ InfrahubBranch(offset: 0, limit: 1000) {
+ edges {
+ node {
+ name { value }
+ description { value }
+ }
+ }
+ }
+ }
+ """
+ gql_params = await prepare_graphql_params(db=db, branch=default_branch, service=service)
+ baseline = await graphql(schema=gql_params.schema, source=baseline_query, context_value=gql_params.context)
+ assert baseline.errors is None
+ all_nodes = [e["node"] for e in baseline.data["InfrahubBranch"]["edges"]]
+
+ # Current page: offset=2, limit=5
+ page_nodes = [e["node"] for e in all_branches.data["InfrahubBranch"]["edges"]]
+ assert len(page_nodes) == 5
+    assert page_nodes == all_nodes[2:7]

Committable suggestion skipped: line range outside the PR's diff.
```python
all_branches_data_only = [branch.get("node") for branch in all_branches.data["InfrahubBranch"]["edges"]]
assert all_branches_data_only.sort(key=lambda x: x["name"]["value"]) == expected_branches.sort(
    key=lambda x: x["name"]["value"]
)
```
Replace list.sort() with sorted; current assertion always compares None to None.
list.sort() returns None. Use sorted(...) or sort in place and compare the lists. The refactor above removes this pitfall and validates the actual page slice deterministically.
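A quick illustration of the pitfall:

```python
data = [{"name": "b"}, {"name": "a"}]

# list.sort() mutates in place and returns None, so comparing two .sort()
# return values makes the assertion vacuously True (None == None).
assert data.sort(key=lambda x: x["name"]) is None

# sorted() returns a new list, which is what the assertion should compare.
ordered = sorted(data, key=lambda x: x["name"])
print([item["name"] for item in ordered])  # ['a', 'b']
```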
🤖 Prompt for AI Agents
In backend/tests/unit/graphql/queries/test_branch.py around lines 232 to 235,
the test uses list.sort() inside the assertion which returns None so the
assertion compares None to None; replace list.sort() with sorted(...) (or call
.sort() on each list before the assertion) so you compare the actual sorted
lists deterministically. Ensure both all_branches_data_only and
expected_branches are sorted by the same key (lambda x: x["name"]["value"]) and
then assert equality.
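The pitfall described here is easy to reproduce. This short standalone snippet (not part of the PR) shows why the original assertion always passed:

```python
# list.sort() sorts in place and returns None, so comparing the return
# values of two .sort() calls compares None == None -- always True.
left = [{"name": {"value": "b"}}, {"name": {"value": "a"}}]
right = [{"name": {"value": "z"}}]  # clearly different content

broken = left.sort(key=lambda x: x["name"]["value"]) == right.sort(key=lambda x: x["name"]["value"])
print(broken)  # True, even though the lists differ

# sorted() returns a new list, so the comparison checks real content.
fixed = sorted(left, key=lambda x: x["name"]["value"]) == sorted(right, key=lambda x: x["name"]["value"])
print(fixed)  # False
```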
Actionable comments posted: 5
♻️ Duplicate comments (3)
backend/infrahub/graphql/types/branch.py (3)
118-126: Add method docstring. The get_list_count method is missing a docstring, which violates the coding guidelines. While a past review comment indicated this was addressed in commit 32c72ad, the docstring is still missing in the current code. As per coding guidelines, apply this diff:
 @classmethod
 async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
+    """Get the total count of branches excluding the global branch.
+
+    Args:
+        graphql_context: The GraphQL context containing database connection.
+        **kwargs: Additional filter parameters passed to Branch.get_list_count.
+
+    Returns:
+        The count of branches, excluding GLOBAL_BRANCH_NAME if it exists.
+    """
     async with graphql_context.db.start_session(read_only=True) as db:
         count = await Branch.get_list_count(db=db, **kwargs)
         try:
             await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db)
             return count - 1
         except BranchNotFoundError:
             return count
110-111: Add class docstring. The InfrahubBranchEdge class is missing a docstring, which violates the coding guidelines. This was previously flagged but the docstring appears to still be missing in the current code. As per coding guidelines, apply this diff:
 class InfrahubBranchEdge(InfrahubObjectType):
+    """Edge type for paginated branch queries.
+
+    Wraps an InfrahubBranch node in the edges/nodes pagination pattern.
+    """
     node = Field(InfrahubBranch, required=True)
114-117: Add class docstring. The InfrahubBranchType class is missing a docstring, which violates the coding guidelines. This was previously flagged but the docstring appears to still be missing in the current code. As per coding guidelines, apply this diff:
 class InfrahubBranchType(InfrahubObjectType):
+    """Paginated response type for branch queries.
+
+    Provides pagination metadata (count) and a list of branch edges following
+    the GraphQL connection pattern.
+    """
     count = Field(Int, description="Total number of items")
     edges = Field(NonNull(List(of_type=NonNull(InfrahubBranchEdge))))
🧹 Nitpick comments (2)
schema/schema.graphql (2)
10789-10790: Query field design for paginated branch retrieval. The InfrahubBranch query accepts ids, limit, name, and offset parameters. Consider documenting the behavior when conflicting filters are provided (e.g., when both ids and offset/limit are specified). Also, consider adding a documented maximum limit or validation constraint to prevent unbounded queries. Would you like me to check how this is handled in the resolver and suggest updates to the schema documentation if needed?
9406-9412: Value wrapper types introduced. Four new wrapper types are added: NonRequiredBooleanValueField, NonRequiredStringValueField, RequiredStringValueField, and StatusField. Ensure this pattern is consistent with existing similar types in the schema (e.g., check if other query results use similar wrapper patterns). If this is a new convention, consider documenting it for future maintainers. Also applies to: 10938-10940, 11035-11037
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
backend/infrahub/graphql/types/branch.py (3 hunks)
backend/tests/unit/graphql/queries/test_branch.py (1 hunks)
schema/schema.graphql (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/tests/unit/graphql/queries/test_branch.py
🧰 Additional context used
📓 Path-based instructions (2)
backend/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Run backend tests with pytest or via invoke tasks
Files:
backend/infrahub/graphql/types/branch.py
**/*.py
📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)
**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful
Use ruff and mypy to validate and lint Python files
**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking
**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy
Files:
backend/infrahub/graphql/types/branch.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: E2E-testing-version-upgrade / From 1.3.0
- GitHub Check: backend-benchmark
- GitHub Check: E2E-testing-playwright
- GitHub Check: backend-tests-unit
- GitHub Check: backend-docker-integration
🔇 Additional comments (2)
schema/schema.graphql (2)
7258-7266: Standard relay-style pagination structure. The InfrahubBranchEdge.node field is correctly singular (node: InfrahubBranch!), and the InfrahubBranchType properly exposes count and edges for standard offset/limit pagination semantics. The cardinality is correct.
11173-11173: Minor directive formatting change. Line 11173 adjusts the @expand directive definition. This appears to be a cosmetic change with no semantic impact.
@staticmethod
async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
    return [await obj.to_graphql(fields=fields) for obj in objs if obj.name != GLOBAL_BRANCH_NAME]
🛠️ Refactor suggestion | 🟠 Major
Add method docstring.
The _map_fields_to_graphql static method is missing a docstring, which violates the coding guidelines requiring Google-style docstrings for all Python methods.
As per coding guidelines, apply this diff:
@staticmethod
async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
+ """Map Branch objects to GraphQL field dictionaries, excluding the global branch.
+
+ Args:
+ objs: List of Branch objects to map.
+ fields: Dictionary of field names to include in the output.
+
+ Returns:
+ List of dictionaries containing GraphQL field data for each branch.
+ """
     return [await obj.to_graphql(fields=fields) for obj in objs if obj.name != GLOBAL_BRANCH_NAME]

🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 36 to 38, the async
static method _map_fields_to_graphql is missing a Google-style docstring; add a
docstring above the method that briefly describes its purpose (map a list of
Branch objects to GraphQL dicts while excluding branches named
GLOBAL_BRANCH_NAME), documents parameters (objs: list[Branch], fields: dict) and
their types, notes that the method is asynchronous and returns list[dict[str,
Any]], and describes the return value and behavior (awaits each obj.to_graphql
and filters out GLOBAL_BRANCH_NAME).
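For readers unfamiliar with the helper being discussed, the filter-then-map pattern it implements can be sketched in isolation. The classes below are simplified stand-ins, not the real Infrahub models:

```python
import asyncio

GLOBAL_BRANCH_NAME = "-global-"  # stand-in for the real constant

class FakeBranch:
    """Minimal stand-in for Infrahub's Branch model."""

    def __init__(self, name: str) -> None:
        self.name = name

    async def to_graphql(self, fields: dict) -> dict:
        # The real method resolves each requested field; here we just echo the name.
        return {"name": self.name}

async def map_fields_to_graphql(objs: list[FakeBranch], fields: dict) -> list[dict]:
    """Map branches to GraphQL dicts, skipping the internal global branch."""
    return [await obj.to_graphql(fields=fields) for obj in objs if obj.name != GLOBAL_BRANCH_NAME]

branches = [FakeBranch("main"), FakeBranch(GLOBAL_BRANCH_NAME), FakeBranch("feature")]
result = asyncio.run(map_fields_to_graphql(branches, fields={"name": None}))
print([r["name"] for r in result])  # ['main', 'feature']
```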
class RequiredStringValueField(InfrahubObjectType):
    value = String(required=True)


class NonRequiredStringValueField(InfrahubObjectType):
    value = String(required=False)


class NonRequiredBooleanValueField(InfrahubObjectType):
    value = Boolean(required=False)


class StatusField(InfrahubObjectType):
    value = InfrahubBranchStatus(required=True)
🛠️ Refactor suggestion | 🟠 Major
Add docstrings and clarify the purpose of value wrapper classes.
All four value wrapper classes (RequiredStringValueField, NonRequiredStringValueField, NonRequiredBooleanValueField, StatusField) are missing docstrings, which violates the coding guidelines. Additionally, the purpose of these wrapper types is not immediately clear: they add a layer of indirection without explanation.
As per coding guidelines, add docstrings to each class explaining their purpose:
class RequiredStringValueField(InfrahubObjectType):
+ """Wrapper type for required string values in GraphQL responses.
+
+ Provides a consistent structure for string fields that must always have a value.
+ """
value = String(required=True)
class NonRequiredStringValueField(InfrahubObjectType):
+ """Wrapper type for optional string values in GraphQL responses.
+
+ Provides a consistent structure for string fields that may be null.
+ """
value = String(required=False)
class NonRequiredBooleanValueField(InfrahubObjectType):
+ """Wrapper type for optional boolean values in GraphQL responses.
+
+ Provides a consistent structure for boolean fields that may be null.
+ """
value = Boolean(required=False)
class StatusField(InfrahubObjectType):
+ """Wrapper type for branch status enum values in GraphQL responses.
+
+ Provides a consistent structure for status fields.
+ """
value = InfrahubBranchStatus(required=True)🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 56 to 70, the four
small wrapper classes lack docstrings and do not explain their purpose; add
brief docstrings to each class (RequiredStringValueField,
NonRequiredStringValueField, NonRequiredBooleanValueField, StatusField) that
state they are simple GraphQL value-wrapper types used to standardize field
payloads/typing (e.g., wrapping a single value for consistency across
mutations/queries), note whether the inner value is required or optional and the
expected type (String/Boolean/InfrahubBranchStatus), and include one-line
examples or usage intent if helpful — keep them concise and follow project
docstring style.
class InfrahubBranch(BranchType):
    id = String(required=True)
    created_at = String(required=False)

    name = Field(RequiredStringValueField, required=True)
    description = Field(NonRequiredStringValueField, required=False)
    origin_branch = Field(NonRequiredStringValueField, required=False)
    branched_from = Field(NonRequiredStringValueField, required=False)
    status = Field(StatusField, required=True)
    sync_with_git = Field(NonRequiredBooleanValueField, required=False)
    is_default = Field(NonRequiredBooleanValueField, required=False)
    is_isolated = Field(
        NonRequiredBooleanValueField, required=False, deprecation_reason="non isolated mode is not supported anymore"
    )
    has_schema_changes = Field(NonRequiredBooleanValueField, required=False)

    class Meta:
        description = "InfrahubBranch"
        name = "InfrahubBranch"
🛠️ Refactor suggestion | 🟠 Major
Add class docstring and clarify field definition pattern.
The InfrahubBranch class is missing a docstring, which violates the coding guidelines. Additionally, there's an inconsistency in field definitions: id and created_at are defined as plain String fields, while other fields use Field() wrappers with value wrapper types. This inconsistency is confusing.
As per coding guidelines, apply this diff:
class InfrahubBranch(BranchType):
+ """Extended branch type with wrapped field values for GraphQL pagination queries.
+
+ Provides an alternative representation where most fields are wrapped in value objects
+ to support consistent field resolution in paginated responses.
+ """
id = String(required=True)
created_at = String(required=False)

Consider documenting why id and created_at remain unwrapped while other fields use the value wrapper pattern.
@staticmethod
async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
    field_keys = fields.keys()
    result: list[dict[str, Any]] = []
    for obj in objs:
        if obj.name == GLOBAL_BRANCH_NAME:
            continue
        data = {}
        for field in field_keys:
            value = getattr(obj, field, None)
            if isinstance(fields.get(field), dict):
                data[field] = {"value": value}
            else:
                data[field] = value
        result.append(data)
    return result
🛠️ Refactor suggestion | 🟠 Major
Add method docstring and clarify field wrapping logic.
The _map_fields_to_graphql method is missing a docstring, which violates the coding guidelines. The field wrapping logic (lines 102-105) is also complex and not immediately clear: it wraps values in {"value": ...} dictionaries based on whether fields.get(field) is a dict, but the reasoning isn't documented.
As per coding guidelines, apply this diff:
@staticmethod
async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
+ """Map Branch objects to GraphQL field dictionaries with wrapped field values.
+
+ Fields that have nested field selections in the GraphQL query (indicated by
+ dict values in the fields parameter) are wrapped in {"value": ...} structures
+ to match the value wrapper types. Other fields are returned as-is.
+
+ Args:
+ objs: List of Branch objects to map.
+ fields: Dictionary mapping field names to their selection sets.
+
+ Returns:
+ List of dictionaries with field data, excluding GLOBAL_BRANCH_NAME.
+ """
field_keys = fields.keys()
result: list[dict[str, Any]] = []
for obj in objs:
if obj.name == GLOBAL_BRANCH_NAME:
continue
data = {}
for field in field_keys:
value = getattr(obj, field, None)
+ # Wrap value if the field has nested selections (value wrapper types)
if isinstance(fields.get(field), dict):
data[field] = {"value": value}
else:
data[field] = value
result.append(data)
         return result

🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 92 to 107, add a
concise docstring to _map_fields_to_graphql describing the method’s purpose,
parameters (objs: list[Branch], fields: dict) and return type (list[dict[str,
Any]]), and clarify the field-wrapping logic: when fields.get(field) is a dict
it indicates GraphQL requested subfields so the code should wrap the raw
attribute value in {"value": <attr>} to match the expected nested GraphQL shape;
update the docstring to explain this behavior and its rationale so the wrapping
on lines 102-105 is clear to readers and maintainers.
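The wrapping rule under discussion (emit {"value": ...} only when the query selected subfields) can be illustrated with a small self-contained sketch; the data below is illustrative, not the real resolver output:

```python
def wrap_selected_fields(obj: dict, fields: dict) -> dict:
    """Wrap a value as {"value": ...} when the GraphQL selection has subfields.

    A dict in `fields` means the client asked for nested subfields
    (e.g. name { value }); None means a plain scalar selection.
    """
    data = {}
    for field, selection in fields.items():
        value = obj.get(field)
        if isinstance(selection, dict):
            data[field] = {"value": value}
        else:
            data[field] = value
    return data

branch = {"id": "abc123", "name": "feature-1"}
selection = {"id": None, "name": {"value": None}}  # id selected flat, name with a subfield
print(wrap_selected_fields(branch, selection))
# {'id': 'abc123', 'name': {'value': 'feature-1'}}
```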
| """InfrahubBranch""" | ||
| type InfrahubBranch { | ||
| branched_from: NonRequiredStringValueField | ||
| created_at: String | ||
| description: NonRequiredStringValueField | ||
| has_schema_changes: NonRequiredBooleanValueField | ||
| id: String! | ||
| is_default: NonRequiredBooleanValueField | ||
| is_isolated: NonRequiredBooleanValueField @deprecated(reason: "non isolated mode is not supported anymore") | ||
| name: RequiredStringValueField! | ||
| origin_branch: NonRequiredStringValueField | ||
| status: StatusField! | ||
| sync_with_git: NonRequiredBooleanValueField | ||
| } |
🧩 Analysis chain
Inconsistent handling of optional fields in InfrahubBranch type.
The created_at field (line 7246) is defined as a direct String type, while other optional fields (e.g., branched_from, description, has_schema_changes, etc.) use wrapper types like NonRequiredStringValueField or NonRequiredBooleanValueField. This inconsistency should be addressed: either wrap created_at as NonRequiredStringValueField or align the pattern across all optional fields for consistency.
Please clarify the intended pattern and ensure all optional fields follow the same convention.
---
🏁 Script executed:
#!/bin/bash
# Get full context around the InfrahubBranch type definition (lines 7240-7270)
sed -n '7240,7270p' schema/schema.graphql | cat -n
Length of output: 1048
🏁 Script executed:
#!/bin/bash
# Search for all created_at field definitions to see the pattern
rg -n 'created_at:' schema/schema.graphql | head -20
Length of output: 726
🏁 Script executed:
#!/bin/bash
# Search for other optional String fields to identify the wrapping pattern
rg -nP '\w+:\s+NonRequired(String|Boolean)ValueField' schema/schema.graphql | head -20
Length of output: 417
🏁 Script executed:
#!/bin/bash
# Search for fields defined as plain String (not wrapped) to see if there's a pattern
rg -nP '^\s+\w+:\s+String(\s*$|@|#)' schema/schema.graphql | head -20
Length of output: 506
🏁 Script executed:
#!/bin/bash
# Look for the wrapper type definitions to understand their purpose
rg -nA 2 'type NonRequiredStringValueField' schema/schema.graphql
Length of output: 67
Wrap created_at as NonRequiredStringValueField to maintain consistency within the InfrahubBranch type.
Within this type, all other optional String and Boolean fields use wrapper types (NonRequiredStringValueField, NonRequiredBooleanValueField). The created_at field (line 7246) breaks this pattern by using a plain String type. Update it to created_at: NonRequiredStringValueField to align with the established pattern.
🤖 Prompt for AI Agents
In schema/schema.graphql around lines 7243 to 7256, the InfrahubBranch type uses
a plain String for created_at while all other optional string/boolean fields use
wrapper types; change created_at to use NonRequiredStringValueField to match the
pattern and maintain consistency across optional fields in this type.
Actionable comments posted: 1
♻️ Duplicate comments (1)
schema/schema.graphql (1)
7243-7256: created_at field should use NonRequiredStringValueField for consistency. This issue was flagged in a previous review but remains unresolved. The created_at field (line 7246) uses a plain String type, while all other optional fields in the InfrahubBranch type use wrapper types (NonRequiredStringValueField, NonRequiredBooleanValueField). Update created_at to match the established pattern. Apply this diff:
created_atfield (line 7246) uses a plainStringtype, while all other optional fields in theInfrahubBranchtype use wrapper types (NonRequiredStringValueField,NonRequiredBooleanValueField). Updatecreated_atto match the established pattern.Apply this diff:
type InfrahubBranch { branched_from: NonRequiredStringValueField - created_at: String + created_at: NonRequiredStringValueField description: NonRequiredStringValueField
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
.pre-commit-config.yaml (1 hunks)
schema/schema.graphql (5 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
.pre-commit-config.{yaml,yml}
📄 CodeRabbit inference engine (.github/instructions/tooling.instructions.md)
Don't use Git precommit
Files:
.pre-commit-config.yaml
🔇 Additional comments (2)
.pre-commit-config.yaml (1)
13-13: Clarify the purpose of excluding schema/schema.graphql from end-of-file-fixer. The original concern about violating coding guidelines is incorrect: precommit is already established in this codebase. Adding an exclude pattern to an existing hook is not adopting precommit; it's a configuration adjustment to existing infrastructure.
However, the substantive question remains: Why does this file need exclusion? The schema file appears to be version-controlled and manually maintained (not auto-generated). Before approving this pattern, confirm:
- Is schema/schema.graphql auto-generated (and if so, by what process)?
- Does it have special formatting requirements that conflict with end-of-file-fixer?
- Is there a reason it cannot conform to standard EOF validation?
If the exclusion is necessary, document the rationale in a comment. If it's a temporary workaround, consider the alternatives mentioned in the original review.
schema/schema.graphql (1)
7258-7266: Pagination structure is now correctly defined. The InfrahubBranchEdge.node field is correctly singular (node: InfrahubBranch!), and the InfrahubBranchType properly exposes count and edges for standard offset/limit pagination semantics. This fixes the earlier cardinality issue from the previous review.
schema/schema.graphql
Outdated
| """Retrieve paginated information about active branches.""" | ||
| InfrahubBranch(ids: [ID!], limit: Int, name: String, offset: Int): InfrahubBranchType! |
🧩 Analysis chain
Query field descriptor and resolver alignment.
The new InfrahubBranch query accepts ids, limit, name, and offset parameters for flexible pagination and filtering. Verify that the backend resolver (infrahub_branch_resolver per the summary) correctly handles these parameters, particularly the ids filter, which allows querying specific branches by identifier.
Run the following script to verify resolver implementation:
🏁 Script executed:
#!/bin/bash
# Search for the infrahub_branch_resolver implementation
rg -n 'infrahub_branch_resolver' --type=py -A 10
Length of output: 1153
I need to examine the complete resolver implementation and verify parameter handling.
🏁 Script executed:
#!/bin/bash
# Get the full infrahub_branch_resolver implementation
cat backend/infrahub/graphql/queries/branch.py | sed -n '33,57p'
Length of output: 885
🏁 Script executed:
#!/bin/bash
# Search for the GraphQL field descriptor definition for InfrahubBranch
rg -n 'InfrahubBranch' backend/infrahub/graphql/queries/branch.py -A 5 -B 5
Length of output: 1162
🏁 Script executed:
#!/bin/bash
# Search for InfrahubBranch.get_list to see if it handles ids/name
rg -n 'def get_list' backend/infrahub --type=py -A 15 | grep -A 15 'InfrahubBranch\|class InfrahubBranch'
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for the GraphQL field definition in schema generation or resolvers
rg -n 'ids.*limit.*name.*offset' backend/infrahub/graphql --type=py -C 3
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look at the complete branch.py file structure
cat backend/infrahub/graphql/queries/branch.py
Length of output: 1782
🏁 Script executed:
#!/bin/bash
# Search for where InfrahubBranch query field is defined with parameters
rg -n 'InfrahubBranch.*ids.*limit.*name.*offset' backend/infrahub --type=py
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for graphql schema generation that might define these parameters
rg -n 'ids.*limit.*name' backend/infrahub/graphql --type=py -B 5 -A 5
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Find where the GraphQL schema fields are defined or generated
fd -e py 'schema' backend/infrahub/graphql | head -20
Length of output: 119
🏁 Script executed:
#!/bin/bash
# Look for the actual schema file or mutation/query definitions
rg -n 'class.*Query\|def.*query' backend/infrahub/graphql --type=py -A 10 | head -60
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check if there's a schema generation file or mutation definitions
find backend/infrahub/graphql -name '*.py' -type f | xargs grep -l 'InfrahubBranch' | head -10
Length of output: 246
🏁 Script executed:
#!/bin/bash
# Look at the schema.graphql file directly around the InfrahubBranch definition
rg -n 'InfrahubBranch' schema/schema.graphql -B 5 -A 5
Length of output: 1886
🏁 Script executed:
#!/bin/bash
# Check if the resolver accepts **kwargs to handle additional parameters
rg -n 'infrahub_branch_resolver' backend/infrahub/graphql/queries/branch.py -A 20
Length of output: 893
🏁 Script executed:
#!/bin/bash
# Search for the specific line where the resolver is registered
rg -n 'resolver=infrahub_branch_resolver' backend/infrahub/graphql/queries/branch.py -B 10 -A 5
Length of output: 337
🏁 Script executed:
#!/bin/bash
# Find the field descriptor definition to see all parameters
rg -n 'class InfrahubBranch\|Field.*InfrahubBranch\|query_field' backend/infrahub/graphql --type=py -A 20 | head -80
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look for name parameter handling in the resolver or nearby
rg -n 'name' backend/infrahub/graphql/queries/branch.py -C 3
Length of output: 476
🏁 Script executed:
#!/bin/bash
# Check if there are multiple resolver definitions or overloads
rg -n 'def.*branch.*resolver\|@.*resolver' backend/infrahub/graphql/queries/branch.py
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Search for the field descriptor in the schema generation or graphene code
rg -n 'GraphQLField.*InfrahubBranch\|InfrahubBranchType' backend/infrahub --type=py -B 2 -A 8 | head -100
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check if parameters are dynamically extracted from the GraphQL query
rg -n 'extract_graphql_fields\|extract.*param' backend/infrahub/graphql --type=py -C 2 | head -60
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look at the complete resolver with more context
sed -n '30,70p' backend/infrahub/graphql/queries/branch.py
Length of output: 949
🏁 Script executed:
#!/bin/bash
# Find schema field definitions - look for where the query fields are built
rg -n 'define_query_field\|query_field_\|InfrahubBranch.*field' backend/infrahub --type=py -C 3 | head -100
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check if ids parameter handling is in a wrapper or middleware
rg -n 'ids' backend/infrahub/graphql/queries/branch.py -B 5 -A 5
Length of output: 655
🏁 Script executed:
#!/bin/bash
# Look for where the parameters are defined in the GraphQL schema
rg -n '^\s*ids:.*ID' schema/schema.graphql -B 5 -A 2
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check around line 10789-10790 in the schema file
sed -n '10785,10795p' schema/schema.graphql
Length of output: 946
🏁 Script executed:
#!/bin/bash
# Search more broadly for the InfrahubBranch field and its parameters
sed -n '10780,10800p' schema/schema.graphql
Length of output: 5632
🏁 Script executed:
#!/bin/bash
# Look at the InfrahubBranchType definition
rg -n 'type InfrahubBranchType' schema/schema.graphql -A 20
Length of output: 598
🏁 Script executed:
#!/bin/bash
# Check if there's a resolver parameter mismatch documented
rg -n 'ids.*name' backend/infrahub/graphql/queries/branch.py
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Look at what parameters InfrahubBranch.get_list accepts
fd -name '*.py' -path '*/infrahub/*' | xargs grep -n 'class InfrahubBranch' | head -5
Length of output: 230
🏁 Script executed:
#!/bin/bash
# Find and examine the InfrahubBranch class
rg -n 'class InfrahubBranch' backend/infrahub --type=py -A 30 | head -50
Length of output: 4177
🏁 Script executed:
#!/bin/bash
# Look for get_list method in InfrahubBranch
rg -n 'def get_list' backend/infrahub --type=py -B 2 -A 20 | grep -B 2 -A 20 'InfrahubBranch\|class.*Branch'
Length of output: 0
🏁 Script executed:
#!/bin/bash
# Check the actual GraphQL field definition more carefully
python3 << 'EOF'
import re
# Read schema.graphql
with open('schema/schema.graphql', 'r') as f:
    content = f.read()
# Find the line with InfrahubBranch query
lines = content.split('\n')
for i, line in enumerate(lines):
    if 'InfrahubBranch' in line and 'Retrieve paginated' in lines[i-1]:
        # Print context around this line
        start = max(0, i-5)
        end = min(len(lines), i+10)
        for j in range(start, end):
            print(f"{j+1}: {lines[j]}")
        break
EOF
Length of output: 1216
Resolver missing ids and name parameter handling.
The InfrahubBranch query field declares ids and name parameters, but the infrahub_branch_resolver function (backend/infrahub/graphql/queries/branch.py:33-48) only accepts limit and offset. These parameters are not forwarded to InfrahubBranch.get_list().
Add the missing parameters to the resolver signature and pass them through:
async def infrahub_branch_resolver(
root: dict,
info: GraphQLResolveInfo,
ids: list[str] | None = None,
name: str | None = None,
limit: int | None = None,
offset: int | None = None,
) -> dict[str, Any]:
# ... update get_list call to include ids and nameAlternatively, adopt the **kwargs pattern used in branch_resolver (line 16) to handle all parameters.
🤖 Prompt for AI Agents
In schema/schema.graphql around lines 10789-10790, the InfrahubBranch query
declares ids and name args but the resolver
backend/infrahub/graphql/queries/branch.py (lines ~33-48) only accepts limit and
offset and does not forward ids/name to InfrahubBranch.get_list(); update the
resolver signature to accept ids: list[str] | None and name: str | None (or
switch to **kwargs like branch_resolver at line 16) and pass these parameters
through to InfrahubBranch.get_list(ids=ids, name=name, limit=limit,
offset=offset) so the query arguments are respected.
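The **kwargs forwarding approach the review mentions can be sketched without any framework; the functions below are illustrative stand-ins, not Infrahub's real (async) resolver or InfrahubBranch.get_list:

```python
def get_list(ids=None, name=None, limit=None, offset=None):
    """Stand-in for InfrahubBranch.get_list; just records what it received."""
    return {"ids": ids, "name": name, "limit": limit, "offset": offset}

def infrahub_branch_resolver(root, info, **kwargs):
    """Accept whatever arguments the schema declares and forward them unchanged.

    With **kwargs, adding a new filter to the schema never requires touching
    the resolver signature again.
    """
    return get_list(**kwargs)

print(infrahub_branch_resolver(None, None, name="branch2", limit=10))
# {'ids': None, 'name': 'branch2', 'limit': 10, 'offset': None}
```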
@solababs, I think we're close to the first iteration of this query now. I see that there's a slight issue with how the branch IDs are presented if you start an environment and compare the responses to these two queries:

query MyQuery {
  InfrahubBranch {
    count
    edges {
      node {
        id
        name {
          value
        }
      }
    }
  }
  Branch {
    id
    name
  }
}

I think you're returning the internal Neo4j ID of these nodes which is typically different from the ID we present to the external world.
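One common way this bug appears (assuming graph-database-backed models, as here) is serializing the storage layer's internal element id instead of the node's own stable UUID. A hypothetical sketch of the distinction; none of these names come from the Infrahub codebase:

```python
import uuid

class StoredBranch:
    """Illustrative model: a node has both a DB-internal id and a stable UUID."""

    def __init__(self, name: str) -> None:
        self.internal_db_id = "4:abc:17"  # e.g. a Neo4j element id; not stable across stores
        self.uuid = uuid.uuid4()          # the identifier exposed to API clients
        self.name = name

def to_graphql_id(branch: StoredBranch) -> str:
    # Always expose the stable UUID, never the storage-layer id.
    return str(branch.uuid)

b = StoredBranch("main")
assert to_graphql_id(b) != b.internal_db_id
```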
InfrahubBranchQueryList = Field(
    InfrahubBranchType,
    ids=List(of_type=NonNull(ID)),
    name=String(),
These parameters don't seem to work now.
query MyQuery {
  InfrahubBranch(name: "branch2") {
    count
    edges {
      node {
        id
        name {
          value
        }
      }
    }
  }
}

I.e. infrahub_branch_resolver() doesn't accept the parameters ids or name, so the above query would fail. I think perhaps we can just remove both of them in this first iteration and then consider which filter options we want and need. We will want to add some of them for sure. But it could potentially be quite verbose if we start to add all of the variations that we dynamically generate for other node types, so I think that we can have a discussion around that after this PR is merged.
I.e. if we were to follow the current approach we'd have:
- name__value
- name__values
- description__value
- description__values
- etc for all attributes.
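The combinatorial growth being discussed is easy to see: generating the dynamically derived filter names for even a couple of attributes (naming convention taken from the examples above) already doubles the argument list per attribute:

```python
def filter_names(attributes):
    """Build <attr>__value / <attr>__values filter names, as in the examples above."""
    names = []
    for attr in attributes:
        names.append(f"{attr}__value")
        names.append(f"{attr}__values")
    return names

print(filter_names(["name", "description"]))
# ['name__value', 'name__values', 'description__value', 'description__values']
```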
…github.com/opsmill/infrahub into sb-20251013-infrahub-branch-query-ifc-1886
Actionable comments posted: 1
♻️ Duplicate comments (6)
backend/infrahub/graphql/types/branch.py (6)
36-38: Add method docstring. This method still lacks a docstring as flagged in previous reviews. Per coding guidelines, all Python methods require Google-style docstrings.
56-70: Add docstrings to value wrapper classes. All four value wrapper classes remain undocumented despite previous review feedback. The purpose and usage pattern of these wrapper types is unclear without docstrings.
72-91: Add class docstring. The InfrahubBranch class lacks a docstring despite previous review feedback. Per coding guidelines, document the class purpose and explain why certain fields remain unwrapped while others use value wrapper types.
92-110: Add method docstring. This method lacks a docstring, as flagged in previous reviews. Document the mapping logic, especially the value-wrapping behavior on lines 105-108 and the ID-to-UUID mapping on line 102.
113-114: Add class docstring. The InfrahubBranchEdge class lacks a docstring despite previous review feedback. Per coding guidelines, document that this is an edge wrapper for the GraphQL connection pattern.
117-120: Add class docstring. The InfrahubBranchType class lacks a docstring despite previous review feedback. Per coding guidelines, document this as a paginated response type with metadata and edges.
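For reference, the edge/connection shape these comments describe can be sketched in plain Python; the dataclasses below only mirror the GraphQL types and are illustrative, not the graphene implementation:

```python
from dataclasses import dataclass, field

@dataclass
class BranchNodeSketch:
    """Node payload with a value-wrapped attribute, e.g. name={"value": "main"}."""
    id: str
    name: dict

@dataclass
class BranchEdgeSketch:
    """Edge wrapper used by the GraphQL connection pattern."""
    node: BranchNodeSketch

@dataclass
class BranchConnectionSketch:
    """Paginated response type: count metadata plus a list of edges."""
    count: int
    edges: list = field(default_factory=list)

connection = BranchConnectionSketch(
    count=1,
    edges=[BranchEdgeSketch(node=BranchNodeSketch(id="a1b2", name={"value": "main"}))],
)
```

Documenting each of these three layers (node, edge, connection) is what the duplicate comments above are asking for.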
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
backend/infrahub/graphql/types/branch.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
backend/**/*
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Run backend tests with pytest or via invoke tasks
Files:
backend/infrahub/graphql/types/branch.py
**/*.py
📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)
**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful
Use ruff and mypy to validate and lint Python files
**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking
**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy
Files:
backend/infrahub/graphql/types/branch.py
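A minimal example of the Google-style docstring shape these guidelines describe; the function, its parameters, and the exception name are hypothetical and only illustrate the format:

```python
def get_branch(name: str, db: object) -> object:
    """Fetch a branch by name.

    Args:
        name: Name of the branch to retrieve.
        db: Active database session.

    Returns:
        The matching branch object.

    Raises:
        BranchNotFoundError: If no branch with that name exists.
    """
    raise NotImplementedError
```

Each section (summary line, Args without types, Returns, Raises) matches the checklist above.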
🧠 Learnings (3)
📓 Common learnings
Learnt from: ogenstad
PR: opsmill/infrahub#7447
File: schema/schema.graphql:324-329
Timestamp: 2025-10-30T12:43:45.629Z
Learning: In the GraphQL schema (schema/schema.graphql), the BranchStatus enum includes NEED_UPGRADE_REBASE. Branches with NEED_UPGRADE_REBASE status must support conflict resolution capabilities.
📚 Learning: 2025-10-30T12:43:45.629Z
Learnt from: ogenstad
PR: opsmill/infrahub#7447
File: schema/schema.graphql:324-329
Timestamp: 2025-10-30T12:43:45.629Z
Learning: In the GraphQL schema (schema/schema.graphql), the BranchStatus enum includes NEED_UPGRADE_REBASE. Branches with NEED_UPGRADE_REBASE status must support conflict resolution capabilities.
Applied to files:
backend/infrahub/graphql/types/branch.py
📚 Learning: 2025-07-22T08:13:56.198Z
Learnt from: CR
PR: opsmill/infrahub#0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-07-22T08:13:56.198Z
Learning: Documentation generated for Infrahub must reflect its novel approach, providing clarity around new concepts and demonstrating integration with familiar patterns from existing tools like Git, infrastructure-as-code, and CI/CD pipelines
Applied to files:
backend/infrahub/graphql/types/branch.py
🧬 Code graph analysis (1)
backend/infrahub/graphql/types/branch.py (3)
backend/infrahub/core/branch/models.py (3)
Branch (31-550), get_list_count (173-184), get_by_name (137-154)
backend/infrahub/exceptions.py (1)
BranchNotFoundError (194-200)
backend/infrahub/core/node/standard.py (2)
to_graphql (71-86), get (131-138)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: backend-tests-integration
- GitHub Check: validate-generated-documentation
- GitHub Check: backend-tests-functional
- GitHub Check: backend-tests-unit
- GitHub Check: backend-docker-integration
🔇 Additional comments (1)
backend/infrahub/graphql/types/branch.py (1)
101-103: The review comment concern is unfounded; the code correctly uses the external UUID. The Branch model inherits from StandardNode, which populates the uuid attribute from node["uuid"] (node properties), not from the Neo4j internal node ID. In backend/infrahub/core/node/standard.py, lines 113-114, the initialization is:

self.id = node.element_id
self.uuid = UUID(node["uuid"])

Line 102 of the GraphQL file correctly maps "id" to obj.uuid, which is the external UUID. The PR concern about returning internal Neo4j node IDs does not apply here: obj.uuid is already the external UUID from node properties.
@classmethod
async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
    async with graphql_context.db.start_session(read_only=True) as db:
        count = await Branch.get_list_count(db=db, **kwargs)
        try:
            await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db)
            return count - 1
        except BranchNotFoundError:
            return count
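The count adjustment in that method can be exercised in isolation. This sketch mocks the database lookups with a simple membership check, and the value of GLOBAL_BRANCH_NAME is an assumption made for illustration:

```python
GLOBAL_BRANCH_NAME = "-global-"  # assumed value, for illustration only

def adjusted_branch_count(branch_names: list[str]) -> int:
    """Mirror the adjustment: subtract the global branch from the total if present.

    The membership check stands in for Branch.get_by_name succeeding rather
    than raising BranchNotFoundError.
    """
    count = len(branch_names)
    if GLOBAL_BRANCH_NAME in branch_names:
        return count - 1
    return count

print(adjusted_branch_count(["main", "branch2", GLOBAL_BRANCH_NAME]))  # → 2
```

The global branch is an internal bookkeeping branch, so excluding it from the user-facing count keeps the paginated total consistent with the edges returned.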
🛠️ Refactor suggestion | 🟠 Major
Add method docstring (regression).
This method's docstring was marked as addressed in a previous commit (32c72ad) but is missing in the current code. Per coding guidelines, add a Google-style docstring documenting the purpose, parameters, and return value.
Apply this diff:
 @classmethod
 async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
+    """Get the total count of branches excluding the global branch.
+
+    Args:
+        graphql_context: The GraphQL context containing database connection.
+        **kwargs: Additional filter parameters passed to Branch.get_list_count.
+
+    Returns:
+        The count of branches, excluding GLOBAL_BRANCH_NAME if it exists.
+    """
     async with graphql_context.db.start_session(read_only=True) as db:

🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 121 to 129, the async
classmethod get_list_count is missing its Google-style docstring; add a
docstring directly under the method definition that briefly describes what the
method does (returns the number of Branch records accessible via the GraphQL
context, excluding the GLOBAL_BRANCH_NAME when present), documents parameters
(graphql_context: GraphqlContext, **kwargs: Any) and their roles, and specifies
the return type (int) and behavior (subtracts one if GLOBAL_BRANCH_NAME exists,
otherwise returns full count); keep wording concise and follow Google docstring
format for purpose, Args, and Returns.
Summary by CodeRabbit
New Features
Bug Fixes / Data
Tests
Chores