
Conversation


@solababs solababs commented Oct 16, 2025

Summary by CodeRabbit

  • New Features

    • Added InfrahubBranch GraphQL types and query for paginated branch listing (offset/limit), edges, total count, and expanded branch fields (id, name, status, origin, created_at, description, sync_with_git, is_default, has_schema_changes); deprecates non-isolated flag.
  • Bug Fixes / Data

    • Branch listing now excludes deleting branches and returns corrected total counts.
  • Tests

    • Added unit test validating paginated InfrahubBranch query (offset/limit, count, and returned edges).
  • Chores

    • Updated pre-commit configuration to ignore the schema file for EOF fixes.

@solababs solababs requested a review from a team as a code owner October 16, 2025 09:44
@github-actions github-actions bot added the group/backend Issue related to the backend (API Server, Git Agent) label Oct 16, 2025

coderabbitai bot commented Oct 16, 2025

Walkthrough

Adds paginated branch support to the GraphQL API. Introduces InfrahubBranch, InfrahubBranchEdge, InfrahubBranchType, status and value-field wrapper types, and a new query field InfrahubBranch (InfrahubBranchQueryList) with resolver exposed on InfrahubBaseQuery. Reworks branch listing to use BranchNodeGetListQuery (which excludes DELETING via a raw_filter) and adds Branch.get_list_count. Updates GraphQL schema, exports, unit tests for pagination, and pre-commit configuration.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title "IFC-1886: Paginated branch graphql query" directly and accurately reflects the primary objective of this pull request. The changeset introduces pagination support for branch GraphQL queries by adding a new InfrahubBranch query field with offset and limit parameters, implementing the associated resolver logic, new GraphQL types (InfrahubBranchType, InfrahubBranchEdge), and supporting backend query infrastructure. The title is concise, specific, and avoids vague language; a teammate reviewing the commit history would immediately understand that this PR adds pagination to branch queries. The issue reference (IFC-1886) provides additional tracking context without cluttering the message.
✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch sb-20251013-infrahub-branch-query-ifc-1886

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 089b015 and 1157cc4.

📒 Files selected for processing (1)
  • schema/schema.graphql (5 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: ogenstad
Repo: opsmill/infrahub PR: 7447
File: schema/schema.graphql:324-329
Timestamp: 2025-10-30T12:43:45.629Z
Learning: In the GraphQL schema (schema/schema.graphql), the BranchStatus enum includes NEED_UPGRADE_REBASE. Branches with NEED_UPGRADE_REBASE status must support conflict resolution capabilities.
📚 Learning: 2025-10-30T12:43:45.629Z
Learnt from: ogenstad
Repo: opsmill/infrahub PR: 7447
File: schema/schema.graphql:324-329
Timestamp: 2025-10-30T12:43:45.629Z
Learning: In the GraphQL schema (schema/schema.graphql), the BranchStatus enum includes NEED_UPGRADE_REBASE. Branches with NEED_UPGRADE_REBASE status must support conflict resolution capabilities.

Applied to files:

  • schema/schema.graphql
📚 Learning: 2025-07-22T08:13:56.198Z
Learnt from: CR
Repo: opsmill/infrahub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-07-22T08:13:56.198Z
Learning: Infrahub offers a central hub to manage the data, templates and playbooks that power your infrastructure by combining version control and branch management capabilities similar to Git with the flexible data model and UI of a graph database

Applied to files:

  • schema/schema.graphql
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: validate-generated-documentation
  • GitHub Check: backend-validate-generated
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (6)
schema/schema.graphql (6)

7249-7249: Verify that the id field returns the external/public ID, not the internal Neo4j node ID.

The PR comments flag that the paginated branch query returns incorrect IDs. The id field on InfrahubBranch (line 7249) must resolve to the external/public identifier, not the internal database node ID. Confirm the resolver maps to the correct ID source, as this is the root cause of the issue raised by ogenstad.


9406-9412: LGTM on value wrapper types.

The NonRequiredBooleanValueField and NonRequiredStringValueField type definitions are appropriately structured to represent optional wrapped scalar values. These support consistent field wrapping across the schema.


10789-10790: Query field parameters: confirm completeness.

The InfrahubBranch query accepts only limit and offset. Earlier review comments referenced missing ids and name parameters. Verify that this parameter set is intentional (e.g., filtering moved to a separate query) or if ids and name filters should be added to support targeted branch queries.


10938-10940: LGTM on required string value wrapper.

The RequiredStringValueField type correctly represents a non-null wrapped string value. Consistent with the optional variants.


11035-11037: Verify BranchStatus enum is defined and includes required status values.

The StatusField type references BranchStatus! enum. Confirm the enum is defined in the schema and includes all required branch statuses (including NEED_UPGRADE_REBASE per the learnings on conflict resolution capabilities).


7258-7260: Edge cardinality is now correct.

The InfrahubBranchEdge.node field is correctly defined as a singular InfrahubBranch! (not an array), which aligns with standard relay-style pagination semantics. The previous cardinality issue has been properly addressed.


Summary

The schema changes look largely solid with proper pagination structure, but three critical items need verification:

  1. ID source (Line 7249) — This directly addresses ogenstad's reported issue. The resolver must return external/public IDs, not internal Neo4j node IDs. This is the blocking concern.

  2. created_at consistency (Line 7246) — Should be wrapped in NonRequiredStringValueField to match the established pattern for optional fields in this type. This is a straightforward consistency fix.

  3. Query parameters (Line 10790) — Confirm whether the limit/offset-only design is intentional or if filtering parameters were deferred.

The value wrapper types and edge cardinality have been correctly implemented per the earlier review feedback.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
backend/tests/unit/graphql/queries/test_branch.py (1)

152-228: Add docstring and expand test coverage.

The test method is missing a docstring, which violates the coding guidelines. Additionally, the test only covers the happy path scenario. Consider adding test cases for edge cases to ensure robustness of the pagination implementation.

As per coding guidelines, add a docstring:

     async def test_paginated_branch_query(
         self,
         db: InfrahubDatabase,
         default_branch: Branch,
         register_core_models_schema,
         session_admin,
         client,
         service,
     ):
+        """Test paginated branch query returns correct pagination metadata.
+        
+        Creates 10 test branches and verifies that querying page 2 with limit 5
+        returns the correct page metadata and 5 branches.
+        """
         for i in range(10):

Additionally, consider adding test cases for:

  • Page 1 to verify first page behavior
  • Last page with fewer items than limit
  • Page beyond available data (should return empty list)
  • Invalid pagination parameters (page=0, negative limit)
  • Default pagination parameters when not specified
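The edge cases above can be sanity-checked in isolation before wiring them into the GraphQL fixtures. The helper below is a hypothetical stand-in for the resolver's offset/limit slicing, not the actual implementation:

```python
def paginate(items: list[str], limit: int = 100, offset: int = 0) -> tuple[list[str], int]:
    """Hypothetical stand-in for the resolver's offset/limit slicing.

    Returns the requested page plus the total count, mirroring the
    count-excludes-LIMIT/OFFSET behaviour described in this review.
    """
    if limit < 0 or offset < 0:
        raise ValueError("limit and offset must be non-negative")
    return items[offset : offset + limit], len(items)


branches = [f"branch-{i}" for i in range(10)]

# First page behaviour
page, count = paginate(branches, limit=5, offset=0)
assert page == [f"branch-{i}" for i in range(5)] and count == 10

# Last page with fewer items than the limit
page, _ = paginate(branches, limit=5, offset=7)
assert page == ["branch-7", "branch-8", "branch-9"]

# Page beyond the available data returns an empty list
page, _ = paginate(branches, limit=5, offset=20)
assert page == []
```

Each case here maps directly onto one of the suggested test scenarios; the real tests would exercise the same behaviour through the InfrahubBranch query instead of a plain list.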
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0deed2d and c0b3b31.

📒 Files selected for processing (6)
  • backend/infrahub/graphql/queries/__init__.py (2 hunks)
  • backend/infrahub/graphql/queries/branch.py (2 hunks)
  • backend/infrahub/graphql/schema.py (2 hunks)
  • backend/infrahub/graphql/types/__init__.py (2 hunks)
  • backend/infrahub/graphql/types/branch.py (2 hunks)
  • backend/tests/unit/graphql/queries/test_branch.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
backend/**/*

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Run backend tests with pytest or via invoke tasks

Files:

  • backend/infrahub/graphql/queries/__init__.py
  • backend/tests/unit/graphql/queries/test_branch.py
  • backend/infrahub/graphql/types/branch.py
  • backend/infrahub/graphql/types/__init__.py
  • backend/infrahub/graphql/schema.py
  • backend/infrahub/graphql/queries/branch.py
**/*.py

📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)

**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful

Use ruff and mypy to validate and lint Python files

**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking

**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy

Files:

  • backend/infrahub/graphql/queries/__init__.py
  • backend/tests/unit/graphql/queries/test_branch.py
  • backend/infrahub/graphql/types/branch.py
  • backend/infrahub/graphql/types/__init__.py
  • backend/infrahub/graphql/schema.py
  • backend/infrahub/graphql/queries/branch.py
backend/tests/**/*

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Place backend tests in backend/tests/

Files:

  • backend/tests/unit/graphql/queries/test_branch.py
🧬 Code graph analysis (3)
backend/tests/unit/graphql/queries/test_branch.py (6)
backend/tests/conftest.py (4)
  • db (122-138)
  • default_branch (161-162)
  • register_core_models_schema (217-218)
  • branch (1239-1243)
backend/tests/helpers/test_app.py (3)
  • default_branch (54-61)
  • client (146-173)
  • service (112-113)
backend/tests/unit/conftest.py (1)
  • session_admin (2679-2681)
backend/infrahub/services/__init__.py (1)
  • client (154-158)
backend/infrahub/graphql/initialization.py (1)
  • prepare_graphql_params (79-125)
backend/tests/helpers/graphql.py (1)
  • graphql (26-61)
backend/infrahub/graphql/types/__init__.py (1)
backend/infrahub/graphql/types/branch.py (2)
  • BranchType (16-46)
  • InfrahubBranchType (49-52)
backend/infrahub/graphql/queries/branch.py (2)
backend/infrahub/graphql/field_extractor.py (1)
  • extract_graphql_fields (67-69)
backend/infrahub/graphql/types/branch.py (3)
  • BranchType (16-46)
  • InfrahubBranchType (49-52)
  • get_list (34-46)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: E2E-testing-version-upgrade / From 1.3.0
  • GitHub Check: validate-generated-documentation
  • GitHub Check: E2E-testing-playwright
  • GitHub Check: graphql-schema
  • GitHub Check: backend-docker-integration
  • GitHub Check: backend-tests-functional
  • GitHub Check: backend-tests-unit
  • GitHub Check: backend-tests-integration
  • GitHub Check: backend-benchmark


codspeed-hq bot commented Oct 16, 2025

CodSpeed Performance Report

Merging #7418 will not alter performance

Comparing sb-20251013-infrahub-branch-query-ifc-1886 (1157cc4) with develop (bbe96ae)

Summary

✅ 10 untouched


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c0b3b31 and 9a07e23.

📒 Files selected for processing (7)
  • backend/infrahub/core/branch/models.py (1 hunks)
  • backend/infrahub/core/node/standard.py (1 hunks)
  • backend/infrahub/core/query/__init__.py (2 hunks)
  • backend/infrahub/core/query/standard_node.py (1 hunks)
  • backend/infrahub/graphql/queries/branch.py (2 hunks)
  • backend/infrahub/graphql/types/branch.py (2 hunks)
  • backend/tests/unit/graphql/queries/test_branch.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/tests/unit/graphql/queries/test_branch.py
🧰 Additional context used
📓 Path-based instructions (2)
backend/**/*

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Run backend tests with pytest or via invoke tasks

Files:

  • backend/infrahub/core/node/standard.py
  • backend/infrahub/core/query/standard_node.py
  • backend/infrahub/core/branch/models.py
  • backend/infrahub/graphql/types/branch.py
  • backend/infrahub/graphql/queries/branch.py
  • backend/infrahub/core/query/__init__.py
**/*.py

📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)

**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful

Use ruff and mypy to validate and lint Python files

**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking

**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy

Files:

  • backend/infrahub/core/node/standard.py
  • backend/infrahub/core/query/standard_node.py
  • backend/infrahub/core/branch/models.py
  • backend/infrahub/graphql/types/branch.py
  • backend/infrahub/graphql/queries/branch.py
  • backend/infrahub/core/query/__init__.py
🧬 Code graph analysis (4)
backend/infrahub/core/node/standard.py (4)
backend/infrahub/core/branch/models.py (1)
  • get_list_and_count (169-178)
backend/infrahub/graphql/types/branch.py (1)
  • get_list_and_count (58-70)
backend/infrahub/core/query/__init__.py (5)
  • Query (339-697)
  • init (393-406)
  • execute (530-563)
  • get (222-223)
  • count (588-605)
backend/infrahub/core/query/standard_node.py (1)
  • StandardNodeGetListQuery (132-171)
backend/infrahub/core/branch/models.py (3)
backend/infrahub/core/node/standard.py (1)
  • get_list_and_count (232-245)
backend/infrahub/graphql/types/branch.py (1)
  • get_list_and_count (58-70)
backend/infrahub/core/branch/enums.py (1)
  • BranchStatus (4-8)
backend/infrahub/graphql/types/branch.py (4)
backend/infrahub/core/branch/models.py (2)
  • get_list_and_count (169-178)
  • Branch (29-544)
backend/infrahub/core/node/standard.py (2)
  • get_list_and_count (232-245)
  • to_graphql (71-86)
backend/infrahub/graphql/initialization.py (1)
  • GraphqlContext (34-76)
backend/infrahub/database/__init__.py (1)
  • start_session (222-239)
backend/infrahub/graphql/queries/branch.py (3)
backend/infrahub/graphql/field_extractor.py (1)
  • extract_graphql_fields (67-69)
backend/infrahub/graphql/types/branch.py (3)
  • BranchType (16-46)
  • InfrahubBranchType (53-70)
  • get_list_and_count (58-70)
backend/infrahub/core/branch/models.py (1)
  • get_list_and_count (169-178)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: graphql-schema
  • GitHub Check: validate-generated-documentation
  • GitHub Check: backend-tests-unit
  • GitHub Check: javascript-lint
🔇 Additional comments (2)
backend/infrahub/core/node/standard.py (1)

231-245: LGTM! Clean pagination implementation.

The get_list_and_count method correctly mirrors get_list while adding count information. The count reflects total matching records (not just the page) since Query.count() excludes LIMIT/OFFSET, which is the standard pagination pattern.

backend/infrahub/core/branch/models.py (1)

168-178: LGTM! Consistent filtering of DELETING branches.

The method correctly filters out branches with DELETING status, maintaining consistency with the existing get_list method (lines 163-166).

Comment on lines 33 to 48
async def infrahub_branch_resolver(
root: dict, # noqa: ARG001
info: GraphQLResolveInfo,
**kwargs: Any,
) -> dict[str, Any]:
limit = kwargs.pop("limit", 100)
offset = kwargs.pop("offset", 0)
fields = extract_graphql_fields(info)
branches, count = await InfrahubBranchType.get_list_and_count(
fields=fields.get("edges", {}).get("node", {}),
graphql_context=info.context,
limit=limit,
offset=offset,
**kwargs,
)
return {"count": count, "edges": {"node": branches}}

@coderabbitai coderabbitai bot Oct 20, 2025


⚠️ Potential issue | 🟠 Major

Non-standard pagination structure.

The return structure {"count": count, "edges": {"node": branches}} doesn't follow standard GraphQL pagination patterns. Standard Relay cursor connections use:

type Connection {
  edges: [Edge!]!
  pageInfo: PageInfo!
}
type Edge {
  node: Node!
  cursor: String!
}

Your implementation has edges as a single object containing node as an array, rather than edges as an array of edge objects. This makes the InfrahubBranchEdge type redundant and could confuse API consumers familiar with standard pagination.

Consider refactoring to a simpler, clearer structure:

 async def infrahub_branch_resolver(
     root: dict,  # noqa: ARG001
     info: GraphQLResolveInfo,
     **kwargs: Any,
 ) -> dict[str, Any]:
     limit = kwargs.pop("limit", 100)
     offset = kwargs.pop("offset", 0)
     fields = extract_graphql_fields(info)
     branches, count = await InfrahubBranchType.get_list_and_count(
-        fields=fields.get("edges", {}).get("node", {}),
+        fields=fields.get("branches", {}),
         graphql_context=info.context,
         limit=limit,
         offset=offset,
         **kwargs,
     )
-    return {"count": count, "edges": {"node": branches}}
+    return {"count": count, "branches": branches}

Then update InfrahubBranchType (in types/branch.py) to:

class InfrahubBranchType(InfrahubObjectType):
    count = Field(Int, required=True)
    branches = Field(List(of_type=NonNull(BranchType)), required=True)

This is simpler and more intuitive than the current edge wrapper.

🤖 Prompt for AI Agents
In backend/infrahub/graphql/queries/branch.py around lines 33 to 48, the
resolver returns a non-standard pagination shape {"count": count, "edges":
{"node": branches}}; change it to return a simple, conventional shape like
{"count": count, "branches": branches} and update any callers to expect
"branches" instead of edges.node. Also update the GraphQL type in
types/branch.py to expose count: Int! and branches: [BranchType!]! (replace the
InfrahubBranchEdge wrapper) and adjust InfrahubBranchType.get_list_and_count
usage/field extraction if needed so fields map to the BranchType nodes directly.
Ensure tests and schema refs are updated to the new field name.

@ajtmccarty ajtmccarty commented:

I think the robot is right about this. The structure should be:

edges: [
    {
        node: {
            name: "branch-2", ...
        }
    }, ...
]

you'll need to update the test too

@coderabbitai coderabbitai bot commented:

@ajtmccarty You're absolutely right! The structure needs to follow the standard Relay pattern. Here's the corrected resolver:

async def infrahub_branch_resolver(
    root: dict,  # noqa: ARG001
    info: GraphQLResolveInfo,
    **kwargs: Any,
) -> dict[str, Any]:
    limit = kwargs.pop("limit", 100)
    offset = kwargs.pop("offset", 0)
    fields = extract_graphql_fields(info)
    branches, count = await InfrahubBranchType.get_list_and_count(
        fields=fields.get("edges", {}).get("node", {}),
        graphql_context=info.context,
        limit=limit,
        offset=offset,
        **kwargs,
    )
    # Transform branches into proper edge structure
    edges = [{"node": branch} for branch in branches]
    return {"count": count, "edges": edges}

Additionally, you'll need to fix the InfrahubBranchEdge type in backend/infrahub/graphql/types/branch.py. The node field should be a single BranchType, not a List:

class InfrahubBranchEdge(InfrahubObjectType):
    node = Field(BranchType, required=True)  # Single BranchType, not List

And update InfrahubBranchType:

class InfrahubBranchType(InfrahubObjectType):
    count = Field(Int, required=True)
    edges = Field(List(of_type=NonNull(InfrahubBranchEdge)), required=True)  # List of edges

The test in backend/tests/unit/graphql/queries/test_branch.py will also need to be updated to expect edges as an array and access each edge's node field accordingly.

@solababs solababs (Author) commented:

@ajtmccarty we are using this because we want to maintain a consistent response like some of the other queries i.e.

query MyQuery {
  BuiltinTag {
    count
    edges {
      node {
        name {
          value
        }
        description {
          value
        }
      }
    }
  }
}

cc @ogenstad

@coderabbitai coderabbitai bot commented:

Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

Contributor

I think the confusing part here is that CodeRabbit spots an issue and then hallucinates a solution that is incorrect. What we want is that "edges" should be a list/array and each "node" should be an object as an entry in that array. A potential area of confusion is that you don't see this within the GraphQL query, which is correct as per the above; however, I think the response might not be. What I'd do is create an environment with a few BuiltinTag objects, issue the above query, observe what the response looks like, and then check whether that structure matches what we now get from the InfrahubBranch query.
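As a minimal sketch of the shape mismatch being discussed (illustrative data only, not the actual API response):

```python
# Shape the InfrahubBranch resolver currently returns: "edges" is a single
# object whose "node" holds the whole list of branches.
current = {"count": 2, "edges": {"node": [{"name": "main"}, {"name": "branch-2"}]}}

# Shape node queries such as BuiltinTag return: "edges" is an array of edge
# objects, each wrapping exactly one node.
expected = {
    "count": 2,
    "edges": [
        {"node": {"name": "main"}},
        {"node": {"name": "branch-2"}},
    ],
}

# Converting from the current shape to the standard one is a one-liner:
converted = {
    "count": current["count"],
    "edges": [{"node": node} for node in current["edges"]["node"]],
}
assert converted == expected
```

Issuing the BuiltinTag query against a live environment should produce the second shape, which is what the InfrahubBranch response needs to match.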

name: str | None = None,
**kwargs: Any,
) -> tuple[list[Self], int]:
kwargs["raw_filter"] = f"n.status <> '{BranchStatus.DELETING.value}'"
Contributor

It'd be better to use a named parameter for raw_filter instead of kwargs



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/infrahub/core/branch/models.py (1)

156-169: Fix incorrect type hint for **kwargs.

The type hint **kwargs: dict[str, Any] is incorrect. The **kwargs syntax already indicates a dictionary of keyword arguments, so the type hint should be **kwargs: Any to indicate the types of individual values.

Apply this diff:

     async def get_list(
         cls,
         db: InfrahubDatabase,
         limit: int = 1000,
         ids: list[str] | None = None,
         name: str | None = None,
-        **kwargs: dict[str, Any],
+        **kwargs: Any,
     ) -> list[Self]:
         query: Query = await BranchNodeGetListQuery.init(
             db=db, node_class=cls, ids=ids, node_name=name, limit=limit, **kwargs
         )
         await query.execute(db=db)

         return [cls.from_db(result.get("n")) for result in query.get_results()]

     @classmethod
     async def get_list_count(
         cls,
         db: InfrahubDatabase,
         limit: int = 1000,
         ids: list[str] | None = None,
         name: str | None = None,
-        **kwargs: dict[str, Any],
+        **kwargs: Any,
     ) -> int:
         query: Query = await BranchNodeGetListQuery.init(
             db=db, node_class=cls, ids=ids, node_name=name, limit=limit, **kwargs
         )
         return await query.count(db=db)

Also applies to: 171-183

♻️ Duplicate comments (2)
backend/infrahub/graphql/queries/branch.py (2)

40-44: Critical: Incorrect Relay pagination structure for edges.

The resolver returns {"count": count, "edges": {"node": branches}} where edges is a single object. According to the standard Relay pagination pattern (confirmed by past review comments), edges should be an array of edge objects, each containing a node field: [{"node": branch1}, {"node": branch2}, ...].

Apply this diff:

     fields = extract_graphql_fields(info)
     branches = await BranchType.get_list(
         graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
     )
     count = await InfrahubBranchType.get_list_count(graphql_context=info.context)
-    return {"count": count, "edges": {"node": branches}}
+    edges = [{"node": branch} for branch in branches]
+    return {"count": count, "edges": edges}

Additionally, update the field extraction to handle edges as an array:

     fields = extract_graphql_fields(info)
+    # Extract node fields from edges array
+    edge_fields = fields.get("edges", {})
+    if isinstance(edge_fields, dict):
+        node_fields = edge_fields.get("node", {})
+    else:
+        node_fields = {}
+    
     branches = await BranchType.get_list(
-        graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
+        graphql_context=info.context, fields=node_fields, limit=limit, offset=offset
     )

33-44: Missing docstring for resolver function.

The infrahub_branch_resolver function lacks a docstring, which violates the coding guidelines requiring Google-style docstrings for all functions.

Add a docstring following the Google style format:

 async def infrahub_branch_resolver(
     root: dict,  # noqa: ARG001
     info: GraphQLResolveInfo,
     limit: int | None = None,
     offset: int | None = None,
 ) -> dict[str, Any]:
+    """Resolve paginated branch query.
+    
+    Retrieves a paginated list of branches with total count.
+    
+    Args:
+        root: GraphQL root value (unused)
+        info: GraphQL resolve info containing context and field information
+        limit: Maximum number of branches to return
+        offset: Number of branches to skip for pagination
+    
+    Returns:
+        Dictionary with count (total branches) and edges (list of branch nodes)
+    """
     fields = extract_graphql_fields(info)

Based on coding guidelines.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c8fbca1 and 9f1dc4a.

📒 Files selected for processing (8)
  • backend/infrahub/core/branch/models.py (2 hunks)
  • backend/infrahub/core/query/__init__.py (1 hunks)
  • backend/infrahub/core/query/branch.py (2 hunks)
  • backend/infrahub/core/query/standard_node.py (2 hunks)
  • backend/infrahub/graphql/queries/branch.py (2 hunks)
  • backend/infrahub/graphql/types/branch.py (2 hunks)
  • backend/tests/unit/graphql/queries/test_branch.py (4 hunks)
  • schema/schema.graphql (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • backend/infrahub/core/query/standard_node.py
  • backend/infrahub/core/query/__init__.py
  • backend/infrahub/graphql/types/branch.py
🧰 Additional context used
📓 Path-based instructions (3)
backend/tests/**/*

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Place backend tests in backend/tests/

Files:

  • backend/tests/unit/graphql/queries/test_branch.py
backend/**/*

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Run backend tests with pytest or via invoke tasks

Files:

  • backend/tests/unit/graphql/queries/test_branch.py
  • backend/infrahub/graphql/queries/branch.py
  • backend/infrahub/core/branch/models.py
  • backend/infrahub/core/query/branch.py
**/*.py

📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)

**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful

Use ruff and mypy to validate and lint Python files

**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking

**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy

Files:

  • backend/tests/unit/graphql/queries/test_branch.py
  • backend/infrahub/graphql/queries/branch.py
  • backend/infrahub/core/branch/models.py
  • backend/infrahub/core/query/branch.py
🧬 Code graph analysis (4)
backend/tests/unit/graphql/queries/test_branch.py (2)
backend/infrahub/graphql/initialization.py (1)
  • prepare_graphql_params (79-125)
backend/tests/helpers/graphql.py (1)
  • graphql (26-61)
backend/infrahub/graphql/queries/branch.py (3)
backend/infrahub/graphql/field_extractor.py (1)
  • extract_graphql_fields (67-69)
backend/infrahub/graphql/types/branch.py (4)
  • BranchType (16-46)
  • InfrahubBranchType (53-60)
  • get_list (34-46)
  • get_list_count (58-60)
backend/infrahub/core/branch/models.py (2)
  • get_list (156-169)
  • get_list_count (172-183)
backend/infrahub/core/branch/models.py (4)
backend/infrahub/core/query/__init__.py (7)
  • Query (339-695)
  • QueryType (140-142)
  • init (391-404)
  • execute (528-561)
  • get (222-223)
  • get_results (619-627)
  • count (586-603)
backend/infrahub/core/query/branch.py (1)
  • BranchNodeGetListQuery (153-154)
backend/infrahub/core/node/standard.py (2)
  • from_db (152-180)
  • get (131-138)
backend/infrahub/graphql/types/branch.py (1)
  • get_list_count (58-60)
backend/infrahub/core/query/branch.py (2)
backend/infrahub/core/branch/enums.py (1)
  • BranchStatus (4-8)
backend/infrahub/core/query/standard_node.py (1)
  • StandardNodeGetListQuery (132-172)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: E2E-testing-version-upgrade / From 1.3.0
  • GitHub Check: backend-benchmark
  • GitHub Check: E2E-testing-playwright
  • GitHub Check: validate-generated-documentation
  • GitHub Check: graphql-schema
  • GitHub Check: backend-tests-unit
  • GitHub Check: backend-docker-integration
🔇 Additional comments (3)
backend/infrahub/core/query/branch.py (1)

6-6: LGTM! Required imports for the new query class.

The imports are necessary for the new BranchNodeGetListQuery class implementation.

Also applies to: 9-9

schema/schema.graphql (1)

10410-10411: Query definition is well-structured pending the schema fix above.

The InfrahubBranch query parameters (ids, limit, name, offset) and return type align with pagination expectations. Ensure the type schema fix is applied so the query returns the correct structure.

backend/tests/unit/graphql/queries/test_branch.py (1)

63-63: LGTM: Removal of include_subscription parameter aligns with guidance.

The removal of include_subscription=False from these prepare_graphql_params calls aligns with the guidance to avoid this parameter in new code.

Also applies to: 108-108, 132-132

Comment on lines +153 to +154
class BranchNodeGetListQuery(StandardNodeGetListQuery):
raw_filter = f"n.status <> '{BranchStatus.DELETING.value}'"

🛠️ Refactor suggestion | 🟠 Major

Add docstring and type annotation.

The class is missing a docstring and type annotation, which are required by the coding guidelines.

Apply this diff to add the missing docstring and type annotation:

 class BranchNodeGetListQuery(StandardNodeGetListQuery):
+    """Query to retrieve a list of branches, excluding those with DELETING status.
+    
+    This query extends StandardNodeGetListQuery with a filter to exclude branches
+    that are currently being deleted from the results.
+    """
+    
-    raw_filter = f"n.status <> '{BranchStatus.DELETING.value}'"
+    raw_filter: str = f"n.status <> '{BranchStatus.DELETING.value}'"

As per coding guidelines.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
class BranchNodeGetListQuery(StandardNodeGetListQuery):
raw_filter = f"n.status <> '{BranchStatus.DELETING.value}'"
class BranchNodeGetListQuery(StandardNodeGetListQuery):
"""Query to retrieve a list of branches, excluding those with DELETING status.
This query extends StandardNodeGetListQuery with a filter to exclude branches
that are currently being deleted from the results.
"""
raw_filter: str = f"n.status <> '{BranchStatus.DELETING.value}'"
🤖 Prompt for AI Agents
In backend/infrahub/core/query/branch.py around lines 153-154, the class
BranchNodeGetListQuery is missing a docstring and a type annotation for its
raw_filter attribute; add a concise class docstring that explains this Query
filters out deleting Branch nodes and add an explicit type annotation on
raw_filter (raw_filter: str = f"n.status <> '{BranchStatus.DELETING.value}'") to
satisfy the coding guidelines.

Comment on lines +33 to +38
async def infrahub_branch_resolver(
root: dict, # noqa: ARG001
info: GraphQLResolveInfo,
limit: int | None = None,
offset: int | None = None,
) -> dict[str, Any]:

⚠️ Potential issue | 🟠 Major

Add validation for pagination parameters.

The limit and offset parameters lack validation, which could lead to issues with negative values or excessively large limits. Per past review feedback, parameter defaults were removed from the field definition, but the resolver signature still has None defaults and no validation.

Apply this diff to add validation:

 async def infrahub_branch_resolver(
     root: dict,  # noqa: ARG001
     info: GraphQLResolveInfo,
     limit: int | None = None,
     offset: int | None = None,
 ) -> dict[str, Any]:
+    """..."""  # Add docstring as per previous comment
+    
+    # Validate and set defaults for pagination parameters
+    if limit is None:
+        limit = 100
+    if offset is None:
+        offset = 0
+    
+    if limit < 1:
+        raise ValueError("limit must be at least 1")
+    if offset < 0:
+        raise ValueError("offset must be non-negative")
+    
     fields = extract_graphql_fields(info)
🤖 Prompt for AI Agents
In backend/infrahub/graphql/queries/branch.py around lines 33 to 38, the
resolver accepts limit and offset but does not validate them; add input
validation that enforces offset >= 0, limit >= 0 and limit <= MAX_LIMIT (choose
a sensible cap like 100 or use a configured constant), treat None as not
provided (or apply a safe default if your schema requires one), and raise a
GraphQLError with a clear message for invalid values. Ensure the validation runs
at the start of infrahub_branch_resolver and that any error raised is
descriptive (e.g., "limit must be between 0 and 100" or "offset must be >= 0"),
avoiding silent truncation or database queries with bad parameters.

Comment on lines 228 to 230
assert all_branches.data["InfrahubBranch"]["edges"]["node"].sort(
key=operator.itemgetter("name")
) == expected_branches.sort(key=operator.itemgetter("name"))

⚠️ Potential issue | 🔴 Critical

Critical: Incorrect edges data structure access.

The test accesses edges["node"] as if edges is a single object, but according to the GraphQL Relay pagination pattern (confirmed in past review comments), edges should be an array of edge objects, each containing a node field.

Apply this diff to fix the assertion:

-        assert all_branches.data["InfrahubBranch"]["edges"]["node"].sort(
-            key=operator.itemgetter("name")
-        ) == expected_branches.sort(key=operator.itemgetter("name"))
+        edges = all_branches.data["InfrahubBranch"]["edges"]
+        actual_branches = [edge["node"] for edge in edges]
+        assert sorted(actual_branches, key=operator.itemgetter("name")) == sorted(
+            expected_branches, key=operator.itemgetter("name")
+        )

Note: This also fixes the comparison issue where .sort() returns None rather than the sorted list.

🤖 Prompt for AI Agents
In backend/tests/unit/graphql/queries/test_branch.py around lines 228 to 230,
the assertion incorrectly treats edges as a single dict and uses .sort() (which
returns None); extract the list of nodes from edges (e.g. map or list
comprehension to get each edge["node"]) for both actual and expected data if
needed, then use sorted(..., key=operator.itemgetter("name")) on each list and
compare the two sorted lists.
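A standalone illustration of the pitfall: `list.sort()` sorts in place and returns `None`, so the original assertion compares `None == None` and always passes, regardless of the data.

```python
import operator

edges = [{"node": {"name": "branch2"}}, {"node": {"name": "branch1"}}]
expected_branches = [{"name": "branch1"}, {"name": "branch2"}]

# Broken pattern: .sort() returns None, so this "passes" even though
# nothing about the data has been checked.
assert [edge["node"] for edge in edges].sort(key=operator.itemgetter("name")) is None

# Fixed pattern: unwrap each edge's node, then compare with sorted(),
# which returns a new list instead of mutating in place.
actual_branches = [edge["node"] for edge in edges]
assert sorted(actual_branches, key=operator.itemgetter("name")) == sorted(
    expected_branches, key=operator.itemgetter("name")
)
```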

graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
)
count = await InfrahubBranchType.get_list_count(graphql_context=info.context)
return {"count": count, "edges": {"node": branches}}

I think what Aaron meant earlier regarding the separation of count and edges is that users can send in queries that request different information:

In this case we'd expect to return both the count and the edges:

query MyQuery {
  InfrahubBranch {
    count
    edges {
      node {
        name
      }
    }
  }
}

If we however have a query like the following one we don't need to send an additional query for the "count" since the request doesn't expect to have this information:

query MyQuery {
  InfrahubBranch {
    edges {
      node {
        name
      }
    }
  }
}

Finally it could be that someone just wants to count the branches, and in this case we don't need to query the database for the details of each branch.

query MyQuery {
  InfrahubBranch {
    count
  }
}

One thing I noticed is that we have different filters for BranchType.get_list compared to InfrahubBranchType.get_list_count: we filter out the -global- branch in .get_list() but not in get_list_count(). So the count includes an extra branch that is never seen.

A comment regarding the structure. We'd expect "edges" to be an array, instead of having "node" be an array. I.e. "edges" should be an array and "node" should be an object.

return {"count": count, "edges": {"node": branches}}

On the topic of structure, I think we should align the output so that it matches the other nodes, i.e. move away from what we had in the original Branch query. A query would instead look like this:

```graphql
query MyQuery {
  InfrahubBranch {
    edges {
      node {
        name {
           value   # <-- This is the new part we don't currently have.
        }
      }
    }
  }
}
```

)
assert all_branches.errors is None
assert all_branches.data
assert all_branches.data["InfrahubBranch"]["count"] == 13

This highlights the issue I mentioned above with the count. When we start out with the default branch and then create 10 branches, the -global- branch is probably part of the count, and so is one other branch. I'm not sure if this is due to the testing environment and we'd have some duplicate of the default branch. It would be good to determine where the extra branch that isn't the global branch comes from.

for i in range(10)
],
]
assert all_branches.data["InfrahubBranch"]["edges"]["node"].sort(

As CodeRabbit mentions, all_branches.data["InfrahubBranch"]["edges"]["node"] should not be a valid field, since all_branches.data["InfrahubBranch"]["edges"] should be an array.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
backend/infrahub/graphql/types/branch.py (1)

54-56: Add class docstring.

The InfrahubBranchType class is missing a docstring, which violates the coding guidelines requiring Google-style docstrings for all Python classes.

As per coding guidelines, apply this diff:

 class InfrahubBranchType(InfrahubObjectType):
+    """Paginated response type for branch queries.
+    
+    Provides edges-based pagination with total count for branch listings.
+    The count excludes the global branch.
+    """
     count = Field(Int, description="Total number of items")
     edges = Field(List(of_type=NonNull(InfrahubBranchEdge)))
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between e659340 and 2ad4b2d.

📒 Files selected for processing (3)
  • backend/infrahub/graphql/queries/branch.py (2 hunks)
  • backend/infrahub/graphql/types/branch.py (2 hunks)
  • backend/tests/unit/graphql/queries/test_branch.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • backend/tests/unit/graphql/queries/test_branch.py
  • backend/infrahub/graphql/queries/branch.py
🧬 Code graph analysis (1)
backend/infrahub/graphql/types/branch.py (5)
backend/infrahub/core/branch/models.py (3)
  • Branch (31-550)
  • get_list_count (173-184)
  • get_by_name (137-154)
backend/infrahub/exceptions.py (1)
  • BranchNotFoundError (194-200)
backend/infrahub/core/query/__init__.py (1)
  • count (586-603)
backend/infrahub/graphql/initialization.py (1)
  • GraphqlContext (34-76)
backend/infrahub/database/__init__.py (1)
  • start_session (222-239)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: validate-generated-documentation
  • GitHub Check: backend-tests-unit
🔇 Additional comments (1)
backend/infrahub/graphql/types/branch.py (1)

5-5: LGTM! Imports are appropriate.

The added imports (Int, List, NonNull from graphene, and BranchNotFoundError) are correctly placed and necessary for the new pagination types.

Also applies to: 10-10

Comment on lines 50 to 51
class InfrahubBranchEdge(InfrahubObjectType):
node = Field(BranchType, required=True)

🛠️ Refactor suggestion | 🟠 Major

Add class docstring.

The InfrahubBranchEdge class is missing a docstring, which violates the coding guidelines requiring Google-style docstrings for all Python classes.

As per coding guidelines, apply this diff:

 class InfrahubBranchEdge(InfrahubObjectType):
+    """Edge type for paginated branch queries.
+    
+    Wraps a BranchType node in the edges/nodes pagination pattern.
+    """
     node = Field(BranchType, required=True)
🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 50 to 51, the
InfrahubBranchEdge class is missing a Google-style class docstring; add a
concise docstring immediately below the class definition describing the purpose
of the class (an Edge wrapper for BranchType in GraphQL connections) and
document its attributes (e.g., node: BranchType — the branch node at the edge),
following the project’s Google-style docstring format and keeping wording brief
and descriptive.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
backend/infrahub/graphql/types/branch.py (2)

50-51: Add class docstring for InfrahubBranchEdge.
Missing Google-style docstring per coding guidelines. Keep it concise.

Apply this diff:

 class InfrahubBranchEdge(InfrahubObjectType):
+    """Edge wrapper for BranchType.
+
+    Attributes:
+        node: The branch node at this edge.
+    """
     node = Field(BranchType, required=True)

Based on coding guidelines.


58-66: Count misreported when filters are used; add docstring and respect filters.
Current logic subtracts 1 whenever the global branch exists, even if a name/ids filter excludes it. This undercounts. Also missing docstring.

Apply this minimal, safe fix (handles name; guards against negatives) and add a Google-style docstring:

 @classmethod
 async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
-        async with graphql_context.db.start_session(read_only=True) as db:
-            count = await Branch.get_list_count(db=db, **kwargs)
-            try:
-                await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db)
-                return count - 1
-            except BranchNotFoundError:
-                return count
+        """Return filtered branch count excluding the global branch.
+
+        Args:
+            graphql_context: GraphQL context containing database connection.
+            **kwargs: Filters forwarded to Branch.get_list_count (e.g., ids, name).
+
+        Returns:
+            Count of branches matching filters, excluding GLOBAL_BRANCH_NAME.
+        """
+        async with graphql_context.db.start_session(read_only=True) as db:
+            count = await Branch.get_list_count(db=db, **kwargs)
+            if count == 0:
+                return 0
+            # If a name filter is provided and it's not the global branch, do not adjust.
+            name = kwargs.get("name")
+            if name is not None and name != GLOBAL_BRANCH_NAME:
+                return count
+            try:
+                await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db)
+                return max(count - 1, 0)
+            except BranchNotFoundError:
+                return count

If ids filtering is used, adjust only when the global branch id is included. Please confirm which attribute represents the GraphQL id (e.g., uuid vs id) so we can add that branch safely.

Run this to locate callers and confirm whether ids/name filters are passed to get_list_count:

#!/bin/bash
# Find usages and inspect how filters are forwarded.
rg -nP -C3 'get_list_count\s*\(' backend | sed -n '1,200p'
rg -nP -C3 'InfrahubBranchType\s*\.\s*get_list_count' backend
rg -nP -C3 'InfrahubBranchQueryList|InfrahubBranch' backend/infrahub/graphql/queries
🧹 Nitpick comments (1)
backend/infrahub/graphql/types/branch.py (1)

54-57: Docstring + field declarations: make required and use Int/List directly.
Add class docstring; make edges/count required; prefer Int()/List() over Field(Int, ...) for scalars; simplify NonNull wrapping.

Apply this diff:

-class InfrahubBranchType(InfrahubObjectType):
-    count = Field(Int, description="Total number of items")
-    edges = Field(NonNull(List(of_type=NonNull(InfrahubBranchEdge))))
+class InfrahubBranchType(InfrahubObjectType):
+    """Paginated branch list response.
+
+    Attributes:
+        count: Total number of items matching the filters (excludes global).
+        edges: The paginated list of branch edges.
+    """
+    count = Int(required=True, description="Total number of items matching the filters (excludes global)")
+    edges = List(NonNull(InfrahubBranchEdge), required=True, description="Edges for the current page")

Optional: consider renaming `count` to `total_count` to align with common GraphQL conventions; if done, update schema/resolvers accordingly. Based on coding guidelines.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 2ad4b2d and 32c72ad.

📒 Files selected for processing (3)
  • backend/infrahub/graphql/queries/branch.py (2 hunks)
  • backend/infrahub/graphql/types/branch.py (2 hunks)
  • schema/schema.graphql (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/infrahub/graphql/queries/branch.py
🧬 Code graph analysis (1)
backend/infrahub/graphql/types/branch.py (4)
backend/infrahub/core/branch/models.py (3)
  • Branch (31-550)
  • get_list_count (173-184)
  • get_by_name (137-154)
backend/infrahub/exceptions.py (1)
  • BranchNotFoundError (194-200)
backend/infrahub/graphql/initialization.py (1)
  • GraphqlContext (34-76)
backend/infrahub/database/__init__.py (1)
  • start_session (222-239)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: E2E-testing-version-upgrade / From 1.3.0
  • GitHub Check: backend-benchmark
  • GitHub Check: E2E-testing-playwright
  • GitHub Check: validate-generated-documentation
  • GitHub Check: graphql-schema
  • GitHub Check: backend-tests-functional
  • GitHub Check: backend-docker-integration
  • GitHub Check: backend-tests-integration
  • GitHub Check: backend-tests-unit
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (2)
schema/schema.graphql (2)

6883-6891: ✅ Previous critical issue resolved—node field is now singular.

The node field in InfrahubBranchEdge is now correctly declared as node: Branch! (singular), fixing the relay-style pagination cardinality issue flagged in the previous review. The schema now properly structures pagination with:

  • InfrahubBranchType.edges: [InfrahubBranchEdge!]! — array of edges
  • Each InfrahubBranchEdge.node: Branch! — single node per edge

This correctly implements offset/limit pagination semantics.


10410-10411: Query field definition looks good.

The new InfrahubBranch query with pagination (offset, limit) and filtering (ids, name) parameters returns the correct InfrahubBranchType! structure.

@solababs solababs requested a review from ogenstad October 29, 2025 16:02

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (5)
backend/infrahub/graphql/types/__init__.py (1)

24-24: Export InfrahubBranchEdge to keep the public API consistent.

You expose TaskLogEdge here; InfrahubBranchEdge should be exported similarly for consumers importing from types.

Apply this diff:

-from .branch import BranchType, InfrahubBranch, InfrahubBranchType
+from .branch import BranchType, InfrahubBranch, InfrahubBranchType, InfrahubBranchEdge
@@
     "IPNetworkType",
+    "InfrahubBranchEdge",
     "InfrahubBranch",
     "InfrahubBranchType",

Also applies to: 44-45

backend/infrahub/graphql/types/branch.py (4)

34-36: Add a short docstring for _map_fields_to_graphql.

Document the purpose (filter out GLOBAL and map selected fields) and expected shapes of inputs/outputs.

As per coding guidelines.


54-64: Docstrings missing for new GraphQL types.

Please add Google‑style class docstrings for RequiredStringValueField, NonRequiredStringValueField, NonRequiredBooleanValueField, and InfrahubBranch to explain the value‑wrapper pattern and field mapping logic.

As per coding guidelines.

Also applies to: 66-101


103-110: Add class docstring to InfrahubBranchEdge.

Briefly describe the edge wrapper and its node semantics.

As per coding guidelines.


111-119: Add docstring to get_list_count.

Explain that it returns the count excluding GLOBAL_BRANCH_NAME and why the existence check is needed.

As per coding guidelines.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 32c72ad and 84fac0a.

📒 Files selected for processing (4)
  • backend/infrahub/graphql/queries/branch.py (2 hunks)
  • backend/infrahub/graphql/types/__init__.py (2 hunks)
  • backend/infrahub/graphql/types/branch.py (3 hunks)
  • backend/tests/unit/graphql/queries/test_branch.py (4 hunks)
🧬 Code graph analysis (4)
backend/tests/unit/graphql/queries/test_branch.py (4)
backend/infrahub/graphql/initialization.py (1)
  • prepare_graphql_params (79-125)
backend/tests/helpers/test_app.py (3)
  • default_branch (54-61)
  • service (112-113)
  • client (146-173)
backend/tests/unit/conftest.py (1)
  • session_admin (2679-2681)
backend/tests/helpers/graphql.py (1)
  • graphql (26-61)
backend/infrahub/graphql/types/__init__.py (1)
backend/infrahub/graphql/types/branch.py (3)
  • BranchType (17-51)
  • InfrahubBranch (66-100)
  • InfrahubBranchType (107-119)
backend/infrahub/graphql/queries/branch.py (3)
backend/infrahub/graphql/field_extractor.py (1)
  • extract_graphql_fields (67-69)
backend/infrahub/graphql/types/branch.py (5)
  • BranchType (17-51)
  • InfrahubBranch (66-100)
  • InfrahubBranchType (107-119)
  • get_list (39-51)
  • get_list_count (112-119)
backend/infrahub/core/branch/models.py (2)
  • get_list (157-170)
  • get_list_count (173-184)
backend/infrahub/graphql/types/branch.py (5)
backend/infrahub/core/branch/models.py (3)
  • Branch (31-550)
  • get_list_count (173-184)
  • get_by_name (137-154)
backend/infrahub/exceptions.py (1)
  • BranchNotFoundError (194-200)
backend/infrahub/core/node/standard.py (2)
  • to_graphql (71-86)
  • get (131-138)
backend/infrahub/database/__init__.py (2)
  • get (84-88)
  • start_session (222-239)
backend/infrahub/graphql/initialization.py (1)
  • GraphqlContext (34-76)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cloudflare Pages

Comment on lines +33 to +48
async def infrahub_branch_resolver(
root: dict, # noqa: ARG001
info: GraphQLResolveInfo,
limit: int | None = None,
offset: int | None = None,
) -> dict[str, Any]:
fields = extract_graphql_fields(info)
result: dict[str, Any] = {}
if "edges" in fields:
branches = await InfrahubBranch.get_list(
graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
)
result["edges"] = [{"node": branch} for branch in branches]
if "count" in fields:
result["count"] = await InfrahubBranchType.get_list_count(graphql_context=info.context)
return result

⚠️ Potential issue | 🔴 Critical

Resolver ignores ids/name and forwards None pagination; fix signature, validation, and forwarding.

Currently ids/name are accepted by the field but dropped by the resolver; passing limit/offset=None can also override model defaults. Add proper params, validate, and only pass provided args.

As per coding guidelines, add a docstring too. Apply:

-async def infrahub_branch_resolver(
-    root: dict,  # noqa: ARG001
-    info: GraphQLResolveInfo,
-    limit: int | None = None,
-    offset: int | None = None,
-) -> dict[str, Any]:
-    fields = extract_graphql_fields(info)
-    result: dict[str, Any] = {}
-    if "edges" in fields:
-        branches = await InfrahubBranch.get_list(
-            graphql_context=info.context, fields=fields.get("edges", {}).get("node", {}), limit=limit, offset=offset
-        )
-        result["edges"] = [{"node": branch} for branch in branches]
-    if "count" in fields:
-        result["count"] = await InfrahubBranchType.get_list_count(graphql_context=info.context)
-    return result
+async def infrahub_branch_resolver(
+    root: dict,  # noqa: ARG001
+    info: GraphQLResolveInfo,
+    ids: list[str] | None = None,
+    name: str | None = None,
+    limit: int | None = None,
+    offset: int | None = None,
+) -> dict[str, Any]:
+    """Resolve paginated InfrahubBranch query.
+    
+    Args:
+        root: Unused GraphQL root value.
+        info: Resolve info with context and selection set.
+        ids: Optional list of branch IDs to filter.
+        name: Optional branch name filter.
+        limit: Max items to return; must be >= 1 if provided.
+        offset: Number of items to skip; must be >= 0 if provided.
+    Returns:
+        Dict containing requested fields (edges and/or count).
+    """
+    fields = extract_graphql_fields(info)
+    result: dict[str, Any] = {}
+
+    # Normalize pagination; only pass if provided and valid
+    query_kwargs: dict[str, int] = {}
+    if isinstance(limit, int):
+        if limit < 1:
+            raise ValueError("limit must be >= 1")
+        query_kwargs["limit"] = limit
+    if isinstance(offset, int):
+        if offset < 0:
+            raise ValueError("offset must be >= 0")
+        query_kwargs["offset"] = offset
+
+    if "edges" in fields:
+        node_fields = fields.get("edges", {}).get("node", {})
+        branches = await InfrahubBranch.get_list(
+            graphql_context=info.context,
+            fields=node_fields,
+            ids=ids,
+            name=name,
+            **query_kwargs,
+        )
+        result["edges"] = [{"node": branch} for branch in branches]
+
+    if "count" in fields:
+        result["count"] = await InfrahubBranchType.get_list_count(
+            graphql_context=info.context, ids=ids, name=name
+        )
+    return result

As per coding guidelines.

🤖 Prompt for AI Agents
In backend/infrahub/graphql/queries/branch.py around lines 33 to 48, the
resolver currently drops incoming ids/name arguments, and unconditionally
forwards limit/offset even when None (overriding model defaults) and lacks a
docstring; update the function signature to accept optional ids: list[str] |
None and name: str | None (with proper typing), add a short docstring describing
purpose and params, validate inputs (ensure ids is a non-empty list if provided
and name is a non-empty string), build kwargs for InfrahubBranch.get_list and
InfrahubBranchType.get_list_count that include only provided arguments (i.e.,
include limit/offset only when not None, include ids/name when provided), and
pass fields extracted from info for edges as before so the resolver respects and
forwards only explicit arguments to the model calls.

Comment on lines +203 to +231
gql_params = await prepare_graphql_params(db=db, branch=default_branch, service=service)
all_branches = await graphql(
schema=gql_params.schema,
source=query,
context_value=gql_params.context,
root_value=None,
variable_values={},
)
assert all_branches.errors is None
assert all_branches.data
assert all_branches.data["InfrahubBranch"]["count"] == 12 # 10 created here + 1 created above + main branch

expected_branches = [
{
"description": {"value": "Default Branch"},
"name": {"value": "main"},
},
{
"description": {"value": "my description"},
"name": {"value": "branch3"},
},
*[
{
"description": {"value": f"sample description {i}"},
"name": {"value": f"sample-branch-{i}"},
}
for i in range(10)
],
]

🛠️ Refactor suggestion | 🟠 Major

Fix count and expected set; avoid cross-test state assumptions.

  • Count should exclude the global branch and only include branches created in this test + main. That’s 11, not 12.
  • expected_branches includes branch3 which is created in a different test; fixtures reset DB, so it won’t exist here.
  • Comparing a paginated page to the full set is incorrect.

Refactor to derive ordering from the API, then compare the correct slice:

-        assert all_branches.data["InfrahubBranch"]["count"] == 12  # 10 created here + 1 created above + main branch
-
-        expected_branches = [
-            {
-                "description": {"value": "Default Branch"},
-                "name": {"value": "main"},
-            },
-            {
-                "description": {"value": "my description"},
-                "name": {"value": "branch3"},
-            },
-            *[
-                {
-                    "description": {"value": f"sample description {i}"},
-                    "name": {"value": f"sample-branch-{i}"},
-                }
-                for i in range(10)
-            ],
-        ]
-        all_branches_data_only = [branch.get("node") for branch in all_branches.data["InfrahubBranch"]["edges"]]
-        assert all_branches_data_only.sort(key=lambda x: x["name"]["value"]) == expected_branches.sort(
-            key=lambda x: x["name"]["value"]
-        )
+        # Verify total count (10 created here + main), global branch is excluded by the API.
+        assert all_branches.data["InfrahubBranch"]["count"] == 11
+
+        # Establish server ordering with a baseline query, then validate the paginated slice.
+        baseline_query = """
+            query {
+                InfrahubBranch(offset: 0, limit: 1000) {
+                    edges {
+                        node {
+                            name { value }
+                            description { value }
+                        }
+                    }
+                }
+            }
+        """
+        gql_params = await prepare_graphql_params(db=db, branch=default_branch, service=service)
+        baseline = await graphql(schema=gql_params.schema, source=baseline_query, context_value=gql_params.context)
+        assert baseline.errors is None
+        all_nodes = [e["node"] for e in baseline.data["InfrahubBranch"]["edges"]]
+
+        # Current page: offset=2, limit=5
+        page_nodes = [e["node"] for e in all_branches.data["InfrahubBranch"]["edges"]]
+        assert len(page_nodes) == 5
+        assert page_nodes == all_nodes[2:7]

Committable suggestion skipped: line range outside the PR's diff.
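The slice check in the refactor above rests on a simple invariant: offset/limit pagination over a stable ordering is just a list slice. A standalone sketch over plain dicts (node shapes are illustrative stand-ins, not the API's actual payload):

```python
# Toy stand-ins for branch nodes returned by a baseline (unpaginated) query.
all_nodes = [{"name": {"value": f"sample-branch-{i}"}} for i in range(12)]


def page(nodes: list[dict], offset: int, limit: int) -> list[dict]:
    """Return the offset/limit page of a stably ordered node list."""
    return nodes[offset : offset + limit]


# A page at offset=2, limit=5 must equal the corresponding baseline slice.
assert page(all_nodes, 2, 5) == all_nodes[2:7]
# The final page may be shorter than the limit.
assert len(page(all_nodes, 10, 5)) == 2
```

Asserting `page_nodes == all_nodes[offset : offset + limit]` against a baseline query makes the test deterministic without hard-coding server ordering.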

Comment on lines +232 to +235
all_branches_data_only = [branch.get("node") for branch in all_branches.data["InfrahubBranch"]["edges"]]
assert all_branches_data_only.sort(key=lambda x: x["name"]["value"]) == expected_branches.sort(
key=lambda x: x["name"]["value"]
)

⚠️ Potential issue | 🔴 Critical

Replace list.sort() with sorted; current assertion always compares None to None.

list.sort() returns None. Use sorted(...) or sort in place first and then compare the lists. The refactor above removes this pitfall and validates the actual page slice deterministically.

🤖 Prompt for AI Agents
In backend/tests/unit/graphql/queries/test_branch.py around lines 232 to 235,
the test uses list.sort() inside the assertion which returns None so the
assertion compares None to None; replace list.sort() with sorted(...) (or call
.sort() on each list before the assertion) so you compare the actual sorted
lists deterministically. Ensure both all_branches_data_only and
expected_branches are sorted by the same key (lambda x: x["name"]["value"]) and
then assert equality.
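The pitfall is easy to demonstrate in isolation: `list.sort()` mutates in place and returns `None`, so comparing two `.sort(...)` calls compares `None == None` and always passes.

```python
items = [{"name": {"value": "b"}}, {"name": {"value": "a"}}]

# list.sort() mutates in place and returns None:
assert items.copy().sort(key=lambda x: x["name"]["value"]) is None

# Hence `a.sort(...) == b.sort(...)` is always None == None (True),
# regardless of the list contents. sorted() returns a new list instead:
ordered = sorted(items, key=lambda x: x["name"]["value"])
assert [x["name"]["value"] for x in ordered] == ["a", "b"]
```

Using `sorted(...)` on both sides of the assertion compares real list values rather than two `None`s.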

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

♻️ Duplicate comments (3)
backend/infrahub/graphql/types/branch.py (3)

118-126: Add method docstring.

The get_list_count method is missing a docstring, which violates the coding guidelines. While a past review comment indicated this was addressed in commit 32c72ad, the docstring is still missing in the current code.

As per coding guidelines, apply this diff:

     @classmethod
     async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
+        """Get the total count of branches excluding the global branch.
+        
+        Args:
+            graphql_context: The GraphQL context containing database connection.
+            **kwargs: Additional filter parameters passed to Branch.get_list_count.
+            
+        Returns:
+            The count of branches, excluding GLOBAL_BRANCH_NAME if it exists.
+        """
         async with graphql_context.db.start_session(read_only=True) as db:
             count = await Branch.get_list_count(db=db, **kwargs)
             try:
                 await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db)
                 return count - 1
             except BranchNotFoundError:
                 return count

110-111: Add class docstring.

The InfrahubBranchEdge class is missing a docstring, which violates the coding guidelines. This was previously flagged but the docstring appears to still be missing in the current code.

As per coding guidelines, apply this diff:

 class InfrahubBranchEdge(InfrahubObjectType):
+    """Edge type for paginated branch queries.
+    
+    Wraps an InfrahubBranch node in the edges/nodes pagination pattern.
+    """
     node = Field(InfrahubBranch, required=True)

114-117: Add class docstring.

The InfrahubBranchType class is missing a docstring, which violates the coding guidelines. This was previously flagged but the docstring appears to still be missing in the current code.

As per coding guidelines, apply this diff:

 class InfrahubBranchType(InfrahubObjectType):
+    """Paginated response type for branch queries.
+    
+    Provides pagination metadata (count) and a list of branch edges following
+    the GraphQL connection pattern.
+    """
     count = Field(Int, description="Total number of items")
     edges = Field(NonNull(List(of_type=NonNull(InfrahubBranchEdge))))
🧹 Nitpick comments (2)
schema/schema.graphql (2)

10789-10790: Query field design for paginated branch retrieval.

The InfrahubBranch query accepts ids, limit, name, and offset parameters. Consider documenting the behavior when conflicting filters are provided (e.g., when both ids and offset/limit are specified). Also, consider adding a documented maximum limit or validation constraint to prevent unbounded queries.

Would you like me to check how this is handled in the resolver and suggest updates to the schema documentation if needed?


9406-9412: Value wrapper types introduced.

Four new wrapper types are added: NonRequiredBooleanValueField, NonRequiredStringValueField, RequiredStringValueField, and StatusField. Ensure this pattern is consistent with existing similar types in the schema (e.g., check whether other query results use similar wrapper patterns). If this is a new convention, consider documenting it for future maintainers.

Also applies to: 10938-10940, 11035-11037

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 84fac0a and 491ad04.

📒 Files selected for processing (3)
  • backend/infrahub/graphql/types/branch.py (3 hunks)
  • backend/tests/unit/graphql/queries/test_branch.py (1 hunks)
  • schema/schema.graphql (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/tests/unit/graphql/queries/test_branch.py
🧰 Additional context used
📓 Path-based instructions (2)
backend/**/*

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Run backend tests with pytest or via invoke tasks

Files:

  • backend/infrahub/graphql/types/branch.py
**/*.py

📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)

**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful

Use ruff and mypy to validate and lint Python files

**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking

**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy

Files:

  • backend/infrahub/graphql/types/branch.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: E2E-testing-version-upgrade / From 1.3.0
  • GitHub Check: backend-benchmark
  • GitHub Check: E2E-testing-playwright
  • GitHub Check: backend-tests-unit
  • GitHub Check: backend-docker-integration
🔇 Additional comments (2)
schema/schema.graphql (2)

7258-7266: Standard relay-style pagination structure.

The InfrahubBranchEdge and InfrahubBranchType follow proper GraphQL pagination semantics: each edge contains a single node (not an array), and the collection is on the parent's edges field. The cardinality is correct.


11173-11173: Minor directive formatting change.

Line 11173 adjusts the @expand directive definition. This appears to be a cosmetic change with no semantic impact.

Comment on lines +36 to +38
@staticmethod
async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
return [await obj.to_graphql(fields=fields) for obj in objs if obj.name != GLOBAL_BRANCH_NAME]

🛠️ Refactor suggestion | 🟠 Major

Add method docstring.

The _map_fields_to_graphql static method is missing a docstring, which violates the coding guidelines requiring Google-style docstrings for all Python methods.

As per coding guidelines, apply this diff:

     @staticmethod
     async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
+        """Map Branch objects to GraphQL field dictionaries, excluding the global branch.
+        
+        Args:
+            objs: List of Branch objects to map.
+            fields: Dictionary of field names to include in the output.
+            
+        Returns:
+            List of dictionaries containing GraphQL field data for each branch.
+        """
         return [await obj.to_graphql(fields=fields) for obj in objs if obj.name != GLOBAL_BRANCH_NAME]
🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 36 to 38, the async
static method _map_fields_to_graphql is missing a Google-style docstring; add a
docstring above the method that briefly describes its purpose (map a list of
Branch objects to GraphQL dicts while excluding branches named
GLOBAL_BRANCH_NAME), documents parameters (objs: list[Branch], fields: dict) and
their types, notes that the method is asynchronous and returns list[dict[str,
Any]], and describes the return value and behavior (awaits each obj.to_graphql
and filters out GLOBAL_BRANCH_NAME).

Comment on lines +56 to +70
class RequiredStringValueField(InfrahubObjectType):
value = String(required=True)


class NonRequiredStringValueField(InfrahubObjectType):
value = String(required=False)


class NonRequiredBooleanValueField(InfrahubObjectType):
value = Boolean(required=False)


class StatusField(InfrahubObjectType):
value = InfrahubBranchStatus(required=True)


🛠️ Refactor suggestion | 🟠 Major

Add docstrings and clarify the purpose of value wrapper classes.

All four value wrapper classes (RequiredStringValueField, NonRequiredStringValueField, NonRequiredBooleanValueField, StatusField) are missing docstrings, which violates the coding guidelines. Additionally, the purpose of these wrapper types is not immediately clear—they add a layer of indirection without explanation.

As per coding guidelines, add docstrings to each class explaining their purpose:

 class RequiredStringValueField(InfrahubObjectType):
+    """Wrapper type for required string values in GraphQL responses.
+    
+    Provides a consistent structure for string fields that must always have a value.
+    """
     value = String(required=True)


 class NonRequiredStringValueField(InfrahubObjectType):
+    """Wrapper type for optional string values in GraphQL responses.
+    
+    Provides a consistent structure for string fields that may be null.
+    """
     value = String(required=False)


 class NonRequiredBooleanValueField(InfrahubObjectType):
+    """Wrapper type for optional boolean values in GraphQL responses.
+    
+    Provides a consistent structure for boolean fields that may be null.
+    """
     value = Boolean(required=False)


 class StatusField(InfrahubObjectType):
+    """Wrapper type for branch status enum values in GraphQL responses.
+    
+    Provides a consistent structure for status fields.
+    """
     value = InfrahubBranchStatus(required=True)
🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 56 to 70, the four
small wrapper classes lack docstrings and do not explain their purpose; add
brief docstrings to each class (RequiredStringValueField,
NonRequiredStringValueField, NonRequiredBooleanValueField, StatusField) that
state they are simple GraphQL value-wrapper types used to standardize field
payloads/typing (e.g., wrapping a single value for consistency across
mutations/queries), note whether the inner value is required or optional and the
expected type (String/Boolean/InfrahubBranchStatus), and include one-line
examples or usage intent if helpful — keep them concise and follow project
docstring style.

Comment on lines +72 to +91
class InfrahubBranch(BranchType):
id = String(required=True)
created_at = String(required=False)

name = Field(RequiredStringValueField, required=True)
description = Field(NonRequiredStringValueField, required=False)
origin_branch = Field(NonRequiredStringValueField, required=False)
branched_from = Field(NonRequiredStringValueField, required=False)
status = Field(StatusField, required=True)
sync_with_git = Field(NonRequiredBooleanValueField, required=False)
is_default = Field(NonRequiredBooleanValueField, required=False)
is_isolated = Field(
NonRequiredBooleanValueField, required=False, deprecation_reason="non isolated mode is not supported anymore"
)
has_schema_changes = Field(NonRequiredBooleanValueField, required=False)

class Meta:
description = "InfrahubBranch"
name = "InfrahubBranch"


🛠️ Refactor suggestion | 🟠 Major

Add class docstring and clarify field definition pattern.

The InfrahubBranch class is missing a docstring, which violates the coding guidelines. Additionally, there's an inconsistency in field definitions: id and created_at are defined as plain String fields, while other fields use Field() wrappers with value wrapper types. This inconsistency is confusing.

As per coding guidelines, apply this diff:

 class InfrahubBranch(BranchType):
+    """Extended branch type with wrapped field values for GraphQL pagination queries.
+    
+    Provides an alternative representation where most fields are wrapped in value objects
+    to support consistent field resolution in paginated responses.
+    """
     id = String(required=True)
     created_at = String(required=False)

Consider documenting why id and created_at remain unwrapped while other fields use the value wrapper pattern.

Comment on lines 92 to 107
@staticmethod
async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
field_keys = fields.keys()
result: list[dict[str, Any]] = []
for obj in objs:
if obj.name == GLOBAL_BRANCH_NAME:
continue
data = {}
for field in field_keys:
value = getattr(obj, field, None)
if isinstance(fields.get(field), dict):
data[field] = {"value": value}
else:
data[field] = value
result.append(data)
return result

🛠️ Refactor suggestion | 🟠 Major

Add method docstring and clarify field wrapping logic.

The _map_fields_to_graphql method is missing a docstring, which violates the coding guidelines. The field wrapping logic (lines 102-105) is also complex and not immediately clear—it wraps values in {"value": ...} dictionaries based on whether fields.get(field) is a dict, but the reasoning isn't documented.

As per coding guidelines, apply this diff:

     @staticmethod
     async def _map_fields_to_graphql(objs: list[Branch], fields: dict) -> list[dict[str, Any]]:
+        """Map Branch objects to GraphQL field dictionaries with wrapped field values.
+        
+        Fields that have nested field selections in the GraphQL query (indicated by
+        dict values in the fields parameter) are wrapped in {"value": ...} structures
+        to match the value wrapper types. Other fields are returned as-is.
+        
+        Args:
+            objs: List of Branch objects to map.
+            fields: Dictionary mapping field names to their selection sets.
+            
+        Returns:
+            List of dictionaries with field data, excluding GLOBAL_BRANCH_NAME.
+        """
         field_keys = fields.keys()
         result: list[dict[str, Any]] = []
         for obj in objs:
             if obj.name == GLOBAL_BRANCH_NAME:
                 continue
             data = {}
             for field in field_keys:
                 value = getattr(obj, field, None)
+                # Wrap value if the field has nested selections (value wrapper types)
                 if isinstance(fields.get(field), dict):
                     data[field] = {"value": value}
                 else:
                     data[field] = value
             result.append(data)
         return result
🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 92 to 107, add a
concise docstring to _map_fields_to_graphql describing the method’s purpose,
parameters (objs: list[Branch], fields: dict) and return type (list[dict[str,
Any]]), and clarify the field-wrapping logic: when fields.get(field) is a dict
it indicates GraphQL requested subfields so the code should wrap the raw
attribute value in {"value": <attr>} to match the expected nested GraphQL shape;
update the docstring to explain this behavior and its rationale so the wrapping
on lines 102-105 is clear to readers and maintainers.
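The wrapping rule the prompt describes can be restated as a standalone sketch over plain dicts (the helper name is illustrative; field selections with subfields are assumed to appear as nested dicts, scalar selections as `None`, matching how the resolver's extracted fields are shaped):

```python
from typing import Any


def wrap_fields(obj: dict[str, Any], fields: dict[str, Any]) -> dict[str, Any]:
    """Wrap values whose GraphQL selection has subfields in {"value": ...}.

    Args:
        obj: Raw attribute values keyed by field name.
        fields: Field selections; a dict value means subfields were requested.

    Returns:
        Field data with value-wrapper shapes where subfields were selected.
    """
    data: dict[str, Any] = {}
    for field, selection in fields.items():
        value = obj.get(field)
        # A nested selection targets a value-wrapper type, so wrap the scalar.
        data[field] = {"value": value} if isinstance(selection, dict) else value
    return data
```

For example, a selection of `{"id": None, "name": {"value": None}}` leaves `id` flat but wraps `name`, matching the `RequiredStringValueField` shape.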

Comment on lines +7243 to +7256
"""InfrahubBranch"""
type InfrahubBranch {
branched_from: NonRequiredStringValueField
created_at: String
description: NonRequiredStringValueField
has_schema_changes: NonRequiredBooleanValueField
id: String!
is_default: NonRequiredBooleanValueField
is_isolated: NonRequiredBooleanValueField @deprecated(reason: "non isolated mode is not supported anymore")
name: RequiredStringValueField!
origin_branch: NonRequiredStringValueField
status: StatusField!
sync_with_git: NonRequiredBooleanValueField
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Inconsistent handling of optional fields in InfrahubBranch type.

The created_at field (line 7246) is defined as a direct String type, while other optional fields (e.g., branched_from, description, has_schema_changes, etc.) use wrapper types like NonRequiredStringValueField or NonRequiredBooleanValueField. This inconsistency should be addressed—either wrap created_at as NonRequiredStringValueField or align the pattern across all optional fields for consistency.

Please clarify the intended pattern and ensure all optional fields follow the same convention.



---

🏁 Script executed:

```shell
#!/bin/bash
# Get full context around the InfrahubBranch type definition (lines 7240-7270)
sed -n '7240,7270p' schema/schema.graphql | cat -n
```

Length of output: 1048


🏁 Script executed:

```shell
#!/bin/bash
# Search for all created_at field definitions to see the pattern
rg -n 'created_at:' schema/schema.graphql | head -20
```

Length of output: 726


🏁 Script executed:

```shell
#!/bin/bash
# Search for other optional String fields to identify the wrapping pattern
rg -nP '\w+:\s+NonRequired(String|Boolean)ValueField' schema/schema.graphql | head -20
```

Length of output: 417


🏁 Script executed:

```shell
#!/bin/bash
# Search for fields defined as plain String (not wrapped) to see if there's a pattern
rg -nP '^\s+\w+:\s+String(\s*$|@|#)' schema/schema.graphql | head -20
```

Length of output: 506


🏁 Script executed:

```shell
#!/bin/bash
# Look for the wrapper type definitions to understand their purpose
rg -nA 2 'type NonRequiredStringValueField' schema/schema.graphql
```

Length of output: 67


Wrap created_at as NonRequiredStringValueField to maintain consistency within the InfrahubBranch type.

Within this type, all other optional String and Boolean fields use wrapper types (NonRequiredStringValueField, NonRequiredBooleanValueField). The created_at field (line 7246) breaks this pattern by using a plain String type. Update it to created_at: NonRequiredStringValueField to align with the established pattern.

🤖 Prompt for AI Agents
In schema/schema.graphql around lines 7243 to 7256, the InfrahubBranch type uses
a plain String for created_at while all other optional string/boolean fields use
wrapper types; change created_at to use NonRequiredStringValueField to match the
pattern and maintain consistency across optional fields in this type.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
schema/schema.graphql (1)

7243-7256: created_at field should use NonRequiredStringValueField for consistency.

This issue was flagged in a previous review but remains unresolved. The created_at field (line 7246) uses a plain String type, while all other optional fields in the InfrahubBranch type use wrapper types (NonRequiredStringValueField, NonRequiredBooleanValueField). Update created_at to match the established pattern.

Apply this diff:

 type InfrahubBranch {
   branched_from: NonRequiredStringValueField
-  created_at: String
+  created_at: NonRequiredStringValueField
   description: NonRequiredStringValueField
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 491ad04 and 86dbefa.

📒 Files selected for processing (2)
  • .pre-commit-config.yaml (1 hunks)
  • schema/schema.graphql (5 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
.pre-commit-config.{yaml,yml}

📄 CodeRabbit inference engine (.github/instructions/tooling.instructions.md)

Don't use Git precommit

Files:

  • .pre-commit-config.yaml
🔇 Additional comments (2)
.pre-commit-config.yaml (1)

13-13: Clarify the purpose of excluding schema/schema.graphql from end-of-file-fixer.

The original concern about violating coding guidelines is incorrect—precommit is already established in this codebase. Adding an exclude pattern to an existing hook is not adopting precommit; it's a configuration adjustment to existing infrastructure.

However, the substantive question remains: Why does this file need exclusion? The schema file appears to be version-controlled and manually maintained (not auto-generated). Before approving this pattern, confirm:

  • Is schema/schema.graphql auto-generated (and if so, by what process)?
  • Does it have special formatting requirements that conflict with end-of-file-fixer?
  • Is there a reason it cannot conform to standard EOF validation?

If the exclusion is necessary, document the rationale in a comment. If it's a temporary workaround, consider the alternatives mentioned in the original review.

schema/schema.graphql (1)

7258-7266: Pagination structure is now correctly defined.

The InfrahubBranchEdge.node field is correctly singular (node: InfrahubBranch!), and the InfrahubBranchType properly exposes count and edges for standard offset/limit pagination semantics. This fixes the earlier cardinality issue from the previous review.

Comment on lines 10789 to 10790
"""Retrieve paginated information about active branches."""
InfrahubBranch(ids: [ID!], limit: Int, name: String, offset: Int): InfrahubBranchType!

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Query field descriptor and resolver alignment.

The new InfrahubBranch query accepts ids, limit, name, and offset parameters for flexible pagination and filtering. Verify that the backend resolver (infrahub_branch_resolver per the summary) correctly handles these parameters, particularly the ids filter which allows querying specific branches by identifier.

Run the following script to verify resolver implementation:


🏁 Script executed:

#!/bin/bash
# Search for the infrahub_branch_resolver implementation
rg -n 'infrahub_branch_resolver' --type=py -A 10

Length of output: 1153


I need to examine the complete resolver implementation and verify parameter handling.


🏁 Script executed:

#!/bin/bash
# Get the full infrahub_branch_resolver implementation
cat backend/infrahub/graphql/queries/branch.py | sed -n '33,57p'

Length of output: 885


🏁 Script executed:

#!/bin/bash
# Search for the GraphQL field descriptor definition for InfrahubBranch
rg -n 'InfrahubBranch' backend/infrahub/graphql/queries/branch.py -A 5 -B 5

Length of output: 1162


🏁 Script executed:

#!/bin/bash
# Search for InfrahubBranch.get_list to see if it handles ids/name
rg -n 'def get_list' backend/infrahub --type=py -A 15 | grep -A 15 'InfrahubBranch\|class InfrahubBranch'

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Search for the GraphQL field definition in schema generation or resolvers
rg -n 'ids.*limit.*name.*offset' backend/infrahub/graphql --type=py -C 3

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Look at the complete branch.py file structure
cat backend/infrahub/graphql/queries/branch.py

Length of output: 1782


🏁 Script executed:

#!/bin/bash
# Search for where InfrahubBranch query field is defined with parameters
rg -n 'InfrahubBranch.*ids.*limit.*name.*offset' backend/infrahub --type=py

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Search for graphql schema generation that might define these parameters
rg -n 'ids.*limit.*name' backend/infrahub/graphql --type=py -B 5 -A 5

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Find where the GraphQL schema fields are defined or generated
fd -e py 'schema' backend/infrahub/graphql | head -20

Length of output: 119


🏁 Script executed:

#!/bin/bash
# Look for the actual schema file or mutation/query definitions
rg -n 'class.*Query\|def.*query' backend/infrahub/graphql --type=py -A 10 | head -60

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Check if there's a schema generation file or mutation definitions
find backend/infrahub/graphql -name '*.py' -type f | xargs grep -l 'InfrahubBranch' | head -10

Length of output: 246


🏁 Script executed:

#!/bin/bash
# Look at the schema.graphql file directly around the InfrahubBranch definition
rg -n 'InfrahubBranch' schema/schema.graphql -B 5 -A 5

Length of output: 1886


🏁 Script executed:

#!/bin/bash
# Check if the resolver accepts **kwargs to handle additional parameters
rg -n 'infrahub_branch_resolver' backend/infrahub/graphql/queries/branch.py -A 20

Length of output: 893


🏁 Script executed:

#!/bin/bash
# Search for the specific line where the resolver is registered
rg -n 'resolver=infrahub_branch_resolver' backend/infrahub/graphql/queries/branch.py -B 10 -A 5

Length of output: 337


🏁 Script executed:

#!/bin/bash
# Find the field descriptor definition to see all parameters
rg -n 'class InfrahubBranch\|Field.*InfrahubBranch\|query_field' backend/infrahub/graphql --type=py -A 20 | head -80

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Look for name parameter handling in the resolver or nearby
rg -n 'name' backend/infrahub/graphql/queries/branch.py -C 3

Length of output: 476


🏁 Script executed:

#!/bin/bash
# Check if there are multiple resolver definitions or overloads
rg -n 'def.*branch.*resolver\|@.*resolver' backend/infrahub/graphql/queries/branch.py

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Search for the field descriptor in the schema generation or graphene code
rg -n 'GraphQLField.*InfrahubBranch\|InfrahubBranchType' backend/infrahub --type=py -B 2 -A 8 | head -100

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Check if parameters are dynamically extracted from the GraphQL query
rg -n 'extract_graphql_fields\|extract.*param' backend/infrahub/graphql --type=py -C 2 | head -60

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Look at the complete resolver with more context
sed -n '30,70p' backend/infrahub/graphql/queries/branch.py

Length of output: 949


🏁 Script executed:

#!/bin/bash
# Find schema field definitions - look for where the query fields are built
rg -n 'define_query_field\|query_field_\|InfrahubBranch.*field' backend/infrahub --type=py -C 3 | head -100

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Check if ids parameter handling is in a wrapper or middleware
rg -n 'ids' backend/infrahub/graphql/queries/branch.py -B 5 -A 5

Length of output: 655


🏁 Script executed:

#!/bin/bash
# Look for where the parameters are defined in the GraphQL schema
rg -n '^\s*ids:.*ID' schema/schema.graphql -B 5 -A 2

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Check around line 10789-10790 in the schema file
sed -n '10785,10795p' schema/schema.graphql

Length of output: 946


🏁 Script executed:

#!/bin/bash
# Search more broadly for the InfrahubBranch field and its parameters
sed -n '10780,10800p' schema/schema.graphql

Length of output: 5632


🏁 Script executed:

#!/bin/bash
# Look at the InfrahubBranchType definition
rg -n 'type InfrahubBranchType' schema/schema.graphql -A 20

Length of output: 598


🏁 Script executed:

#!/bin/bash
# Check if there's a resolver parameter mismatch documented
rg -n 'ids.*name' backend/infrahub/graphql/queries/branch.py

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Look at what parameters InfrahubBranch.get_list accepts
fd -name '*.py' -path '*/infrahub/*' | xargs grep -n 'class InfrahubBranch' | head -5

Length of output: 230


🏁 Script executed:

#!/bin/bash
# Find and examine the InfrahubBranch class
rg -n 'class InfrahubBranch' backend/infrahub --type=py -A 30 | head -50

Length of output: 4177


🏁 Script executed:

#!/bin/bash
# Look for get_list method in InfrahubBranch
rg -n 'def get_list' backend/infrahub --type=py -B 2 -A 20 | grep -B 2 -A 20 'InfrahubBranch\|class.*Branch'

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Check the actual GraphQL field definition more carefully
python3 << 'EOF'
import re

# Read schema.graphql
with open('schema/schema.graphql', 'r') as f:
    content = f.read()

# Find the line with InfrahubBranch query
lines = content.split('\n')
for i, line in enumerate(lines):
    if 'InfrahubBranch' in line and 'Retrieve paginated' in lines[i-1]:
        # Print context around this line
        start = max(0, i-5)
        end = min(len(lines), i+10)
        for j in range(start, end):
            print(f"{j+1}: {lines[j]}")
        break
EOF

Length of output: 1216


Resolver missing ids and name parameter handling.

The InfrahubBranch query field declares ids and name parameters, but the infrahub_branch_resolver function (backend/infrahub/graphql/queries/branch.py:33-48) only accepts limit and offset. These parameters are not forwarded to InfrahubBranch.get_list().

Add the missing parameters to the resolver signature and pass them through:

async def infrahub_branch_resolver(
    root: dict,
    info: GraphQLResolveInfo,
    ids: list[str] | None = None,
    name: str | None = None,
    limit: int | None = None,
    offset: int | None = None,
) -> dict[str, Any]:
    # ... update get_list call to include ids and name

Alternatively, adopt the **kwargs pattern used in branch_resolver (line 16) to handle all parameters.

🤖 Prompt for AI Agents
In schema/schema.graphql around lines 10789-10790, the InfrahubBranch query
declares ids and name args but the resolver
backend/infrahub/graphql/queries/branch.py (lines ~33-48) only accepts limit and
offset and does not forward ids/name to InfrahubBranch.get_list(); update the
resolver signature to accept ids: list[str] | None and name: str | None (or
switch to **kwargs like branch_resolver at line 16) and pass these parameters
through to InfrahubBranch.get_list(ids=ids, name=name, limit=limit,
offset=offset) so the query arguments are respected.

@ogenstad

@solababs, I think we're close to the first iteration of this query now. I see that there's a slight issue with how the branch IDs are presented if you start an environment and compare the responses to these two queries:

query MyQuery {
  InfrahubBranch {
    count
    edges {
      node {
        id
        name {
          value
        }
      }
    }
  }
  Branch {
    id
    name
  }
}

I think you're returning the internal Neo4j ID of these nodes which is typically different from the ID we present to the external world.

InfrahubBranchQueryList = Field(
InfrahubBranchType,
ids=List(of_type=NonNull(ID)),
name=String(),

These parameters don't seem to work right now.

query MyQuery {
  InfrahubBranch(name: "branch2") {
    count
    edges {
      node {
        id
        name {
          value
        }
      }
    }
  }
}

I.e. infrahub_branch_resolver() doesn't accept the ids or name parameters, so the above query would fail. I think perhaps we can just remove both of them in this first iteration and then consider which filter options we want and need. We will certainly want to add some of them, but it could get quite verbose if we start adding all of the variations that we dynamically generate for other node types, so I think we can have that discussion after this PR is merged.

I.e. if we were to follow the current approach we'd have:

  • name__value
  • name__values
  • description__value
  • description__values
  • and so on for all attributes.

@solababs solababs requested a review from ogenstad October 30, 2025 16:17

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (6)
backend/infrahub/graphql/types/branch.py (6)

36-38: Add method docstring.

This method still lacks a docstring as flagged in previous reviews. Per coding guidelines, all Python methods require Google-style docstrings.


56-70: Add docstrings to value wrapper classes.

All four value wrapper classes remain undocumented despite previous review feedback. The purpose and usage pattern of these wrapper types is unclear without docstrings.


72-91: Add class docstring.

The InfrahubBranch class lacks a docstring despite previous review feedback. Per coding guidelines, document the class purpose and explain why certain fields remain unwrapped while others use value wrapper types.


92-110: Add method docstring.

This method lacks a docstring as flagged in previous reviews. Document the mapping logic, especially the value-wrapping behavior on lines 105-108 and the ID-to-UUID mapping on line 102.


113-114: Add class docstring.

The InfrahubBranchEdge class lacks a docstring despite previous review feedback. Per coding guidelines, document that this is an edge wrapper for the GraphQL connection pattern.


117-120: Add class docstring.

The InfrahubBranchType class lacks a docstring despite previous review feedback. Per coding guidelines, document this as a paginated response type with metadata and edges.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86dbefa and f0aa561.

📒 Files selected for processing (1)
  • backend/infrahub/graphql/types/branch.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
backend/**/*

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Run backend tests with pytest or via invoke tasks

Files:

  • backend/infrahub/graphql/types/branch.py
**/*.py

📄 CodeRabbit inference engine (.github/instructions/python-docstring.instructions.md)

**/*.py: Use triple double quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include a brief one-line description at the top of each docstring
Add a detailed description section when additional context is needed
Document function/method parameters under an Args/Parameters section without typing information
Include a Returns section describing the return value
Include a Raises section listing possible exceptions
Provide an Examples section demonstrating usage when helpful

Use ruff and mypy to validate and lint Python files

**/*.py: Use type hints for all Python function parameters and return values
Prefer asynchronous code in Python when feasible
Define asynchronous Python functions with async def
Use await for asynchronous calls in Python
Use Pydantic models instead of standard dataclasses
Use ruff and mypy for linting and type checking

**/*.py: Use type hints for all Python function parameters and return values
Use async/await whenever possible in Python code
Define asynchronous functions with async def
Await asynchronous calls with await
Use Pydantic models instead of standard dataclasses for data modeling
Use triple quotes (""") for all Python docstrings
Write docstrings in Google-style format
Include docstring sections when applicable: one-line summary, optional details, Args (without types), Returns, Raises, Examples
Validate and lint Python with ruff and mypy

Files:

  • backend/infrahub/graphql/types/branch.py
🧠 Learnings (3)
📓 Common learnings
Learnt from: ogenstad
PR: opsmill/infrahub#7447
File: schema/schema.graphql:324-329
Timestamp: 2025-10-30T12:43:45.629Z
Learning: In the GraphQL schema (schema/schema.graphql), the BranchStatus enum includes NEED_UPGRADE_REBASE. Branches with NEED_UPGRADE_REBASE status must support conflict resolution capabilities.
📚 Learning: 2025-10-30T12:43:45.629Z
Learnt from: ogenstad
PR: opsmill/infrahub#7447
File: schema/schema.graphql:324-329
Timestamp: 2025-10-30T12:43:45.629Z
Learning: In the GraphQL schema (schema/schema.graphql), the BranchStatus enum includes NEED_UPGRADE_REBASE. Branches with NEED_UPGRADE_REBASE status must support conflict resolution capabilities.

Applied to files:

  • backend/infrahub/graphql/types/branch.py
📚 Learning: 2025-07-22T08:13:56.198Z
Learnt from: CR
PR: opsmill/infrahub#0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-07-22T08:13:56.198Z
Learning: Documentation generated for Infrahub must reflect its novel approach, providing clarity around new concepts and demonstrating integration with familiar patterns from existing tools like Git, infrastructure-as-code, and CI/CD pipelines

Applied to files:

  • backend/infrahub/graphql/types/branch.py
🧬 Code graph analysis (1)
backend/infrahub/graphql/types/branch.py (3)
backend/infrahub/core/branch/models.py (3)
  • Branch (31-550)
  • get_list_count (173-184)
  • get_by_name (137-154)
backend/infrahub/exceptions.py (1)
  • BranchNotFoundError (194-200)
backend/infrahub/core/node/standard.py (2)
  • to_graphql (71-86)
  • get (131-138)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: backend-tests-integration
  • GitHub Check: validate-generated-documentation
  • GitHub Check: backend-tests-functional
  • GitHub Check: backend-tests-unit
  • GitHub Check: backend-docker-integration
🔇 Additional comments (1)
backend/infrahub/graphql/types/branch.py (1)

101-103: The review comment concern is unfounded; the code correctly uses the external UUID.

The Branch model inherits from StandardNode, which populates the uuid attribute from node["uuid"] (node properties), not from the Neo4j internal node ID. In backend/infrahub/core/node/standard.py lines 113-114, the initialization is:

self.id = node.element_id
self.uuid = UUID(node["uuid"])

Line 102 of the GraphQL file correctly maps "id" to obj.uuid, which is the external UUID. The PR concern about returning internal Neo4j node IDs does not apply here—obj.uuid is already the external UUID from node properties.
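A toy reconstruction of that distinction, with `FakeNode` as a hypothetical stand-in for a Neo4j driver node; the attribute and subscript behavior mirrors the snippet quoted above:

```python
from uuid import UUID

# Stand-in for a neo4j.graph.Node: element_id is the driver-internal
# identifier, while subscript access reads stored node properties.
class FakeNode:
    def __init__(self, element_id: str, properties: dict):
        self.element_id = element_id
        self._properties = properties

    def __getitem__(self, key: str):
        return self._properties[key]

node = FakeNode("4:abc:17", {"uuid": "1f3a2b4c-0000-0000-0000-000000000001"})

internal_id = node.element_id        # driver-internal, should not be exposed externally
external_uuid = UUID(node["uuid"])   # the external UUID stored as a node property
```

Serializing `external_uuid` (rather than `internal_id`) is what keeps the `id` field stable across database restarts and consistent with the existing `Branch` query.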

Comment on lines +121 to +129
@classmethod
async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
async with graphql_context.db.start_session(read_only=True) as db:
count = await Branch.get_list_count(db=db, **kwargs)
try:
await Branch.get_by_name(name=GLOBAL_BRANCH_NAME, db=db)
return count - 1
except BranchNotFoundError:
return count

🛠️ Refactor suggestion | 🟠 Major

Add method docstring (regression).

This method's docstring was marked as addressed in a previous commit (32c72ad) but is missing in the current code. Per coding guidelines, add a Google-style docstring documenting the purpose, parameters, and return value.

Apply this diff:

     @classmethod
     async def get_list_count(cls, graphql_context: GraphqlContext, **kwargs: Any) -> int:
+        """Get the total count of branches excluding the global branch.
+        
+        Args:
+            graphql_context: The GraphQL context containing database connection.
+            **kwargs: Additional filter parameters passed to Branch.get_list_count.
+            
+        Returns:
+            The count of branches, excluding GLOBAL_BRANCH_NAME if it exists.
+        """
         async with graphql_context.db.start_session(read_only=True) as db:
🤖 Prompt for AI Agents
In backend/infrahub/graphql/types/branch.py around lines 121 to 129, the async
classmethod get_list_count is missing its Google-style docstring; add a
docstring directly under the method definition that briefly describes what the
method does (returns the number of Branch records accessible via the GraphQL
context, excluding the GLOBAL_BRANCH_NAME when present), documents parameters
(graphql_context: GraphqlContext, **kwargs: Any) and their roles, and specifies
the return type (int) and behavior (subtracts one if GLOBAL_BRANCH_NAME exists,
otherwise returns full count); keep wording concise and follow Google docstring
format for purpose, Args, and Returns.


Labels

group/backend Issue related to the backend (API Server, Git Agent)
