ROB-2290 workload issues testing #1074
base: master
Conversation
# Conflicts:
#   holmes/core/supabase_dal.py
#   holmes/plugins/toolsets/robusta/robusta.py
Walkthrough
Refactors Robusta toolset fetch APIs from change-history to issue-focused retrieval.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant User as User/Test
    participant Toolset as RobustaToolset
    participant Tool as FetchResourceIssuesMetadata
    participant DAL as MockSupabaseDal / SupabaseDal
    Note over User,Toolset: resource-scoped issue fetch request (namespace, workload, time window)
    User->>Toolset: fetch_resource_issues(params)
    Toolset->>Tool: dispatch to _fetch_issues(params)
    Tool->>DAL: get_issues_metadata(start_datetime, end_datetime, limit, workload, ns, cluster, finding_type)
    alt DAL has fixture data
        DAL-->>Tool: filtered issue metadata list
    else no fixture / empty
        DAL-->>Tool: None or []
    end
    Tool-->>Toolset: metadata + parameterized one-liner
    Toolset-->>User: aggregated response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
🧹 Nitpick comments (6)
holmes/plugins/toolsets/robusta/robusta_instructions.jinja2 (1)
6-7: Tighten language and capitalization for new guidance bullets. Minor clarity/grammar tweaks; no behavior change.
```diff
-* When investigating a resource (pod, deployment, or job), if no relevant information is available from the live cluster at the time of investigation, call the fetch_resource_issues_metadata function to retrieve its historical alert data.
-* You can use fetch_resource_issues_metadata to get issues context for a specific kubernetes resource. Start with a 4 hours window and try to expand to 24 hours windows if nothing comes up.
+* When investigating a resource (Pod, Deployment, or Job), if no relevant information is available from the live cluster, call fetch_resource_issues_metadata to retrieve its historical alert data.
+* Use fetch_resource_issues_metadata to get issue context for a specific Kubernetes resource. Start with a 4‑hour window and expand to a 24‑hour window if nothing surfaces.
```

tests/llm/utils/mock_toolset.py (1)
649-655: Pass DAL by keyword for clarity; behavior unchanged. Small readability tweak to avoid confusion if the function signature changes.
```diff
- builtin_toolsets = load_builtin_toolsets(mock_dal)
+ builtin_toolsets = load_builtin_toolsets(dal=mock_dal)
```

tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/issues_metadata.json (1)
1-97: Normalize creation_date to RFC3339 with timezone to avoid parsing errors. Mock DAL currently parses creation_date as tz‑aware; naive timestamps can break comparisons. Recommend adding Z to creation_date values.
- "creation_date": "2025-10-19 10:59:27.643325", + "creation_date": "2025-10-19 10:59:27.643325Z", @@ - "creation_date": "2025-10-19 10:59:56.536309", + "creation_date": "2025-10-19 10:59:56.536309Z", @@ - "creation_date": "2025-10-19 10:59:56.536309", + "creation_date": "2025-10-19 10:59:56.536309Z",Optional: align service_key/labels with subject_name for consistency (not functionally required).
tests/llm/utils/mock_dal.py (1)
136-147: Preserve JSON shape when generating mocks. When data is an empty list, json.dumps(data or {}) writes {}. Prefer writing [] for lists and {} for dicts.
```diff
- with open(file_path, "w") as f:
-     f.write(json.dumps(data or {}, indent=2))
+ with open(file_path, "w") as f:
+     payload = data if data is not None else []
+     f.write(json.dumps(payload, indent=2))
```

Apply similarly in get_issue_data/resource_instructions writers.
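A standalone illustration of the shape problem being flagged (not the mock writer itself):

```python
import json

data: list = []  # e.g. an empty issues list returned by the DAL

# `data or {}` treats every falsy value as "no data", so an empty list serializes as an object
print(json.dumps(data or {}))  # {}

# Distinguishing None from an empty list preserves the original JSON shape
payload = data if data is not None else []
print(json.dumps(payload))     # []
```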
holmes/core/supabase_dal.py (1)
246-255: Update debug string to reflect API rename.
Verification confirmed no stale references to the old get_configuration_changes_metadata API—all call sites properly use get_issues_metadata. However, the debug string at line 295 still says "Change history metadata" and should be updated to "Issues metadata" for consistency with the renamed method.

holmes/plugins/toolsets/robusta/robusta.py (1)
308-309: Consider consistent capitalization for uniformity. The one-liner starts with lowercase "fetch" while other tools use "Fetch" (line 83), "Check" (line 147), or "Search" (lines 251, 289). Consider using "Fetch" for consistency.
Apply this diff:
```diff
 def get_parameterized_one_liner(self, params: Dict) -> str:
-    return f"Robusta: fetch resource issues metadata {params}"
+    return f"Robusta: Fetch Resource Issues Metadata {params}"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
holmes/core/supabase_dal.py (4 hunks)
holmes/plugins/toolsets/robusta/robusta.py (9 hunks)
holmes/plugins/toolsets/robusta/robusta_instructions.jinja2 (1 hunks)
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/issues_metadata.json (1 hunks)
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/test_case.yaml (1 hunks)
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/toolsets.yaml (1 hunks)
tests/llm/fixtures/test_ask_holmes/93_calling_datadog/toolsets.yaml (0 hunks)
tests/llm/utils/mock_dal.py (4 hunks)
tests/llm/utils/mock_toolset.py (2 hunks)
💤 Files with no reviewable changes (1)
- tests/llm/fixtures/test_ask_holmes/93_calling_datadog/toolsets.yaml
🧰 Additional context used
📓 Path-based instructions (8)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Use Ruff for formatting and linting (configured in pyproject.toml) for all Python code
Type hints are required; code should pass mypy (configured in pyproject.toml)
ALWAYS place Python imports at the top of the file, not inside functions or methods
Files:
holmes/core/supabase_dal.py
tests/llm/utils/mock_toolset.py
tests/llm/utils/mock_dal.py
holmes/plugins/toolsets/robusta/robusta.py
holmes/plugins/toolsets/**/*
📄 CodeRabbit inference engine (CLAUDE.md)
Toolsets must be located as holmes/plugins/toolsets/{name}.yaml or holmes/plugins/toolsets/{name}/
Files:
holmes/plugins/toolsets/robusta/robusta_instructions.jinja2
holmes/plugins/toolsets/robusta/robusta.py
tests/**
📄 CodeRabbit inference engine (CLAUDE.md)
Test files should mirror the source structure under tests/
Files:
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/issues_metadata.json
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/test_case.yaml
tests/llm/utils/mock_toolset.py
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/toolsets.yaml
tests/llm/utils/mock_dal.py
tests/llm/**/*.yaml
📄 CodeRabbit inference engine (CLAUDE.md)
tests/llm/**/*.yaml: In eval manifests, ALWAYS use Kubernetes Secrets for scripts rather than inline manifests or ConfigMaps to prevent script exposure via kubectl describe
Eval resources must use neutral names, each test must use a dedicated namespace 'app-', and all pod names must be unique across tests
Files:
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/test_case.yaml
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/toolsets.yaml
tests/**/test_case.yaml
📄 CodeRabbit inference engine (CLAUDE.md)
Do NOT put toolset configuration directly in test_case.yaml; keep toolset config in a separate toolsets.yaml
Files:
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/test_case.yaml
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Only use pytest markers that are defined in pyproject.toml; never introduce undefined markers/tags
Files:
tests/llm/utils/mock_toolset.py
tests/llm/utils/mock_dal.py
tests/**/toolsets.yaml
📄 CodeRabbit inference engine (CLAUDE.md)
In eval toolsets.yaml, ALL toolset-specific configuration must be under a 'config' field; do not place toolset config at the top level
Files:
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/toolsets.yaml
{holmes/plugins/toolsets/**/*.{yaml,yml},tests/**/toolsets.yaml}
📄 CodeRabbit inference engine (CLAUDE.md)
Only the following top-level fields are valid in toolset YAML: enabled, name, description, additional_instructions, prerequisites, tools, docs_url, icon_url, installation_instructions, config, url (MCP toolsets only)
Files:
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/toolsets.yaml
🧠 Learnings (1)
📚 Learning: 2025-08-05T00:42:23.792Z
Learnt from: vishiy
PR: robusta-dev/holmesgpt#782
File: config.example.yaml:31-49
Timestamp: 2025-08-05T00:42:23.792Z
Learning: In robusta-dev/holmesgpt config.example.yaml, the azuremonitorlogs toolset configuration shows "enabled: true" as an example of how to enable the toolset, not as a default setting. The toolset is disabled by default and requires explicit enablement in user configurations.
Applied to files:
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/toolsets.yaml
🧬 Code graph analysis (4)
holmes/core/supabase_dal.py (1)
tests/llm/utils/mock_dal.py (1)
get_issues_metadata(83-149)
tests/llm/utils/mock_toolset.py (2)
tests/llm/utils/mock_dal.py (1)
load_mock_dal(155-181)
holmes/plugins/toolsets/__init__.py (1)
load_builtin_toolsets(121-145)
tests/llm/utils/mock_dal.py (2)
holmes/core/supabase_dal.py (2)
FindingType(57-59)
get_issues_metadata(246-303)
tests/llm/utils/test_case_utils.py (1)
read_file(50-52)
holmes/plugins/toolsets/robusta/robusta.py (2)
holmes/core/supabase_dal.py (2)
FindingType(57-59)
get_issues_metadata(246-303)
tests/llm/utils/mock_dal.py (1)
get_issues_metadata(83-149)
🪛 Ruff (0.14.1)
tests/llm/utils/mock_dal.py
26-26: Do not catch blind exception: Exception
(BLE001)
holmes/plugins/toolsets/robusta/robusta.py
226-226: Unused method argument: context
(ARG002)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: build (3.12)
- GitHub Check: build (3.11)
- GitHub Check: build (3.10)
- GitHub Check: llm_evals
- GitHub Check: build
🔇 Additional comments (10)
tests/llm/utils/mock_toolset.py (1)
13-15: Imports placement LGTM. Top-level imports for load_mock_dal and Path follow our guidelines.
holmes/core/supabase_dal.py (1)
57-60: Enum introduction LGTM. FindingType with string values is appropriate for PostgREST filtering.
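For readers outside the codebase, a rough sketch of what such a string-valued enum looks like; the member names mirror the FindingType values referenced in this review, but the exact string values and the PostgREST column name are assumptions:

```python
from enum import Enum


class FindingType(str, Enum):
    # The str mixin lets members be passed straight into a PostgREST equality filter
    # (e.g. query.eq("finding_type", FindingType.ISSUE)) and serialized as plain strings.
    CONFIGURATION_CHANGE = "configuration_change"
    ISSUE = "issue"


print(FindingType.ISSUE == "issue")  # True: members compare like plain strings
print(FindingType.ISSUE.value)       # "issue"
```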
tests/llm/utils/mock_dal.py (1)
161-165: Mock loader additions LGTM. Reading issues_metadata.json and plumbing it through to MockSupabaseDal is correct.
Also applies to: 175-181
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/toolsets.yaml (1)
1-7: Toolset structure is valid; kubernetes/logs is always available.
The top-level structure and use of enabled: true are correct for eval toolsets.yaml per coding guidelines. The concern about USE_LEGACY_KUBERNETES_LOGS gating availability of kubernetes/logs is incorrect. When USE_LEGACY_KUBERNETES_LOGS is False (default), the toolset is loaded from the KubernetesLogsToolset() class; when True, it's loaded from kubernetes_logs.yaml. Either way, kubernetes/logs remains available as a builtin toolset.
Likely an incorrect or invalid review comment.
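A rough sketch of the gating pattern described above, not the project's actual loader; apart from the USE_LEGACY_KUBERNETES_LOGS flag, the KubernetesLogsToolset class name, and the kubernetes_logs.yaml file, every name here is a stand-in:

```python
import os


class KubernetesLogsToolset:
    """Stand-in for the class-based toolset; the real implementation lives in the project."""
    name = "kubernetes/logs"


def load_toolset_from_yaml(path: str) -> KubernetesLogsToolset:
    """Hypothetical helper representing YAML-based toolset loading."""
    return KubernetesLogsToolset()


def load_kubernetes_logs_toolset() -> KubernetesLogsToolset:
    # Either branch yields a "kubernetes/logs" toolset, which is why the eval's
    # toolsets.yaml can rely on it being available regardless of the flag.
    if os.environ.get("USE_LEGACY_KUBERNETES_LOGS", "false").lower() == "true":
        return load_toolset_from_yaml("kubernetes_logs.yaml")
    return KubernetesLogsToolset()
```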
holmes/plugins/toolsets/robusta/robusta.py (6)
7-7: LGTM! The FindingType import is correctly added and properly used throughout the file to support the new parameterized finding type functionality.
205-223: Well-refactored generalization! The rename from _fetch_change_history to _fetch_issues with the added finding_type parameter effectively generalizes the method to support multiple finding types while maintaining backward compatibility through the default value.
238-238: Good improvement for subclass reusability. Using self.name instead of a hardcoded string makes the error message adapt automatically to subclasses, improving maintainability.
254-264: LGTM! The updated description using "specific kubernetes resource" is more accurate than the previous "specific workload" terminology.
285-286: Implementation is correct. The override correctly hardcodes cluster="external" for external configuration changes. The # type: ignore suppresses the type checker's complaint about signature narrowing, which is intentional here for this internal method.
323-328: LGTM! The new FetchResourceIssuesMetadata tool is correctly registered in the toolset, following the same pattern as existing tools.
tests/llm/fixtures/test_ask_holmes/18_oom_kill_from_issues_history/test_case.yaml
Actionable comments posted: 1
🧹 Nitpick comments (1)
holmes/plugins/toolsets/robusta/robusta.py (1)
285-286: Method override uses narrower signature. The # type: ignore comment suppresses a mypy warning about the method signature being narrower than the parent's. While this works at runtime, consider whether the parent method signature could be made more flexible to avoid needing type ignores.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
holmes/plugins/toolsets/robusta/robusta.py (9 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Use Ruff for formatting and linting (configured in pyproject.toml) for all Python code
Type hints are required; code should pass mypy (configured in pyproject.toml)
ALWAYS place Python imports at the top of the file, not inside functions or methods
Files:
holmes/plugins/toolsets/robusta/robusta.py
holmes/plugins/toolsets/**/*
📄 CodeRabbit inference engine (CLAUDE.md)
Toolsets must be located as holmes/plugins/toolsets/{name}.yaml or holmes/plugins/toolsets/{name}/
Files:
holmes/plugins/toolsets/robusta/robusta.py
🧬 Code graph analysis (1)
holmes/plugins/toolsets/robusta/robusta.py (3)
holmes/core/supabase_dal.py (3)
SupabaseDal(70-603)
FindingType(57-59)
get_issues_metadata(246-303)
tests/llm/utils/mock_dal.py (1)
get_issues_metadata(83-149)
holmes/core/tools.py (3)
ToolInvokeContext(164-170)
StructuredToolResult(79-103)
ToolParameter(155-161)
🪛 Ruff (0.14.1)
holmes/plugins/toolsets/robusta/robusta.py
226-226: Unused method argument: context
(ARG002)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Pre-commit checks
- GitHub Check: llm_evals
- GitHub Check: build
🔇 Additional comments (5)
holmes/plugins/toolsets/robusta/robusta.py (5)
7-7: LGTM: Import added correctly. The FindingType enum import is properly added and aligns with its usage throughout the file.
205-223: LGTM: Method refactoring is well-structured. The rename from _fetch_change_history to _fetch_issues with the added finding_type parameter generalizes the functionality appropriately. The default value FindingType.CONFIGURATION_CHANGE maintains backward compatibility.
226-248: LGTM: Improved message genericity. The update to use self.name in the no-data message (line 238) makes the method more reusable across subclasses.
Note: The static analysis hint about the unused context parameter is a false positive—it's required by the base class signature.
319-320: Method override pattern consistent with external changes tool. The override correctly passes FindingType.ISSUE to fetch issues instead of configuration changes. The # type: ignore usage is consistent with the pattern in FetchExternalConfigurationChangesMetadata.
337-342: LGTM: New tool properly integrated. The FetchResourceIssuesMetadata tool is correctly instantiated and added to the toolset's tools list.
# Conflicts:
#   holmes/plugins/toolsets/robusta/robusta.py
Actionable comments posted: 1
🧹 Nitpick comments (5)
holmes/plugins/toolsets/robusta/robusta.py (5)
205-224: Harden _fetch_issues: key constants, limit validation, and precise typing
Use constants for param keys to avoid drift, validate/normalize limit (avoid treating 0 as "missing" unless desired), and tighten types.

```diff
-    def _fetch_issues(
-        self,
-        params: Dict,
-        cluster: Optional[str] = None,
-        finding_type: FindingType = FindingType.CONFIGURATION_CHANGE,
-    ) -> Optional[List[Dict]]:
+    def _fetch_issues(
+        self,
+        params: Dict[str, Any],
+        cluster: Optional[str] = None,
+        finding_type: FindingType = FindingType.CONFIGURATION_CHANGE,
+    ) -> Optional[List[Dict[str, Any]]]:
         if self._dal and self._dal.enabled:
-            return self._dal.get_issues_metadata(
-                start_datetime=params["start_datetime"],
-                end_datetime=params["end_datetime"],
-                limit=min(
-                    params.get("limit") or DEFAULT_LIMIT_CHANGE_ROWS,
-                    MAX_LIMIT_CHANGE_ROWS,
-                ),
-                ns=params.get("namespace"),
-                workload=params.get("workload"),
-                cluster=cluster,
-                finding_type=finding_type,
-            )
+            # normalize and bound limit
+            raw_limit = params.get("limit", DEFAULT_LIMIT_CHANGE_ROWS)
+            try:
+                limit_val = int(raw_limit)
+            except (TypeError, ValueError):
+                limit_val = DEFAULT_LIMIT_CHANGE_ROWS
+            limit_val = max(1, min(limit_val, MAX_LIMIT_CHANGE_ROWS))
+
+            return self._dal.get_issues_metadata(
+                start_datetime=params[START_TIME],
+                end_datetime=params[END_TIME],
+                limit=limit_val,
+                ns=params.get(NAMESPACE),
+                workload=params.get(WORKLOAD),
+                cluster=cluster,
+                finding_type=finding_type,
+            )
         return None
```
226-249: Neutral naming/messages and stronger typing in _invoke
Make this base path wording generic (works for changes and issues) and tighten the params type.

```diff
-    def _invoke(self, params: dict, context: ToolInvokeContext) -> StructuredToolResult:
+    def _invoke(self, params: Dict[str, Any], context: ToolInvokeContext) -> StructuredToolResult:
         try:
-            changes = self._fetch_issues(params)
-            if changes:
+            items = self._fetch_issues(params)
+            if items:
                 return StructuredToolResult(
                     status=StructuredToolResultStatus.SUCCESS,
-                    data=changes,
+                    data=items,
                     params=params,
                 )
             else:
                 return StructuredToolResult(
                     status=StructuredToolResultStatus.NO_DATA,
-                    data=f"{self.name} found no data. {params}",
+                    data=f"{self.name} found no results. {params}",
                     params=params,
                 )
         except Exception as e:
-            msg = f"There was an internal error while fetching changes for {params}. {str(e)}"
+            msg = f"There was an internal error while fetching results for {params}. {str(e)}"
             logging.exception(msg)
             return StructuredToolResult(
                 status=StructuredToolResultStatus.ERROR,
                 data=msg,
                 params=params,
             )
```
285-286: Remove type: ignore by matching base signature
Align the override with the base method signature and pass through finding_type. This avoids suppressions and keeps mypy happy.

```diff
-    def _fetch_issues(self, params: Dict) -> Optional[List[Dict]]:  # type: ignore
-        return super()._fetch_issues(params, cluster="external")
+    def _fetch_issues(
+        self,
+        params: Dict[str, Any],
+        cluster: Optional[str] = None,
+        finding_type: FindingType = FindingType.CONFIGURATION_CHANGE,
+    ) -> Optional[List[Dict[str, Any]]]:
+        return super()._fetch_issues(
+            params,
+            cluster="external",
+            finding_type=finding_type,
+        )
```
302-316: Use key constants for parameters to avoid drift
You already define NAMESPACE/WORKLOAD constants; use them here for consistency and safety.

```diff
-        self.parameters.update(
+        self.parameters.update(
             {
-                "namespace": ToolParameter(
+                NAMESPACE: ToolParameter(
                     description="The Kubernetes namespace name for filtering issues and alerts",
                     type="string",
                     required=True,
                 ),
-                "workload": ToolParameter(
+                WORKLOAD: ToolParameter(
                     description="Kubernetes resource name to filter issues and alerts (e.g., Pod, Deployment, Job, etc.). Must be the full name. For Pods, include the exact generated suffix.",
                     type="string",
                     required=True,
                 ),
             }
         )
```
319-320: Align override signature and drop suppression
Match the base signature and remove # type: ignore; fix the default to ISSUE to reflect the tool's purpose.

```diff
-    def _fetch_issues(self, params: Dict) -> Optional[List[Dict]]:  # type: ignore
-        return super()._fetch_issues(params, finding_type=FindingType.ISSUE)
+    def _fetch_issues(
+        self,
+        params: Dict[str, Any],
+        cluster: Optional[str] = None,
+        finding_type: FindingType = FindingType.ISSUE,
+    ) -> Optional[List[Dict[str, Any]]]:
+        return super()._fetch_issues(
+            params,
+            cluster=cluster,
+            finding_type=FindingType.ISSUE,
+        )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
holmes/plugins/toolsets/robusta/robusta.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Use Ruff for formatting and linting (configured in pyproject.toml) for all Python code
Type hints are required; code should pass mypy (configured in pyproject.toml)
ALWAYS place Python imports at the top of the file, not inside functions or methods
Files:
holmes/plugins/toolsets/robusta/robusta.py
holmes/plugins/toolsets/**/*
📄 CodeRabbit inference engine (CLAUDE.md)
Toolsets must be located as holmes/plugins/toolsets/{name}.yaml or holmes/plugins/toolsets/{name}/
Files:
holmes/plugins/toolsets/robusta/robusta.py
🧬 Code graph analysis (1)
holmes/plugins/toolsets/robusta/robusta.py (2)
holmes/core/tools.py (1)
ToolParameter(155-161)
holmes/core/supabase_dal.py (1)
FindingType(57-59)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: llm_evals
- GitHub Check: Pre-commit checks
- GitHub Check: build
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/llm/utils/mock_dal.py (1)
63-76: Bug: get_resource_instructions returns None when generate_mocks is False. Return is inside the if-block, so the method returns None otherwise. Dedent. Based on coding guidelines.

```diff
         if self._generate_mocks:
             file_path = self._get_mock_file_path("resource_instructions")
@@
-            logging.warning(
-                f"A mock file was generated for you at {file_path} with the contentof dal.get_resource_instructions({type}, {name})"
-            )
-
-            return data
+            logging.warning(
+                "A mock file was generated for you at %s with the content of "
+                "dal.get_resource_instructions(%s, %s)",
+                file_path,
+                type,
+                name,
+            )
+        return data
```
♻️ Duplicate comments (2)
tests/llm/utils/mock_dal.py (2)
24-31: Replace blind except; keep mock-mode defaults and log the exception. Catching bare Exception violates Ruff BLE001 and hides the error context. Capture the exception, keep the mock defaults, and annotate the intentional broad catch. Based on coding guidelines.

```diff
-        try:
-            super().__init__(cluster="test")
-        except Exception:
-            self.enabled = True
-            self.cluster = "test"
-            logging.warning(
-                "Mocksupabase dal could not connect to db. Running in pure mock mode. real db calls and --generate-mock will fail."
-            )
+        try:
+            super().__init__(cluster="test")
+        except Exception as e:  # noqa: BLE001 - broad on purpose for offline mock mode
+            # Ensure sane defaults for pure mock mode
+            self.enabled = True
+            self.cluster = "test"
+            logging.warning(
+                "MockSupabaseDal could not connect to DB; running in pure mock mode. "
+                "Live DB calls and --generate-mocks will fail. (%s)",
+                e,
+            )
```
101-106: Fix timezone handling: .astimezone on naive datetimes raises; normalize to aware UTC. Current list-comprehension calls astimezone on possibly naive datetimes, causing runtime errors. Normalize once and reuse start/end. Based on past reviews.

```diff
-        for item in self._issues_metadata:
-            creation_date, start, end = [
-                datetime.fromisoformat(dt.replace("Z", "+00:00")).astimezone(
-                    timezone.utc
-                )
-                for dt in (item["creation_date"], start_datetime, end_datetime)
-            ]
+        def _to_utc(dt_str: str) -> datetime:
+            s = dt_str.replace("Z", "+00:00")
+            dt = datetime.fromisoformat(s)
+            if dt.tzinfo is None:
+                return dt.replace(tzinfo=timezone.utc)
+            return dt.astimezone(timezone.utc)
+
+        start = _to_utc(start_datetime)
+        end = _to_utc(end_datetime)
+
+        for item in self._issues_metadata:
+            creation_date = _to_utc(item["creation_date"])
```

Also applies to: 96-100, 107-113
🧹 Nitpick comments (4)
tests/llm/utils/mock_dal.py (4)
47-49: Remove redundant f.close and set encoding in with-open blocks. with closes files automatically; add encoding for consistency.

```diff
-            with open(file_path, "w") as f:
-                f.write(json.dumps(data or {}, indent=2))
-                f.close()
+            with open(file_path, "w", encoding="utf-8") as f:
+                f.write(json.dumps(data or {}, indent=2))
@@
-            with open(file_path, "w") as f:
-                f.write(json.dumps(data or {}, indent=2))
-                f.close()
+            with open(file_path, "w", encoding="utf-8") as f:
+                f.write(json.dumps(data or {}, indent=2))
@@
-            with open(file_path, "w") as f:
-                f.write(json.dumps(data or {}, indent=2))
-                f.close()
+            with open(file_path, "w", encoding="utf-8") as f:
+                f.write(json.dumps(data or {}, indent=2))
```

Also applies to: 67-69, 141-143
51-53: Tighten logging: fix typos, prefer structured placeholders. Minor cleanups: "contentof" typo; consistent class name; structured logging instead of f-strings.

```diff
-            logging.warning(
-                f"A mock file was generated for you at {file_path} with the contentof dal.get_issue_data({issue_id})"
-            )
+            logging.warning(
+                "A mock file was generated for you at %s with the content of dal.get_issue_data(%s)",
+                file_path,
+                issue_id,
+            )
@@
-            logging.warning(
-                f"A mock file was generated for you at {file_path} "
-                f"with the content of dal.get_issues_metadata("
-                f"{start_datetime}, {end_datetime}, {limit}, "
-                f"{workload}, {ns}, {finding_type})"
-            )
+            logging.warning(
+                "A mock file was generated for you at %s with the content of "
+                "dal.get_issues_metadata(%s, %s, %s, %s, %s, %s)",
+                file_path,
+                start_datetime,
+                end_datetime,
+                limit,
+                workload,
+                ns,
+                finding_type,
+            )
```

Also applies to: 71-73, 145-150
57-59: Avoid shadowing built-in 'type' in get_resource_instructions signature. Rename the parameter to resource_type to improve clarity.

```diff
-    def get_resource_instructions(
-        self, type: str, name: Optional[str]
+    def get_resource_instructions(
+        self, resource_type: str, name: Optional[str]
     ) -> Optional[ResourceInstructions]:
@@
-        data = super().get_resource_instructions(type, name)
+        data = super().get_resource_instructions(resource_type, name)
@@
-                "dal.get_resource_instructions(%s, %s)",
-                file_path,
-                type,
-                name,
+                "dal.get_resource_instructions(%s, %s)", file_path, resource_type, name
```
164-168: Harden fixture parsing for issues_metadata.json. Be defensive against invalid JSON or wrong top-level type.

```diff
-    if issues_metadata_path.exists():
-        issues_metadata = json.loads(read_file(issues_metadata_path))
+    if issues_metadata_path.exists():
+        try:
+            raw = json.loads(read_file(issues_metadata_path))
+            if isinstance(raw, list):
+                issues_metadata = raw
+            else:
+                logging.warning(
+                    "issues_metadata.json should contain a list; got %s",
+                    type(raw).__name__,
+                )
+                issues_metadata = []
+        except json.JSONDecodeError as e:
+            logging.warning("Failed to parse issues_metadata.json: %s", e)
+            issues_metadata = []
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/llm/utils/mock_dal.py (4 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Use Ruff for formatting and linting (configured in pyproject.toml) for all Python code
Type hints are required; code should pass mypy (configured in pyproject.toml)
ALWAYS place Python imports at the top of the file, not inside functions or methods
Files:
tests/llm/utils/mock_dal.py
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Only use pytest markers that are defined in pyproject.toml; never introduce undefined markers/tags
Files:
tests/llm/utils/mock_dal.py
tests/**
📄 CodeRabbit inference engine (CLAUDE.md)
Test files should mirror the source structure under tests/
Files:
tests/llm/utils/mock_dal.py
🧬 Code graph analysis (1)
tests/llm/utils/mock_dal.py (4)
holmes/core/supabase_dal.py (3)
SupabaseDal(70-603)
FindingType(57-59)
get_issues_metadata(246-303)
holmes/utils/global_instructions.py (1)
Instructions(6-7)
holmes/core/resource_instruction.py (1)
ResourceInstructions(15-17)
tests/llm/utils/test_case_utils.py (1)
read_file(50-52)
🪛 Ruff (0.14.1)
tests/llm/utils/mock_dal.py
26-26: Do not catch blind exception: Exception
(BLE001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: llm_evals
- GitHub Check: Pre-commit checks
- GitHub Check: build
# Conflicts:
#   tests/llm/utils/mock_dal.py
Actionable comments posted: 0
♻️ Duplicate comments (5)
tests/llm/utils/mock_dal.py (5)
1-5: Remove file-level type ignore and add proper type hints.
The # type: ignore at line 1 suppresses all type checking, violating the coding guideline that "code should pass mypy". Import Any and use parameterized types throughout.
Apply this diff:

```diff
-# type: ignore
 import json
 import logging
 from pathlib import Path
-from typing import Dict, List, Optional
+from typing import Any, Dict, List, Optional
```

Then update type hints on lines 22, 143 as shown in subsequent comments.
22-22: Use parameterized Dict type hint.
The issues_metadata parameter should use Dict[str, Any] instead of unparameterized Dict.
Apply this diff:

```diff
-        issues_metadata: Optional[List[Dict]],
+        issues_metadata: Optional[List[Dict[str, Any]]],
```
28-35: Fix exception handling: capture exception and improve logging.
The bare except: violates Ruff BLE001 and doesn't capture the exception details. The log message also contains a typo.
Apply this diff:

```diff
         if initialize_base:
             try:
                 super().__init__(cluster="test")
-            except:  # noqa: E722
+            except Exception as e:  # noqa: BLE001 - broad on purpose for offline mock mode
                 self.enabled = True
                 self.cluster = "test"
                 logging.warning(
-                    "Mocksupabase dal could not connect to db. Running in pure mock mode. real db calls and --generate-mock will fail."
+                    "MockSupabaseDal could not connect to DB; running in pure mock mode. Live DB calls and --generate-mocks will fail. (%s)",
+                    e,
                 )
```
134-143: Use parameterized return type.
The return type should use List[Dict[str, Any]] instead of unparameterized List[Dict].
Apply this diff:

```diff
     def get_issues_metadata(
         self,
         start_datetime: str,
         end_datetime: str,
         limit: int = 100,
         workload: Optional[str] = None,
         ns: Optional[str] = None,
         cluster: Optional[str] = None,
         finding_type: FindingType = FindingType.CONFIGURATION_CHANGE,
-    ) -> Optional[List[Dict]]:
+    ) -> Optional[List[Dict[str, Any]]]:
```
148-154: Fix timezone handling to prevent runtime errors on naive datetimes.
Calling .astimezone() on a naive datetime (one without tzinfo) raises ValueError. If creation_date or the input strings lack timezone info, this will crash at runtime.
Apply this diff to normalize all datetimes to aware UTC before comparison:

```diff
+        def _to_utc(dt_str: str) -> datetime:
+            # Accept '...Z', '+00:00', or naive -> assume UTC
+            s = dt_str.replace("Z", "+00:00")
+            dt = datetime.fromisoformat(s)
+            if dt.tzinfo is None:
+                return dt.replace(tzinfo=timezone.utc)
+            return dt.astimezone(timezone.utc)
+
         for item in self._issues_metadata:
-            creation_date, start, end = [
-                datetime.fromisoformat(dt.replace("Z", "+00:00")).astimezone(
-                    timezone.utc
-                )
-                for dt in (item["creation_date"], start_datetime, end_datetime)
-            ]
+            creation_date = _to_utc(item["creation_date"])
+            start = _to_utc(start_datetime)
+            end = _to_utc(end_datetime)
             if not (start <= creation_date <= end):
                 continue
```
🧹 Nitpick comments (1)
tests/llm/utils/mock_dal.py (1)
168-177: Consider dict comprehension for cleaner code. The manual dict construction could be simplified using a dict comprehension or by selecting only the needed keys upfront.
For example:
keys = ["id", "title", "subject_name", "subject_namespace", "subject_type", "description", "starts_at", "ends_at"] filtered_item = {k: item.get(k) for k in keys} filtered_data.append(filtered_item)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
tests/llm/utils/mock_dal.py (5 hunks)
tests/llm/utils/mock_toolset.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/llm/utils/mock_toolset.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Use Ruff for formatting and linting (configured in pyproject.toml) for all Python code
Type hints are required; code should pass mypy (configured in pyproject.toml)
ALWAYS place Python imports at the top of the file, not inside functions or methods
Files:
tests/llm/utils/mock_dal.py
tests/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Only use pytest markers that are defined in pyproject.toml; never introduce undefined markers/tags
Files:
tests/llm/utils/mock_dal.py
tests/**
📄 CodeRabbit inference engine (CLAUDE.md)
Test files should mirror the source structure under tests/
Files:
tests/llm/utils/mock_dal.py
🧬 Code graph analysis (1)
tests/llm/utils/mock_dal.py (2)
holmes/core/supabase_dal.py (3)
SupabaseDal(71-677)
FindingType(58-60)
get_issues_metadata(247-304)
tests/llm/utils/test_case_utils.py (1)
read_file(50-52)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Pre-commit checks
- GitHub Check: llm_evals
- GitHub Check: build
🔇 Additional comments (1)
tests/llm/utils/mock_dal.py (1)
215-218: LGTM! Consistent mock loading pattern. The loading of issues_metadata.json follows the same pattern as existing mock data files (issue_data.json, resource_instructions.json), maintaining consistency in the mock data infrastructure.
Also applies to: 233-233
No description provided.