
Linters reusable workflow#37

Merged
joselsegura merged 4 commits into RedHatInsights:main from joselsegura:linters_reusable_workflow
Mar 17, 2026

Conversation

@joselsegura
Contributor

Update pre-commit configuration and start using the reusable workflow.

Fixes #CCXDEV-15353

Type of change

  • Refactor (refactoring code, removing useless files)
  • Configuration update

Testing steps

Tested locally.

Checklist

  • pre-commit run --all passes
  • updated documentation wherever necessary
  • added or modified tests if necessary
  • updated schemas and validators in insights-data-schemas in case of input/output change

@coderabbitai

coderabbitai bot commented Mar 16, 2026

Summary by CodeRabbit

  • Bug Fixes

    • Improved error handling with clearer exception responses and more robust 404 handling.
  • Improvements

    • Added metrics/monitoring integration for better observability.
    • Modernized public type annotations and updated some public model fields.
    • Renamed public exception classes for clearer error semantics.
  • Chores

    • Reworked linting and pre-commit configuration; replaced legacy workflow and added a consolidated CI lint workflow.
  • Tests

    • Expanded and refactored tests to reflect updated behavior and signatures.

Walkthrough

Renames two auth exceptions, modernizes type annotations across models and RHOBS, replaces a lint workflow and expands pre-commit hooks, adds Prometheus Instrumentator typing and metrics calls, tightens RHOBS error handling, and applies formatting/import/test updates across the codebase.

Changes

Cohort / File(s) Summary
CI/CD & Pre-commit
.github/workflows/lint.yml, .github/workflows/linters.yaml, .pre-commit-config.yaml
Removed legacy lint.yml; added linters.yaml delegating to a shared workflow; replaced and expanded pre-commit configuration with multi-language hooks (shell, Go, ruff, centralized checks).
Auth & Exceptions
ccx_upgrades_data_eng/auth.py, ccx_upgrades_data_eng/tests/test_sso_token_refresh.py, ccx_upgrades_data_eng/main.py
Renamed exceptions SessionManagerException → SessionManagerError and TokenException → TokenError; updated raises, imports, and tests; changed @lru_cache() to @lru_cache.
Typing & Models
ccx_upgrades_data_eng/models.py, ccx_upgrades_data_eng/inference.py
Modernized typing (List → list[...], Optional[T] → T | None).
RHOBS & Error Handling
ccx_upgrades_data_eng/rhobs.py, ccx_upgrades_data_eng/tests/test_rhobs.py
Updated function signatures to Python 3.9+ generics (tuple, dict, list); added raise ... from e chaining, explicit 404 handling, safer metric key access, and logging refinements.
Application & Monitoring
ccx_upgrades_data_eng/main.py, ccx_upgrades_data_eng/metrics.py, ccx_upgrades_data_eng/urls.py
Added/typed Instrumentator integration and instrumentation calls in lifespan/endpoints; restored ClustersList import; imported UpgradeApiResponse for type annotations; minor reformatting of metrics/JSONResponse calls.
Configuration & Utilities
ccx_upgrades_data_eng/config.py, ccx_upgrades_data_eng/utils.py, ccx_upgrades_data_eng/logging_utils.py, ccx_upgrades_data_eng/examples.py, ccx_upgrades_data_eng/sentry.py
Switched @lru_cache() → @lru_cache; CustomTTLCache now uses get_settings() with ValidationError fallback; simplified env-missing computation; formatting/import reorderings and example string formatting; sentry LoggingIntegration reformatted.
Tests (general)
ccx_upgrades_data_eng/tests/*.py
Widespread test updates: import reorderings, expect renamed exceptions, use patch.dict for env in config tests, reflowed multi-line assertions and model_validate calls; assertions and logic preserved.
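Two of the smaller changes in the Auth & Exceptions and Configuration cohorts can be sketched together. The class body and function below are invented for illustration; only the rename pattern and the bare-decorator usage come from the PR.

```python
from functools import lru_cache


# Illustrative stand-in for the renamed exception (Error suffix per PEP 8 / Ruff N818)
class SessionManagerError(Exception):
    """Raised when the session manager cannot refresh a token (hypothetical docstring)."""


@lru_cache  # since Python 3.8, lru_cache works as a bare decorator; no () needed
def get_settings() -> dict:
    # invented body; the real function builds a pydantic Settings object
    return {"rhobs_timeout": 5}


# the cache returns the very same object on repeated calls
assert get_settings() is get_settings()
```

The bare `@lru_cache` form is equivalent to `@lru_cache(maxsize=128)`; dropping the parentheses is purely cosmetic.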

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check — ✅ Passed: The title 'Linters reusable workflow' directly describes the main change: introducing a reusable GitHub Actions workflow for linters while updating the pre-commit configuration.
  • Description check — ✅ Passed: The description includes a JIRA link, specifies type of change (Refactor and Configuration update), provides testing steps (tested locally, pre-commit passes), and completes the checklist with relevant selections.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
ccx_upgrades_data_eng/rhobs.py (1)

104-118: ⚠️ Potential issue | 🟠 Major

Guard missing metric name before indexing __name__.

Direct indexing at lines 104/109/112/117 can raise KeyError on malformed RHOBS payloads. The multi-cluster path already uses safer .get(...) handling.

Suggested fix
-        if metric["__name__"] == "console_url":
+        metric_name = metric.get("__name__")
+        if not metric_name:
+            logger.debug("result received with anonymous metric: %s", metric)
+            continue
+
+        if metric_name == "console_url":
             if "url" not in metric:
                 continue
             console_url = metric["url"]

-        elif metric["__name__"] == "alerts":
+        elif metric_name == "alerts":
             alerts.add(Alert.parse_metric(metric))

-        elif metric["__name__"] == "cluster_operator_conditions":
+        elif metric_name == "cluster_operator_conditions":
             focs.add(FOC.parse_metric(metric))

         else:
-            logger.debug(
-                "received a metric from unexpected type: %s", metric["__name__"]
-            )
+            logger.debug("received a metric from unexpected type: %s", metric_name)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ccx_upgrades_data_eng/rhobs.py` around lines 104 - 118, The code indexes
metric["__name__"] directly which can raise KeyError for malformed RHOBS
payloads; update the conditional block in the function handling metrics to first
read a safe name = metric.get("__name__") (or check "__name__" in metric) and
then branch on name for "console_url", "alerts", and
"cluster_operator_conditions" so you avoid KeyError when logging or parsing
(e.g., before using console_url = metric["url"],
alerts.add(Alert.parse_metric(metric)), FOC.parse_metric(metric) and
logger.debug).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/linters.yaml:
- Around line 4-5: Update the GitHub Actions push trigger branches under the
"push" workflow entry by replacing the current branch name "master" with "main"
so pushes to main will run the lint workflow; locate the branches array
(branches: ["master"]) and change it to branches: ["main"].

In @.pre-commit-config.yaml:
- Around line 46-53: The pre-commit hook entries for the ruff hooks
('ruff-check' and 'ruff-format') are overriding the centralized configuration;
remove the hardcoded args block (the '--select=...' and '--line-length=100'
entries) from the 'ruff-check' hook and ensure 'ruff-format' has no custom args
so both hooks defer to the centralized ruff.toml/pyproject settings (or, if the
intent is to keep those specific rules, update ruff.toml/pyproject.toml to match
the same rule set and max-line-length instead of editing the pre-commit hook);
target the 'ruff-check' and 'ruff-format' hook entries in
.pre-commit-config.yaml when making the change.

In `@ccx_upgrades_data_eng/rhobs.py`:
- Around line 58-59: perform_rhobs_request currently declares non-optional
returns but returns (None, None) on failure; update the contract to avoid
mismatch by changing its return annotation to
tuple[Optional[UpgradeRisksPredictors], Optional[str]] (import Optional) or
alternatively raise a clear exception instead of returning Nones; adjust the
implementation around the failure branch that currently returns (None, None) and
update callers of perform_rhobs_request to handle the Optional return (or the
raised exception) accordingly.
- Line 10: Change the import to avoid shadowing the builtin by aliasing the
requests exception (e.g., from requests.exceptions import ConnectionError as
RequestsConnectionError, ReadTimeout) and then update any exception handlers
that currently catch ConnectionError to catch RequestsConnectionError instead
(the handlers around the code referenced at lines 66 and 156). Ensure all
references to the requests exception use the new alias (RequestsConnectionError)
while leaving built-in ConnectionError untouched.

In `@ccx_upgrades_data_eng/tests/test_config.py`:
- Around line 18-22: The test test_get_settings_fail currently relies on the
real process environment and can leak CI/developer vars; wrap the call to
get_settings() in a monkeypatch or context that isolates os.environ (e.g., use
pytest's monkeypatch.context() or monkeypatch.delenv/patch.dict with clear=True)
before calling get_settings() and after calling get_settings.cache_clear(), so
the environment is empty for the test; apply the same pattern to the other
env-dependent tests referenced (the ones also clearing cache and asserting
ValidationError) so all use an isolated os.environ (clear=True) when invoking
get_settings().

In `@ccx_upgrades_data_eng/tests/test_rhobs.py`:
- Line 11: Import the requests exception under an alias to avoid shadowing the
builtin ConnectionError: change the import of ConnectionError from
requests.exceptions to import it as RequestsConnectionError (or similar) and
update the test code where the requests exception is referenced (the usage of
ConnectionError around the test assertion/mock at line 75) to use the new alias;
keep any references to the builtin ConnectionError intact if they exist
elsewhere.

---

Outside diff comments:
In `@ccx_upgrades_data_eng/rhobs.py`:
- Around line 104-118: The code indexes metric["__name__"] directly which can
raise KeyError for malformed RHOBS payloads; update the conditional block in the
function handling metrics to first read a safe name = metric.get("__name__") (or
check "__name__" in metric) and then branch on name for "console_url", "alerts",
and "cluster_operator_conditions" so you avoid KeyError when logging or parsing
(e.g., before using console_url = metric["url"],
alerts.add(Alert.parse_metric(metric)), FOC.parse_metric(metric) and
logger.debug).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: cc8b8d5b-a65a-4fe4-afb3-8f1afe71404b

📥 Commits

Reviewing files that changed from the base of the PR and between 39cd7c5 and df9f2fd.

📒 Files selected for processing (23)
  • .github/workflows/lint.yml
  • .github/workflows/linters.yaml
  • .pre-commit-config.yaml
  • ccx_upgrades_data_eng/auth.py
  • ccx_upgrades_data_eng/config.py
  • ccx_upgrades_data_eng/examples.py
  • ccx_upgrades_data_eng/inference.py
  • ccx_upgrades_data_eng/logging_utils.py
  • ccx_upgrades_data_eng/main.py
  • ccx_upgrades_data_eng/metrics.py
  • ccx_upgrades_data_eng/models.py
  • ccx_upgrades_data_eng/rhobs.py
  • ccx_upgrades_data_eng/sentry.py
  • ccx_upgrades_data_eng/tests/test_config.py
  • ccx_upgrades_data_eng/tests/test_inference.py
  • ccx_upgrades_data_eng/tests/test_main.py
  • ccx_upgrades_data_eng/tests/test_models.py
  • ccx_upgrades_data_eng/tests/test_rhobs.py
  • ccx_upgrades_data_eng/tests/test_sso_token_refresh.py
  • ccx_upgrades_data_eng/tests/test_urls.py
  • ccx_upgrades_data_eng/tests/test_utils.py
  • ccx_upgrades_data_eng/urls.py
  • ccx_upgrades_data_eng/utils.py
💤 Files with no reviewable changes (1)
  • .github/workflows/lint.yml

Comment on lines +4 to +5
  push:
    branches: ["master"]

⚠️ Potential issue | 🟠 Major

Switch the push trigger to main.

Line 5 still points at master, but this PR is targeting main. After merge, pushes to main won't execute the lint workflow.

Suggested fix
 on:
   push:
-    branches: ["master"]
+    branches: ["main"]
   pull_request:
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
   push:
-    branches: ["master"]
+    branches: ["main"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/linters.yaml around lines 4 - 5, Update the GitHub Actions
push trigger branches under the "push" workflow entry by replacing the current
branch name "master" with "main" so pushes to main will run the lint workflow;
locate the branches array (branches: ["master"]) and change it to branches:
["main"].

Comment on lines +46 to +53
      - id: ruff-check
        language_version: python3.11
        args:
          - '--fix'
          - '--line-length=100'
          - '--select=UP,F632,E,W,F,I,UP,N,B,C4,SIM,D'
      - id: ruff-format


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# Inspect whether Ruff settings are already centralized elsewhere.
fd -HI 'pyproject\.toml|ruff\.toml|\.ruff\.toml' | while read -r file; do
  echo "== $file =="
  rg -n '(\[tool\.ruff|\[lint|\[format|line-length|select)' "$file" || true
done

Repository: RedHatInsights/ccx-upgrades-data-eng

Length of output: 189


🏁 Script executed:

cat -n pyproject.toml

Repository: RedHatInsights/ccx-upgrades-data-eng

Length of output: 445


🏁 Script executed:

cat -n ruff.toml

Repository: RedHatInsights/ccx-upgrades-data-eng

Length of output: 1381


Pre-commit Ruff configuration is out of sync with centralized ruff.toml.

The .pre-commit-config.yaml hook defines --select=UP,F632,E,W,F,I,UP,N,B,C4,SIM,D with --line-length=100, but ruff.toml defines a different set of lint rules (F,B,S,BLE,T10,EXE,RUF,PLE,A,E7,E9,W6,E711) with no line-length setting. Additionally, ruff-format has no args configured, and pyproject.toml defines max-line-length under [pycodestyle] instead of [tool.ruff].

Update .pre-commit-config.yaml to remove the hardcoded args and rely on the centralized ruff.toml configuration for both ruff-check and ruff-format. Alternatively, if the pre-commit rules are intentional, update ruff.toml to match and add the line-length setting.
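One way to realize that recommendation is a hook entry with no args at all, so both hooks pick up the repository's ruff.toml. This is a sketch: the hook ids match the review's naming, but the pinned rev is illustrative.

```yaml
# .pre-commit-config.yaml (sketch): no args, so ruff-check and ruff-format
# both defer to the centralized ruff.toml / pyproject settings
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.15.6  # illustrative pin; match the Ruff version used in CI
    hooks:
      - id: ruff-check
      - id: ruff-format
```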

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.pre-commit-config.yaml around lines 46 - 53, The pre-commit hook entries
for the ruff hooks ('ruff-check' and 'ruff-format') are overriding the
centralized configuration; remove the hardcoded args block (the '--select=...'
and '--line-length=100' entries) from the 'ruff-check' hook and ensure
'ruff-format' has no custom args so both hooks defer to the centralized
ruff.toml/pyproject settings (or, if the intent is to keep those specific rules,
update ruff.toml/pyproject.toml to match the same rule set and max-line-length
instead of editing the pre-commit hook); target the 'ruff-check' and
'ruff-format' hook entries in .pre-commit-config.yaml when making the change.

import requests
from cachetools import cached
from fastapi import HTTPException
from requests.exceptions import ConnectionError, ReadTimeout

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify builtin-shadowing pattern and exception usage sites.
rg -nP 'from requests\.exceptions import .*ConnectionError' ccx_upgrades_data_eng/rhobs.py
rg -nP 'except\s+\(ConnectionError,\s*ReadTimeout\)|except\s+\(RequestsConnectionError,\s*ReadTimeout\)' ccx_upgrades_data_eng/rhobs.py

Repository: RedHatInsights/ccx-upgrades-data-eng

Length of output: 246


🏁 Script executed:

#!/bin/bash
# Check for all ConnectionError usages in the file
rg -n 'ConnectionError' ccx_upgrades_data_eng/rhobs.py

# Check for linting/ruff configuration
fd -e toml -e cfg -e ini | xargs grep -l "A004\|builtin" 2>/dev/null | head -10

# Check if ruff config exists
fd -e "ruff.toml" -o -e "pyproject.toml" | head -5

Repository: RedHatInsights/ccx-upgrades-data-eng

Length of output: 390


🏁 Script executed:

#!/bin/bash
# Check ruff configuration
cat ruff.toml

# Check if builtin ConnectionError is actually referenced/used in the file
# (not from imports - just bare usage)
grep -n "ConnectionError" ccx_upgrades_data_eng/rhobs.py | grep -v "from\|import\|except"

# Also check the entire rhobs.py file to understand context
wc -l ccx_upgrades_data_eng/rhobs.py

Repository: RedHatInsights/ccx-upgrades-data-eng

Length of output: 1192


🏁 Script executed:

#!/bin/bash
# Verify the exact context of the import and exception handlers
sed -n '8,12p' ccx_upgrades_data_eng/rhobs.py | cat -n

sed -n '64,68p' ccx_upgrades_data_eng/rhobs.py | cat -n

sed -n '154,158p' ccx_upgrades_data_eng/rhobs.py | cat -n

Repository: RedHatInsights/ccx-upgrades-data-eng

Length of output: 862


Avoid shadowing Python builtin ConnectionError.

Line 10 imports ConnectionError directly from requests.exceptions, which shadows the builtin and triggers Ruff rule A004 (enforced in this project's configuration).

Suggested fix
-from requests.exceptions import ConnectionError, ReadTimeout
+from requests.exceptions import ConnectionError as RequestsConnectionError, ReadTimeout

Then update the exception handlers:

-    except (ConnectionError, ReadTimeout) as e:
+    except (RequestsConnectionError, ReadTimeout) as e:

Apply at lines 10, 66, and 156.

🧰 Tools
🪛 Ruff (0.15.6)

[error] 10-10: Import ConnectionError is shadowing a Python builtin

(A004)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ccx_upgrades_data_eng/rhobs.py` at line 10, Change the import to avoid
shadowing the builtin by aliasing the requests exception (e.g., from
requests.exceptions import ConnectionError as RequestsConnectionError,
ReadTimeout) and then update any exception handlers that currently catch
ConnectionError to catch RequestsConnectionError instead (the handlers around
the code referenced at lines 66 and 156). Ensure all references to the requests
exception use the new alias (RequestsConnectionError) while leaving built-in
ConnectionError untouched.

Comment on lines +58 to +59
def perform_rhobs_request(cluster_id: UUID) -> tuple[UpgradeRisksPredictors, str]:
"""Run the requests to RHOBS server and return the retrieved predictors.

⚠️ Potential issue | 🟠 Major

Fix return-contract mismatch in perform_rhobs_request.

Line 58 declares non-optional return values, but Line 91 returns (None, None). This weakens the function contract and can propagate None unexpectedly.

Suggested fix
-@cached(cache=CustomTTLCache())
-def perform_rhobs_request(cluster_id: UUID) -> tuple[UpgradeRisksPredictors, str]:
+@cached(cache=CustomTTLCache())
+def perform_rhobs_request(
+    cluster_id: UUID,
+) -> tuple[UpgradeRisksPredictors | None, str | None]:

Also applies to: 90-91

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ccx_upgrades_data_eng/rhobs.py` around lines 58 - 59, perform_rhobs_request
currently declares non-optional returns but returns (None, None) on failure;
update the contract to avoid mismatch by changing its return annotation to
tuple[Optional[UpgradeRisksPredictors], Optional[str]] (import Optional) or
alternatively raise a clear exception instead of returning Nones; adjust the
implementation around the failure branch that currently returns (None, None) and
update callers of perform_rhobs_request to handle the Optional return (or the
raised exception) accordingly.

Comment on lines 18 to 22
 def test_get_settings_fail():
     """Test get_settings with improper environment configured."""
     get_settings.cache_clear()
-    with pytest.raises(Exception):
+    with pytest.raises(ValidationError):
         get_settings()

⚠️ Potential issue | 🟠 Major

Harden env-dependent tests to avoid host-environment leakage.

These tests can become flaky when CI/developer environments already define related variables. Isolate os.environ with clear=True.

✅ Proposed fix
 def test_get_settings_fail():
     """Test get_settings with improper environment configured."""
     get_settings.cache_clear()
-    with pytest.raises(ValidationError):
-        get_settings()
+    with unittest.mock.patch.dict(os.environ, {}, clear=True):
+        with pytest.raises(ValidationError):
+            get_settings()
@@
-    with unittest.mock.patch.dict(os.environ, environ):
+    with unittest.mock.patch.dict(os.environ, environ, clear=True):
         settings = get_settings()
@@
-    with unittest.mock.patch.dict(os.environ, environ):
+    with unittest.mock.patch.dict(os.environ, environ, clear=True):
         settings = get_settings()

Also applies to: 35-36, 77-78

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ccx_upgrades_data_eng/tests/test_config.py` around lines 18 - 22, The test
test_get_settings_fail currently relies on the real process environment and can
leak CI/developer vars; wrap the call to get_settings() in a monkeypatch or
context that isolates os.environ (e.g., use pytest's monkeypatch.context() or
monkeypatch.delenv/patch.dict with clear=True) before calling get_settings() and
after calling get_settings.cache_clear(), so the environment is empty for the
test; apply the same pattern to the other env-dependent tests referenced (the
ones also clearing cache and asserting ValidationError) so all use an isolated
os.environ (clear=True) when invoking get_settings().
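The isolation mechanism itself is easy to demonstrate in a few lines (the variable name is illustrative):

```python
import os
import unittest.mock

os.environ["ILLUSTRATIVE_VAR"] = "leaked-from-host"  # simulate a CI/developer variable

# clear=True empties os.environ inside the context, so code under test
# cannot accidentally read host variables
with unittest.mock.patch.dict(os.environ, {}, clear=True):
    assert "ILLUSTRATIVE_VAR" not in os.environ

# on exit, the original environment is restored unchanged
assert os.environ["ILLUSTRATIVE_VAR"] == "leaked-from-host"
```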


import pytest
from fastapi import HTTPException
from requests.exceptions import ConnectionError

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail
rg -n 'from requests\.exceptions import ConnectionError|ConnectionError\(' ccx_upgrades_data_eng/tests/test_rhobs.py

Repository: RedHatInsights/ccx-upgrades-data-eng

Length of output: 200


Add alias to avoid shadowing builtin ConnectionError (Ruff A004).

Line 11 imports ConnectionError directly from requests.exceptions, which shadows the Python builtin. Update the import and its usage at line 75:

Fix
-from requests.exceptions import ConnectionError
+from requests.exceptions import ConnectionError as RequestsConnectionError
-    session_mock.get.side_effect = ConnectionError("Mock failure")
+    session_mock.get.side_effect = RequestsConnectionError("Mock failure")
🧰 Tools
🪛 Ruff (0.15.6)

[error] 11-11: Import ConnectionError is shadowing a Python builtin

(A004)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ccx_upgrades_data_eng/tests/test_rhobs.py` at line 11, Import the requests
exception under an alias to avoid shadowing the builtin ConnectionError: change
the import of ConnectionError from requests.exceptions to import it as
RequestsConnectionError (or similar) and update the test code where the requests
exception is referenced (the usage of ConnectionError around the test
assertion/mock at line 75) to use the new alias; keep any references to the
builtin ConnectionError intact if they exist elsewhere.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
ccx_upgrades_data_eng/models.py (2)

25-32: ⚠️ Potential issue | 🟡 Minor

Undefined Model type in signature.

Model is not defined in this module. The type hints reference a non-existent type. Since the method returns Alert.model_validate(obj), the signature should use Alert:

Proposed fix
     `@classmethod`
-    def parse_metric(cls: type["Model"], obj: Any) -> "Model":  # noqa
+    def parse_metric(cls: type["Alert"], obj: Any) -> "Alert":
         """Wrap the parsing of an Observatorium metric object and return an Alert."""
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ccx_upgrades_data_eng/models.py` around lines 25 - 32, The parse_metric
method signature references an undefined type Model; change the type hints to
use the actual class Alert used in the return value. Update the classmethod
signature from def parse_metric(cls: type["Model"], obj: Any) -> "Model": to use
Alert (e.g., cls: type["Alert"] or simply -> Alert) and ensure Alert is
imported/defined in the module so Alert.model_validate(obj) remains correct;
adjust any forward references (quotes) as needed to satisfy type checkers.

206-206: ⚠️ Potential issue | 🟡 Minor

Invalid date in example: month 15 is out of range.

The example date "2011-15-04T00:05:23Z" has an invalid month value (15). Valid months are 01-12. This could cause confusion or parsing errors if used as reference.

Proposed fix
-                        "last_checked_at": "2011-15-04T00:05:23Z",
+                        "last_checked_at": "2011-04-15T00:05:23Z",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ccx_upgrades_data_eng/models.py` at line 206, The example value for the
"last_checked_at" field contains an invalid month (15); update the example
string to a valid ISO 8601 datetime (e.g., change "2011-15-04T00:05:23Z" to a
valid month like "2011-05-04T00:05:23Z" or "2011-12-04T00:05:23Z") where the key
"last_checked_at" appears in the models.py example data so it parses correctly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@ccx_upgrades_data_eng/models.py`:
- Around line 57-67: The type annotation in Model.parse_metric incorrectly
references an undefined Model type; update the signature and return annotation
to reference the concrete FOC type instead (e.g., change def parse_metric(cls:
type["Model"], obj: Any) -> "Model": to use FOC, such as def parse_metric(cls:
type[FOC], obj: Any) -> FOC: or remove the forward-ref and annotate using FOC),
and ensure imports/namespace include FOC so the function signature and return
type correctly refer to FOC while preserving the existing logic that calls
FOC.model_validate(obj); mirror the fix used in Alert.parse_metric.
- Around line 3-7: The project uses Python 3.10+ union syntax (X | None) in the
models but setup.cfg declares python_requires = >=3.6; either update setup.cfg
to python_requires = >=3.10 or change all occurrences of the new union syntax to
the 3.6-compatible typing.Optional form (e.g., import Optional from typing and
replace usages like UUID | None with Optional[UUID]) in the model classes (refer
to BaseModel/ConfigDict usages in this file and any attributes typed with
unions).

---

Outside diff comments:
In `@ccx_upgrades_data_eng/models.py`:
- Around line 25-32: The parse_metric method signature references an undefined
type Model; change the type hints to use the actual class Alert used in the
return value. Update the classmethod signature from def parse_metric(cls:
type["Model"], obj: Any) -> "Model": to use Alert (e.g., cls: type["Alert"] or
simply -> Alert) and ensure Alert is imported/defined in the module so
Alert.model_validate(obj) remains correct; adjust any forward references
(quotes) as needed to satisfy type checkers.
- Line 206: The example value for the "last_checked_at" field contains an
invalid month (15); update the example string to a valid ISO 8601 datetime
(e.g., change "2011-15-04T00:05:23Z" to a valid month like
"2011-05-04T00:05:23Z" or "2011-12-04T00:05:23Z") where the key
"last_checked_at" appears in the models.py example data so it parses correctly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: db4700fa-c360-466e-843c-be5a5b67aa8c

📥 Commits

Reviewing files that changed from the base of the PR and between df9f2fd and 541f5e7.

📒 Files selected for processing (1)
  • ccx_upgrades_data_eng/models.py

Comment on lines +3 to +7
+from datetime import datetime
 from typing import Any
 from uuid import UUID

-from pydantic import ConfigDict, BaseModel  # pylint: disable=no-name-in-module
-from datetime import datetime
+from pydantic import BaseModel, ConfigDict  # pylint: disable=no-name-in-module

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check Python version requirement in project config files
echo "=== pyproject.toml ==="
cat pyproject.toml 2>/dev/null | grep -A5 -i "python" || echo "No pyproject.toml found"
echo ""
echo "=== setup.py / setup.cfg ==="
cat setup.py 2>/dev/null | grep -i "python" || echo "No setup.py found"
cat setup.cfg 2>/dev/null | grep -i "python" || echo "No setup.cfg found"

Repository: RedHatInsights/ccx-upgrades-data-eng

Length of output: 347


Update project Python requirement or revert to Python 3.6-compatible syntax.

The code uses X | None union syntax (requires Python 3.10+), but setup.cfg specifies python_requires = >=3.6. Either:

  • Update python_requires in setup.cfg to >=3.10, or
  • Revert to Optional[X] syntax for Python 3.6+ compatibility.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ccx_upgrades_data_eng/models.py` around lines 3 - 7, The project uses Python
3.10+ union syntax (X | None) in the models but setup.cfg declares
python_requires = >=3.6; either update setup.cfg to python_requires = >=3.10 or
change all occurrences of the new union syntax to the 3.6-compatible
typing.Optional form (e.g., import Optional from typing and replace usages like
UUID | None with Optional[UUID]) in the model classes (refer to
BaseModel/ConfigDict usages in this file and any attributes typed with unions).

Comment on lines 57 to 67
     @classmethod
-    def parse_metric(cls: Type["Model"], obj: Any) -> "Model":  # noqa
-        """Wrap the parsing of an Observatorium metric object and return a FOC"""
+    def parse_metric(cls: type["Model"], obj: Any) -> "Model":  # noqa
+        """Wrap the parsing of an Observatorium metric object and return a FOC."""
         obj = obj.copy()  # dont modify the original obj
-        if "condition" in obj:
-            if obj["condition"] == "Available":
-                # because the rhobs query looks for
-                # cluster_operator_conditions{{condition="Available"}} == 0
-                # it is needed to update the condition to match
-                obj["condition"] = "Not Available"
+        if "condition" in obj and obj["condition"] == "Available":
+            # because the rhobs query looks for
+            # cluster_operator_conditions{{condition="Available"}} == 0
+            # it is needed to update the condition to match
+            obj["condition"] = "Not Available"

         return FOC.model_validate(obj)
⚠️ Potential issue | 🟡 Minor

Same undefined `Model` type issue.

Same issue as in `Alert.parse_metric`: `Model` is not defined. The annotation should reference `FOC`.

Proposed fix

```diff
     @classmethod
-    def parse_metric(cls: type["Model"], obj: Any) -> "Model":  # noqa
+    def parse_metric(cls: type["FOC"], obj: Any) -> "FOC":
         """Wrap the parsing of an Observatorium metric object and return a FOC."""
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```python
    @classmethod
    def parse_metric(cls: type["FOC"], obj: Any) -> "FOC":
        """Wrap the parsing of an Observatorium metric object and return a FOC."""
        obj = obj.copy()  # dont modify the original obj
        if "condition" in obj and obj["condition"] == "Available":
            # because the rhobs query looks for
            # cluster_operator_conditions{{condition="Available"}} == 0
            # it is needed to update the condition to match
            obj["condition"] = "Not Available"
        return FOC.model_validate(obj)
```
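To see the corrected annotation in action without pulling in pydantic, here is a minimal sketch; the `FOC` class body, its `condition` field, and the constructor are assumptions standing in for the real model:

```python
from typing import Any


class FOC:
    """Minimal stand-in for the real pydantic model (assumption)."""

    def __init__(self, condition: str) -> None:
        self.condition = condition

    @classmethod
    def parse_metric(cls: type["FOC"], obj: dict[str, Any]) -> "FOC":
        """Normalize an Observatorium metric dict and build a FOC."""
        obj = obj.copy()  # don't mutate the caller's dict
        if obj.get("condition") == "Available":
            # the RHOBS query matches conditions equal to 0, so
            # "Available" in the raw metric really means the opposite
            obj["condition"] = "Not Available"
        return cls(**obj)


foc = FOC.parse_metric({"condition": "Available"})
print(foc.condition)  # → Not Available
```

Annotating `cls` and the return type with the concrete class (or `typing.Self` on Python 3.11+) keeps type checkers from flagging the undefined `Model` name.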
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ccx_upgrades_data_eng/models.py` around lines 57 - 67, The type annotation in
Model.parse_metric incorrectly references an undefined Model type; update the
signature and return annotation to reference the concrete FOC type instead
(e.g., change def parse_metric(cls: type["Model"], obj: Any) -> "Model": to use
FOC, such as def parse_metric(cls: type[FOC], obj: Any) -> FOC: or remove the
forward-ref and annotate using FOC), and ensure imports/namespace include FOC so
the function signature and return type correctly refer to FOC while preserving
the existing logic that calls FOC.model_validate(obj); mirror the fix used in
Alert.parse_metric.

```yaml
        - '--select=UP,F632,E,W,F,I,UP,N,B,C4,SIM,D'
    - id: ruff-format

# - repo: https://github.com/AleksaC/hadolint-py
```
any reason to keep this commented out?

I keep commented out all of the hooks that are in the "template" in the processing-tools repo but are not easily fixable here. Hadolint would require another task to fix, because we have warnings in all our repositories.

@coderabbitai bot left a comment

🧹 Nitpick comments (1)
.pre-commit-config.yaml (1)

20-26: Consider limiting SC2086 suppression to specific files rather than disabling it repo-wide.

While the repository currently contains no shell scripts, this pre-commit hook configuration will apply to any future shell scripts added to the project. Disabling SC2086 (quoting/word-splitting check) globally means new scripts could have unquoted expansions without warnings. If intentional word splitting is needed, use file- or line-local disables (# shellcheck disable=SC2086) instead of repo-wide exclusions.

Suggested change

```diff
-        args: ['--exclude=SC1090,SC2086,SC2034,SC1091']
+        args: ['--exclude=SC1090,SC2034,SC1091']
```

Then add inline disables in specific scripts that intentionally use word splitting.
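A hypothetical script showing the line-local disable the nitpick suggests; the variable name and flags are illustrative only:

```shell
#!/bin/bash
set -eu

# Flags intentionally stored as one string and split on expansion.
EXTRA_FLAGS="--verbose --color"

# shellcheck disable=SC2086  # word splitting of $EXTRA_FLAGS is intentional
printf '%s\n' $EXTRA_FLAGS
```

With the directive placed on the line above the expansion, ShellCheck stays silent for this one occurrence while still flagging unquoted expansions elsewhere in the repository.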

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.pre-commit-config.yaml around lines 20 - 26, The pre-commit hook currently
disables SC2086 globally under the shellcheck entry (see the id: shellcheck
block and its args), so remove SC2086 from the args list to stop repo-wide
suppression and instead limit suppression to the specific scripts that need
intentional word-splitting by either adding per-file hook entries (use the
shellcheck hook's files: pattern to target those scripts) or by adding inline
directives in those scripts (e.g. # shellcheck disable=SC2086) where
intentional; update the shellcheck args to keep other exclusions (SC1090,
SC2034, SC1091) but not SC2086.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 06292849-e84b-492a-a1b1-72c546ce570e

📥 Commits

Reviewing files that changed from the base of the PR and between 541f5e7 and 375a4e1.

📒 Files selected for processing (1)
  • .pre-commit-config.yaml

@joselsegura joselsegura enabled auto-merge (squash) March 16, 2026 18:19
@joselsegura joselsegura requested a review from matysek March 17, 2026 08:16
@joselsegura joselsegura merged commit 61acbf3 into RedHatInsights:main Mar 17, 2026
8 checks passed
4 participants