GroupedJobs + Prometheus credentials auto refresh #478
Conversation
Walkthrough

Adds GroupedJob support and docs: new README section; CLI/config options for job grouping; model Kind update; Kubernetes listing logic that groups Jobs by configured labels (per-group limits, CronJob exclusion); Prometheus client lazy init/refresh and GroupedJob pod resolution; dependency adjustments and unit tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant CLI as main.py
    participant Config
    participant K8s as Kubernetes Integration
    participant PromSvc as PrometheusMetricsService
    participant Prom as Prometheus Client
    User->>CLI: run_strategy --job-grouping-labels "app,team" --job-grouping-limit 50
    CLI->>Config: Build Config(job_grouping_labels=["app","team"], job_grouping_limit=50)
    CLI->>K8s: list_scannable_objects(config)
    alt grouping enabled
        K8s->>K8s: _list_all_groupedjobs(): list Jobs -> filter CronJob-owned -> group by (namespace,label,value) -> create GroupedJob objects
    end
    K8s-->>CLI: Objects (incl. GroupedJob)
    CLI->>PromSvc: gather_data(objects)
    loop per object
        PromSvc->>PromSvc: get_prometheus()
        alt client missing/expired
            PromSvc->>Prom: init/refresh client
            Prom-->>PromSvc: client ready
        end
        PromSvc->>Prom: query / load_pods(object)
        alt object.kind == "GroupedJob"
            PromSvc->>PromSvc: derive pod owners from object._grouped_jobs (pod_owner_kind="Job")
        end
    end
```
```mermaid
sequenceDiagram
    autonumber
    participant K8s as Kubernetes Integration
    participant API as K8s API Server
    K8s->>API: List Jobs across namespaces
    API-->>K8s: Jobs
    K8s->>K8s: Filter out CronJob-owned Jobs
    K8s->>K8s: Exclude Jobs that carry grouping labels
    K8s->>K8s: Group remaining Jobs by (namespace,label key,value) -> create GroupedJob objects (with selector + _grouped_jobs)
    K8s-->>K8s: Return grouped and regular Job objects for scanning
```
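Below is a minimal, hypothetical Python sketch of the grouping step shown in the diagram above: Jobs are bucketed by (namespace, label key, label value), CronJob-owned Jobs are skipped, and each bucket is capped by the configured limit. The dataclasses and the `group_jobs` helper are stand-ins for illustration, not code from the PR.

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class OwnerRef:
    kind: str


@dataclass
class Job:
    name: str
    namespace: str
    labels: dict[str, str] = field(default_factory=dict)
    owner_references: list[OwnerRef] = field(default_factory=list)


def group_jobs(jobs: list[Job], job_grouping_labels: list[str], job_grouping_limit: int):
    """Return {(namespace, "label=value"): [jobs...]}, capped at job_grouping_limit per group."""
    groups: dict[tuple[str, str], list[Job]] = defaultdict(list)
    for job in jobs:
        # CronJob-owned Jobs are excluded from grouping.
        if any(ref.kind == "CronJob" for ref in job.owner_references):
            continue
        for label in job_grouping_labels:
            if label in job.labels:
                groups[(job.namespace, f"{label}={job.labels[label]}")].append(job)
    # Apply the per-group limit.
    return {key: members[:job_grouping_limit] for key, members in groups.items()}


if __name__ == "__main__":
    jobs = [
        Job("batch-1", "default", {"app": "etl"}),
        Job("batch-2", "default", {"app": "etl"}),
        Job("cron-child", "default", {"app": "etl"}, [OwnerRef("CronJob")]),
    ]
    print(group_jobs(jobs, ["app"], job_grouping_limit=50))
    # -> {('default', 'app=etl'): [batch-1, batch-2]}  (the CronJob-owned Job is skipped)
```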
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`poetry.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (8)
- `README.md` (1 hunks)
- `pyproject.toml` (1 hunks)
- `requirements.txt` (1 hunks)
- `robusta_krr/core/integrations/kubernetes/__init__.py` (4 hunks)
- `robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py` (5 hunks)
- `robusta_krr/core/models/config.py` (2 hunks)
- `robusta_krr/core/models/objects.py` (1 hunks)
- `robusta_krr/main.py` (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
robusta_krr/core/integrations/kubernetes/__init__.py (1)
- `robusta_krr/core/models/objects.py` (2): `PodData` (14-19), `K8sObjectData` (38-107)
robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py (4)
- `robusta_krr/core/integrations/prometheus/metrics_service/base_metric_service.py` (2): `check_connection` (27-28), `name` (31-33)
- `robusta_krr/core/integrations/prometheus/metrics_service/mimir_metrics_service.py` (1): `check_connection` (35-47)
- `robusta_krr/core/integrations/prometheus/metrics_service/victoria_metrics_service.py` (2): `check_connection` (49-61), `name` (46-47)
- `robusta_krr/core/integrations/prometheus/metrics_service/thanos_metrics_service.py` (1): `check_connection` (38-50)
🪛 Flake8 (7.3.0)
robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py
[error] 117-117: visually indented line with same indent as next logical line
(E129)
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
♻️ Duplicate comments (1)
robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py (1)
109-112: Initialize `_last_init_at` before calling `get_prometheus()`. This issue was already identified in a previous review. The `get_prometheus()` method at line 117 reads `self._last_init_at`, but it's never initialized before `__init__` calls `get_prometheus()` at line 112, causing an `AttributeError` on first run.
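As a hedged sketch of the suggested fix, the attribute can be initialized to `None` before the first `get_prometheus()` call. The class below is a simplified stand-in (the constant value and `_init_client` helper are assumptions), not the actual service implementation.

```python
from datetime import datetime, timedelta
from typing import Optional

PROM_REFRESH_CREDS_SEC = 1800  # assumed value, for illustration only


class PrometheusMetricsServiceSketch:
    def __init__(self) -> None:
        self.prometheus = None
        # Initialize before the first get_prometheus() call so the expiry
        # check never hits an AttributeError on first run.
        self._last_init_at: Optional[datetime] = None
        self.get_prometheus()

    def get_prometheus(self):
        now = datetime.utcnow()
        if (
            not self.prometheus
            or not self._last_init_at
            or now - self._last_init_at >= timedelta(seconds=PROM_REFRESH_CREDS_SEC)
        ):
            self.prometheus = self._init_client()
            self._last_init_at = now
        return self.prometheus

    def _init_client(self):
        # Placeholder for the real client construction / credential refresh.
        return object()
```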
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py` (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py (4)
- `robusta_krr/core/integrations/prometheus/prometheus_utils.py` (1): `generate_prometheus_config` (24-71)
- `robusta_krr/core/integrations/prometheus/metrics_service/mimir_metrics_service.py` (1): `check_connection` (35-47)
- `robusta_krr/core/integrations/prometheus/metrics_service/victoria_metrics_service.py` (2): `check_connection` (49-61), `name` (46-47)
- `robusta_krr/core/integrations/prometheus/metrics_service/thanos_metrics_service.py` (1): `check_connection` (38-50)
🪛 Flake8 (7.3.0)
robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py
[error] 118-118: visually indented line with same indent as next logical line
(E129)
🔇 Additional comments (2)
robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py (2)
130-130: LGTM: Consistent credential refresh across all Prometheus calls. The replacement of direct `self.prometheus` access with `self.get_prometheus()` across all query methods ensures automatic credential refresh throughout the service lifecycle. Also applies to: 137-137, 145-145, 172-172, 211-211.
351-357: Ignore private `_api_resource._grouped_jobs` warnings. GroupedJob's `_grouped_jobs` is always populated by the grouping logic (non-empty), and `metadata.name` is guaranteed by the Kubernetes client. Accessing these internal attributes follows the established pattern across the codebase and requires no additional validation or error handling. Likely an incorrect or invalid review comment.
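A hedged sketch of the pod-owner derivation described in the walkthrough (owners taken from the names of the Jobs stored on `_grouped_jobs`, with `pod_owner_kind="Job"`). The `SimpleNamespace` objects stand in for the real Kubernetes models, and the helper name is made up.

```python
from types import SimpleNamespace


def derive_pod_owners(api_resource) -> tuple[str, list[str]]:
    """Return (pod_owner_kind, owner names) for a GroupedJob-like resource."""
    pod_owner_kind = "Job"
    owners = [job.metadata.name for job in api_resource._grouped_jobs]
    return pod_owner_kind, owners


grouped = SimpleNamespace(
    _grouped_jobs=[
        SimpleNamespace(metadata=SimpleNamespace(name="etl-20240101")),
        SimpleNamespace(metadata=SimpleNamespace(name="etl-20240102")),
    ]
)
print(derive_pod_owners(grouped))  # ('Job', ['etl-20240101', 'etl-20240102'])
```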
Actionable comments posted: 2
♻️ Duplicate comments (1)
robusta_krr/core/integrations/kubernetes/__init__.py (1)
529-535: GroupedJob drops additional containers. Only `containers[0]` is used (line 529), so any extra containers defined on the underlying Jobs are dropped. This issue was previously identified in past review comments. As suggested in the previous review, iterate over all containers and emit a grouped object per container:

```diff
-        limited_jobs = namespace_jobs[:settings.job_grouping_limit]
-        template_job = limited_jobs[0]
-        template_container = template_job.spec.template.spec.containers[0]
-
-        grouped_job = self.__build_scannable_object(
-            item=template_job,
-            container=template_container,
-            kind="GroupedJob"
-        )
-
-        grouped_job.name = group_name
-        grouped_job.namespace = namespace
-        grouped_job._api_resource._grouped_jobs = limited_jobs
-        grouped_job._api_resource._label_filter = group_name
-
-        result.append(grouped_job)
+        limited_jobs = namespace_jobs[:settings.job_grouping_limit]
+        template_job = limited_jobs[0]
+
+        for container in template_job.spec.template.spec.containers:
+            grouped_job = self.__build_scannable_object(
+                item=template_job,
+                container=container,
+                kind="GroupedJob"
+            )
+
+            grouped_job.name = group_name
+            grouped_job.namespace = namespace
+            grouped_job._api_resource._grouped_jobs = limited_jobs
+            grouped_job._api_resource._label_filter = group_name
+
+            result.append(grouped_job)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `robusta_krr/core/integrations/kubernetes/__init__.py` (4 hunks)
- `robusta_krr/core/models/config.py` (2 hunks)
- `robusta_krr/main.py` (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- robusta_krr/core/models/config.py
- robusta_krr/main.py
🧰 Additional context used
🧬 Code graph analysis (1)
robusta_krr/core/integrations/kubernetes/__init__.py (1)
- `robusta_krr/core/models/objects.py` (2): `PodData` (14-19), `K8sObjectData` (38-107)
🔇 Additional comments (2)
robusta_krr/core/integrations/kubernetes/__init__.py (2)
110-110: LGTM! The integration of `_list_all_groupedjobs()` into the parallel workload aggregation is correct and consistent with other workload types.
462-479: LGTM! The `filter_jobs` function correctly excludes CronJob-owned jobs and jobs with grouping labels, preventing double-listing of jobs that should be handled by GroupedJob objects.
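For context, a sketch of the filtering behaviour described above, assuming `filter_jobs` simply drops CronJob-owned Jobs and Jobs carrying any grouping label; attribute access mirrors the Kubernetes Python client models, but this is an illustration rather than the PR's exact code.

```python
def filter_jobs(jobs, job_grouping_labels):
    kept = []
    for job in jobs:
        owner_refs = job.metadata.owner_references or []
        # Skip Jobs managed by a CronJob; those are scanned via their CronJob.
        if any(ref.kind == "CronJob" for ref in owner_refs):
            continue
        labels = job.metadata.labels or {}
        # Skip Jobs that will be represented by a GroupedJob instead.
        if job_grouping_labels and any(label in labels for label in job_grouping_labels):
            continue
        kept.append(job)
    return kept
```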
Actionable comments posted: 7
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`poetry.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
pyproject.toml(2 hunks)tests/test_grouped_jobs.py(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- pyproject.toml
🧰 Additional context used
🧬 Code graph analysis (1)
tests/test_grouped_jobs.py (3)
- `robusta_krr/core/integrations/kubernetes/__init__.py` (3): `KubernetesLoader` (635-714), `_list_namespaced_or_global_objects` (264-303), `_list_all_groupedjobs` (490-545)
- `robusta_krr/core/models/config.py` (1): `Config` (20-207)
- `robusta_krr/core/models/objects.py` (1): `K8sObjectData` (38-107)
🪛 Flake8 (7.3.0)
tests/test_grouped_jobs.py
[error] 3-3: 'collections.defaultdict' imported but unused
(F401)
[error] 7-7: 'robusta_krr.api.models.K8sObjectData' imported but unused
(F401)
🪛 GitHub Actions: Pytest
tests/test_grouped_jobs.py
[warning] 1-1: PytestUnknownMarkWarning: Unknown pytest.mark.asyncio - is this a typo? You can register custom marks to avoid this warning.
🪛 Ruff (0.13.3)
tests/test_grouped_jobs.py
42-42: Unused function argument: mock_config
(ARG001)
62-62: Unused function argument: item
(ARG001)
62-62: Unused function argument: container
(ARG001)
62-62: Unused function argument: kind
(ARG001)
99-99: Unused function argument: mock_config
(ARG001)
112-112: Unused function argument: item
(ARG001)
112-112: Unused function argument: container
(ARG001)
112-112: Unused function argument: kind
(ARG001)
139-139: Unused function argument: mock_config
(ARG001)
153-153: Unused function argument: item
(ARG001)
153-153: Unused function argument: container
(ARG001)
153-153: Unused function argument: kind
(ARG001)
185-185: Unused function argument: mock_config
(ARG001)
197-197: Unused function argument: item
(ARG001)
197-197: Unused function argument: container
(ARG001)
197-197: Unused function argument: kind
(ARG001)
🔇 Additional comments (1)
tests/test_grouped_jobs.py (1)
62-65: Unused arguments in mock functions are expected behavior. The static analysis warnings about unused arguments in `mock_build_scannable_object` are false positives. These arguments must match the signature of the method being mocked (`__build_scannable_object`), even if the mock implementation doesn't use them. The same applies to the `mock_config` fixture parameter in test functions; it's required for the pytest fixture injection mechanism and the `patch` context manager in the `mock_kubernetes_loader` fixture. Also applies to: 112-115, 153-156, 197-200.
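To illustrate the fixture-injection point (names here are made up, not the PR's test code): requesting a fixture argument forces pytest to run its setup and teardown even if the test body never reads the value, which is why the argument looks unused to static analysis.

```python
from unittest.mock import patch

import pytest


@pytest.fixture
def mock_config():
    # Stand-in fixture: its only job is a setup side effect (here, patching an
    # environment variable) that the test relies on without reading the fixture value.
    with patch.dict("os.environ", {"JOB_GROUPING_LABELS": "app"}):
        yield


def test_grouping_enabled(mock_config):  # "unused" argument still triggers the fixture
    import os

    assert os.environ["JOB_GROUPING_LABELS"] == "app"
```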
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`poetry.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (3)
- `conftest.py` (1 hunks)
- `pyproject.toml` (3 hunks)
- `tests/test_grouped_jobs.py` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- conftest.py
🧰 Additional context used
🧬 Code graph analysis (1)
tests/test_grouped_jobs.py (2)
- `robusta_krr/core/integrations/kubernetes/__init__.py` (2): `_list_namespaced_or_global_objects` (264-303), `_list_all_groupedjobs` (490-545)
- `robusta_krr/core/models/config.py` (2): `Config` (20-207), `get_kube_client` (179-189)
🪛 Ruff (0.13.3)
tests/test_grouped_jobs.py
67-67: Unused function argument: item
(ARG001)
67-67: Unused function argument: container
(ARG001)
67-67: Unused function argument: kind
(ARG001)
119-119: Unused function argument: item
(ARG001)
119-119: Unused function argument: container
(ARG001)
119-119: Unused function argument: kind
(ARG001)
162-162: Unused function argument: item
(ARG001)
162-162: Unused function argument: container
(ARG001)
162-162: Unused function argument: kind
(ARG001)
208-208: Unused function argument: item
(ARG001)
208-208: Unused function argument: container
(ARG001)
208-208: Unused function argument: kind
(ARG001)
🔇 Additional comments (8)
pyproject.toml (2)
57-57: LGTM: pytest-asyncio dependency added. The addition of `pytest-asyncio` resolves the pipeline failure where the `asyncio` marker was not recognized in the test suite.
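For reference, a minimal example of the test style this dependency enables; with pytest-asyncio installed, the `asyncio` marker is registered and async tests are awaited instead of raising `PytestUnknownMarkWarning`. The test name and body are illustrative only.

```python
import asyncio

import pytest


@pytest.mark.asyncio
async def test_example_async_listing():
    # Stand-in for awaiting an async loader method such as _list_all_groupedjobs().
    await asyncio.sleep(0)
    assert True
```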
33-33: Confirm prometrix downgrade. prometrix is pinned to v0.2.5 but the latest release is v0.2.7 with no known vulnerabilities; ensure v0.2.5 supports all required functionality and the downgrade is intentional.
tests/test_grouped_jobs.py (6)
1-43: LGTM: Clean test setup with fixtures and helpers. The imports, fixtures (`mock_config`, `mock_kubernetes_loader`), and helper function (`create_mock_job`) are well-structured and provide a solid foundation for testing the GroupedJob functionality.
46-102: LGTM: Comprehensive test for job grouping limit. The test correctly verifies that `_list_all_groupedjobs` respects the `job_grouping_limit` by creating more jobs than the limit and asserting only the first 3 jobs per group are included. The mock setup is appropriate. Note: Static analysis flags unused arguments in `mock_build_scannable_object` (line 67), but these are intentional to match the signature of the method being mocked.
105-144: LGTM: Validates namespace separation in job grouping. The test correctly verifies that jobs with the same grouping label in different namespaces are grouped separately, which is the expected behavior. The assertions confirm each namespace gets its own GroupedJob object.
147-179: LGTM: Correctly validates CronJob exclusion logic. The test ensures jobs with CronJob owner references are excluded from grouping, which is the expected behavior since CronJobs manage their own job lifecycle. The test setup and assertions are appropriate.
182-192: LGTM: Validates early return when grouping is disabled. The test correctly verifies that no GroupedJob objects are created when `job_grouping_labels` is `None`, ensuring the feature gracefully handles being disabled.
195-226: LGTM: Validates multiple label grouping. The test correctly verifies that jobs with different grouping labels create separate groups, with each label-value combination forming its own GroupedJob. The use of a set to verify group names is a good approach.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py (1)
102-108: Prevent shared headers bleeding across instances. Line 102 assigns `headers = self.additional_headers`, and the subsequent `|=` updates mutate that class-level dict in place. The first service instance will stash its Authorization header there, and every later instance (possibly for another cluster/account) inherits those credentials. Please copy the template dict before merging secrets so each service keeps its headers isolated.

```diff
-        headers = self.additional_headers
+        headers = dict(self.additional_headers)
```
♻️ Duplicate comments (1)
robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py (1)
117-120: Fix the multiline `if` indentation to satisfy flake8 E129. Flake8 is right: the continuation lines sit at the same indentation as the next logical line, so the formatter/linter will keep failing. Please reformat the condition using the standard parenthesized style so `black`/`flake8` accept it.

```diff
-        if (not self.prometheus
-            or not self._last_init_at
-            or now - self._last_init_at >= timedelta(seconds=PROM_REFRESH_CREDS_SEC)):
+        if (
+            not self.prometheus
+            or not self._last_init_at
+            or now - self._last_init_at >= timedelta(seconds=PROM_REFRESH_CREDS_SEC)
+        ):
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`poetry.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (4)
- `conftest.py` (1 hunks)
- `pyproject.toml` (3 hunks)
- `robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py` (5 hunks)
- `tests/test_grouped_jobs.py` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- pyproject.toml
🧰 Additional context used
🧬 Code graph analysis (2)
robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py (4)
- `robusta_krr/core/integrations/prometheus/prometheus_utils.py` (1): `generate_prometheus_config` (24-71)
- `robusta_krr/core/integrations/prometheus/metrics_service/mimir_metrics_service.py` (1): `check_connection` (35-47)
- `robusta_krr/core/integrations/prometheus/metrics_service/victoria_metrics_service.py` (2): `check_connection` (49-61), `name` (46-47)
- `robusta_krr/core/integrations/prometheus/metrics_service/thanos_metrics_service.py` (1): `check_connection` (38-50)
tests/test_grouped_jobs.py (2)
- `robusta_krr/core/integrations/kubernetes/__init__.py` (2): `_list_namespaced_or_global_objects` (264-303), `_list_all_groupedjobs` (490-545)
- `robusta_krr/core/models/config.py` (2): `Config` (20-207), `get_kube_client` (179-189)
🪛 Flake8 (7.3.0)
robusta_krr/core/integrations/prometheus/metrics_service/prometheus_metrics_service.py
[error] 119-119: visually indented line with same indent as next logical line
(E129)
🪛 Ruff (0.13.3)
tests/test_grouped_jobs.py
67-67: Unused function argument: item
(ARG001)
67-67: Unused function argument: container
(ARG001)
67-67: Unused function argument: kind
(ARG001)
119-119: Unused function argument: item
(ARG001)
119-119: Unused function argument: container
(ARG001)
119-119: Unused function argument: kind
(ARG001)
162-162: Unused function argument: item
(ARG001)
162-162: Unused function argument: container
(ARG001)
162-162: Unused function argument: kind
(ARG001)
208-208: Unused function argument: item
(ARG001)
208-208: Unused function argument: container
(ARG001)
208-208: Unused function argument: kind
(ARG001)
Actionable comments posted: 1
♻️ Duplicate comments (2)
robusta_krr/core/integrations/kubernetes/__init__.py (2)
150-164: Filter pods by the limited jobs list, not just label selector. The code uses `label_selector` (line 158) to query all pods matching the grouping label, then slices the results (line 163). This can include pods from jobs outside `_grouped_jobs` (assigned at line 551) if they're returned first by the API. Instead, filter `ret.items` to only pods whose owner references match job UIDs in `object._api_resource._grouped_jobs` before applying `job_grouping_limit`.
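A hedged sketch of the suggested approach: keep only pods whose owner references point at one of the Jobs in `_grouped_jobs`, then apply `job_grouping_limit`. The helper name is invented for illustration; the attribute names follow the Kubernetes Python client models.

```python
def filter_pods_to_grouped_jobs(pod_items, grouped_jobs, job_grouping_limit):
    # UIDs of the Jobs that actually belong to this GroupedJob.
    grouped_job_uids = {job.metadata.uid for job in grouped_jobs}
    # Keep only pods owned by one of those Jobs.
    owned_pods = [
        pod
        for pod in pod_items
        if any(
            ref.kind == "Job" and ref.uid in grouped_job_uids
            for ref in (pod.metadata.owner_references or [])
        )
    ]
    # Apply the per-group limit only after the ownership filter.
    return owned_pods[:job_grouping_limit]
```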
514-518: Jobs with multiple grouping labels will be counted multiple times. The current implementation iterates over all `job_grouping_labels` and adds each job to a separate group for each matching label (lines 514-518). If a job has multiple grouping labels (e.g., both "app" and "version"), it will appear in multiple GroupedJob objects, leading to double-counting of pods and resources.
For example, with `job_grouping_labels = ["app", "version"]` and a job with labels `{"app": "foo", "version": "1.0"}`:
- The job is added to `grouped_jobs["app=foo"]`
- The job is also added to `grouped_jobs["version=1.0"]`
- Two separate GroupedJob objects are created, each containing this job
- The same pods are scanned twice under different GroupedJob names
Consider using only the first matching label or document that this behavior is intentional for multi-dimensional grouping.
Apply this diff to use only the first matching label:

```diff
 for label_name in settings.job_grouping_labels:
     if label_name in job.metadata.labels:
         label_value = job.metadata.labels[label_name]
         group_key = f"{label_name}={label_value}"
         grouped_jobs[group_key].append(job)
+        break  # Only group by the first matching label to avoid double-counting
```
🧹 Nitpick comments (2)
tests/test_grouped_jobs.py (2)
67-67: Prefix unused parameters with underscore. The mock helper functions declare parameters `item`, `container`, and `kind` but never use them, triggering Ruff ARG001 warnings. Prefix these with underscores to mark them as intentionally unused. Apply this diff to all four helper functions (lines 67, 124, 171, 217):

```diff
-    def mock_build_scannable_object(item, container, kind):
+    def mock_build_scannable_object(_item, _container, _kind):
         obj = MagicMock()
         obj._api_resource = MagicMock()
         return obj
```

Also applies to: 124-124, 171-171, 217-217.
46-235: Consider mocking `_should_list_resource` for robustness. While the tests currently work because `mock_config.resources = "*"` causes `_should_list_resource` to return `True`, explicitly mocking this method would make the tests more robust and document their assumptions clearly. If `_should_list_resource("GroupedJob")` returns `False`, the method returns an empty list, causing test failures. Add this line after setting up the mock helpers in each test:

```python
mock_kubernetes_loader._should_list_resource = MagicMock(return_value=True)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `robusta_krr/core/integrations/kubernetes/__init__.py` (4 hunks)
- `tests/test_grouped_jobs.py` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
robusta_krr/core/integrations/kubernetes/__init__.py (1)
- `robusta_krr/core/models/objects.py` (2): `PodData` (14-19), `K8sObjectData` (38-107)
tests/test_grouped_jobs.py (2)
- `robusta_krr/core/integrations/kubernetes/__init__.py` (3): `ClusterLoader` (34-644), `_list_namespaced_or_global_objects` (264-303), `_list_all_groupedjobs` (490-557)
- `robusta_krr/core/models/config.py` (2): `Config` (20-207), `get_kube_client` (179-189)
🪛 Ruff (0.13.3)
tests/test_grouped_jobs.py
67-67: Unused function argument: item
(ARG001)
67-67: Unused function argument: container
(ARG001)
67-67: Unused function argument: kind
(ARG001)
124-124: Unused function argument: item
(ARG001)
124-124: Unused function argument: container
(ARG001)
124-124: Unused function argument: kind
(ARG001)
171-171: Unused function argument: item
(ARG001)
171-171: Unused function argument: container
(ARG001)
171-171: Unused function argument: kind
(ARG001)
217-217: Unused function argument: item
(ARG001)
217-217: Unused function argument: container
(ARG001)
217-217: Unused function argument: kind
(ARG001)
🔇 Additional comments (4)
robusta_krr/core/integrations/kubernetes/__init__.py (2)
110-110: LGTM! The integration of `_list_all_groupedjobs()` into the parallel workload aggregation is correct and consistent with other workload types.
462-480: LGTM! The `filter_jobs` function correctly prevents double-counting by excluding CronJob-owned jobs and jobs with grouping labels from the regular Job listing.
tests/test_grouped_jobs.py (2)
1-44: LGTM! The test structure and fixtures are well-organized. The `create_mock_job` helper effectively reduces duplication across test cases.
46-236: LGTM! The test cases are comprehensive and well-structured:
- Job grouping limit enforcement
- Namespace separation
- CronJob exclusion
- No grouping without labels
- Multi-label grouping
Each test clearly verifies the expected behavior with appropriate assertions.
arikalon1
left a comment
very nice work!
Added one minor comment
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`README.md` (1 hunks)
🧰 Additional context used
🪛 LanguageTool
README.md
[grammar] ~364-~364: There might be a mistake here.
Context: ...grouping-labels app,team ``` This will: - Group jobs that have either app or `te...
(QB_NEW_EN)
[grammar] ~366-~366: There might be a mistake here.
Context: ...ike app=frontend, team=backend, etc. - Provide resource recommendations for the...
(QB_NEW_EN)
Added support for job grouping, and for refreshing Prometheus credentials when the IRSA token times out.

Currently testing credential refresh.