
Layers - 06 - B [Test] Adds Test changes for Create and List Layers api #1781

Open
bhanvimenghani wants to merge 5 commits into kruize:mvp_demo from bhanvimenghani:layers-test-changes

Conversation

@bhanvimenghani
Contributor

@bhanvimenghani bhanvimenghani commented Jan 29, 2026

Description

Please describe the issue or feature and the summary of changes made to fix this.

Fixes # (issue)

Type of change

  • Bug fix
  • New feature
  • Docs update
  • Breaking change (What changes might users need to make in their application due to this PR?)
  • Requires DB changes

How has this been tested?

Please describe the tests that were run to verify your changes and steps to reproduce. Please specify any test configuration required.

  • New Test X
  • Functional testsuite

Test Configuration

  • Kubernetes clusters tested on:

Checklist 🎯

  • Followed coding guidelines
  • Comments added
  • Dependent changes merged
  • Documentation updated
  • Tests added or updated

Additional information

Include any additional information such as links, test results, screenshots here

Summary by Sourcery

Add comprehensive test coverage and helpers for the Layers create and list REST APIs.

New Features:

  • Introduce helper functions and utilities to create, list, and clean up layers for API tests.
  • Add extensive positive and negative test suites for the createLayer and listLayers APIs, including validation of tunables and presence configurations.

Enhancements:

  • Document the planned createLayer and listLayers test scenarios in the local monitoring test plan and add a helper to locate layer manifest files.
  • Extend shared test utilities with layer-related constants, directory helpers, and a temporary DB cleanup helper for layer records.
  • Add a new pytest marker to categorize layer-related tests.

Tests:

  • Create dedicated pytest modules for createLayer and listLayers REST API behavior, covering success cases, input validation, error handling, and response schema verification.

@sourcery-ai
Contributor

sourcery-ai bot commented Jan 29, 2026

Reviewer's Guide

Adds helper functions, constants, documentation, pytest markers, and two comprehensive REST API test suites to validate the new createLayer and listLayers APIs for Kruize layers, including positive, negative, and field-level validation scenarios.

File-Level Changes

Add helper utilities for interacting with layer REST APIs and cleaning up test data in the DB.
  • Introduce create_layer() helper that posts layer JSON to the /createLayer endpoint and logs request/response details.
  • Introduce list_layers() helper that calls /listLayers with optional name query parameter and conditional logging.
  • Add delete_layer_from_db() helper that shells into the kruize-db-deployment pod and deletes layers directly from the kruize_lm_layer table as a temporary cleanup workaround.
tests/scripts/helpers/kruize.py
Extend test utilities with layer-specific constants and a helper to locate layer manifest files.
  • Define user-facing message constants for layer API success and error conditions (creation, duplicates, missing fields, invalid tunables, etc.).
  • Add get_layer_dir() helper that resolves the manifests/autotune/layers directory relative to the helpers package.
tests/scripts/helpers/utils.py
Document local monitoring scenarios for Create Layer and List Layers REST APIs.
  • Describe positive and negative test scenarios for createLayer, covering tunable types, presence configurations, mandatory field validation, invalid values, presence combinations, and tunable validation.
  • Describe test scenarios for listLayers, including listing all layers, by name, invalid names, empty system, and response structure validation.
tests/scripts/local_monitoring_tests/Local_monitoring_tests.md
Introduce a pytest marker to group and filter layer-related tests.
  • Add a layers marker description to the pytest.ini markers list so tests can be selectively run with -m layers.
tests/scripts/local_monitoring_tests/pytest.ini
Add a comprehensive pytest suite for the listLayers REST API.
  • Cover behavior when no layers exist by cleaning known layers directly from the DB and asserting a 400 error with a 'no layers' message.
  • Test listing all layers and verifying required fields for each, after creating one or more layers from manifest JSON files.
  • Test listing a specific layer by name and deeply validating metadata, presence configuration, and tunables, including field-by-field comparison to the input JSON.
  • Add negative tests for non-existent layer names, invalid query parameters, and special-character layer names, checking proper 400 responses and error messages.
tests/scripts/local_monitoring_tests/rest_apis/test_list_layers.py
Add a comprehensive pytest suite for the createLayer REST API, including templated negative cases.
  • Create positive tests that validate creation of layers with various tunable types, layer_presence modes, and minimal required fields using existing manifest JSONs.
  • Add an autouse-disabled fixture to clean up a common test layer name before and after specific tests via the DB helper.
  • Add grouped negative tests that render requests from a Jinja2 JSON template to exercise missing/NULL/empty mandatory fields, invalid/negative/duplicate values, invalid layer_presence combinations, and invalid bounds/step settings for numeric tunables.
  • Add negative tests for categorical tunables with null/empty choices or mixed categorical and numeric configurations, asserting 400 responses and message substrings tied to specific validation errors.
  • Add a dedicated test ensuring duplicate layer_name creation returns HTTP 409 with an 'already exists' error message.
tests/scripts/local_monitoring_tests/rest_apis/test_create_layer.py

Assessment against linked issues

  • #1777 — Implement automated tests for the Create Layer API covering the specified positive and negative scenarios in the issue: partially addressed. The PR adds extensive pytest coverage for Create Layer, including positive cases (bounded, categorical, and mixed tunables; presence types; minimal fields) and most negative cases (null/empty metadata.name, layer_name, layer_presence, tunables; negative layer_level; duplicate tunables; duplicate layer name; invalid layer_presence combinations; tunable bounds/step and categorical validations). Some scenarios from the issue are not clearly covered, however: mandatory fields that are omitted entirely, such as missing metadata, missing layer_name, missing layer_presence, and missing tunables (the tests cover null and empty values, but not the absence of the field), and invalid data type for layer_level. Because not all specified negative scenarios are implemented, this objective is only partially met.
  • #1777 — Implement automated tests for the List Layers API covering the specified positive and negative scenarios in the issue.
  • #1777 — Document the Create Layer and List Layers API test scenarios in the local monitoring test guide.

Possibly linked issues

  • #N/A: PR implements and documents the positive and negative create/list layer API tests described in the issue.


@bhanvimenghani bhanvimenghani moved this to In Progress in Monitoring Jan 29, 2026
@bhanvimenghani bhanvimenghani added this to the Kruize 0.9 Release milestone Jan 29, 2026

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 4 issues and left some high-level feedback:

  • The listLayers API is inconsistent about the query parameter name: LayerService expects name (and KruizeSupportedTypes.LIST_LAYERS_QUERY_PARAMS_SUPPORTED only allows "name"), but the tests and helpers use layer_name, so align the supported query params and request.getParameter(...) with the external API naming to avoid spurious INVALID_QUERY_PARAM errors.
  • The layer conversion logic is duplicated between DBHelpers.Converters.KruizeObjectConverters.convertLayerObjectToLayerDBObj/convertLayerDBObjToLayerObject and LayerService.convertToLayerEntry; consider consolidating these into a single set of converter methods to avoid divergence and reduce maintenance overhead.
  • Several methods (convertLayerObjectToLayerDBObj, convertLayerDBObjToLayerObject, convertLayerEntryToLayerObject, DAO methods) catch broad Exception, log and printStackTrace(), and in some cases return null or wrap the message-only in a new Exception; tightening the exception types and avoiding swallowing stack traces (or rethrowing with the cause) would make failures easier to diagnose and reduce silent nulls.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The listLayers API is inconsistent about the query parameter name: `LayerService` expects `name` (and `KruizeSupportedTypes.LIST_LAYERS_QUERY_PARAMS_SUPPORTED` only allows `"name"`), but the tests and helpers use `layer_name`, so align the supported query params and `request.getParameter(...)` with the external API naming to avoid spurious `INVALID_QUERY_PARAM` errors.
- The layer conversion logic is duplicated between `DBHelpers.Converters.KruizeObjectConverters.convertLayerObjectToLayerDBObj`/`convertLayerDBObjToLayerObject` and `LayerService.convertToLayerEntry`; consider consolidating these into a single set of converter methods to avoid divergence and reduce maintenance overhead.
- Several methods (`convertLayerObjectToLayerDBObj`, `convertLayerDBObjToLayerObject`, `convertLayerEntryToLayerObject`, DAO methods) catch broad `Exception`, log and `printStackTrace()`, and in some cases return null or wrap the message-only in a new `Exception`; tightening the exception types and avoiding swallowing stack traces (or rethrowing with the cause) would make failures easier to diagnose and reduce silent nulls.

## Individual Comments

### Comment 1
<location> `tests/scripts/helpers/kruize.py:720-729` </location>
<code_context>
+
+# Description: This function lists layers from Kruize Autotune using GET listLayers API
+# Input Parameters: layer name (optional)
+def list_layers(layer_name=None, logging=True):
+    print("\nListing the layers...")
+
+    query_params = {}
+
+    if layer_name is not None:
+        query_params['layer_name'] = layer_name
+
+    query_string = "&".join(f"{key}={value}" for key, value in query_params.items())
+
+    url = URL + "/listLayers"
+    if query_string:
+        url += "?" + query_string
+    print("URL = ", url)
+    print("PARAMS = ", query_params)
+    response = requests.get(url)
+
+    print("Response status code = ", response.status_code)
</code_context>

<issue_to_address>
**issue (bug_risk):** The list_layers helper uses 'layer_name' as query parameter, which likely does not match the API's expected parameter name.

Because this uses `layer_name` (e.g., `?layer_name=foo`) instead of the API’s `name` parameter, the listLayers calls won’t actually filter by name and the tests will only cover the invalid-parameter path. Please update the helper (and corresponding tests) to use the correct query parameter (e.g., `name`) so they exercise the intended filtering behavior.
</issue_to_address>
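A minimal sketch of the fix, assuming the server really does accept only `name` as the listLayers query parameter; the function name and base-URL handling are illustrative, not the PR's actual helper:

```python
from urllib.parse import urlencode

def build_list_layers_url(base_url, name=None):
    # Send the API's documented query parameter "name" rather than
    # "layer_name", so listLayers filters by name instead of rejecting
    # the request with an INVALID_QUERY_PARAM error.
    url = base_url + "/listLayers"
    if name is not None:
        url += "?" + urlencode({"name": name})
    return url
```

The test helper could then call `requests.get(build_list_layers_url(URL, layer_name))`; keeping URL construction in a pure function also makes the parameter naming trivially unit-testable.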

### Comment 2
<location> `tests/scripts/local_monitoring_tests/rest_apis/test_list_layers.py:119-128` </location>
<code_context>
+def test_list_specific_layer_by_name(cluster_type, layer_file):
</code_context>

<issue_to_address>
**issue (testing):** Tests that filter by layer name rely on the 'layer_name' query parameter and may not be exercising name-based filtering at all.

This and related tests call `list_layers(layer_name=...)`, which sends `?layer_name=...` to the API, but the implementation expects `name` and rejects unknown params. That means these tests likely validate invalid-parameter handling or unfiltered listing, not name-based filtering. After updating the helper to send `name`, please re-run and update the assertions (e.g., `len(layers) == 1`) to match the actual behavior (e.g., 400 for invalid params vs 404/empty list for missing names).
</issue_to_address>

### Comment 3
<location> `tests/scripts/local_monitoring_tests/rest_apis/test_list_layers.py:355-364` </location>
<code_context>
+
+# ========== Negative Test Cases ==========
+
+@pytest.mark.negative
+def test_list_layer_with_non_existent_name(cluster_type):
+    """
+    Test Description: This test validates listLayers API when querying a layer that doesn't exist.
+    Expected: Should return 404 status with appropriate error message.
+    """
+    form_kruize_url(cluster_type)
+
+    non_existent_layer_name = "non-existent-layer-12345"
+
+    # Try to list a layer that doesn't exist
+    response = list_layers(layer_name=non_existent_layer_name, logging=False)
+
+    print(f"Response for non-existent layer '{non_existent_layer_name}': {response.status_code}")
+
+    assert response.status_code == ERROR_404_STATUS_CODE
+
+    data = response.json()
+    assert data['status'] == ERROR_STATUS
+    assert LAYER_NOT_FOUND_MSG % non_existent_layer_name in data['message']
+
+    print(f"✓ Correctly returned 404 for non-existent layer '{non_existent_layer_name}'")
+
+
</code_context>

<issue_to_address>
**suggestion (testing):** The non-existent layer test assumes 404 and a specific error message, which may not match the API's actual behavior.

This test currently hardcodes a 404 and `LAYER_NOT_FOUND_MSG % non_existent_layer_name`, but listLayers may legitimately return a different status/message (e.g., 400 with `INVALID_LAYER_NAME_MSG` or a generic error). After the query parameter fix, please verify the actual listLayers contract for a non-existent name and adjust the expected status and message accordingly to reflect the real behavior and avoid a brittle test.

Suggested implementation:

```python


# ========== Negative Test Cases ==========

@pytest.mark.negative
def test_list_layer_with_non_existent_name(cluster_type):
    """
    Test Description:
        Validate listLayers API behavior when querying a layer that doesn't exist.

    Expectations:
        - API should return a non-success (4xx/5xx) HTTP status code.
        - Response JSON should have `status == ERROR_STATUS`.
        - Error message should reference the non-existent layer name.

    Note:
        Once the listLayers contract for non-existent layers is finalized
        (exact HTTP code and message), tighten these assertions to match
        the agreed behavior instead of relying on generic checks.
    """
    # Arrange
    form_kruize_url(cluster_type)
    non_existent_layer_name = "non-existent-layer-12345"

    # Act
    response = list_layers(layer_name=non_existent_layer_name, logging=False)
    print(f"Response for non-existent layer '{non_existent_layer_name}': {response.status_code}")

    # Assert
    # Do not hardcode 404 here; only assert that the call failed.
    assert response.status_code >= 400, (
        f"Expected non-success status for non-existent layer, got {response.status_code}"
    )

    data = response.json()
    assert data.get('status') == ERROR_STATUS, (
        f"Expected ERROR_STATUS in response, got: {data}"
    )

    # The exact error message may vary by implementation; ensure it refers to the layer name.
    message = data.get('message', '')
    assert isinstance(message, str), f"Expected 'message' to be a string, got: {type(message)}"
    assert non_existent_layer_name in message, (
        f"Expected error message to mention the non-existent layer name "
        f"'{non_existent_layer_name}', got: '{message}'"
    )

    print(f"✓ Correctly returned error for non-existent layer '{non_existent_layer_name}'")



```

To fully align this test with the real API contract once known, you should:

1. Verify the actual behavior of `list_layers(layer_name=non_existent_layer_name)` against a live or mocked API:
   - Exact HTTP status code (e.g., 404, 400).
   - Exact error message or error code field if present.
2. Update the assertions accordingly, for example:
   - Replace `response.status_code >= 400` with the specific expected code (e.g., `ERROR_404_STATUS_CODE` or `ERROR_400_STATUS_CODE`).
   - Replace the generic `non_existent_layer_name in message` check with an assertion against the exact message or a constant (e.g., `LAYER_NOT_FOUND_MSG % non_existent_layer_name` or another agreed string/structure).
3. If the API returns structured error codes (e.g., `data["error_code"]`), add an assertion for that as well to further reduce brittleness.
</issue_to_address>

### Comment 4
<location> `tests/scripts/local_monitoring_tests/rest_apis/test_list_layers.py:31-40` </location>
<code_context>
+def test_list_all_layers_no_parameter(cluster_type):
</code_context>

<issue_to_address>
**suggestion (testing):** Consider adding a test for the listLayers behavior when there are no layers at all.

`test_list_all_layers_no_parameter` currently creates a layer first, so it never exercises the empty-state behavior. Please add a separate test that runs from a clean state with no layers and verifies the response (status code and message) when `loadAllLayers()` returns an empty list, covering the `NO_LAYERS` branch.
</issue_to_address>
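One way to sketch that empty-state coverage is to factor the NO_LAYERS assertions into a small reusable check that a clean-state test (after removing known layers via the DB helper) can call. The 400 status and 'no layers' message below follow the test plan described earlier in this PR but remain assumptions until the API contract is confirmed:

```python
def assert_no_layers_response(status_code, body, no_layers_msg="No layers found"):
    # Expected NO_LAYERS branch per the test plan: HTTP 400 with an ERROR
    # status and a 'no layers' message. Adjust if the API settles on a
    # different status code or wording.
    assert status_code == 400, f"expected 400 for empty system, got {status_code}"
    assert body.get("status") == "ERROR", f"unexpected status field: {body}"
    assert no_layers_msg in body.get("message", ""), f"unexpected message: {body}"
```

A dedicated `test_list_layers_empty_system` could then delete all known layers, call the listLayers helper, and pass `response.status_code` and `response.json()` to this check.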

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@chandrams
Contributor

@bhanvimenghani Are the sourcery comments addressed?

@bhanvimenghani
Contributor Author

@bhanvimenghani Are the sourcery comments addressed?

@chandrams I am adding the negative tests for create layers along with the test documentation; I will address the Sourcery comments after that.

@bhanvimenghani
Contributor Author

@sourcery-ai review


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 3 issues and left some high-level feedback:

  • In Converters.convertInputJSONToCreateLayer, several JSON keys like "layer_presence", "queries", "label", and "tunables" are hard-coded as strings; consider using the corresponding AnalyzerConstants.AutotuneConfigConstants fields for consistency and easier refactoring.
  • Layer validation currently does not enforce any constraints on layer_level, but your test utilities define LAYER_LEVEL_NEGATIVE_MSG and test negative values; consider extending LayerValidation to reject negative layer_level values with a dedicated error message.
  • Error handling in LayerService uses response.sendError(...), which typically produces a default HTML error body, whereas success responses return structured JSON; consider returning consistent JSON error payloads (e.g., with status, message, and httpcode fields) to align with other APIs and the expectations in the new tests.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `Converters.convertInputJSONToCreateLayer`, several JSON keys like `"layer_presence"`, `"queries"`, `"label"`, and `"tunables"` are hard-coded as strings; consider using the corresponding `AnalyzerConstants.AutotuneConfigConstants` fields for consistency and easier refactoring.
- Layer validation currently does not enforce any constraints on `layer_level`, but your test utilities define `LAYER_LEVEL_NEGATIVE_MSG` and test negative values; consider extending `LayerValidation` to reject negative `layer_level` values with a dedicated error message.
- Error handling in `LayerService` uses `response.sendError(...)`, which typically produces a default HTML error body, whereas success responses return structured JSON; consider returning consistent JSON error payloads (e.g., with `status`, `message`, and `httpcode` fields) to align with other APIs and the expectations in the new tests.

## Individual Comments

### Comment 1
<location> `src/main/java/com/autotune/analyzer/serviceObjects/Converters.java:600-602` </location>
<code_context>
+                if (jsonObject.has("tunables")) {
+                    JSONArray tunablesArray = jsonObject.getJSONArray("tunables");
+                    tunables = new ArrayList<>();
+                    for (Object tunableObj : tunablesArray) {
+                        JSONObject tunableJsonObject = (JSONObject) tunableObj;
+                        Tunable tunable = new Gson().fromJson(tunableJsonObject.toString(), Tunable.class);
+                        tunables.add(tunable);
+                    }
</code_context>

<issue_to_address>
**suggestion (performance):** Reuse a single Gson instance instead of creating a new one in each loop iteration.

A new `Gson` instance is created for each tunable. Because `Gson` is thread-safe and relatively expensive to construct, initialize it once (e.g., before the loop or as a static field) and reuse it for all deserializations.

Suggested implementation:

```java
                // Parse tunables array
                ArrayList<Tunable> tunables = null;
                if (jsonObject.has("tunables")) {
                    JSONArray tunablesArray = jsonObject.getJSONArray("tunables");
                    tunables = new ArrayList<>();
                    Gson gson = new Gson();

```

```java
                    for (Object tunableObj : tunablesArray) {
                        JSONObject tunableJsonObject = (JSONObject) tunableObj;
                        Tunable tunable = gson.fromJson(tunableJsonObject.toString(), Tunable.class);
                        tunables.add(tunable);
                    }

```

1. Ensure `com.google.gson.Gson` is imported at the top of `Converters.java`:
   `import com.google.gson.Gson;`
2. If the codebase already has a shared/static `Gson` instance for this class (e.g., `private static final Gson GSON = new Gson();`), prefer reusing that instead of creating a new local variable, and adjust the replacement to use that identifier (e.g., `GSON.fromJson(...)`).
</issue_to_address>

### Comment 2
<location> `src/main/java/com/autotune/analyzer/services/LayerService.java:107-109` </location>
<code_context>
+                sendErrorResponse(response, null, HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
+                        String.format(AnalyzerErrorConstants.APIErrors.CreateLayerAPI.ADD_LAYER_TO_DB_FAILURE, addedToDB.getMessage()));
+            }
+        } catch (MonitoringAgentNotSupportedException e) {
+            LOGGER.error("Failed to create layer: {}", e.getMessage());
+            sendErrorResponse(response, new Exception(e), HttpServletResponse.SC_BAD_REQUEST, "Validation failed: " + e.getMessage());
+        } catch (org.json.JSONException e) {
+            // JSON parsing errors are client errors
</code_context>

<issue_to_address>
**issue:** Avoid wrapping `MonitoringAgentNotSupportedException` in a generic `Exception` when sending the error response.

Here you’re wrapping `MonitoringAgentNotSupportedException` in a new `Exception` before passing it to `sendErrorResponse`, which hides the original type and stack trace. Pass `e` directly so the original exception (and its context) is preserved for logging and debugging.
</issue_to_address>

### Comment 3
<location> `src/main/java/com/autotune/analyzer/services/LayerService.java:115-124` </location>
<code_context>
+                tunable.validate();
+            } catch (InvalidBoundsException e) {
+                errors.add(e.getMessage());
+            } catch (Exception e) {
+                // Catch any unexpected exceptions during validation
+                String errorMsg = String.format("Unexpected error validating tunable '%s': %s",
</code_context>

<issue_to_address>
**suggestion:** Avoid using `printStackTrace()` in favor of structured logging only.

Here you both log the exception via SLF4J and call `e.printStackTrace()`. Please remove `printStackTrace()` and rely solely on `LOGGER.error` so errors go through the configured logging framework and logs stay consistent.
</issue_to_address>


@bhanvimenghani
Contributor Author

@sourcery-ai review


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 1 issue and left some high-level feedback:

  • The delete_layer_from_db() helper constructs a raw SQL command using string interpolation and hard-coded DB/namespace details; consider parameterizing these settings and avoiding direct string injection of layer_name (e.g., via a safer quoting/escaping utility) to reduce fragility and potential injection risk, even in test code.
  • In get_layer_dir(), relying on current_directory.parents[2] makes the helper sensitive to directory structure changes; consider basing this path on a known project root (e.g., via an env var or a single shared get_repo_root() helper) to make tests more robust to layout changes.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The `delete_layer_from_db()` helper constructs a raw SQL command using string interpolation and hard-coded DB/namespace details; consider parameterizing these settings and avoiding direct string injection of `layer_name` (e.g., via a safer quoting/escaping utility) to reduce fragility and potential injection risk, even in test code.
- In `get_layer_dir()`, relying on `current_directory.parents[2]` makes the helper sensitive to directory structure changes; consider basing this path on a known project root (e.g., via an env var or a single shared `get_repo_root()` helper) to make tests more robust to layout changes.
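A sketch of both suggestions; the namespace, deployment, database names, and the `KRUIZE_REPO_ROOT` variable are hypothetical defaults, not the repo's actual settings. The cleanup helper builds the kubectl/psql invocation as an argv list with the layer name escaped for the SQL literal, and the directory helper prefers an explicit root override before falling back to a relative layout:

```python
import os
import pathlib

def build_delete_layer_cmd(layer_name, namespace="monitoring",
                           deployment="kruize-db-deployment",
                           db_user="admin", db_name="kruizeDB",
                           table="kruize_lm_layer"):
    # Escape single quotes for the SQL string literal and pass the whole
    # command as an argument list, so layer_name is never interpolated
    # into a shell string.
    sql_name = layer_name.replace("'", "''")
    sql = f"DELETE FROM {table} WHERE layer_name = '{sql_name}';"
    return ["kubectl", "exec", "-n", namespace, f"deployment/{deployment}",
            "--", "psql", "-U", db_user, "-d", db_name, "-c", sql]

def get_layer_dir():
    # Prefer an explicit repo root (e.g. exported by CI) so the helper
    # survives directory moves; fall back to the relative layout otherwise.
    root = os.environ.get("KRUIZE_REPO_ROOT")
    if root:
        return pathlib.Path(root) / "manifests" / "autotune" / "layers"
    return pathlib.Path(__file__).resolve().parents[2] / "manifests" / "autotune" / "layers"
```

Even in test-only code, the argv-list form avoids quoting surprises when a layer name contains shell metacharacters, and the env-var override keeps the path helper independent of how deep the helpers package sits in the tree.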

## Individual Comments

### Comment 1
<location> `tests/scripts/local_monitoring_tests/rest_apis/test_create_layer.py:153-162` </location>
<code_context>
+@pytest.mark.parametrize("test_name, expected_error_msg, apiVersion, kind, metadata_name, layer_name, layer_level, details, layer_presence, tunables", [
</code_context>

<issue_to_address>
**suggestion (testing):** Negative tests rely on partial string matches; consider asserting against defined error message constants for stronger guarantees

These tests currently assert only short substrings (e.g., "name", "layer_presence") in the error output. Since you’ve added layer-specific error message constants in `helpers.utils` (e.g., `LAYER_METADATA_NAME_NULL_MSG`, `LAYER_NAME_NULL_MSG`), consider asserting against those constants instead. That will make the tests stricter, keep them aligned with canonical messages as they evolve, and clarify the expected error contract for each validation failure.

Suggested implementation:

```python
# NEGATIVE TEST CASES - A. Mandatory Fields Missing/NULL/Empty
# =============================================================================

@pytest.mark.layers
=======
# =============================================================================
# NEGATIVE TEST CASES - A. Mandatory Fields Missing/NULL/Empty
# =============================================================================

from helpers.utils import (
    LAYER_METADATA_NAME_NULL_MSG,
    LAYER_METADATA_NAME_EMPTY_MSG,
    LAYER_NAME_NULL_MSG,
    LAYER_NAME_EMPTY_MSG,
    LAYER_PRESENCE_NULL_MSG,
    LAYER_TUNABLES_NULL_MSG,
    LAYER_TUNABLES_EMPTY_MSG,
)

@pytest.mark.layers

```

```python
@pytest.mark.parametrize("test_name, expected_error_msg, apiVersion, kind, metadata_name, layer_name, layer_level, details, layer_presence, tunables", [
    ("null_metadata_name", LAYER_METADATA_NAME_NULL_MSG, "recommender.com/v1", "KruizeLayer", None, "test-layer", 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
    ("empty_metadata_name", LAYER_METADATA_NAME_EMPTY_MSG, "recommender.com/v1", "KruizeLayer", "", "test-layer", 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
    ("null_layer_name", LAYER_NAME_NULL_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", None, 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
    ("empty_layer_name", LAYER_NAME_EMPTY_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", "", 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
    ("null_layer_presence", LAYER_PRESENCE_NULL_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", 'null', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
    ("null_tunables", LAYER_TUNABLES_NULL_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", '{"presence": "always"}', 'null'),
    ("empty_tunables_array", LAYER_TUNABLES_EMPTY_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", '{"presence": "always"}', '[]'),
])

```

1. Ensure that the constants `LAYER_METADATA_NAME_NULL_MSG`, `LAYER_METADATA_NAME_EMPTY_MSG`, `LAYER_NAME_NULL_MSG`, `LAYER_NAME_EMPTY_MSG`, `LAYER_PRESENCE_NULL_MSG`, `LAYER_TUNABLES_NULL_MSG`, and `LAYER_TUNABLES_EMPTY_MSG` are actually defined and exported from `helpers.utils` with the exact names used here.
2. If the existing negative tests currently use `in`-based substring checks (e.g., `assert expected_error_msg in resp["message"]`), you may want to switch them to strict equality (e.g., `==`) to fully leverage these canonical constants; adjust the assertion code accordingly elsewhere in the file.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment on lines +153 to +162
@pytest.mark.parametrize("test_name, expected_error_msg, apiVersion, kind, metadata_name, layer_name, layer_level, details, layer_presence, tunables", [
("null_metadata_name", "name", "recommender.com/v1", "KruizeLayer", None, "test-layer", 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
("empty_metadata_name", "metadata.name", "recommender.com/v1", "KruizeLayer", "", "test-layer", 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
("null_layer_name", "layer_name", "recommender.com/v1", "KruizeLayer", "test-meta", None, 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
("empty_layer_name", "layer_name", "recommender.com/v1", "KruizeLayer", "test-meta", "", 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
("null_layer_presence", "layer_presence", "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", 'null', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
("null_tunables", "tunables", "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", '{"presence": "always"}', 'null'),
("empty_tunables_array", "tunables", "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", '{"presence": "always"}', '[]'),
])
def test_create_layer_mandatory_fields_validation(test_name, expected_error_msg, apiVersion, kind, metadata_name, layer_name, layer_level, details, layer_presence, tunables, cluster_type, cleanup_test_layers):
suggestion (testing): Negative tests rely on partial string matches; consider asserting against defined error message constants for stronger guarantees

These tests currently assert only short substrings (e.g., "name", "layer_presence") in the error output. Since you’ve added layer-specific error message constants in `helpers.utils` (e.g., `LAYER_METADATA_NAME_NULL_MSG`, `LAYER_NAME_NULL_MSG`), consider asserting against those constants instead. That will make the tests stricter, keep them aligned with canonical messages as they evolve, and clarify the expected error contract for each validation failure.

Suggested implementation:

```python
# =============================================================================
# NEGATIVE TEST CASES - A. Mandatory Fields Missing/NULL/Empty
# =============================================================================

from helpers.utils import (
    LAYER_METADATA_NAME_NULL_MSG,
    LAYER_METADATA_NAME_EMPTY_MSG,
    LAYER_NAME_NULL_MSG,
    LAYER_NAME_EMPTY_MSG,
    LAYER_PRESENCE_NULL_MSG,
    LAYER_TUNABLES_NULL_MSG,
    LAYER_TUNABLES_EMPTY_MSG,
)

@pytest.mark.layers
@pytest.mark.parametrize("test_name, expected_error_msg, apiVersion, kind, metadata_name, layer_name, layer_level, details, layer_presence, tunables", [
    ("null_metadata_name", LAYER_METADATA_NAME_NULL_MSG, "recommender.com/v1", "KruizeLayer", None, "test-layer", 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
    ("empty_metadata_name", LAYER_METADATA_NAME_EMPTY_MSG, "recommender.com/v1", "KruizeLayer", "", "test-layer", 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
    ("null_layer_name", LAYER_NAME_NULL_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", None, 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
    ("empty_layer_name", LAYER_NAME_EMPTY_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", "", 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
    ("null_layer_presence", LAYER_PRESENCE_NULL_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", 'null', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
    ("null_tunables", LAYER_TUNABLES_NULL_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", '{"presence": "always"}', 'null'),
    ("empty_tunables_array", LAYER_TUNABLES_EMPTY_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", '{"presence": "always"}', '[]'),
])
```

1. Ensure that the constants `LAYER_METADATA_NAME_NULL_MSG`, `LAYER_METADATA_NAME_EMPTY_MSG`, `LAYER_NAME_NULL_MSG`, `LAYER_NAME_EMPTY_MSG`, `LAYER_PRESENCE_NULL_MSG`, `LAYER_TUNABLES_NULL_MSG`, and `LAYER_TUNABLES_EMPTY_MSG` are actually defined and exported from `helpers.utils` with the exact names used here.
2. If the existing negative tests currently use `in`-based substring checks (e.g., `assert expected_error_msg in resp["message"]`), you may want to switch them to strict equality (e.g., `==`) to fully leverage these canonical constants; adjust the assertion code accordingly elsewhere in the file.

@bhanvimenghani bhanvimenghani moved this from In Progress to Under Review in Monitoring Feb 12, 2026

# Ensure database is clean - delete any existing layers
# This ensures we test the true empty state
for layer_name in ['container', 'semeru', 'quarkus', 'hotspot', 'test-layer']:
Move the layer names to a shared array; that will help while cleaning up.
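A minimal sketch of the suggestion above; the constant name `TEST_LAYER_NAMES` is hypothetical, and the per-layer delete function is passed in so the helper stays decoupled from the `delete_layer_from_db()` helper added in this PR:

```python
# Single source of truth for the layer names used across these tests.
TEST_LAYER_NAMES = ["container", "semeru", "quarkus", "hotspot", "test-layer"]

def cleanup_all_test_layers(delete_fn):
    """Delete every known test layer.

    delete_fn is the per-layer DB cleanup helper (delete_layer_from_db
    in this PR); injecting it keeps this helper trivially testable.
    """
    for layer_name in TEST_LAYER_NAMES:
        delete_fn(layer_name)
```

Both the empty-state setup and the teardown fixture can then iterate `TEST_LAYER_NAMES` instead of repeating the list inline.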

assert len(layers) > 0

# Verify each layer has required fields
for layer in layers:
Please add code to verify the JSON returned


layers = response.json()
assert isinstance(layers, list)
assert len(layers) >= len(created_layer_names)
Why is this condition greater than or equal to, shouldn't it be equal to?

returned_layer = layers[0]

# Verify layer name matches
assert returned_layer['layer_name'] == expected_layer_name
Move this to a function so that it can be reused in other tests
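A sketch of the reusable helper suggested here, which also addresses the earlier "verify the JSON returned" comment. The field names mirror the create-layer payloads used in these tests; the exact listLayers response schema is an assumption:

```python
def verify_layer_json(layer, expected_layer_name=None):
    """Assert the common structure of a layer object returned by listLayers."""
    required_fields = ["layer_name", "layer_level", "details",
                       "layer_presence", "tunables"]
    for field in required_fields:
        assert field in layer, f"missing field: {field}"

    tunables = layer["tunables"]
    assert isinstance(tunables, list) and tunables, "tunables must be a non-empty list"
    for tunable in tunables:
        for field in ["name", "value_type", "upper_bound", "lower_bound", "step"]:
            assert field in tunable, f"tunable missing field: {field}"

    if expected_layer_name is not None:
        assert layer["layer_name"] == expected_layer_name
```

Each list/create test can then call `verify_layer_json(layers[0], expected_layer_name=...)` instead of repeating per-field asserts.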

assert data['status'] == ERROR_STATUS

# Error message should indicate either not found or invalid name
assert ("invalid" in data['message'].lower() or
Validate the specific error message returned for each test; use the complete error message rather than substrings like "invalid".

"""
form_kruize_url(cluster_type)

tmp_json_file = f"/tmp/create_layer_{test_name}.json"
Cleanup these temporary files created
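One way to address this, assuming the tests can take pytest's built-in `tmp_path` fixture: files written under `tmp_path` are removed by pytest automatically, so no explicit `os.remove()` is needed. The helper name `write_layer_payload` is hypothetical:

```python
import json
from pathlib import Path

def write_layer_payload(tmp_path: Path, test_name: str, payload: dict) -> str:
    """Write the request body under pytest's tmp_path so pytest cleans it up,
    instead of leaving create_layer_*.json files behind in /tmp."""
    tmp_json_file = tmp_path / f"create_layer_{test_name}.json"
    tmp_json_file.write_text(json.dumps(payload))
    return str(tmp_json_file)
```

A test would then accept `tmp_path` as a fixture argument and pass `write_layer_payload(tmp_path, test_name, payload)` to `create_layer()`.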

@pytest.mark.layers
@pytest.mark.negative
@pytest.mark.parametrize("test_name, expected_error_msg, apiVersion, kind, metadata_name, layer_name, layer_level, details, layer_presence, tunables", [
("negative_layer_level", "layer_level", "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", -1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
I don't see the expected error messages in the parameters for each test

data = response.json()

assert response.status_code == 400
assert expected_error_msg.lower() in data['message'].lower()
Can you please change this to match the exact message instead of using `in`?

response2 = create_layer(input_json_file)
data2 = response2.json()

assert response2.status_code == 409
Define 409 as a constant in utils if not present and use that
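A minimal sketch of the suggested constant; whether `helpers/utils.py` already groups status codes this way is an assumption:

```python
# helpers/utils.py (sketch): named status codes instead of bare literals.
CONFLICT_STATUS_CODE = 409  # duplicate layer name

# Usage in the duplicate-create test:
#   assert response2.status_code == CONFLICT_STATUS_CODE
```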

response = create_layer(tmp_json_file)
data = response.json()

assert response.status_code == 400
Please use the constants defined in utils instead of 400 in all occurrences

Validate the complete expected error message in all the tests

@pytest.mark.layers
@pytest.mark.negative
@pytest.mark.parametrize("test_name, expected_error_msg, apiVersion, kind, metadata_name, layer_name, layer_level, details, layer_presence, tunables", [
("empty_layer_presence", "layer_presence", "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", '{}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
Use complete expected error message in the parameters

response = create_layer(tmp_json_file)
data = response.json()

assert response.status_code == 400
Validate the complete expected error message in all the tests

import subprocess
try:
cmd = [
"kubectl", "exec", "-n", "monitoring",
Namespace is hard-coded to monitoring; this will not work with OpenShift
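A sketch that addresses both this comment and the string-interpolation concern in the overall review comments, assuming a Postgres instance reachable via `kubectl exec`. The env var name, table name `kruize_layers`, deployment selector, and credentials are all illustrative assumptions:

```python
import os
import subprocess

# The namespace differs between cluster types (e.g., "monitoring" on minikube
# vs. an OpenShift namespace), so read it from the environment with a default.
DB_NAMESPACE = os.environ.get("KRUIZE_DB_NAMESPACE", "monitoring")

def quote_sql_literal(value: str) -> str:
    # Standard SQL escaping: double any embedded single quote.
    return "'" + value.replace("'", "''") + "'"

def delete_layer_from_db(layer_name: str):
    """Delete a test layer row without interpolating layer_name unescaped."""
    sql = f"DELETE FROM kruize_layers WHERE layer_name = {quote_sql_literal(layer_name)};"
    cmd = [
        "kubectl", "exec", "-n", DB_NAMESPACE, "deploy/postgres", "--",
        "psql", "-U", "admin", "-d", "kruizeDB", "-c", sql,
    ]
    return subprocess.run(cmd, capture_output=True, text=True)
```

Passing the namespace via `KRUIZE_DB_NAMESPACE` lets the same helper run against both minikube and OpenShift clusters.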


Projects

Status: Under Review

Development

Successfully merging this pull request may close these issues.

Test Scenarios for Layers - Create and List Functionality & Layer Detection

2 participants