Layers - 06 - B [Test] Adds Test changes for Create and List Layers api #1781
bhanvimenghani wants to merge 5 commits into kruize:mvp_demo from
Conversation
Reviewer's Guide: Adds helper functions, constants, documentation, pytest markers, and two comprehensive REST API test suites to validate the new createLayer and listLayers APIs for Kruize layers, including positive, negative, and field-level validation scenarios.
Hey - I've found 4 issues, and left some high level feedback:
- The listLayers API is inconsistent about the query parameter name: `LayerService` expects `name` (and `KruizeSupportedTypes.LIST_LAYERS_QUERY_PARAMS_SUPPORTED` only allows `"name"`), but the tests and helpers use `layer_name`, so align the supported query params and `request.getParameter(...)` with the external API naming to avoid spurious `INVALID_QUERY_PARAM` errors.
- The layer conversion logic is duplicated between `DBHelpers.Converters.KruizeObjectConverters.convertLayerObjectToLayerDBObj`/`convertLayerDBObjToLayerObject` and `LayerService.convertToLayerEntry`; consider consolidating these into a single set of converter methods to avoid divergence and reduce maintenance overhead.
- Several methods (`convertLayerObjectToLayerDBObj`, `convertLayerDBObjToLayerObject`, `convertLayerEntryToLayerObject`, DAO methods) catch broad `Exception`, log and `printStackTrace()`, and in some cases return null or wrap the message-only in a new `Exception`; tightening the exception types and avoiding swallowing stack traces (or rethrowing with the cause) would make failures easier to diagnose and reduce silent nulls.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The listLayers API is inconsistent about the query parameter name: `LayerService` expects `name` (and `KruizeSupportedTypes.LIST_LAYERS_QUERY_PARAMS_SUPPORTED` only allows `"name"`), but the tests and helpers use `layer_name`, so align the supported query params and `request.getParameter(...)` with the external API naming to avoid spurious `INVALID_QUERY_PARAM` errors.
- The layer conversion logic is duplicated between `DBHelpers.Converters.KruizeObjectConverters.convertLayerObjectToLayerDBObj`/`convertLayerDBObjToLayerObject` and `LayerService.convertToLayerEntry`; consider consolidating these into a single set of converter methods to avoid divergence and reduce maintenance overhead.
- Several methods (`convertLayerObjectToLayerDBObj`, `convertLayerDBObjToLayerObject`, `convertLayerEntryToLayerObject`, DAO methods) catch broad `Exception`, log and `printStackTrace()`, and in some cases return null or wrap the message-only in a new `Exception`; tightening the exception types and avoiding swallowing stack traces (or rethrowing with the cause) would make failures easier to diagnose and reduce silent nulls.
## Individual Comments
### Comment 1
<location> `tests/scripts/helpers/kruize.py:720-729` </location>
<code_context>
+
+# Description: This function lists layers from Kruize Autotune using GET listLayers API
+# Input Parameters: layer name (optional)
+def list_layers(layer_name=None, logging=True):
+ print("\nListing the layers...")
+
+ query_params = {}
+
+ if layer_name is not None:
+ query_params['layer_name'] = layer_name
+
+ query_string = "&".join(f"{key}={value}" for key, value in query_params.items())
+
+ url = URL + "/listLayers"
+ if query_string:
+ url += "?" + query_string
+ print("URL = ", url)
+ print("PARAMS = ", query_params)
+ response = requests.get(url)
+
+ print("Response status code = ", response.status_code)
</code_context>
<issue_to_address>
**issue (bug_risk):** The list_layers helper uses 'layer_name' as query parameter, which likely does not match the API's expected parameter name.
Because this uses `layer_name` (e.g., `?layer_name=foo`) instead of the API’s `name` parameter, the listLayers calls won’t actually filter by name and the tests will only cover the invalid-parameter path. Please update the helper (and corresponding tests) to use the correct query parameter (e.g., `name`) so they exercise the intended filtering behavior.
</issue_to_address>
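A minimal sketch of the URL construction after the fix, assuming the server-side parameter is `name` as described above (the helper name `build_list_layers_url` is illustrative, not from the PR):

```python
from urllib.parse import urlencode

def build_list_layers_url(base_url, layer_name=None):
    # Send "name" (the parameter LayerService actually reads) rather
    # than "layer_name", which trips the INVALID_QUERY_PARAM check.
    query_params = {}
    if layer_name is not None:
        query_params["name"] = layer_name
    url = base_url + "/listLayers"
    if query_params:
        url += "?" + urlencode(query_params)
    return url
```

Using `urlencode` also takes care of escaping layer names with special characters, which the manual `"&".join(...)` in the current helper does not.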
### Comment 2
<location> `tests/scripts/local_monitoring_tests/rest_apis/test_list_layers.py:119-128` </location>
<code_context>
+def test_list_specific_layer_by_name(cluster_type, layer_file):
</code_context>
<issue_to_address>
**issue (testing):** Tests that filter by layer name rely on the 'layer_name' query parameter and may not be exercising name-based filtering at all.
This and related tests call `list_layers(layer_name=...)`, which sends `?layer_name=...` to the API, but the implementation expects `name` and rejects unknown params. That means these tests likely validate invalid-parameter handling or unfiltered listing, not name-based filtering. After updating the helper to send `name`, please re-run and update the assertions (e.g., `len(layers) == 1`) to match the actual behavior (e.g., 400 for invalid params vs 404/empty list for missing names).
</issue_to_address>
### Comment 3
<location> `tests/scripts/local_monitoring_tests/rest_apis/test_list_layers.py:355-364` </location>
<code_context>
+
+# ========== Negative Test Cases ==========
+
+@pytest.mark.negative
+def test_list_layer_with_non_existent_name(cluster_type):
+ """
+ Test Description: This test validates listLayers API when querying a layer that doesn't exist.
+ Expected: Should return 404 status with appropriate error message.
+ """
+ form_kruize_url(cluster_type)
+
+ non_existent_layer_name = "non-existent-layer-12345"
+
+ # Try to list a layer that doesn't exist
+ response = list_layers(layer_name=non_existent_layer_name, logging=False)
+
+ print(f"Response for non-existent layer '{non_existent_layer_name}': {response.status_code}")
+
+ assert response.status_code == ERROR_404_STATUS_CODE
+
+ data = response.json()
+ assert data['status'] == ERROR_STATUS
+ assert LAYER_NOT_FOUND_MSG % non_existent_layer_name in data['message']
+
+ print(f"✓ Correctly returned 404 for non-existent layer '{non_existent_layer_name}'")
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** The non-existent layer test assumes 404 and a specific error message, which may not match the API's actual behavior.
This test currently hardcodes a 404 and `LAYER_NOT_FOUND_MSG % non_existent_layer_name`, but listLayers may legitimately return a different status/message (e.g., 400 with `INVALID_LAYER_NAME_MSG` or a generic error). After the query parameter fix, please verify the actual listLayers contract for a non-existent name and adjust the expected status and message accordingly to reflect the real behavior and avoid a brittle test.
Suggested implementation:
```python
# ========== Negative Test Cases ==========
@pytest.mark.negative
def test_list_layer_with_non_existent_name(cluster_type):
"""
Test Description:
Validate listLayers API behavior when querying a layer that doesn't exist.
Expectations:
- API should return a non-success (4xx/5xx) HTTP status code.
- Response JSON should have `status == ERROR_STATUS`.
- Error message should reference the non-existent layer name.
Note:
Once the listLayers contract for non-existent layers is finalized
(exact HTTP code and message), tighten these assertions to match
the agreed behavior instead of relying on generic checks.
"""
# Arrange
form_kruize_url(cluster_type)
non_existent_layer_name = "non-existent-layer-12345"
# Act
response = list_layers(layer_name=non_existent_layer_name, logging=False)
print(f"Response for non-existent layer '{non_existent_layer_name}': {response.status_code}")
# Assert
# Do not hardcode 404 here; only assert that the call failed.
assert response.status_code >= 400, (
f"Expected non-success status for non-existent layer, got {response.status_code}"
)
data = response.json()
assert data.get('status') == ERROR_STATUS, (
f"Expected ERROR_STATUS in response, got: {data}"
)
# The exact error message may vary by implementation; ensure it refers to the layer name.
message = data.get('message', '')
assert isinstance(message, str), f"Expected 'message' to be a string, got: {type(message)}"
assert non_existent_layer_name in message, (
f"Expected error message to mention the non-existent layer name "
f"'{non_existent_layer_name}', got: '{message}'"
)
print(f"✓ Correctly returned error for non-existent layer '{non_existent_layer_name}'")
```
To fully align this test with the real API contract once known, you should:
1. Verify the actual behavior of `list_layers(layer_name=non_existent_layer_name)` against a live or mocked API:
- Exact HTTP status code (e.g., 404, 400).
- Exact error message or error code field if present.
2. Update the assertions accordingly, for example:
- Replace `response.status_code >= 400` with the specific expected code (e.g., `ERROR_404_STATUS_CODE` or `ERROR_400_STATUS_CODE`).
- Replace the generic `non_existent_layer_name in message` check with an assertion against the exact message or a constant (e.g., `LAYER_NOT_FOUND_MSG % non_existent_layer_name` or another agreed string/structure).
3. If the API returns structured error codes (e.g., `data["error_code"]`), add an assertion for that as well to further reduce brittleness.
</issue_to_address>
### Comment 4
<location> `tests/scripts/local_monitoring_tests/rest_apis/test_list_layers.py:31-40` </location>
<code_context>
+def test_list_all_layers_no_parameter(cluster_type):
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding a test for the listLayers behavior when there are no layers at all.
`test_list_all_layers_no_parameter` currently creates a layer first, so it never exercises the empty-state behavior. Please add a separate test that runs from a clean state with no layers and verifies the response (status code and message) when `loadAllLayers()` returns an empty list, covering the `NO_LAYERS` branch.
</issue_to_address>
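One way to start on such a test is to encode the expected empty-state contract in a small predicate that the pytest case can assert against; the `NO_LAYERS_MSG` text and the accepted response shapes below are assumptions to be confirmed against the actual API:

```python
NO_LAYERS_MSG = "No layers found"  # assumed text; align with the server's NO_LAYERS constant

def is_valid_empty_layers_response(status_code, body):
    # Accept either plausible contract until the API's behavior is
    # pinned down: a 200 with an empty JSON array, or an error body
    # carrying the NO_LAYERS message.
    if status_code == 200:
        return isinstance(body, list) and len(body) == 0
    return isinstance(body, dict) and NO_LAYERS_MSG in body.get("message", "")
```

Once the `loadAllLayers()` empty branch is verified, the predicate can be tightened to the single real contract.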
Force-pushed 1223576 to dd9a42c
@bhanvimenghani Are the sourcery comments addressed?

@chandrams I am just adding the negative tests for create layers and also tests documentation. Will be addressing sourcery comments post that.

@sourcery-ai review
Hey - I've found 3 issues, and left some high level feedback:
- In `Converters.convertInputJSONToCreateLayer`, several JSON keys like `"layer_presence"`, `"queries"`, `"label"`, and `"tunables"` are hard-coded as strings; consider using the corresponding `AnalyzerConstants.AutotuneConfigConstants` fields for consistency and easier refactoring.
- Layer validation currently does not enforce any constraints on `layer_level`, but your test utilities define `LAYER_LEVEL_NEGATIVE_MSG` and test negative values; consider extending `LayerValidation` to reject negative `layer_level` values with a dedicated error message.
- Error handling in `LayerService` uses `response.sendError(...)`, which typically produces a default HTML error body, whereas success responses return structured JSON; consider returning consistent JSON error payloads (e.g., with `status`, `message`, and `httpcode` fields) to align with other APIs and the expectations in the new tests.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `Converters.convertInputJSONToCreateLayer`, several JSON keys like `"layer_presence"`, `"queries"`, `"label"`, and `"tunables"` are hard-coded as strings; consider using the corresponding `AnalyzerConstants.AutotuneConfigConstants` fields for consistency and easier refactoring.
- Layer validation currently does not enforce any constraints on `layer_level`, but your test utilities define `LAYER_LEVEL_NEGATIVE_MSG` and test negative values; consider extending `LayerValidation` to reject negative `layer_level` values with a dedicated error message.
- Error handling in `LayerService` uses `response.sendError(...)`, which typically produces a default HTML error body, whereas success responses return structured JSON; consider returning consistent JSON error payloads (e.g., with `status`, `message`, and `httpcode` fields) to align with other APIs and the expectations in the new tests.
## Individual Comments
### Comment 1
<location> `src/main/java/com/autotune/analyzer/serviceObjects/Converters.java:600-602` </location>
<code_context>
+ if (jsonObject.has("tunables")) {
+ JSONArray tunablesArray = jsonObject.getJSONArray("tunables");
+ tunables = new ArrayList<>();
+ for (Object tunableObj : tunablesArray) {
+ JSONObject tunableJsonObject = (JSONObject) tunableObj;
+ Tunable tunable = new Gson().fromJson(tunableJsonObject.toString(), Tunable.class);
+ tunables.add(tunable);
+ }
</code_context>
<issue_to_address>
**suggestion (performance):** Reuse a single Gson instance instead of creating a new one in each loop iteration.
A new `Gson` instance is created for each tunable. Because `Gson` is thread-safe and relatively expensive to construct, initialize it once (e.g., before the loop or as a static field) and reuse it for all deserializations.
Suggested implementation:
```java
// Parse tunables array
ArrayList<Tunable> tunables = null;
if (jsonObject.has("tunables")) {
JSONArray tunablesArray = jsonObject.getJSONArray("tunables");
tunables = new ArrayList<>();
Gson gson = new Gson();
```
```java
for (Object tunableObj : tunablesArray) {
JSONObject tunableJsonObject = (JSONObject) tunableObj;
Tunable tunable = gson.fromJson(tunableJsonObject.toString(), Tunable.class);
tunables.add(tunable);
}
```
1. Ensure `com.google.gson.Gson` is imported at the top of `Converters.java`:
`import com.google.gson.Gson;`
2. If the codebase already has a shared/static `Gson` instance for this class (e.g., `private static final Gson GSON = new Gson();`), prefer reusing that instead of creating a new local variable, and adjust the replacement to use that identifier (e.g., `GSON.fromJson(...)`).
</issue_to_address>
### Comment 2
<location> `src/main/java/com/autotune/analyzer/services/LayerService.java:107-109` </location>
<code_context>
+ sendErrorResponse(response, null, HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
+ String.format(AnalyzerErrorConstants.APIErrors.CreateLayerAPI.ADD_LAYER_TO_DB_FAILURE, addedToDB.getMessage()));
+ }
+ } catch (MonitoringAgentNotSupportedException e) {
+ LOGGER.error("Failed to create layer: {}", e.getMessage());
+ sendErrorResponse(response, new Exception(e), HttpServletResponse.SC_BAD_REQUEST, "Validation failed: " + e.getMessage());
+ } catch (org.json.JSONException e) {
+ // JSON parsing errors are client errors
</code_context>
<issue_to_address>
**issue:** Avoid wrapping `MonitoringAgentNotSupportedException` in a generic `Exception` when sending the error response.
Here you’re wrapping `MonitoringAgentNotSupportedException` in a new `Exception` before passing it to `sendErrorResponse`, which hides the original type and stack trace. Pass `e` directly so the original exception (and its context) is preserved for logging and debugging.
</issue_to_address>
### Comment 3
<location> `src/main/java/com/autotune/analyzer/services/LayerService.java:115-124` </location>
<code_context>
+ tunable.validate();
+ } catch (InvalidBoundsException e) {
+ errors.add(e.getMessage());
+ } catch (Exception e) {
+ // Catch any unexpected exceptions during validation
+ String errorMsg = String.format("Unexpected error validating tunable '%s': %s",
</code_context>
<issue_to_address>
**suggestion:** Avoid using `printStackTrace()` in favor of structured logging only.
Here you both log the exception via SLF4J and call `e.printStackTrace()`. Please remove `printStackTrace()` and rely solely on `LOGGER.error` so errors go through the configured logging framework and logs stay consistent.
</issue_to_address>
Force-pushed b144c59 to 73496eb
@sourcery-ai review
Hey - I've found 1 issue, and left some high level feedback:
- The `delete_layer_from_db()` helper constructs a raw SQL command using string interpolation and hard-coded DB/namespace details; consider parameterizing these settings and avoiding direct string injection of `layer_name` (e.g., via a safer quoting/escaping utility) to reduce fragility and potential injection risk, even in test code.
- In `get_layer_dir()`, relying on `current_directory.parents[2]` makes the helper sensitive to directory structure changes; consider basing this path on a known project root (e.g., via an env var or a single shared `get_repo_root()` helper) to make tests more robust to layout changes.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `delete_layer_from_db()` helper constructs a raw SQL command using string interpolation and hard-coded DB/namespace details; consider parameterizing these settings and avoiding direct string injection of `layer_name` (e.g., via a safer quoting/escaping utility) to reduce fragility and potential injection risk, even in test code.
- In `get_layer_dir()`, relying on `current_directory.parents[2]` makes the helper sensitive to directory structure changes; consider basing this path on a known project root (e.g., via an env var or a single shared `get_repo_root()` helper) to make tests more robust to layout changes.
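A minimal sketch of a quoting utility for the first point; the `kruize_layers` table name is a placeholder, and a properly parameterized query is still preferable wherever the driver allows it:

```python
def quote_sql_literal(value):
    # Escape a value for use as a single-quoted SQL string literal by
    # doubling embedded single quotes (standard SQL escaping).
    return "'" + str(value).replace("'", "''") + "'"

def build_delete_layer_sql(layer_name, table="kruize_layers"):
    # "kruize_layers" is an assumed name; substitute the actual layers table.
    return f"DELETE FROM {table} WHERE layer_name = {quote_sql_literal(layer_name)};"
```

This keeps the test helper from breaking (or worse) on layer names containing quotes, even though the risk is low in test code.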
## Individual Comments
### Comment 1
<location> `tests/scripts/local_monitoring_tests/rest_apis/test_create_layer.py:153-162` </location>
<code_context>
+@pytest.mark.parametrize("test_name, expected_error_msg, apiVersion, kind, metadata_name, layer_name, layer_level, details, layer_presence, tunables", [
</code_context>
<issue_to_address>
**suggestion (testing):** Negative tests rely on partial string matches; consider asserting against defined error message constants for stronger guarantees
These tests currently assert only short substrings (e.g., "name", "layer_presence") in the error output. Since you’ve added layer-specific error message constants in `helpers.utils` (e.g., `LAYER_METADATA_NAME_NULL_MSG`, `LAYER_NAME_NULL_MSG`), consider asserting against those constants instead. That will make the tests stricter, keep them aligned with canonical messages as they evolve, and clarify the expected error contract for each validation failure.
Suggested implementation:
```python
# =============================================================================
# NEGATIVE TEST CASES - A. Mandatory Fields Missing/NULL/Empty
# =============================================================================
from helpers.utils import (
LAYER_METADATA_NAME_NULL_MSG,
LAYER_METADATA_NAME_EMPTY_MSG,
LAYER_NAME_NULL_MSG,
LAYER_NAME_EMPTY_MSG,
LAYER_PRESENCE_NULL_MSG,
LAYER_TUNABLES_NULL_MSG,
LAYER_TUNABLES_EMPTY_MSG,
)
@pytest.mark.layers
```
```python
@pytest.mark.parametrize("test_name, expected_error_msg, apiVersion, kind, metadata_name, layer_name, layer_level, details, layer_presence, tunables", [
("null_metadata_name", LAYER_METADATA_NAME_NULL_MSG, "recommender.com/v1", "KruizeLayer", None, "test-layer", 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
("empty_metadata_name", LAYER_METADATA_NAME_EMPTY_MSG, "recommender.com/v1", "KruizeLayer", "", "test-layer", 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
("null_layer_name", LAYER_NAME_NULL_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", None, 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
("empty_layer_name", LAYER_NAME_EMPTY_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", "", 1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
("null_layer_presence", LAYER_PRESENCE_NULL_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", 'null', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
("null_tunables", LAYER_TUNABLES_NULL_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", '{"presence": "always"}', 'null'),
("empty_tunables_array", LAYER_TUNABLES_EMPTY_MSG, "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", '{"presence": "always"}', '[]'),
])
```
1. Ensure that the constants `LAYER_METADATA_NAME_NULL_MSG`, `LAYER_METADATA_NAME_EMPTY_MSG`, `LAYER_NAME_NULL_MSG`, `LAYER_NAME_EMPTY_MSG`, `LAYER_PRESENCE_NULL_MSG`, `LAYER_TUNABLES_NULL_MSG`, and `LAYER_TUNABLES_EMPTY_MSG` are actually defined and exported from `helpers.utils` with the exact names used here.
2. If the existing negative tests currently use `in`-based substring checks (e.g., `assert expected_error_msg in resp["message"]`), you may want to switch them to strict equality (e.g., `==`) to fully leverage these canonical constants; adjust the assertion code accordingly elsewhere in the file.
</issue_to_address>Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
Force-pushed 521c6e8 to 5f49b76
```python
# Ensure database is clean - delete any existing layers
# This ensures we test the true empty state
for layer_name in ['container', 'semeru', 'quarkus', 'hotspot', 'test-layer']:
```
Move the layer names to an array; it will help while cleaning up.
```python
assert len(layers) > 0

# Verify each layer has required fields
for layer in layers:
```
Please add code to verify the JSON returned.
```python
layers = response.json()
assert isinstance(layers, list)
assert len(layers) >= len(created_layer_names)
```
Why is this condition greater than or equal to? Shouldn't it be equal to?
```python
returned_layer = layers[0]

# Verify layer name matches
assert returned_layer['layer_name'] == expected_layer_name
```
Move this to a function so that it can be reused in other tests.
```python
assert data['status'] == ERROR_STATUS

# Error message should indicate either not found or invalid name
assert ("invalid" in data['message'].lower() or
```
Validate the specific error message returned for each test; use the complete error message rather than substrings like "invalid".
| """ | ||
| form_kruize_url(cluster_type) | ||
|
|
||
| tmp_json_file = f"/tmp/create_layer_{test_name}.json" |
There was a problem hiding this comment.
Cleanup these temporary files created
```python
@pytest.mark.layers
@pytest.mark.negative
@pytest.mark.parametrize("test_name, expected_error_msg, apiVersion, kind, metadata_name, layer_name, layer_level, details, layer_presence, tunables", [
    ("negative_layer_level", "layer_level", "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", -1, "test layer", '{"presence": "always"}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
```
I don't see the expected error messages in the parameters for each test.
```python
data = response.json()

assert response.status_code == 400
assert expected_error_msg.lower() in data['message'].lower()
```
Can you please change this to match the exact message instead of using `in`?
```python
response2 = create_layer(input_json_file)
data2 = response2.json()

assert response2.status_code == 409
```
Define 409 as a constant in utils if not present and use that.
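If the constant is missing, the addition to the test utils could look like the sketch below; the names follow the `ERROR_404_STATUS_CODE` style already used by the tests, but verify against the existing constants before adding:

```python
# Assumed additions for helpers/utils.py; keep names consistent with
# the ERROR_404_STATUS_CODE convention used elsewhere in the tests.
SUCCESS_201_STATUS_CODE = 201   # layer created
ERROR_400_STATUS_CODE = 400     # validation failure
ERROR_409_STATUS_CODE = 409     # duplicate layer (conflict)
```

The duplicate-layer assertion above then becomes `assert response2.status_code == ERROR_409_STATUS_CODE`.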
```python
response = create_layer(tmp_json_file)
data = response.json()

assert response.status_code == 400
```
Please use the constants defined in utils instead of 400 in all occurrences.

Validate the complete expected error message in all the tests.
```python
@pytest.mark.layers
@pytest.mark.negative
@pytest.mark.parametrize("test_name, expected_error_msg, apiVersion, kind, metadata_name, layer_name, layer_level, details, layer_presence, tunables", [
    ("empty_layer_presence", "layer_presence", "recommender.com/v1", "KruizeLayer", "test-meta", "test-layer", 1, "test layer", '{}', '[{"name": "t1", "value_type": "double", "upper_bound": "100", "lower_bound": "10", "step": 1}]'),
```
Use the complete expected error message in the parameters.
```python
response = create_layer(tmp_json_file)
data = response.json()

assert response.status_code == 400
```
Validate the complete expected error message in all the tests.
```python
import subprocess
try:
    cmd = [
        "kubectl", "exec", "-n", "monitoring",
```
The namespace is hard-coded to "monitoring"; this will not work with OpenShift.
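A small helper could derive the namespace from the cluster type instead of hard-coding it; the `openshift-tuning` name below is an assumption, so use whatever namespace the other test helpers target on OpenShift:

```python
def get_db_namespace(cluster_type):
    # "monitoring" matches the current hard-coded value for
    # minikube/kind clusters; the OpenShift namespace is a placeholder
    # to be aligned with the deployment scripts.
    if cluster_type == "openshift":
        return "openshift-tuning"
    return "monitoring"
```

The `kubectl exec` command above would then use `get_db_namespace(cluster_type)` in place of the literal `"monitoring"`.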
Description
Please describe the issue or feature and the summary of changes made to fix this.
Fixes # (issue)
Type of change
How has this been tested?
Please describe the tests that were run to verify your changes and steps to reproduce. Please specify any test configuration required.
Test Configuration
Checklist 🎯
Additional information
Include any additional information such as links, test results, screenshots here
Summary by Sourcery
Add comprehensive test coverage and helpers for the Layers create and list REST APIs.
New Features:
Enhancements:
Tests: