[MPT-18559] Adopt api client to audit and remove legacy mpt client #168
Conversation
📝 Walkthrough
Removed the legacy MPT HTTP client and its factory, removed the container's public …
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
📝 Coding Plan
ff5fe2c to 42ef902 (Compare)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/cli/plugins/audit_plugin/test_app.py (1)
115-129: ⚠️ Potential issue | 🟡 Minor — Assert the created client is used in `diff-by-records-id`.
This happy-path test only checks the rendered output. It would still pass if the command stopped threading `mock_create_client.return_value` into `get_audit_trail()` for one or both IDs, so please assert those two calls explicitly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/plugins/audit_plugin/test_app.py` around lines 115 - 129, The test_successful_comparison should assert that the client returned by mock_create_client is actually passed into the get_audit_trail calls used by the "diff-by-records-id" command: after invoking runner.invoke(app, ["diff-by-records-id", "audit1", "audit2"]), add assertions that mock_get_trail was called with mock_create_client.return_value and the respective record IDs (e.g. mock_get_trail.assert_any_call(mock_create_client.return_value, "audit1") and similarly for "audit2"), or use mock_get_trail.assert_has_calls([...]) to verify both calls in order.
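The wiring assertion the comment asks for can be sketched in a self-contained way. Here `diff_by_records_id` is a hand-rolled stand-in for the CLI command (in the repo it would be exercised via `runner.invoke(app, ...)`), and the mocks are built directly with `MagicMock` only so the sketch runs on its own:

```python
from unittest.mock import MagicMock

def diff_by_records_id(create_client, get_audit_trail, id1, id2):
    # Hand-rolled stand-in for the command under test: it should thread the
    # single created client into both get_audit_trail calls.
    client = create_client()
    return get_audit_trail(client, id1), get_audit_trail(client, id2)

mock_create_client = MagicMock()
mock_get_trail = MagicMock()

diff_by_records_id(mock_create_client, mock_get_trail, "audit1", "audit2")

# The assertions the review asks for: the created client is forwarded
# together with each record ID.
mock_get_trail.assert_any_call(mock_create_client.return_value, "audit1")
mock_get_trail.assert_any_call(mock_create_client.return_value, "audit2")
```

`assert_has_calls([...])` works too when the call order itself matters.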
🧹 Nitpick comments (1)
tests/cli/core/test_debug_logging.py (1)
24-27: Extract this repeated arrange block into a fixture.
Both updated tests patch `cli.core.accounts.app.list_accounts` and `logging.basicConfig` the same way. Moving that setup into a fixture will keep the file shorter and make future changes to log-file behavior easier to update in one place. As per coding guidelines, "Avoid duplicating test logic; extract shared setup into fixtures."
Also applies to: 45-47
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/core/test_debug_logging.py` around lines 24 - 27, Extract the duplicated arrange block that patches cli.core.accounts.app.list_accounts and logging.basicConfig into a pytest fixture (e.g., common_log_setup or log_file_fixture) and have tests test_log_file_writes_to_file and the other test (lines 45-47) use that fixture; the fixture should create the tmp_path / "test.log" value (or accept tmp_path as a parameter), perform mocker.patch("cli.core.accounts.app.list_accounts", return_value=[], autospec=True) and mocker.patch("logging.basicConfig", side_effect=mock_basic_config, autospec=True), and yield the log_file (or otherwise provide the patched state) so both tests share the setup and remove the duplicated patch calls from each test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/plugins/audit_plugin/api.py`:
- Around line 12-15: Both audit query functions are missing the "timestamp"
field in their explicit projections; update the select fields in
get_audit_trail() (where select_fields currently lists
["object","actor","details","documents","request.api.geolocation"]) and in
get_audit_records_by_object() so that "timestamp" is included, and ensure the
.select(...) calls for both functions include "timestamp" (since
get_audit_records_by_object() also sorts by "-timestamp", the timestamp must be
requested explicitly).
- Around line 42-50: The empty-result check that raises typer.Exit should be
moved out of the try/except block so the intentional exit isn't caught by the
generic except Exception; update the function that builds records (the block
using raw_records -> records and console.print(f"[red]No audit records found for
object {object_id}[/red]") and raise typer.Exit(1)) to perform the "if not
records: ..." check after the try/except returns records (similar to
get_audit_trail's pattern), leaving only actual error-prone calls inside the try
and letting exceptions be handled in the existing except Exception handler.
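The projection fix from the first api.py item above can be sketched minimally. The field list follows the review's description with `"timestamp"` added; the `filter(id=...)` keyword form is a simplification of the repo's `RQLQuery(id=record_id)`, and error handling via `typer.Exit` is omitted to keep the sketch self-contained:

```python
from unittest.mock import MagicMock

# "timestamp" is now part of the explicit projection, so sorting and display
# code that relies on it receives the field.
SELECT_FIELDS = ("timestamp", "object", "actor", "details", "documents", "request.api.geolocation")

def get_audit_trail(client, record_id, limit=1):
    # Simplified sketch of the function described in the review.
    records = (
        client.audit.records.options(render=True)
        .filter(id=record_id)  # the repo uses RQLQuery(id=record_id)
        .select(*SELECT_FIELDS)
        .fetch_page(limit=limit)
    )
    return [record.to_dict() for record in records]
```

`get_audit_records_by_object()` needs the same addition, since it also orders by `"-timestamp"`.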
In `@tests/cli/plugins/audit_plugin/test_api.py`:
- Around line 23-52: The test file currently defines tests inside classes
TestGetAuditTrail and TestGetAuditRecordsByObject; refactor those tests into
plain module-level functions named with test_* (e.g., convert
TestGetAuditTrail.test_successful_retrieval, test_no_record_found,
test_api_error into top-level functions) so they follow pytest conventions;
ensure each formerly-class method becomes a standalone function using the same
fixtures/parameters (mock_client, mocker) and keep assertions and mock setup
identical; remove the unnecessary class wrappers and any self references so the
new functions run as module-level pytest tests.
- Around line 29-37: Add assertions that validate the fluent query construction
for get_audit_trail and get_audit_records_by_object instead of only asserting
final payloads: for the get_audit_trail test assert that
records_options.return_value.filter was called with RQLQuery(id=record_id), that
the subsequent select was called with the exact fields ("object", "actor",
"details", "documents", "request.api.geolocation"), and that fetch_page was
called with limit=1; for the get_audit_records_by_object test assert that
records_options.return_value.filter was called with
RQLQuery(object__id=object_id), order_by was called with "-timestamp", select
was called with the same exact fields, and that fetch_page was called with
limit=limit to ensure query args aren’t regressed.
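The chain assertions described above can be packaged as a small helper. `MagicMock` is used here so the sketch runs standalone; under the repo guidelines these mocks would come from `mocker.patch(..., autospec=True)`, and `filter(id=...)` again stands in for `RQLQuery(id=...)`:

```python
from unittest.mock import MagicMock

SELECT_FIELDS = ("object", "actor", "details", "documents", "request.api.geolocation")

def assert_audit_trail_query(mock_client, record_id):
    # The extra assertions the comment asks for: validate the chain inputs,
    # not just the final payload.
    options = mock_client.audit.records.options
    options.assert_called_once_with(render=True)
    options.return_value.filter.assert_called_once_with(id=record_id)
    filtered = options.return_value.filter.return_value
    filtered.select.assert_called_once_with(*SELECT_FIELDS)
    filtered.select.return_value.fetch_page.assert_called_once_with(limit=1)

# Exercise the chain the way the code under test would, then assert on it.
client = MagicMock()
client.audit.records.options(render=True).filter(id="audit123").select(*SELECT_FIELDS).fetch_page(limit=1)
assert_audit_trail_query(client, "audit123")
```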
In `@tests/cli/plugins/audit_plugin/test_app.py`:
- Around line 37-39: Replace uses of unittest.mock.patch with pytest-mock's
mocker.patch for all patch locations that target get_active_account,
create_api_mpt_client_from_account, get_audit_records_by_object (and the other
groups at lines noted) and ensure each mock is created with autospec=True so
signatures are enforced; update test fixtures to accept the mocker param and
call mocker.patch("cli.plugins.audit_plugin.app.get_active_account",
autospec=True) etc. Also, in TestDiffByRecordsId.test_successful_comparison, add
an assertion that the client object returned from
create_api_mpt_client_from_account is forwarded into get_audit_trail (i.e.,
assert get_audit_trail_mock.called_with(created_client) or equivalent using the
autospecced mocks) to mirror the assertion present in the parallel test.
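What `autospec=True` buys is signature enforcement. `unittest.mock.create_autospec` (the machinery `mocker.patch(..., autospec=True)` ultimately delegates to) illustrates this with a stand-in function; the signature here is an assumption modeled on the names in the review:

```python
from unittest.mock import create_autospec

def get_audit_trail(client, record_id, limit=1):
    """Stand-in with a realistic signature, so the spec has something to enforce."""

# mocker.patch("...", autospec=True) hands the test a mock equivalent to this:
mock_get_trail = create_autospec(get_audit_trail)

mock_get_trail("client", "audit1")  # OK: matches the real signature
try:
    mock_get_trail("client")  # missing record_id: an unspecced MagicMock would accept this
except TypeError as error:
    print(f"autospec enforced the signature: {error}")
```

An unconstrained `patch(...)` mock silently accepts any call shape, which is exactly how a refactor can pass tests while breaking callers.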
---
Outside diff comments:
In `@tests/cli/plugins/audit_plugin/test_app.py`:
- Around line 115-129: The test_successful_comparison should assert that the
client returned by mock_create_client is actually passed into the
get_audit_trail calls used by the "diff-by-records-id" command: after invoking
runner.invoke(app, ["diff-by-records-id", "audit1", "audit2"]), add assertions
that mock_get_trail was called with mock_create_client.return_value and the
respective record IDs (e.g.
mock_get_trail.assert_any_call(mock_create_client.return_value, "audit1") and
similarly for "audit2"), or use mock_get_trail.assert_has_calls([...]) to verify
both calls in order.
---
Nitpick comments:
In `@tests/cli/core/test_debug_logging.py`:
- Around line 24-27: Extract the duplicated arrange block that patches
cli.core.accounts.app.list_accounts and logging.basicConfig into a pytest
fixture (e.g., common_log_setup or log_file_fixture) and have tests
test_log_file_writes_to_file and the other test (lines 45-47) use that fixture;
the fixture should create the tmp_path / "test.log" value (or accept tmp_path as
a parameter), perform mocker.patch("cli.core.accounts.app.list_accounts",
return_value=[], autospec=True) and mocker.patch("logging.basicConfig",
side_effect=mock_basic_config, autospec=True), and yield the log_file (or
otherwise provide the patched state) so both tests share the setup and remove
the duplicated patch calls from each test.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 041717ba-8d9d-4dce-955e-a564206c9a41
📒 Files selected for processing (9)
- cli/core/accounts/containers.py
- cli/core/mpt/client.py
- cli/plugins/audit_plugin/api.py
- cli/plugins/audit_plugin/app.py
- tests/cli/core/conftest.py
- tests/cli/core/mpt/test_client.py
- tests/cli/core/test_debug_logging.py
- tests/cli/plugins/audit_plugin/test_api.py
- tests/cli/plugins/audit_plugin/test_app.py
💤 Files with no reviewable changes (4)
- cli/core/accounts/containers.py
- tests/cli/core/mpt/test_client.py
- tests/cli/core/conftest.py
- cli/core/mpt/client.py
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/cli/plugins/audit_plugin/test_app.py (1)
36-144: ⚠️ Potential issue | 🟠 Major — Flatten class-based tests into module-level pytest functions.
`TestDiffByObjectId` and `TestDiffByRecordsId` should be converted to module-level `test_*` functions to match repo test conventions. As per coding guidelines, "Tests must be written as functions, not classes."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/plugins/audit_plugin/test_app.py` around lines 36 - 144, The tests are written as methods inside TestDiffByObjectId and TestDiffByRecordsId classes but the repo requires module-level pytest functions; convert each test method (e.g., TestDiffByObjectId.test_successful_comparison, test_with_custom_positions, test_insufficient_records, test_invalid_positions and TestDiffByRecordsId.test_successful_comparison, test_different_objects) into standalone functions named test_diff_by_object_id_successful_comparison, test_diff_by_object_id_custom_positions, etc.; move the `@patch` decorators onto the new functions, keep the same parameter order (mock_get_records/mock_get_trail, mock_create_client, mock_get_active_account, sample_audit_records where used), and ensure they still call runner.invoke(app, ...) and assert the same exit codes and stdout checks so behavior is unchanged.
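The mechanical shape of the flattening, with placeholder bodies (the real tests keep their decorators, fixtures, and assertions unchanged):

```python
# Before (flagged): the test lives inside a class wrapper and carries `self`.
class TestDiffByObjectId:
    def test_successful_comparison(self):
        assert True  # placeholder for the real arrange/act/assert body

# After: the same test as a module-level function. The class wrapper and
# `self` go away, and the class name is folded into the function name so it
# stays unique and descriptive at module scope.
def test_diff_by_object_id_successful_comparison():
    assert True  # placeholder for the real arrange/act/assert body
```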
♻️ Duplicate comments (5)
tests/cli/plugins/audit_plugin/test_app.py (2)
115-129: ⚠️ Potential issue | 🟠 Major — Assert that the created client is passed to both `get_audit_trail` calls.
This test currently validates output only; it should also verify wiring by asserting the created client is forwarded with both record IDs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/plugins/audit_plugin/test_app.py` around lines 115 - 129, Update test_successful_comparison to assert the created client returned by mock_create_client was passed into both get_audit_trail calls: fetch the client from mock_create_client.return_value and add assertions against mock_get_trail.call_args_list (or use assert_any_call) to verify mock_get_trail was invoked with (client, "audit1") and (client, "audit2") after runner.invoke(app, ["diff-by-records-id", "audit1", "audit2"]) in the test_successful_comparison function.
1-1: ⚠️ Potential issue | 🟠 Major — Replace `unittest.mock.patch` usage with pytest-mock patching and enforce `autospec`/`spec`.
This file still uses `unittest.mock.patch` directly and unconstrained patches. Please switch to `mocker.patch(..., autospec=True)` (or an explicit `spec`) consistently. As per coding guidelines, "Never use `unittest.mock` directly; use pytest-mock instead." and "Always use `spec` or `autospec` when mocking in tests."
Also applies to: 37-39, 57-59, 76-78, 93-95, 112-114, 131-133, 153-153
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/plugins/audit_plugin/test_app.py` at line 1, Replace direct unittest.mock usage: remove "from unittest.mock import patch" and convert all patch(...) calls to use the pytest-mock fixture (mocker.patch(..., autospec=True) or mocker.patch(..., spec=SomeClass)). Update each unconstrained patch site (the ranges noted around the patch calls) to use mocker.patch with autospec=True (or an explicit spec) so mocks are autospecced; keep the same target strings and return values but pass autospec=True (or spec=...) to preserve real API shape.
cli/plugins/audit_plugin/api.py (1)
43-50: ⚠️ Potential issue | 🟠 Major — `typer.Exit` is currently swallowed by the generic `except Exception` block.
Raising `typer.Exit(1)` at Line 45 inside the `try` causes the `except Exception` path to run, which prints a failure message for a non-error control-flow case.
Suggested fix
        raw_records = (
            audit_records_filtered.order_by(order_by).select(*select_fields).fetch_page(limit=limit)
        )
        records = [raw_record.to_dict() for raw_record in raw_records]
-       if not records:
-           console.print(f"[red]No audit records found for object {object_id}[/red]")
-           raise typer.Exit(1)  # noqa: TRY301
    except Exception as error:
        console.print(
            f"[red]Failed to retrieve audit records for object {object_id}: {error!s}[/red]"
        )
        raise typer.Exit(1) from error
+
+   if not records:
+       console.print(f"[red]No audit records found for object {object_id}[/red]")
+       raise typer.Exit(1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/plugins/audit_plugin/api.py` around lines 43 - 50, The except block is catching typer.Exit raised intentionally (causing the failure message to print); update the error handling so typer.Exit is not swallowed: either add an explicit except typer.Exit: raise immediately before the generic except Exception, or move the "if not records: console.print(...); raise typer.Exit(1)" out of the try block so normal control flow doesn't hit the generic exception handler; ensure you reference typer.Exit and the object_id check (the "if not records" branch) when making the change.
tests/cli/plugins/audit_plugin/test_api.py (2)
22-96: ⚠️ Potential issue | 🟠 Major — Convert these pytest tests to module-level `test_*` functions.
The current class wrappers (`TestGetAuditTrail`, `TestGetAuditRecordsByObject`) violate the repository test structure rule. As per coding guidelines, "Tests must be written as functions, not classes."
🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/plugins/audit_plugin/test_api.py` around lines 22 - 96, Tests are currently defined inside class wrappers TestGetAuditTrail and TestGetAuditRecordsByObject which violates the repo rule; convert each test method into a module-level function that accepts the same fixtures (mock_client, mocker) and calls the same targets (get_audit_trail, get_audit_records_by_object). Remove the class definitions, rename each method like test_successful_retrieval/test_no_record_found/test_api_error/test_custom_limit to unique module-level test_* function names (e.g., test_get_audit_trail_successful_retrieval, test_get_audit_trail_no_record_found, test_get_audit_trail_api_error, test_get_audit_records_by_object_successful_retrieval, etc.), keep the same setup using mock_client.audit.records.options/filter/order_by/select/fetch_page and the same assertions (including pytest.raises(Exit)), and ensure fixtures (mock_client, mocker) remain function parameters.
29-37: ⚠️ Potential issue | 🟠 Major — Assert fluent query construction arguments, not only payload/output.
These tests should also verify the query chain inputs (`filter(...)`, `order_by("-timestamp")`, `select(...)`) so regressions in query construction are caught early.
Also applies to: 61-71
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/plugins/audit_plugin/test_api.py` around lines 29 - 37, The test currently only asserts output and that records_options(render=True) was called; extend it to assert the fluent query chain calls so query construction is validated: after obtaining records_options = mock_client.audit.records.options, assert that records_options.return_value.filter was called with the expected filter (e.g., id="audit123"), that the subsequent chain called order_by with "-timestamp" (records_options.return_value.filter.return_value.order_by.assert_called_once_with("-timestamp")), and that select was called with the expected selection (records_options.return_value.filter.return_value.order_by.return_value.select.assert_called_once_with(<expected_fields>)); apply the same pattern to the other test block referenced (lines 61-71) to cover filter/order_by/select on the chain before fetch_page.
🧹 Nitpick comments (2)
cli/plugins/audit_plugin/api.py (1)
29-31: Use `MPTClient` for `client` typing here as well.
`get_audit_records_by_object()` currently uses `client: Any`, while `get_audit_trail()` already uses `MPTClient`. Aligning both signatures improves static checks and mock/spec correctness in tests.
Suggested refactor
-def get_audit_records_by_object(
-    client: Any, object_id: str, limit: int = 10
-) -> list[dict[str, Any]]:
+def get_audit_records_by_object(
+    client: MPTClient, object_id: str, limit: int = 10
+) -> list[dict[str, Any]]:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/plugins/audit_plugin/api.py` around lines 29 - 31, Change the parameter type for get_audit_records_by_object from client: Any to client: MPTClient to match get_audit_trail; update the function signature and ensure MPTClient is imported at the top of cli/plugins/audit_plugin/api.py (or the appropriate module) so static typing and tests/mocks use the concrete client type; keep the runtime behavior unchanged.
tests/cli/plugins/audit_plugin/test_api.py (1)
56-96: Parameterize the limit-based success-path permutations.
`test_successful_retrieval` and `test_custom_limit` cover the same behavior with different limits and expected payload sizes; this is a good fit for `@pytest.mark.parametrize`. As per coding guidelines, "Use `@pytest.mark.parametrize` for testing permutations of the same behavior."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/plugins/audit_plugin/test_api.py` around lines 56 - 96, Combine the two similar tests into a single parameterized test that exercises different limits and expected payloads: replace test_successful_retrieval and test_custom_limit with one test decorated by `@pytest.mark.parametrize` that supplies (limit, expected_records) cases; inside the test reuse mock_client setup (records_options = mock_client.audit.records.options, filtered = ..., chain = ...) and _mock_record for building returned records, call get_audit_records_by_object(mock_client, "obj123", limit=limit), assert chain.fetch_page was called with the parametrized limit and that the returned result equals the expected_records (or their simplified form), and keep existing error/empty-record tests unchanged so only the success-path permutations are consolidated.
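A sketch of the consolidated test. The fluent chain is inlined here so the example is self-contained; the real test would instead call `get_audit_records_by_object(mock_client, "obj123", limit=limit)`, and `filter(object__id=...)` stands in for the repo's `RQLQuery(object__id=...)`:

```python
import pytest
from unittest.mock import MagicMock

@pytest.mark.parametrize("limit", [10, 3])
def test_get_audit_records_by_object_limit(limit):
    # Arrange: one MagicMock chain covering options → filter → order_by → select.
    client = MagicMock()
    chain = (
        client.audit.records.options.return_value
        .filter.return_value
        .order_by.return_value
        .select.return_value
    )
    record = MagicMock()
    record.to_dict.return_value = {"id": "AUD-1"}
    chain.fetch_page.return_value = [record] * limit

    # Act: inlined stand-in for get_audit_records_by_object(...).
    raw = (
        client.audit.records.options(render=True)
        .filter(object__id="obj123")
        .order_by("-timestamp")
        .select("object", "actor")
        .fetch_page(limit=limit)
    )
    records = [r.to_dict() for r in raw]

    # Assert: both the query argument and the payload size follow the limit.
    chain.fetch_page.assert_called_once_with(limit=limit)
    assert len(records) == limit
```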
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/plugins/audit_plugin/api.py`:
- Line 11: Remove the lingering TODO by either implementing support for using
get() with render()'s query string or replacing the inline TODO with a tracked
issue reference; specifically locate the code in cli/plugins/audit_plugin/api.py
that currently uses select instead of get (see the call sites referencing
render() and the get/select usage) and do one of two fixes: (A) implement the
missing behavior so render()'s query string works with get() and update the code
to call get() with appropriate arguments and tests, or (B) remove the TODO
comment and replace it with a concise issue reference (e.g., "See ISSUE-#### for
follow-up: implement get() support for render() query strings") plus a brief
one-line justification. Ensure any change updates related tests or adds a ticket
ID so static analysis no longer flags a TODO.
---
Outside diff comments:
In `@tests/cli/plugins/audit_plugin/test_app.py`:
- Around line 36-144: The tests are written as methods inside TestDiffByObjectId
and TestDiffByRecordsId classes but the repo requires module-level pytest
functions; convert each test method (e.g.,
TestDiffByObjectId.test_successful_comparison, test_with_custom_positions,
test_insufficient_records, test_invalid_positions and
TestDiffByRecordsId.test_successful_comparison, test_different_objects) into
standalone functions named test_diff_by_object_id_successful_comparison,
test_diff_by_object_id_custom_positions, etc.; move the `@patch` decorators onto
the new functions, keep the same parameter order
(mock_get_records/mock_get_trail, mock_create_client, mock_get_active_account,
sample_audit_records where used), and ensure they still call runner.invoke(app,
...) and assert the same exit codes and stdout checks so behavior is unchanged.
---
Duplicate comments:
In `@cli/plugins/audit_plugin/api.py`:
- Around line 43-50: The except block is catching typer.Exit raised
intentionally (causing the failure message to print); update the error handling
so typer.Exit is not swallowed: either add an explicit except typer.Exit: raise
immediately before the generic except Exception, or move the "if not records:
console.print(...); raise typer.Exit(1)" out of the try block so normal control
flow doesn't hit the generic exception handler; ensure you reference typer.Exit
and the object_id check (the "if not records" branch) when making the change.
In `@tests/cli/plugins/audit_plugin/test_api.py`:
- Around line 22-96: Tests are currently defined inside class wrappers
TestGetAuditTrail and TestGetAuditRecordsByObject which violates the repo rule;
convert each test method into a module-level function that accepts the same
fixtures (mock_client, mocker) and calls the same targets (get_audit_trail,
get_audit_records_by_object). Remove the class definitions, rename each method
like
test_successful_retrieval/test_no_record_found/test_api_error/test_custom_limit
to unique module-level test_* function names (e.g.,
test_get_audit_trail_successful_retrieval, test_get_audit_trail_no_record_found,
test_get_audit_trail_api_error,
test_get_audit_records_by_object_successful_retrieval, etc.), keep the same
setup using mock_client.audit.records.options/filter/order_by/select/fetch_page
and the same assertions (including pytest.raises(Exit)), and ensure fixtures
(mock_client, mocker) remain function parameters.
- Around line 29-37: The test currently only asserts output and that
records_options(render=True) was called; extend it to assert the fluent query
chain calls so query construction is validated: after obtaining records_options
= mock_client.audit.records.options, assert that
records_options.return_value.filter was called with the expected filter (e.g.,
id="audit123"), that the subsequent chain called order_by with "-timestamp"
(records_options.return_value.filter.return_value.order_by.assert_called_once_with("-timestamp")),
and that select was called with the expected selection
(records_options.return_value.filter.return_value.order_by.return_value.select.assert_called_once_with(<expected_fields>));
apply the same pattern to the other test block referenced (lines 61-71) to cover
filter/order_by/select on the chain before fetch_page.
In `@tests/cli/plugins/audit_plugin/test_app.py`:
- Around line 115-129: Update test_successful_comparison to assert the created
client returned by mock_create_client was passed into both get_audit_trail
calls: fetch the client from mock_create_client.return_value and add assertions
against mock_get_trail.call_args_list (or use assert_any_call) to verify
mock_get_trail was invoked with (client, "audit1") and (client, "audit2") after
runner.invoke(app, ["diff-by-records-id", "audit1", "audit2"]) in the
test_successful_comparison function.
- Line 1: Replace direct unittest.mock usage: remove "from unittest.mock import
patch" and convert all patch(...) calls to use the pytest-mock fixture
(mocker.patch(..., autospec=True) or mocker.patch(..., spec=SomeClass)). Update
each unconstrained patch site (the ranges noted around the patch calls) to use
mocker.patch with autospec=True (or an explicit spec) so mocks are autospecced;
keep the same target strings and return values but pass autospec=True (or
spec=...) to preserve real API shape.
---
Nitpick comments:
In `@cli/plugins/audit_plugin/api.py`:
- Around line 29-31: Change the parameter type for get_audit_records_by_object
from client: Any to client: MPTClient to match get_audit_trail; update the
function signature and ensure MPTClient is imported at the top of
cli/plugins/audit_plugin/api.py (or the appropriate module) so static typing and
tests/mocks use the concrete client type; keep the runtime behavior unchanged.
In `@tests/cli/plugins/audit_plugin/test_api.py`:
- Around line 56-96: Combine the two similar tests into a single parameterized
test that exercises different limits and expected payloads: replace
test_successful_retrieval and test_custom_limit with one test decorated by
`@pytest.mark.parametrize` that supplies (limit, expected_records) cases; inside
the test reuse mock_client setup (records_options =
mock_client.audit.records.options, filtered = ..., chain = ...) and _mock_record
for building returned records, call get_audit_records_by_object(mock_client,
"obj123", limit=limit), assert chain.fetch_page was called with the parametrized
limit and that the returned result equals the expected_records (or their
simplified form), and keep existing error/empty-record tests unchanged so only
the success-path permutations are consolidated.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 9539cfd5-ef46-49d1-b24c-58259e709f9f
📒 Files selected for processing (9)
- cli/core/accounts/containers.py
- cli/core/mpt/client.py
- cli/plugins/audit_plugin/api.py
- cli/plugins/audit_plugin/app.py
- tests/cli/core/conftest.py
- tests/cli/core/mpt/test_client.py
- tests/cli/core/test_debug_logging.py
- tests/cli/plugins/audit_plugin/test_api.py
- tests/cli/plugins/audit_plugin/test_app.py
💤 Files with no reviewable changes (4)
- cli/core/mpt/client.py
- cli/core/accounts/containers.py
- tests/cli/core/conftest.py
- tests/cli/core/mpt/test_client.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/cli/core/test_debug_logging.py
- cli/plugins/audit_plugin/app.py
42ef902 to e9826fc (Compare)
Actionable comments posted: 1
♻️ Duplicate comments (1)
cli/plugins/audit_plugin/api.py (1)
11-11: ⚠️ Potential issue | 🟡 Minor — Resolve the remaining TODO at Line 11 before merge.
This is still an open static-analysis finding. Either implement it now or replace it with a tracked ticket reference (and remove the TODO marker).
If helpful, I can draft a concise issue text + acceptance criteria for the follow-up.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/plugins/audit_plugin/api.py` at line 11, Replace the lingering TODO comment about using select instead of get in the render() query: either fix the query so render() works with get() (update the query-building in render() to correctly encode/query parameters and switch the call from select() back to get()), or remove the TODO and replace it with a tracked-ticket reference (e.g., "Follow-up: TRACK-1234 - fix render() query to support get()") so static analysis is satisfied; ensure you update the comment near render() and the select/get usage accordingly and remove the TODO marker.
🧹 Nitpick comments (2)
tests/cli/plugins/audit_plugin/test_api.py (1)
44-47: Consider consolidating repeated fluent-chain setup into a fixture/helper.
Several tests repeat the same arrange steps for `options → filter → order/select → fetch_page`; centralizing this would reduce maintenance overhead. As per coding guidelines, "Avoid duplicating test logic; extract shared setup into fixtures."
Also applies to: 65-68, 82-85, 99-102
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/plugins/audit_plugin/test_api.py` around lines 44 - 47, Multiple tests duplicate the same fluent-chain mock setup (mock_client.audit.records.options → records_options.return_value.filter → filtered.select/order → chain.fetch_page), so extract this into a shared fixture or helper function that returns the configured chain mock (with fetch_page pre-set and optionally parameterizable). Replace repeated lines that assign records_options, filtered, chain, and chain.fetch_page in tests with a call to the new fixture/helper; ensure the helper accepts optional arguments for different fetch_page return values or select/order variations and reference symbols like mock_client.audit.records.options, records_options, filtered, chain, and fetch_page to locate and replace the duplicated setup.
tests/cli/plugins/audit_plugin/test_app.py (1)
35-43: Extract repeated patch setup into fixtures to reduce duplication.
The same patch arrangement is repeated in many tests; a fixture would make this shorter and easier to maintain.
♻️ Example fixture extraction
+@pytest.fixture
+def mock_audit_app_deps(mocker):
+    mock_create_client = mocker.patch(
+        "cli.plugins.audit_plugin.app.create_api_mpt_client_from_account",
+        autospec=True,
+    )
+    mocker.patch("cli.plugins.audit_plugin.app.get_active_account", autospec=True)
+    return {"create_client": mock_create_client}
+
 def test_successful_comparison(...):
-    mock_create_client = mocker.patch(
-        "cli.plugins.audit_plugin.app.create_api_mpt_client_from_account", autospec=True
-    )
-    mocker.patch("cli.plugins.audit_plugin.app.get_active_account", autospec=True)
+    mock_create_client = mock_audit_app_deps["create_client"]
As per coding guidelines, "Avoid duplicating test logic; extract shared setup into fixtures." and "Prefer fixtures over mocks whenever possible in tests."
Also applies to: 53-61, 70-78, 85-93, 102-107, 119-124
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/plugins/audit_plugin/test_app.py` around lines 35 - 43, The test repeats identical mocker.patch calls (get_audit_records_by_object, create_api_mpt_client_from_account, get_active_account) and sample_audit_records in test_successful_comparison; extract a pytest fixture (e.g., audit_plugin_patches) that accepts the mocker fixture and performs those patches and sets mock_get_records.return_value = sample_audit_records, then update test_successful_comparison (and the other tests listed) to use that fixture instead of repeating the patch code so all patch setup is centralized and reusable; ensure the fixture yields the mocks (names like mock_get_records, mock_create_client) if individual tests need assertions on them.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/cli/plugins/audit_plugin/test_app.py`:
- Line 35: The test method named test_successful_comparison (currently defined
as a method on a test class) should be converted into a module-level test
function: move it out of the class, remove the self parameter, keep its
parameters (mocker, sample_audit_records) and decorator/fixtures intact, and
update any intra-test references; apply the same refactor to other test methods
in this file that remain as class methods so all tests are top-level functions
named test_*. Use the original function name and fixtures to preserve behavior.
---
Duplicate comments:
In `@cli/plugins/audit_plugin/api.py`:
- Line 11: Replace the lingering TODO comment about using select instead of get
in the render() query: either fix the query so render() works with get() (update
the query-building in render() to correctly encode/query parameters and switch
the call from select() back to get()), or remove the TODO and replace it with a
tracked-ticket reference (e.g., "Follow-up: TRACK-1234 - fix render() query to
support get()") so static analysis is satisfied; ensure you update the comment
near render() and the select/get usage accordingly and remove the TODO marker.
---
Nitpick comments:
In `@tests/cli/plugins/audit_plugin/test_api.py`:
- Around line 44-47: Multiple tests duplicate the same fluent-chain mock setup
(mock_client.audit.records.options → records_options.return_value.filter →
filtered.select/order → chain.fetch_page), so extract this into a shared fixture
or helper function that returns the configured chain mock (with fetch_page
pre-set and optionally parameterizable). Replace repeated lines that assign
records_options, filtered, chain, and chain.fetch_page in tests with a call to
the new fixture/helper; ensure the helper accepts optional arguments for
different fetch_page return values or select/order variations and reference
symbols like mock_client.audit.records.options, records_options, filtered,
chain, and fetch_page to locate and replace the duplicated setup.
In `@tests/cli/plugins/audit_plugin/test_app.py`:
- Around line 35-43: The test repeats identical mocker.patch calls
(get_audit_records_by_object, create_api_mpt_client_from_account,
get_active_account) and sample_audit_records in test_successful_comparison;
extract a pytest fixture (e.g., audit_plugin_patches) that accepts the mocker
fixture and performs those patches and sets mock_get_records.return_value =
sample_audit_records, then update test_successful_comparison (and the other
tests listed) to use that fixture instead of repeating the patch code so all
patch setup is centralized and reusable; ensure the fixture yields the mocks
(names like mock_get_records, mock_create_client) if individual tests need
assertions on them.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 2dcd8b28-6eca-4246-8e2d-6d61fc35e122
📒 Files selected for processing (9)
- cli/core/accounts/containers.py
- cli/core/mpt/client.py
- cli/plugins/audit_plugin/api.py
- cli/plugins/audit_plugin/app.py
- tests/cli/core/conftest.py
- tests/cli/core/mpt/test_client.py
- tests/cli/core/test_debug_logging.py
- tests/cli/plugins/audit_plugin/test_api.py
- tests/cli/plugins/audit_plugin/test_app.py
💤 Files with no reviewable changes (4)
- tests/cli/core/conftest.py
- cli/core/accounts/containers.py
- cli/core/mpt/client.py
- tests/cli/core/mpt/test_client.py
🚧 Files skipped from review as they are similar to previous changes (1)
- cli/plugins/audit_plugin/app.py
e9826fc to 17bc5a3 Compare
♻️ Duplicate comments (1)
cli/plugins/audit_plugin/api.py (1)
11-11: ⚠️ Potential issue | 🟡 Minor
Resolve the lingering TODO on Line 11 before merge (or link a tracked ticket).
This TODO is still in production code and already triggers Sonar warnings in this PR.
Suggested minimal update

```diff
- # TODO: Using select instead of get because render() query string isn't working with get().
+ # Follow-up tracked in MPT-####: enable get() with render() query strings.
```

I can draft the follow-up issue text with acceptance criteria if you want.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/plugins/audit_plugin/api.py` at line 11, The lingering TODO about using select() instead of get() should be resolved: either fix the underlying render() query-string behavior so you can use get() (update the render() implementation to accept/encode the query string and replace select() calls with get() in the audit_plugin code paths that call render()), or convert the TODO into a tracked-ticket reference by removing the TODO and adding a clear "// TODO: track: ISSUE-XXXX - reason and acceptance criteria" comment (include the ticket ID and short description). Locate usages by searching for render(), select(), and get() within cli/plugins/audit_plugin/api.py and ensure the change is covered by a small unit/integration test verifying the correct query-string behavior.
🧹 Nitpick comments (2)
tests/cli/plugins/audit_plugin/test_app.py (1)
34-128: Consolidate repeated patch setup into shared fixtures.
The same account/client patching pattern is repeated across multiple tests and can be moved into fixtures to simplify the file.
As per coding guidelines, "Avoid duplicating test logic; extract shared setup into fixtures."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/plugins/audit_plugin/test_app.py` around lines 34 - 128, Multiple tests repeat the same client/account patching; create one or two pytest fixtures (e.g., mock_api_client and mock_active_account) in this test module that call mocker.patch for "cli.plugins.audit_plugin.app.create_api_mpt_client_from_account" and "cli.plugins.audit_plugin.app.get_active_account" and return the patched mock (or mock_create_client.return_value) so tests like test_diff_by_object_id_successful_comparison, test_diff_by_object_id_custom_positions, test_diff_by_object_id_insufficient_records, test_diff_by_object_id_invalid_positions, test_diff_by_records_id_successful_comparison, and test_diff_by_records_id_different_objects can accept these fixtures instead of repeating mocker.patch; update those tests to remove the duplicated patch lines and rely on the fixtures while keeping other per-test patches (like get_audit_records_by_object or get_audit_trail) unchanged.
tests/cli/plugins/audit_plugin/test_api.py (1)
22-107: Extract repeated mock-chain setup into fixtures/helpers.
The repeated Arrange blocks for `records_options -> filter -> select/order_by -> fetch_page` and the repeated `select_fields` lists can be centralized to reduce test maintenance overhead. As per coding guidelines, "Avoid duplicating test logic; extract shared setup into fixtures."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/cli/plugins/audit_plugin/test_api.py` around lines 22 - 107, Tests repeat the same mock chain setup (records_options -> filter -> order_by/select -> fetch_page) and the select_fields list across multiple tests (e.g., test_get_audit_trail_successful_retrieval, test_get_audit_records_by_object_retrieval, etc.); extract a reusable fixture/helper that returns a pre-wired mock chain and the canonical SELECT_FIELDS constant and use that fixture in each test, update tests to call the fixture (or helper) to get the configured records_options/filter/chain and to reference SELECT_FIELDS instead of repeating the list, and keep existing assertions but replace duplicated setup with the shared fixture/helper and reuse _mock_record where needed.
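As a concrete illustration of the pre-wired chain helper this comment suggests, here is a minimal sketch using only `unittest.mock`. The chain shape (`options().filter().select().order_by().fetch_page()`) follows the names used in this review; the real mpt_api_client signatures may differ, and the helper name `make_records_chain` is invented for the example.

```python
from unittest import mock


def make_records_chain(fetch_page_result):
    """Return a client mock whose audit.records fluent chain is pre-wired.

    Assumed chain: options().filter().select().order_by().fetch_page().
    """
    client = mock.MagicMock(name="mpt_client")
    # Each call in the fluent chain returns the next mock's return_value,
    # so wiring the leaf is enough to support the whole chain.
    chain = (
        client.audit.records.options.return_value
        .filter.return_value
        .select.return_value
        .order_by.return_value
    )
    chain.fetch_page.return_value = fetch_page_result
    return client, chain


# Each test asks the helper for a configured chain instead of wiring it inline.
client, chain = make_records_chain([{"id": "AUD-0001"}])
page = (
    client.audit.records.options(limit=10)
    .filter("object.id=OBJ-1")
    .select("actor", "object")
    .order_by("-timestamp")
    .fetch_page()
)
assert page == [{"id": "AUD-0001"}]
```

A fixture version would simply wrap the helper and accept the desired `fetch_page` payload as a parameter, so tests that need different pages or select variations stay one line long.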
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@cli/plugins/audit_plugin/api.py`:
- Line 11: The lingering TODO about using select() instead of get() should be
resolved: either fix the underlying render() query-string behavior so you can
use get() (update the render() implementation to accept/encode the query string
and replace select() calls with get() in the audit_plugin code paths that call
render()), or convert the TODO into a tracked-ticket reference by removing the
TODO and adding a clear "// TODO: track: ISSUE-XXXX - reason and acceptance
criteria" comment (include the ticket ID and short description). Locate usages
by searching for render(), select(), and get() within
cli/plugins/audit_plugin/api.py and ensure the change is covered by a small
unit/integration test verifying the correct query-string behavior.
---
Nitpick comments:
In `@tests/cli/plugins/audit_plugin/test_api.py`:
- Around line 22-107: Tests repeat the same mock chain setup (records_options ->
filter -> order_by/select -> fetch_page) and the select_fields list across
multiple tests (e.g., test_get_audit_trail_successful_retrieval,
test_get_audit_records_by_object_retrieval, etc.); extract a reusable
fixture/helper that returns a pre-wired mock chain and the canonical
SELECT_FIELDS constant and use that fixture in each test, update tests to call
the fixture (or helper) to get the configured records_options/filter/chain and
to reference SELECT_FIELDS instead of repeating the list, and keep existing
assertions but replace duplicated setup with the shared fixture/helper and reuse
_mock_record where needed.
In `@tests/cli/plugins/audit_plugin/test_app.py`:
- Around line 34-128: Multiple tests repeat the same client/account patching;
create one or two pytest fixtures (e.g., mock_api_client and
mock_active_account) in this test module that call mocker.patch for
"cli.plugins.audit_plugin.app.create_api_mpt_client_from_account" and
"cli.plugins.audit_plugin.app.get_active_account" and return the patched mock
(or mock_create_client.return_value) so tests like
test_diff_by_object_id_successful_comparison,
test_diff_by_object_id_custom_positions,
test_diff_by_object_id_insufficient_records,
test_diff_by_object_id_invalid_positions,
test_diff_by_records_id_successful_comparison, and
test_diff_by_records_id_different_objects can accept these fixtures instead of
repeating mocker.patch; update those tests to remove the duplicated patch lines
and rely on the fixtures while keeping other per-test patches (like
get_audit_records_by_object or get_audit_trail) unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: edcff9b3-4596-41aa-a54c-c5fdcf36996c
📒 Files selected for processing (9)
- cli/core/accounts/containers.py
- cli/core/mpt/client.py
- cli/plugins/audit_plugin/api.py
- cli/plugins/audit_plugin/app.py
- tests/cli/core/conftest.py
- tests/cli/core/mpt/test_client.py
- tests/cli/core/test_debug_logging.py
- tests/cli/plugins/audit_plugin/test_api.py
- tests/cli/plugins/audit_plugin/test_app.py
💤 Files with no reviewable changes (4)
- tests/cli/core/mpt/test_client.py
- tests/cli/core/conftest.py
- cli/core/accounts/containers.py
- cli/core/mpt/client.py
🚧 Files skipped from review as they are similar to previous changes (1)
- cli/plugins/audit_plugin/app.py



This pull request removes the legacy `MPTClient` implementation and its usage throughout the codebase, fully migrating to the new `mpt_api_client` package for Marketplace API interactions. All relevant application logic, dependency injection, and tests have been updated to use the new client, and the audit plugin API has been refactored to leverage the new query interface, resulting in cleaner and more maintainable code.

Migration to new Marketplace API client:
- Removed the `cli.core.mpt.client.py` file and its `MPTClient` and `client_from_account` implementations. All references to these have been replaced with the new `mpt_api_client` package and its `MPTClient` class. [1] [2] [3] [4] [5]
- Updated `AccountContainer` to provide only the new API client via `create_api_mpt_client_from_account`, removing the legacy client provider.
- Updated the audit plugin (`audit_plugin/app.py`) and tests to use the new client creation function and updated mocks accordingly. [1] [2] [3] [4] [5] [6] [7] [8] [9] [10]

Audit plugin API improvements:
- Refactored `get_audit_trail` and `get_audit_records_by_object` to use the new query interface (`options`, `filter`, `select`, `fetch_page`) instead of manually constructing query strings and endpoints, resulting in more robust and readable code.

Test suite updates:
[1] [2] [3] [4] [5]
Closes MPT-18559
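To make the query-interface change concrete, here is a toy model of a fluent builder with the `options` / `filter` / `select` / `fetch_page` steps named in this PR. It illustrates the pattern only; the real `mpt_api_client` query interface will differ in signatures and behavior, and the `Query` class below is invented for the sketch.

```python
# Toy fluent query builder illustrating the pattern; not the real client.
class Query:
    def __init__(self):
        self.params = {}

    def options(self, **opts):
        self.params.update(opts)
        return self

    def filter(self, expression):
        self.params.setdefault("filters", []).append(expression)
        return self

    def select(self, *fields):
        self.params["select"] = list(fields)
        return self

    def fetch_page(self):
        # The real client would issue the HTTP request here; we just
        # return the accumulated query so its shape is visible.
        return {"query": self.params, "items": []}


page = (
    Query()
    .options(limit=50)
    .filter("object.id=OBJ-123")
    .select("actor", "object", "details")
    .fetch_page()
)
assert page["query"]["limit"] == 50
assert page["query"]["select"] == ["actor", "object", "details"]
```

Compared with hand-built query strings, mistakes such as unencoded characters or mistyped parameter names surface in one place inside the builder rather than at every call site, which is the maintainability gain the PR description points to.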