opencti: track partial state on failure and reduce redundant API calls#3337
gks281263 wants to merge 8 commits into intelowlproject:develop
Conversation
…I calls

Wrap the OpenCTI connector run() flow in try/except to improve failure visibility. Track created entity IDs (observable, report, external reference, labels) and append them to report.errors when an exception occurs, while preserving existing failure semantics. Also cache organization_id and marking_definition_id to avoid redundant API calls within a single execution. The success path and return structure remain unchanged.
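The tracking described above could be sketched roughly as follows. All names here (FakeReport, run_steps, the "Created IDs" message format) are illustrative assumptions, not the PR's exact code:

```python
# Minimal sketch of partial-state tracking on failure. FakeReport stands in
# for the job/report object; step callables stand in for pycti .create() calls
# that return dicts with an "id" key. Names are illustrative, not the PR's.

class FakeReport:
    def __init__(self):
        self.errors = []

def run_steps(report, steps):
    """Run entity-creation steps in order; on failure, record what was created."""
    created_ids = []
    try:
        for name, step in steps:
            entity = step()  # e.g. a pycti .create() call returning {"id": ...}
            created_ids.append(f"{name}={entity['id']}")
        return {"status": "success", "created": created_ids}
    except Exception as exc:
        # Preserve failure semantics: re-raise, but record partial state first
        # so already-created objects are visible in report.errors.
        report.errors.append(f"{exc}. Created IDs: {', '.join(created_ids)}")
        raise
```

A failure after observable creation would then leave something like `"... Created IDs: observable=obs-1"` in `report.errors`, while the success path is untouched.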
mlodic
left a comment
The main problem with this refactor is that there are zero unit tests and no live test against a real OpenCTI instance. Without those, how can we reliably say that this works?
Thank you for pointing this out; you're absolutely right. Since this change affects failure semantics at the integration boundary, it should not rely only on manual verification. I will add dedicated unit tests that mock the OpenCTI/pycti client and simulate failures at different stages (e.g. after observable creation, after report creation). These tests will assert that:

- report.errors contains the expected partial-state information with the correct created IDs
- the success path remains unchanged
- organization and marking resolution are not invoked redundantly within a single execution

Regarding live validation, my plan is to complement the mocked tests with a lightweight integration test using a real OpenCTI instance (via docker), so we validate actual API interaction as well. If this direction aligns with your expectations, I'll proceed with implementing the tests accordingly.
force-pushed 09a30e9 to 63083a9
…reliability

- add deterministic unit tests covering failure at each critical stage (observable, report, external reference) to validate the partial-state contract
- enforce a strict assertion on the Created IDs message to guarantee no silent partial-object leak
- validate success-path integrity and ensure no regression in the return structure
- verify organization and marking creation is executed only once per run (caching correctness)
- add an env-guarded live OpenCTI integration test for real API validation without breaking CI

This makes the OpenCTI connector behavior testable, predictable, and safe against API schema drift.
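The caching-correctness point above can be illustrated with a small sketch. The `client.identity.create` surface and the class name are assumptions for illustration, not pycti's real API:

```python
from unittest.mock import MagicMock

# Sketch of per-run caching: resolve the organization identity once and reuse
# the ID for the rest of the execution. `client.identity.create` is an assumed
# client surface, not the connector's or pycti's real API.
class ConnectorSketch:
    def __init__(self, client):
        self.client = client
        self._organization_id = None

    @property
    def organization_id(self):
        if self._organization_id is None:
            # First access hits the API; later accesses reuse the cached ID.
            self._organization_id = self.client.identity.create(
                type="Organization", name="IntelOwl"
            )["id"]
        return self._organization_id
```

A test can then read the property twice and use `assert_called_once()` on the mock to verify the second access did not trigger another API call.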
force-pushed 63083a9 to f43264f
…I responses

Tests (test_opencti.py):
- Patch pycti classes (Identity, MarkingDefinition, StixCyberObservable, Label, Report, ExternalReference, StixDomainObject) instead of .create/.read, so pycti.Class(inst).create() is intercepted in CI.
- Configure instance mocks via .return_value.create.return_value (and .read, .add_external_reference, .add_stix_object_or_stix_relationship) per test.
- Wrap connector.start() in try/except in failure and live tests, so that when after_run_failed re-raises under STAGE_CI the test still asserts on the report.
- Assert identity_mock.return_value.create.assert_called_once() (and marking) for the "called only once" test.
- Remove OPENCTI_MODULE and the _opencti_patch helper.

Connector (opencti.py):
- Validate Identity.create and MarkingDefinition.create responses (a dict with "id"); raise ValueError with a clear message otherwise, consistent with Label.create.
- In _monkeypatch, patch the same pycti classes and add _configure_pycti_mocks to set .return_value.create/.read/etc., so instance method calls are mocked when MOCK_CONNECTIONS is used.
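The class-level patching pattern described in this commit can be sketched as follows. `fake_pycti` stands in for the real pycti module so the sketch runs without the dependency; the names are illustrative:

```python
import types
from unittest.mock import patch

# Stand-in for the pycti module so this sketch runs without the dependency.
fake_pycti = types.ModuleType("fake_pycti")

class StixCyberObservable:
    def __init__(self, api):
        self.api = api

    def create(self, **kwargs):
        raise RuntimeError("would hit the network")

fake_pycti.StixCyberObservable = StixCyberObservable

def create_observable(module, api):
    # Production-style call path: instantiate the class, then call .create().
    # This is why the *class* must be patched rather than a bound method.
    return module.StixCyberObservable(api).create(value="1.2.3.4")

with patch.object(fake_pycti, "StixCyberObservable") as observable_mock:
    # Configure the instance mock: Class(api) -> return_value, .create -> dict.
    observable_mock.return_value.create.return_value = {"id": "obs-42"}
    result = create_observable(fake_pycti, api=None)
```

After the `with` block, the test can assert both the returned dict and the call arguments via `observable_mock.return_value.create.assert_called_once_with(...)`.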
Hi @mlodic, the DeepSource Python and Docker checks appear to have timed out during analysis rather than failing due to specific code issues. I'll wait for the checks to be re-run, but please let me know if you'd prefer any adjustments from my side.

As a follow-up direction, once this partial-state tracking change is settled, I think we could consider extracting the entity-creation steps in run() into smaller helper methods. That would reduce complexity and make future improvements (such as idempotency or rollback handling) easier to implement in a structured way. Happy to proceed based on your preference.
ok, but you need to test this with a live OpenCTI instance and provide proof of this as I mentioned, otherwise this PR won't be considered
Hi @mlodic, I tested the connector with a real OpenCTI instance using the official OpenCTI docker stack.

Environment

Verification

1. Observable creation: the connector successfully created the observable.
2. Report creation: the corresponding report was created. Both objects are visible in the OpenCTI UI (screenshots attached).
3. Partial-state behaviour: I simulated a failure immediately after the report creation step to verify the partial-state tracking logic.

Example execution output

This confirms that with a live OpenCTI instance:

This test was done using the same code from this PR without additional modifications. Please let me know if you would like any additional logs or verification.
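A live check like the one above can also be kept in the suite behind an environment-variable guard, as the earlier commit describes. A minimal sketch, assuming OPENCTI_URL/OPENCTI_KEY as variable names (the project may use different ones):

```python
import os
import unittest

# Sketch of an env-guarded live test: it only runs when an OpenCTI endpoint is
# configured, so CI without a live instance skips it instead of failing.
# OPENCTI_URL and OPENCTI_KEY are assumed variable names, not the project's.
@unittest.skipUnless(
    os.environ.get("OPENCTI_URL") and os.environ.get("OPENCTI_KEY"),
    "live OpenCTI instance not configured",
)
class LiveOpenCTITest(unittest.TestCase):
    def test_observable_and_report_creation(self):
        # Real pycti calls against the docker stack would go here.
        pass
```

When the variables are unset, the test reports as skipped rather than failed, which keeps CI green without a running instance.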
super().tearDown()

# --- Test 1: Failure after observable creation ---
@patch.object(OpenCTI, "_monkeypatch", classmethod(lambda cls: None))
please try to use parametrize from pytest to reduce the duplicated code here. It's fine that unit tests must be isolated, but the changes here are not many and we could simplify this
@mlodic
Thanks for the suggestion about using pytest.mark.parametrize.
I refactored the failure tests accordingly to remove the duplicated scenarios. However, the CI environment currently runs the test suite with Django’s unittest runner and does not seem to have pytest installed, which causes an import error during test discovery.
Would you prefer that we:
- add pytest as a test dependency, or
- keep the parametrization idea but implement it using unittest.subTest instead?
Happy to adjust the implementation based on your preference.
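The subTest alternative mentioned above could look roughly like this. The stage names and the `simulate_run` helper are illustrative, not the real test suite's code:

```python
import unittest

# unittest.subTest alternative to pytest parametrization: one test method,
# one sub-test per failure scenario. Stage names and simulate_run are
# illustrative stand-ins for the real mock setup.
STAGES = ["observable", "report", "external_reference"]

def simulate_run(fail_after):
    """Pretend to create entities in order, stopping after `fail_after`."""
    created = []
    for stage in STAGES:
        created.append(f"{stage}=id-{stage}")
        if stage == fail_after:
            break
    return created

class PartialStateFailureTest(unittest.TestCase):
    def test_partial_state_per_stage(self):
        for index, fail_after in enumerate(STAGES):
            with self.subTest(fail_after=fail_after):
                created = simulate_run(fail_after)
                # Exactly the stages up to and including the failing one.
                self.assertEqual(len(created), index + 1)
```

Unlike pytest parametrization, subTest runs under Django's plain unittest runner, and a failing scenario is reported individually without aborting the remaining ones.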
Refactor the OpenCTI connector tests by replacing three nearly identical failure tests with a single parametrized test. The scenarios (failure after observable creation, report creation, and external reference linking) are now exercised through parametrization while keeping the setup and assertions consistent. This reduces duplicated test logic, improves maintainability, and addresses review feedback suggesting the use of pytest parametrization.
any update on this?
Use unittest.subTest-driven scenarios for partial-state failures so the module runs cleanly under Django's unittest test runner without pytest dependency.


This change improves failure visibility in the OpenCTI connector.
Success path and return structure remain unchanged.