[All Connectors] Integration testing framework & performance stack #6055

@helene-nguyen

Description

Context

There is currently no automated integration testing framework for connectors and no structured way to measure the volume and nature of data each connector produces. This prevents early detection of connector regressions, makes it impossible to validate STIX bundle correctness at scale, and blocks the ability to align platform sizing recommendations with real connector output.

Problem / Objective

| Problem | Impact |
| --- | --- |
| No automated integration test suite for connectors | Connector regressions go undetected until production |
| No STIX bundle validation step | Invalid or malformed bundles may be ingested silently |
| No volume/statistics tracking per connector | Cannot understand data footprint per connector for sizing or pricing |
| No daily regression signal | Critical connector failures (e.g. API connectivity loss) are discovered too late |
| No data to support platform sizing recommendations | Pricing and deployment guidance is not grounded in real connector output |

Proposed Solution

1. Daily integration test framework

  • Build a test framework that runs every day against all validated / most-used connectors.
  • Each connector must:
    • Execute a full run and terminate.
    • Generate a STIX bundle representing the last 2 days of information.
    • Validate the bundle (ensure all STIX objects are correctly defined).
  • If any connector fails to connect to its API or complete a run, a critical bug must be raised automatically.
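The daily check described above can be sketched as follows. The function names and the dependency-free field checks are illustrative, not the actual framework API; a real implementation would more likely rely on a complete STIX 2.1 validator, and the "raise to open a critical bug" step stands in for whatever alerting the scheduler provides.

```python
# Minimal sketch of the per-connector daily check (illustrative names only).

REQUIRED_FIELDS = {"type", "id"}  # every STIX object carries at least these


def validate_bundle(bundle: dict) -> list[str]:
    """Return human-readable errors; an empty list means the bundle passed."""
    errors = []
    if bundle.get("type") != "bundle":
        errors.append("top-level object is not a STIX bundle")
    for i, obj in enumerate(bundle.get("objects", [])):
        missing = REQUIRED_FIELDS - obj.keys()
        if missing:
            errors.append(f"object #{i} is missing fields: {sorted(missing)}")
        elif not obj["id"].startswith(obj["type"] + "--"):
            errors.append(f"object #{i} has an id that does not match its type")
    return errors


def run_connector_check(bundle: dict, connector_name: str) -> None:
    """Raise so the scheduler can open a critical bug automatically on failure."""
    errors = validate_bundle(bundle)
    if errors:
        raise RuntimeError(f"[{connector_name}] bundle validation failed: {errors}")
```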

2. Per-connector statistics collection

  • After each run, compute and export statistics per day:
    • Number of elements per type.
    • Number of relationships per type.
    • Number of internal references per type (including meta refs such as labels).
  • Send all statistics to Elasticsearch for analysis and trending.
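The statistics above can be computed in one pass over a bundle. A minimal sketch, assuming the bundle is already loaded as a dict; treating `labels` entries as meta refs is an assumption about what should be counted:

```python
from collections import Counter


def bundle_statistics(bundle: dict) -> dict:
    """Compute the per-day statistics listed above from one STIX bundle."""
    elements = Counter()       # number of elements per type
    relationships = Counter()  # number of relationships per relationship_type
    references = Counter()     # number of internal references per object type

    for obj in bundle.get("objects", []):
        obj_type = obj.get("type", "unknown")
        if obj_type == "relationship":
            relationships[obj.get("relationship_type", "unknown")] += 1
        else:
            elements[obj_type] += 1
        for key, value in obj.items():
            if key.endswith("_ref"):
                references[obj_type] += 1
            elif key.endswith("_refs") and isinstance(value, list):
                references[obj_type] += len(value)
            elif key == "labels" and isinstance(value, list):
                references[obj_type] += len(value)  # meta refs such as labels

    return {
        "elements_per_type": dict(elements),
        "relationships_per_type": dict(relationships),
        "references_per_type": dict(references),
    }
```

The returned dict could then be indexed into Elasticsearch as one document per connector per day, which is what makes the trending analysis possible.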

3. Connector contract enrichment

  • Write analysis results back into the connector contract so they can be consumed by Composer for sizing warnings and usage guidance.
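One possible shape for that write-back, assuming the contract is a JSON file; the `x_filigran_statistics` key is hypothetical and the actual field consumed by Composer may differ:

```python
import json
from pathlib import Path


def enrich_contract(contract_path: Path, stats: dict) -> None:
    """Write the latest analysis results into the connector contract file.

    The "x_filigran_statistics" key is illustrative, not a confirmed
    contract field.
    """
    contract = json.loads(contract_path.read_text())
    contract["x_filigran_statistics"] = {
        "daily_volume": sum(stats.get("elements_per_type", {}).values()),
        "elements_per_type": stats.get("elements_per_type", {}),
    }
    contract_path.write_text(json.dumps(contract, indent=2))
```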

4. Sizing & pricing application

  • Develop an internal application that leverages the collected performance data to recommend the appropriate platform sizing for a given usage profile.
  • This enables data-driven pricing model adjustments based on actual connector volume.
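The core of such an application is a mapping from observed daily volume to a platform size. A minimal sketch; the tier names and thresholds are placeholders, since the real values would be derived from the Elasticsearch history rather than hard-coded:

```python
def recommend_sizing(daily_elements: int) -> str:
    """Map observed daily connector volume to a platform size tier.

    Thresholds are placeholders for illustration only; real cut-offs would
    come from the collected per-connector statistics.
    """
    if daily_elements < 10_000:
        return "small"
    if daily_elements < 100_000:
        return "medium"
    return "large"
```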

5. Additional build validation

  • Test each connector's Dockerfile / docker-compose to ensure the image builds successfully.
  • Enhance release process error management (ref: release#13).
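The build check could be driven by a script like the following sketch. It assumes a flat layout of one directory per connector, each containing a Dockerfile; the `build=False` mode only verifies the Dockerfile exists, so the check stays usable on machines without Docker:

```python
import subprocess
from pathlib import Path


def check_connector_builds(connectors_root: Path, build: bool = False) -> dict:
    """For each connector directory, verify a Dockerfile exists and,
    when build=True, that `docker build` succeeds."""
    results = {}
    for connector in sorted(p for p in connectors_root.iterdir() if p.is_dir()):
        ok = (connector / "Dockerfile").is_file()
        if ok and build:
            # -q keeps output to the image id; a non-zero exit means the
            # image failed to build
            proc = subprocess.run(
                ["docker", "build", "-q", str(connector)],
                capture_output=True,
            )
            ok = proc.returncode == 0
        results[connector.name] = ok
    return results
```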

Additional Context

  • Meaningful exploitation of the statistics depends on accumulating a sufficient history of daily runs in Elasticsearch.

Metadata

Labels

  • filigran support [optional]: use to identify an issue related to a feature developed & maintained by Filigran
  • needs triage: use to identify an issue needing triage from the Filigran Product team
  • tech foundation: a technical refactor or improvement is needed
  • test automation: use to identify an issue related to test automation implementation
