Benchmark structure setup #165
Martin187187 started this conversation in General
User Story: Benchmark Setup for BaSyx Endpoints
User Story
As a BaSyx developer,
I want a standardized benchmark setup for BaSyx endpoints,
so that scalability, performance, and runtime behavior can be
verified in a reproducible way.
Scope and Goals
The benchmark setup is intended to verify the scalability, performance, and runtime behavior of BaSyx endpoints in a reproducible way.
Benchmark Modes
1. Quick Runtime Test (Warm System): runs against an already populated, schema-compatible database and measures steady-state request handling.
2. Full Build-Up Test (Cold System): starts from an empty database and measures the full population of the system from scratch (see the sketch after this list).
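A minimal sketch of how the two modes could be wired into a single benchmark runner. The `--mode` flag, the `seed_database` helper, and the default record count are illustrative assumptions, not part of the issue.

```python
import argparse
import time

def seed_database(num_shells: int) -> None:
    """Hypothetical helper: populates the system under test, e.g. by
    POSTing AAS descriptors to the registry, so cold runs start empty
    and build up from scratch."""
    for _ in range(num_shells):
        pass  # issue the actual registration request here

def run_benchmark() -> None:
    """Hypothetical helper: issues the measured steady-state workload."""
    pass

def main() -> None:
    parser = argparse.ArgumentParser(description="BaSyx endpoint benchmark")
    parser.add_argument("--mode", choices=["warm", "cold"], default="warm")
    parser.add_argument("--shells", type=int, default=10_000)
    args = parser.parse_args()

    if args.mode == "cold":
        # Full build-up test: measure the cost of populating an empty system.
        start = time.perf_counter()
        seed_database(args.shells)
        print(f"build-up took {time.perf_counter() - start:.2f}s")
    # Warm mode assumes a compatible, already populated database and only
    # measures runtime behavior.
    run_benchmark()

if __name__ == "__main__":
    main()
```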
Rules and Constraints
- database
- specifying:
Entry Points
The initial benchmark scope includes:
- AAS Registry
- Basic Discovery Service
- Digital Twin Registry (DTR)
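For the entry points in scope, a measurement loop might look like the following. The base URL and the per-service paths are placeholders, since the issue does not specify the deployed routes.

```python
import statistics
import time
import urllib.request

BASE_URL = "http://localhost:8080"  # assumed deployment address

# Placeholder paths for the three services in scope; the real routes
# depend on the deployed BaSyx / DTR configuration.
ENDPOINTS = {
    "AAS Registry": "/registry/shell-descriptors",
    "Basic Discovery Service": "/discovery/lookup/shells",
    "Digital Twin Registry (DTR)": "/dtr/shell-descriptors",
}

def measure(url: str, requests: int = 100) -> list[float]:
    """Issue GET requests sequentially and collect per-request latency."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    return latencies

for name, path in ENDPOINTS.items():
    samples = measure(BASE_URL + path)
    print(f"{name}: median {statistics.median(samples) * 1000:.1f} ms "
          f"over {len(samples)} requests")
```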
Acceptance Criteria
This task is considered complete when:
Open Issue: Schema Changes in Warm Runtime Mode
Problem
Warm-mode benchmarks assume a compatible, populated database. Schema
changes introduced by new pull requests may break this assumption.
Handling Strategy
(e.g. ALTER)
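One conceivable handling strategy, consistent with the ALTER example above: scan a pull request's migration scripts for schema-modifying DDL and fall back to the cold (full build-up) mode whenever any is found. The `migrations` directory and the keyword set are assumptions to be adapted to the project's migration tooling.

```python
import re
from pathlib import Path

# DDL keywords that invalidate a pre-populated warm database; the set is
# an assumption and should match the project's migration conventions.
DDL_PATTERN = re.compile(r"\b(ALTER|DROP|CREATE)\s+TABLE\b", re.IGNORECASE)

def schema_changed(migrations_dir: str = "migrations") -> bool:
    """Return True if any migration script contains schema-modifying DDL."""
    for script in Path(migrations_dir).glob("*.sql"):
        if DDL_PATTERN.search(script.read_text()):
            return True
    return False

# Select the benchmark mode: warm only if the schema is still compatible.
mode = "cold" if schema_changed() else "warm"
print(f"selected benchmark mode: {mode}")
```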
Acceptance Criteria Extension
the correct mode
Deliverables