Changed file: `docs/howto/rally_benchmarking.md` (3 additions, 4 deletions)
```diff
@@ -1,4 +1,4 @@
-# HOWTO: Writing system benchmarks for a package
+# HOWTO: Writing rally benchmarks for a package
 
 ## Introduction
 
 Elastic Packages are comprised of data streams. A rally benchmark runs an `esrally` track with a corpus of data against an Elasticsearch data stream, and reports rally stats as well as performance metrics retrieved from the Elasticsearch nodes.
@@ -10,11 +10,11 @@ Conceptually, running a rally benchmark involves the following steps:
 1. Deploy the Elastic Stack, including Elasticsearch, Kibana, and the Elastic Agent(s). This step takes time, so it should typically be done once as a pre-requisite to running a benchmark scenario.
 1. Install a package that configures its assets for every data stream in the package.
 1. Metrics collection from the cluster starts. (**TODO**: record metrics from all Elastic Agents involved using the `system` integration.)
-1. Send the collected metrics to the ES Metricstore if set.
 1. Generate data (using the [corpus-generator-tool](https://github.com/elastic/elastic-integration-corpus-generator-tool)).
 1. Run an `esrally` track with the corpus of generated data. `esrally` must be installed on the system where `elastic-package` is run and available in the `PATH`.
 1. Wait for the `esrally` track to finish executing.
 1. Metrics collection ends and a summary report is created.
+1. Send the collected metrics to the ES Metricstore if set.
 1. Delete test artifacts.
 1. Optionally reindex all ingested data into the ES Metricstore for further analysis.
 1. **TODO**: Optionally compare results against another benchmark run.
```
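In practice, the steps listed above are driven by the `elastic-package` CLI. The following is a minimal sketch of a typical run: the `elastic-package benchmark rally` subcommand and its `--benchmark` flag are assumptions about current versions of the tool, `logs-benchmark` is an illustrative scenario name rather than one taken from this diff, and only `--rally-track-output-dir` is referenced later in this document.

```shell
# Sketch only: flags other than --rally-track-output-dir are assumptions about
# the elastic-package CLI, and "logs-benchmark" is an illustrative scenario name.

# Bring up the Elastic Stack once, as a pre-requisite.
elastic-package stack up -d

# Make sure esrally is installed and reachable in the PATH.
esrally --version

# From the package directory, run a rally benchmark scenario and keep the
# generated track files so the run can be replayed later.
elastic-package benchmark rally \
  --benchmark logs-benchmark \
  --rally-track-output-dir ./rally-track-output \
  -v
```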
```diff
@@ -60,7 +60,6 @@ Example:
 description: Benchmark 20000 events ingested
 data_stream:
   name: testds
-warmup_time_period: 10s
 corpora:
   generator:
     total_events: 900000
```
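For orientation, after this change the example scenario shown in the diff reads as follows. The two-space indentation is restored here on the assumption of standard YAML nesting, since the diff view flattened it, and only the keys visible in the diff are shown.

```yaml
# The example scenario after this change: warmup_time_period is no longer set.
# Indentation is assumed; only keys visible in the diff above are included.
description: Benchmark 20000 events ingested
data_stream:
  name: testds
corpora:
  generator:
    total_events: 900000
```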
```diff
@@ -275,7 +274,7 @@ In the directory of the `rally-track-output-dir` flag two files are saved:
 Both files are required to replay the rally benchmark. The first file references the second in its content.
 The command to run for replaying the track is the following:
```
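The replay command itself is not visible in this excerpt. As a rough, non-authoritative sketch of what replaying a saved track with `esrally` generally looks like (the paths, target host, and exact flag selection are assumptions, not taken from this document):

```shell
# Sketch only: the real command is defined in the full document. The paths and
# target host below are placeholders, and the flag selection is an assumption.
esrally race \
  --track-path=./rally-track-output \
  --target-hosts=127.0.0.1:9200 \
  --pipeline=benchmark-only
```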