DCPerf is an open-source benchmarking and microbenchmarking suite originally developed by Meta. It faithfully replicates the characteristics of general-purpose data center workloads, with particular attention to microarchitectural fidelity. DCPerf stands out for accurate simulation of behaviors such as cache misses and branch mispredictions, which are details that many other benchmarking tools overlook.
You can use DCPerf to generate performance data to inform procurement decisions, and for regression testing to detect changes in the environment, such as kernel and compiler changes.
DCPerf runs on Arm-based servers. The examples below have been tested on an AWS `c7g.metal` instance running Ubuntu 22.04 LTS.
{{% notice Note %}}
When running on a server provided by a cloud service, you have limited access to some parameters, such as UEFI settings, which can affect performance.
{{% /notice %}}
## Install prerequisites
To get started, install the required software for your distribution.
It is recommended that you install Python packages in a Python virtual environment.
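As an illustration, one way to create and activate a virtual environment (the directory name `dcperf-venv` is just an example):

```shell
# Create a virtual environment in the current directory
# (the name "dcperf-venv" is illustrative).
python3 -m venv dcperf-venv

# Activate it for the current shell session.
source dcperf-venv/bin/activate
```

Any Python packages you install while the environment is active stay isolated from the system Python.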
Check the SELinux status. You should see the following response:
```output
Disabled
```
If you do not see the `Disabled` output, see your Linux distribution documentation for information about how to disable SELinux.
You can automatically install all dependencies for each benchmark using the `install` argument with the `benchpress_cli.py` command-line script:
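For example, a typical invocation for the MediaWiki benchmark used later in this guide looks like the following (run from the `DCPerf` directory; check `./benchpress_cli.py --help` for the exact syntax on your version):

```console
sudo ./benchpress_cli.py install oss_performance_mediawiki_mlp
```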
This step might take several minutes to complete, depending on your system's download and setup speed.
## Run the MediaWiki benchmark
For the sake of brevity, you can provide the duration and timeout arguments using a `JSON` dictionary with the `-i` argument:
```console
sudo ./benchpress_cli.py run oss_performance_mediawiki_mlp -i '{
}'
```
While the benchmark is running, you can monitor CPU activity and observe benchmark-related processes using the `top` command.
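For a one-shot, non-interactive snapshot of CPU activity, you can also run `top` in batch mode:

```shell
# -b runs top in batch mode (non-interactive), -n 1 takes a single sample;
# head trims the output to the summary lines and the busiest processes.
top -b -n 1 | head -n 15
```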
When the benchmark is complete, a `benchmark_metrics_*` directory is created within the `DCPerf` directory, containing a `JSON` file for the system specs and another for the metrics.
For example, the metrics file lists the following:
```output
"metrics": {
"score": 2.4692578125
```
## Understanding the benchmark results
The metrics file contains several key performance indicators from the benchmark run.
These metrics help you evaluate the performance and reliability of the system under test. Higher values for successful requests and RPS, and lower response times, generally indicate better performance. The score provides a single value for easy comparison across runs or systems.
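As a quick sketch, assuming the metrics `JSON` file sits inside the `benchmark_metrics_*` directory (the exact filename varies by run), you can pull out the score for comparison without opening the file:

```shell
# Print the overall score from each results file; the path pattern is
# illustrative, so adjust it to match the JSON filenames in your
# benchmark_metrics_* directory.
grep '"score"' benchmark_metrics_*/*.json
```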
## Next steps
These are some activities you might like to try next:
* Use the results to compare performance across different systems, hardware configurations, or after making system changes, such as kernel, compiler, or driver updates.
* Consider tuning system parameters or trying alternative DCPerf benchmarks to further evaluate your environment.
* Explore additional DCPerf workloads, including those that simulate key-value stores, in-memory caching, or machine learning inference.