Commit 99c2628

Merge pull request #2063 from madeline-underwood/DCPerf
Dc perf_JA to review
2 parents fbc9a0b + 8beabf7 commit 99c2628

File tree

1 file changed (+31, −25 lines)


content/install-guides/dcperf.md

Lines changed: 31 additions & 25 deletions
@@ -4,8 +4,6 @@ author: Kieran Hejmadi
 minutes_to_complete: 20
 official_docs: https://github.com/facebookresearch/DCPerf?tab=readme-ov-file#install-and-run-benchmarks
 
-draft: true
-
 additional_search_terms:
 - linux
 - Neoverse
@@ -23,17 +21,17 @@ weight: 1
 
 ## Introduction
 
-DCPerf is an open source benchmarking and microbenchmarking suite, originally developed by Meta, that faithfully replicates the characteristics of various general purpose data center workloads. One of the key differentiators compared to alternative benchmarking software is the fidelity of micro-architectural behavior replicated by DCPerf, for example, cache misses and branch misprediction rate.
+DCPerf is an open-source benchmarking and microbenchmarking suite originally developed by Meta. It faithfully replicates the characteristics of general-purpose data center workloads, with particular attention to microarchitectural fidelity. DCPerf stands out for accurate simulation of behaviors such as cache misses and branch mispredictions, which are details that many other benchmarking tools overlook.
 
-DCPerf generates performance data to inform procurement decisions. It can also be used for regression testing to detect changes in the environment, such as kernel and compiler changes.
+You can use DCPerf to generate performance data to inform procurement decisions, and for regression testing to detect changes in the environment, such as kernel and compiler changes.
 
-You can install DCPerf on Arm-based servers. The examples below have been tested on an AWS `c7g.metal` instance running Ubuntu 22.04 LTS.
+DCPerf runs on Arm-based servers. The examples below have been tested on an AWS `c7g.metal` instance running Ubuntu 22.04 LTS.
 
 {{% notice Note %}}
-When running on a server provided by a cloud service, you will have limited access to some parameters, such as UEFI settings, which can impact performance.
+When running on a server provided by a cloud service, you have limited access to some parameters, such as UEFI settings, which can affect performance.
 {{% /notice %}}
 
-## Install Prerequisites
+## Install prerequisites
 
 To get started, install the required software:
 
@@ -47,7 +45,7 @@ It is recommended that you install Python packages in a Python virtual environment.
 Set up your virtual environment:
 
 ```bash
-python -m venv venv
+python3 -m venv venv
 source venv/bin/activate
 ```
 If requested, restart the recommended services.
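As an aside on the virtual-environment change in the hunk above, the step can be sketched end to end as follows. This is a minimal illustration, assuming a `python3` interpreter with the standard `venv` module is available (on Ubuntu, installed via the `python3-venv` package):

```shell
# Sketch of the virtual environment setup shown in the guide.
# Assumes python3 and its standard venv module are installed.
set -e

command -v python3 >/dev/null || { echo "python3 not found" >&2; exit 1; }

python3 -m venv venv        # create the environment in ./venv
. venv/bin/activate         # activate it (POSIX shells; bash's "source" also works)

# Inside the venv, "python" now resolves to the venv's own interpreter,
# which is why the explicit "python3" is only needed for the creation step.
python --version
```

Using `python3` rather than `python` for the creation step matters on distributions such as Ubuntu 22.04, where no bare `python` binary exists by default.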
@@ -65,13 +63,13 @@ git clone https://github.com/facebookresearch/DCPerf.git
 cd DCPerf
 ```
 
-## Running the MediaWiki Benchmark
+## Running the MediaWiki benchmark
 
-DCPerf offers many benchmarks. Please refer to the official documentation for the benchmark of your choice.
+DCPerf offers many benchmarks. See the official documentation for the benchmark of your choice.
 
 One example is the MediaWiki benchmark, designed to faithfully reproduce the workload of the Facebook social networking site.
 
-Install HipHop Virtual Machine (HHVM), a virtual machine used to execute the web application code.
+Install HipHop Virtual Machine (HHVM), a virtual machine used to execute the web application code:
 
 ```bash
 wget https://github.com/facebookresearch/DCPerf/releases/download/hhvm/hhvm-3.30-multplatform-binary-ubuntu.tar.xz
@@ -81,10 +79,12 @@ sudo ./pour-hhvm.sh
 export LD_LIBRARY_PATH="/opt/local/hhvm-3.30/lib:$LD_LIBRARY_PATH"
 ```
 
-Confirm `hhvm` is available. The `hhvm` binary is located in the `DCPerf/hhvm/aarch64-ubuntu22.04/hhvm-3.30/bin` directory.
+Confirm `hhvm` is available. The `hhvm` binary is located in the `DCPerf/hhvm/aarch64-ubuntu22.04/hhvm-3.30/bin` directory:
 
 ```bash
 hhvm --version
+# Return to the DCPerf root directory
+cd ..
 ```
 
 You should see output similar to:
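Since the hunk above relies on `hhvm` being resolvable from the shell, the environment setup can be sketched like this. The directory layout is taken from the guide's example and is an assumption about where `pour-hhvm.sh` places the binaries on your system:

```shell
# Sketch: make the bundled hhvm resolvable from any directory.
# HHVM_ROOT reflects the guide's example layout and may differ on your system.
HHVM_ROOT="$PWD/hhvm/aarch64-ubuntu22.04/hhvm-3.30"
export PATH="$HHVM_ROOT/bin:$PATH"
export LD_LIBRARY_PATH="/opt/local/hhvm-3.30/lib:${LD_LIBRARY_PATH:-}"

if command -v hhvm >/dev/null 2>&1; then
    hhvm --version
else
    echo "hhvm not on PATH; check where pour-hhvm.sh extracted the binaries" >&2
fi
```

Exporting `LD_LIBRARY_PATH` is needed because the prebuilt HHVM links against shared libraries shipped under `/opt/local/hhvm-3.30/lib` rather than system library paths.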
@@ -102,23 +102,25 @@ sudo apt install selinux-utils
 getenforce
 ```
 
-You should see the following response. If you do not see the `Disabled` output, please refer to your Linux distribution documentation for information about how to disable SELinux.
+You should see the following response:
 
 ```output
 Disabled
 ```
 
-The `install` argument to the `benchpress_cli.py` command line script can be used to automatically install all dependencies for each benchmark.
+If you do not see the `Disabled` output, see your Linux distribution documentation for information about how to disable SELinux.
+
+You can automatically install all dependencies for each benchmark using the `install` argument with the `benchpress_cli.py` command-line script:
 
 ```console
 sudo ./benchpress_cli.py install oss_performance_mediawiki_mlp
 ```
 
-Please note this may take several minutes to complete.
+This step might take several minutes to complete, depending on your system's download and setup speed.
 
-## Run the MediaWiki Benchmark
+## Run the MediaWiki benchmark
 
-For the sake of brevity, you can provide the duration and timeout arguments using a `JSON` dictionary with the `-i` argument.
+For the sake of brevity, you can provide the duration and timeout arguments using a `JSON` dictionary with the `-i` argument:
 
 ```console
 sudo ./benchpress_cli.py run oss_performance_mediawiki_mlp -i '{
@@ -127,11 +129,11 @@ sudo ./benchpress_cli.py run oss_performance_mediawiki_mlp -i '{
 }'
 ```
 
-While the benchmark is running, you can observe the various processes occupying the CPU with the `top` command.
+While the benchmark is running, you can monitor CPU activity and observe benchmark-related processes using the `top` command.
 
-Once the benchmark is complete, a `benchmark_metrics_*` directory will be created within the `DCPerf` directory, containing a `JSON` file for the system specs and another for the metrics.
+When the benchmark is complete, a `benchmark_metrics_*` directory is created within the `DCPerf` directory, containing a `JSON` file for the system specs and another for the metrics.
 
-For example, the metrics file will list the following:
+For example, the metrics file lists the following:
 
 ```output
 "metrics": {
@@ -156,7 +158,7 @@ For example, the metrics file will list the following:
 "score": 2.4692578125
 ```
 
-## Understanding the Benchmark Results
+## Understanding the benchmark results
 
 The metrics file contains several key performance indicators from the benchmark run:
 
@@ -179,8 +181,12 @@ The metrics file contains several key performance indicators from the benchmark
 
 These metrics help you evaluate the performance and reliability of the system under test. Higher values for successful requests and RPS, and lower response times, generally indicate better performance. The score provides a single value for easy comparison across runs or systems.
 
-## Next Steps
+## Next steps
+
+These are some activities you might like to try next:
+
+* Use the results to compare performance across different systems, hardware configurations, or after making system changes, such as kernel, compiler, or driver updates.
+
+* Consider tuning system parameters or trying alternative DCPerf benchmarks to further evaluate your environment.
 
-- Use the results to compare performance across different systems, hardware configurations, or after making system changes (e.g., kernel or compiler updates).
-- Consider tuning system parameters or trying different DCPerf benchmarks to further evaluate your environment.
-- Explore the other DCPerf benchmarks
+* Explore additional DCPerf workloads, including those that simulate key-value stores, in-memory caching, or machine learning inference.
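The "compare performance across runs" suggestion in the final hunk can be sketched as a small score-extraction step. The metrics file name and JSON layout below are assumptions modeled on the guide's example output, and the stand-in file exists only to make the sketch self-contained:

```shell
# Sketch: extract the composite score from a DCPerf metrics file so that
# scores from different runs or systems can be compared side by side.
set -e
METRICS_FILE="benchmark_metrics_example/metrics.json"   # hypothetical file name

# Stand-in file so the sketch runs on its own; a real benchmark run
# writes its own metrics JSON into a benchmark_metrics_* directory.
mkdir -p "$(dirname "$METRICS_FILE")"
printf '{"metrics": {"score": 2.4692578125}}\n' > "$METRICS_FILE"

# python3 parses the JSON, avoiding a jq dependency.
python3 -c 'import json, sys; print(json.load(open(sys.argv[1]))["metrics"]["score"])' "$METRICS_FILE"
# prints: 2.4692578125
```

Collecting these scores across kernel, compiler, or hardware changes gives the single-number comparison the guide describes.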

0 commit comments
