Commit df13693

Update docs (#2118)
* Update mlcflow commands
1 parent eaace3f

13 files changed: +207 −49 lines changed


.github/workflows/auto-update-dev.yml

Lines changed: 1 addition & 0 deletions
@@ -7,6 +7,7 @@ on:
 
 jobs:
   update-dev:
+    if: github.repository_owner == 'mlcommons'
     strategy:
       matrix:
         branch: [ "dev", "docs" ]

.github/workflows/build_wheels.yml

Lines changed: 2 additions & 0 deletions
@@ -70,6 +70,7 @@ jobs:
 
   build_wheels:
     name: Build wheels on ${{ matrix.os }}
+    if: github.repository_owner == 'mlcommons'
     needs: update_version
     runs-on: ${{ matrix.os }}
     strategy:

@@ -101,6 +102,7 @@
          path: wheels
 
   publish_wheels:
+    if: github.repository_owner == 'mlcommons'
     needs: build_wheels # Wait for the build_wheels job to complete
     runs-on: ubuntu-latest # Only run this job on Linux
     environment: release

.github/workflows/publish.yaml

Lines changed: 1 addition & 0 deletions
@@ -9,6 +9,7 @@ on:
     branches:
       - master
       - docs
+      - dev
 
 jobs:

docs/benchmarks/automotive/3d_object_detection/get-pointpainting-data.md

Lines changed: 22 additions & 2 deletions
@@ -3,7 +3,27 @@ hide:
 - toc
 ---
 
-# 3D Object Detection using PointPainting
+# 3-D Object Detection using PointPainting
 
-TBD
+## Dataset
 
+> **Note:** By default, the Waymo dataset is downloaded from the MLCommons official drive. One has to accept the [MLCommons Waymo Open Dataset EULA](https://waymo.mlcommons.org/) to access the dataset files.
+
+The benchmark implementation run command will automatically download the preprocessed dataset. If you want to download only the dataset, you can use the command below.
+
+```bash
+mlcr get,dataset,waymo -j
+```
+
+- `--outdirname=<PATH_TO_DOWNLOAD_WAYMO_DATASET>` can be provided to download the dataset to a specific location.
+
+## Model
+> **Note:** By default, the PointPainting model is downloaded from the MLCommons official drive. One has to accept the [MLCommons Waymo Open Dataset EULA](https://waymo.mlcommons.org/) to access the model files.
+
+The benchmark implementation run command will automatically download the model. If you want to download only the PointPainting model, you can use the command below.
+
+```bash
+mlcr get,ml-model,pointpainting -j
+```
+
+- `--outdirname=<PATH_TO_DOWNLOAD_POINTPAINTING_MODEL>` can be provided to download the model files to a specific location.
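
As a usage sketch, the two download commands documented above can be combined with the optional `--outdirname` flag described in the bullets; the target paths here are arbitrary examples, not defaults:

```bash
# Download only the preprocessed Waymo dataset to a chosen directory (example path)
mlcr get,dataset,waymo -j --outdirname=$HOME/mlperf_data/waymo

# Download only the PointPainting model to a chosen directory (example path)
mlcr get,ml-model,pointpainting -j --outdirname=$HOME/mlperf_models/pointpainting
```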

docs/benchmarks/language/get-llama2-70b-data.md

Lines changed: 2 additions & 2 deletions
@@ -26,10 +26,10 @@ Get the Official MLPerf LLAMA2-70b Model
 
 ### Pytorch
 ```
-mlcr get,ml-model,llama2-70b,_pytorch -j
+mlcr get,ml-model,llama2-70b,_pytorch -j --outdirname=<My download path>
 ```
 
 !!! tip
 
-    Downloading llama2-70B model from Hugging Face will prompt you to enter the Hugging Face username and password. Please note that the password required is the [**access token**](https://huggingface.co/settings/tokens) generated for your account. Additionally, ensure that your account has access to the [llama2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) model.
+    [Access Request Link](https://llama2.mlcommons.org/) for MLCommons members
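
As a usage sketch, the updated command with the download path filled in; the directory below is an arbitrary example, not a documented default:

```bash
# Fetch the PyTorch LLAMA2-70b model into an example directory
mlcr get,ml-model,llama2-70b,_pytorch -j --outdirname=$HOME/mlperf_models/llama2-70b
```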

docs/benchmarks/language/get-llama3_1-405b-data.md

Lines changed: 2 additions & 3 deletions
@@ -32,10 +32,9 @@ Get the Official MLPerf LLAMA3.1-405b Model
 
 ### Pytorch
 ```
-mlcr get,ml-model,llama3 --outdirname=<path to download> --hf_token=<huggingface access token> -j
+mlcr get,ml-model,llama3 --outdirname=<path to download> -j
 ```
 
 !!! tip
 
-    Downloading llama3.1-405B model from Hugging Face will require an [**access token**](https://huggingface.co/settings/tokens) which could be generated for your account. Additionally, ensure that your account has access to the [llama3.1-405B](https://huggingface.co/meta-llama/Llama-3.1-405B-Instruct) model.
-
+    [Access Request Link](https://llama3-1.mlcommons.org/) for MLCommons members
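
As a usage sketch, the revised command with the output path filled in; note that the documented command no longer passes `--hf_token`. The directory below is an arbitrary example:

```bash
# Fetch the LLAMA3.1-405b model; no Hugging Face token flag in the documented command
mlcr get,ml-model,llama3 --outdirname=$HOME/mlperf_models/llama3_1-405b -j
```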

docs/changelog/changelog.md

Lines changed: 18 additions & 1 deletion
@@ -1,2 +1,19 @@
-# Release Notes
+# **Release Notes**
 
+🚀 **mlc-scripts 1.0.0** was released on **February 10, 2025**, introducing full support for **MLPerf Inference v5.0** using the first stable release of **MLCFlow 1.0.1**.
+
+🔹 All previous **CM scripts** used in MLPerf have been successfully **ported to the MLC interface**, ensuring seamless integration. Additionally, all **GitHub Actions** are now passing, confirming a stable and reliable implementation.
+
+## **Key Updates in MLCFlow**
+
+**Simplified Interface**
+- A redesigned approach using **Actions and Targets**, making the CLI more intuitive for users.
+
+**Unified Automation Model**
+- Consolidated into a **single automation entity**: **Script**, which is seamlessly extended by **Cache, Docker, and Tests**.
+
+**Improved Docker Integration**
+- A **cleaner, more efficient Docker extension**, streamlining containerized execution.
+
+**Enhanced Script Management**
+- Tighter integration between the interface and script automation, making script **creation and management easier than ever**.

docs/changelog/index.md

Lines changed: 6 additions & 1 deletion
@@ -3,5 +3,10 @@ hide:
 - toc
 ---
 
-# What's New, What's Coming
+# What's New & What's Coming 🚀
 
+!!! info
+    **Inference v5.0 Submission** is approaching! The submission deadline is **February 28, 2025, at 1 PM PST**.
+
+!!! tip
+    Starting **January 2025**, MLPerf Inference automations are powered by **[MLCFlow](https://docs.mlcommons.org/mlcflow)**—a newly developed Python package that replaces the previously used **CMind** package. This transition enhances automation, streamlines workflows, and makes MLPerf scripts more independent and simpler.

docs/power/index.md

Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@
# Power Measurement

*Originally prepared by the MLCommons taskforce on automation and reproducibility and [OctoML](https://octoml.ai).*

## Requirements

1. Power analyzer (any one [certified by SPEC PTDaemon](https://www.spec.org/power/docs/SPECpower-Device_List.html)). Yokogawa is the analyzer most submitters have used, and a new single-channel model such as the 310E costs around $3,000.

2. SPEC PTDaemon (can be downloaded from [here](https://github.com/mlcommons/power) after signing the EULA, which can be requested by sending an email to `[email protected]`). Once you have GitHub access to the MLCommons power repository, the MLC workflow will automatically download and configure the SPEC PTDaemon tool.

3. Access to the [MLCommons power-dev](https://github.com/mlcommons/power-dev) repository, which contains the `server.py` to be run on the director node and the `client.py` to be run on the SUT node. Since this repository is public, it is pulled automatically by the MLC workflow.

## Connecting the power analyzer to the computer

The power analyzer must be connected to the director machine via USB if the director machine is running Linux; Ethernet and serial modes are supported only on Windows. Power is supplied to the SUT through the power analyzer (current in series and voltage in parallel). An adapter like [this](https://amzn.to/3Cl2TV5) can help avoid cutting electrical wires.

![pages (14)](https://user-images.githubusercontent.com/4791823/210117283-82375460-5b3a-4e8a-bd85-9d33675a5843.png)

The director machine runs the `server.py` script, which starts a server process that communicates with the SPEC PTDaemon. When a client connects to it (using `client.py`), it in turn connects to the PTDaemon and initiates a measurement run. Once the measurement ends, the power log files are transferred to the client.

## Ranging mode and Testing mode

A power analyzer supports several current and voltage ranges, and the exact ranges to use depend on the given SUT; determining them requires some empirical data. We can do a ranging run, in which the current and voltage ranges are set to `Auto` and the power analyzer automatically determines the correct ranges. The determined ranges are then used for a proper testing-mode run. Using `Auto` mode in a testing run is not allowed, as it can distort the measurements.


## Start Power Server (the power analyzer should be connected to this computer, and PTDaemon runs here)

If you have GitHub access to the [MLCommons power](https://github.com/mlcommons/power-dev) repository, PTDaemon is installed automatically by the MLC command below.

Note: the command below asks for `sudo` permission on Linux and should be run with administrator privileges on Windows (to do the NTP time sync).
```bash
mlcr mlperf,power,server --device_type=49 --device_port=/dev/usbtmc0
```
* `--interface_flag="-U"` and `--device_port=1` (which may change depending on the USB slot used) can be used on Windows for a USB connection
* `--device_type=49` corresponds to the Yokogawa 310E; `ptd -h` lists the device_type values for all supported devices. The location of `ptd` can be found using the command below
* `--device_port=20` and `--interface_flag="-g"` can be used to connect to a GPIB interface (currently supported only on Windows) with the serial address set to 20
```bash
cat `mlc find cache --tags=get,spec,ptdaemon`/mlc-cached-state.json
```

An example analyzer configuration file:
```
[server]
ntpserver = time.google.com
listen = 0.0.0.0 4950

[ptd]
ptd = C:\Users\arjun\CM\repos\local\cache\5a0a52d578724774\repo\PTD\binaries\ptd-windows-x86.exe
analyzerCount = 2
[analyzer2]
interfaceflag = -g
devicetype = 8
deviceport = 20
networkport = 8888

[analyzer1]
interfaceflag = -y
devicetype = 49
deviceport = C3YD21068E
networkport = 8889
```
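
For illustration, a variant of the server command for a Windows director machine with the analyzer connected over USB, assembled from the option bullets above (the USB port number depends on the slot used):

```bash
# Windows, USB connection: Yokogawa 310E (device_type 49) on USB port 1
mlcr mlperf,power,server --device_type=49 --device_port=1 --interface_flag="-U"
```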

### Running the power server inside a docker container

```bash
mlc docker script --tags=run,mlperf,power,server --docker_gh_token=<GITHUB AUTH_TOKEN> \
   --device=/dev/usbtmc0
```
* The device address may need to be changed depending on the USB port being used
* The above command uses a host-container port mapping of 4950:4950, which can be changed with `--docker_port_maps,=4950:4950`
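
As a sketch, the same containerized server invocation with the port mapping written out explicitly using the `--docker_port_maps` option mentioned above (4950:4950 is the default shown; adjust as needed):

```bash
# Run the power server in docker with an explicit host:container port mapping
mlc docker script --tags=run,mlperf,power,server --docker_gh_token=<GITHUB AUTH_TOKEN> \
   --device=/dev/usbtmc0 --docker_port_maps,=4950:4950
```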

## Running a dummy workload with power (on the host machine)

```bash
mlcr mlperf,power,client --power_server=<POWER_SERVER_IP>
```

### Running a dummy workload with power inside a docker container

```bash
mlc docker script --tags=run,mlperf,power,client --power_server=<POWER_SERVER_IP>
```

## Running MLPerf Image Classification with power

```bash
mlcr app,mlperf,inference,_reference,_power,_resnet50,_onnxruntime,_cpu --mode=performance --power_server=<POWER_SERVER_IP>
```

### Running MLPerf Image Classification with power inside a docker container
```bash
mlcr app,mlperf,inference,_reference,_power,_resnet50,_onnxruntime,_cpu --mode=performance --power_server=<POWER_SERVER_IP> --docker
```

docs/submission/index.md

Lines changed: 2 additions & 2 deletions
@@ -73,7 +73,7 @@ flowchart LR
     direction TB
     A[populate system details] --> B[generate submission structure]
     B --> C[truncate-accuracy-logs]
-    C --> D{Infer low talency results <br>and/or<br> filter out invalid results}
+    C --> D{Infer low latency results <br>and/or<br> filter out invalid results}
     D --> yes --> E[preprocess-mlperf-inference-submission]
     D --> no --> F[run-mlperf-inference-submission-checker]
     E --> F

@@ -184,7 +184,7 @@ Once you have all the results on the system, you can upload them to the MLCommons
 === "via CLI"
     You can run the following command, which will run the submission checker and upload the results to the MLCommons submission server:
     ```
-    mlcr run,submission,checker \
+    mlcr run,submission,checker,inference \
     --submitter_id=<> \
     --submission_dir=<Path to the submission folder>
     ```
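
As a usage sketch, the updated upload command with the placeholders spelled out; the submitter ID and the path below are hypothetical examples, not real values:

```bash
# Run the inference submission checker and upload the results (example placeholders)
mlcr run,submission,checker,inference \
   --submitter_id=<YOUR_SUBMITTER_ID> \
   --submission_dir=$HOME/mlperf_submission
```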
