website/docs/workloads/mlperf/mlperf-profiles.md (1 addition, 1 deletion)
@@ -99,7 +99,7 @@ Runs the MLPerf benchmark workload to test GPU performance.
| BatchSize | Optional. Batch size for the data chunks in the training model. | 40 |
| Implementation | Optional. Implementation available for a given model/benchmark. Example for BERT: [link](https://github.com/mlcommons/training_results_v2.1/tree/main/NVIDIA/benchmarks) | pytorch-22.09 |
| ContainerName | Optional. Name of the Docker container for the model. | language_model |
-| DataPath | Optional. Folder name for the training data: /mlperftraining0/{DataPath} | mlperf-training-data-bert.1.0.0 |
+| DataPath | Optional. Folder name for the training data: /mlperftraining0/\{DataPath} | mlperf-training-data-bert.1.0.0 |
| GPUNum | Optional. Number of GPUs to stress. | 8 |
| ConfigFile | Optional. Configuration for running the workload. See the model's implementation for all supported config files: [link](https://github.com/mlcommons/training_results_v2.1/tree/main/NVIDIA/benchmarks). | config_DGXA100_1x8x56x1.sh |
| PackageName | Required. Package name for MLPerf training. | |
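For context, parameters like these are usually supplied as profile parameter overrides on the Virtual Client command line. The sketch below is illustrative only: the profile name `PERF-GPU-MLPERF.json` and the `,,,`-delimited `--parameters` syntax are assumptions based on typical Virtual Client CLI conventions, and the values simply echo the example values from the table above.

```bash
# Illustrative sketch (assumed profile name and CLI conventions; not part of the diff above).
# Overrides the MLPerf training parameters documented in the table, using their example values.
sudo ./VirtualClient \
    --profile=PERF-GPU-MLPERF.json \
    --timeout=1440 \
    --parameters="GPUNum=8,,,BatchSize=40,,,Implementation=pytorch-22.09,,,ContainerName=language_model,,,DataPath=mlperf-training-data-bert.1.0.0,,,ConfigFile=config_DGXA100_1x8x56x1.sh"
```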