Commit 268e885

only download benchmark data for further uploading (#15945)
Currently we download everything created during export and benchmarking, including the ptd and pte files, benchmarking results, etc., when trying to upload benchmarking results to the PyTorch hub. The ptd and pte files are large and unnecessary at this stage, and when benchmarking many models such large files cause an out-of-disk-space error. This PR prevents those large, unnecessary files from being downloaded, to avoid the out-of-disk-space error.
1 parent e4faf06 commit 268e885

File tree

1 file changed: +11 −2 lines changed


.github/workflows/cuda-perf.yml

Lines changed: 11 additions & 2 deletions
@@ -259,8 +259,8 @@ jobs:
           CUDA_DRIVER_VERSION=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -1)
           echo "CUDA Driver Version: $CUDA_DRIVER_VERSION"

-          # Create results directory
-          RESULTS_DIR="${RUNNER_ARTIFACT_DIR}"
+          # Create results directory (separate from model artifacts)
+          RESULTS_DIR="benchmark_results"
           mkdir -p "$RESULTS_DIR"

           # Determine model name and runner command based on model
@@ -320,6 +320,15 @@ jobs:
             "workflow_run_url": "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"
           }
           EOF
+
+          # Only copy benchmark results to RUNNER_ARTIFACT_DIR for upload (not the entire model)
+          # First, clean up the downloaded model artifacts from RUNNER_ARTIFACT_DIR
+          rm -rf "${RUNNER_ARTIFACT_DIR}"/*
+
+          # Then copy only the benchmark result JSON files
+          cp "$RESULTS_DIR"/*.json "${RUNNER_ARTIFACT_DIR}/"
+          echo "Benchmark results prepared for upload:"
+          ls -lah "${RUNNER_ARTIFACT_DIR}"
           echo "::endgroup::"

   upload-benchmark-results:
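The clean-then-copy step in the diff can be sketched as a standalone shell snippet. This is a minimal sketch, not the workflow itself: the temporary directories and the `model.pte` / `benchmark_results.json` filenames below are hypothetical stand-ins for the downloaded model artifacts and result files.

```shell
#!/bin/sh
# Sketch of the "keep only JSON results for upload" step, using
# hypothetical temp directories in place of the workflow's paths.
RUNNER_ARTIFACT_DIR="$(mktemp -d)"
RESULTS_DIR="$(mktemp -d)"

# Simulate what the download step leaves behind: large model files
# in the artifact dir, and the benchmark results kept elsewhere.
touch "${RUNNER_ARTIFACT_DIR}/model.pte" "${RUNNER_ARTIFACT_DIR}/model.ptd"
echo '{"metric": 1}' > "${RESULTS_DIR}/benchmark_results.json"

# Remove everything downloaded into the artifact dir
# (":?" guards against expanding to "/*" if the variable is unset) ...
rm -rf "${RUNNER_ARTIFACT_DIR:?}"/*

# ... then copy back only the benchmark result JSON files.
cp "${RESULTS_DIR}"/*.json "${RUNNER_ARTIFACT_DIR}/"
ls -lah "${RUNNER_ARTIFACT_DIR}"
```

After this runs, the artifact directory contains only the small JSON result files, so the subsequent upload job never sees the large model binaries.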
