Merged
45 commits
- 580def3: Support MLC_TMP_FOLDER for file search in script automation (amd-arsuresh, Jan 24, 2026)
- b9bd579: [Automated Commit] Format Codebase [skip ci] (github-actions[bot], Jan 24, 2026)
- 8a7e5d1: Added --search_folder_path for script exe search, improve the version… (amd-arsuresh, Jan 26, 2026)
- 4d0bd7e: [Automated Commit] Format Codebase [skip ci] (github-actions[bot], Jan 26, 2026)
- 25a0169: Merge branch 'mlcommons:dev' into dev (amd-arsuresh, Jan 26, 2026)
- a854f24: Merge pull request #792 from amd/dev (arjunsuresh, Jan 26, 2026)
- 7b4a724: Fix for gcc installation on a given path (arjunsuresh, Jan 26, 2026)
- ff4e5d8: Merge pull request #793 from amd/dev (arjunsuresh, Jan 26, 2026)
- 1c5169b: [Automated Commit] Document script/install-gcc-src/meta.yaml [skip ci] (github-actions[bot], Jan 26, 2026)
- 6f698f2: Support cache_expiration in dynamic_variation_meta (arjunsuresh, Jan 26, 2026)
- f278f2b: Merge branch 'dev' into dev (amd-arsuresh, Jan 26, 2026)
- 7543562: Merge pull request #794 from amd/dev (arjunsuresh, Jan 26, 2026)
- 953f7a1: Support int cache_expiration (arjunsuresh, Jan 26, 2026)
- a7737fe: [Automated Commit] Format Codebase [skip ci] (github-actions[bot], Jan 26, 2026)
- 5dd0602: Merge pull request #795 from amd/dev (arjunsuresh, Jan 26, 2026)
- 3d66c7c: Automatically use variation.# meta for --version (amd-arsuresh, Jan 27, 2026)
- a8d190d: [Automated Commit] Format Codebase [skip ci] (github-actions[bot], Jan 27, 2026)
- c182f32: Merge pull request #796 from amd/dev (arjunsuresh, Jan 27, 2026)
- 5ead80e: Fix MLC_GCC_INSTALLED_PATH (arjunsuresh, Jan 27, 2026)
- 0f526c2: [Automated Commit] Format Codebase [skip ci] (github-actions[bot], Jan 27, 2026)
- 7274232: Merge pull request #797 from amd/dev (arjunsuresh, Jan 27, 2026)
- 767976e: Export needed variables for get,llvm (amd-arsuresh, Jan 28, 2026)
- 9187d67: [Automated Commit] Format Codebase [skip ci] (github-actions[bot], Jan 28, 2026)
- 2b4ff7e: Improve AOCC version string (amd-arsuresh, Jan 28, 2026)
- a77c174: Merge branch 'dev' into dev (amd-arsuresh, Jan 28, 2026)
- c838839: Update test-mlperf-inference-retinanet.yml (arjunsuresh, Jan 28, 2026)
- 1844bc0: Merge branch 'dev' into dev (amd-arsuresh, Jan 28, 2026)
- d82cf32: Merge pull request #799 from amd/dev (arjunsuresh, Jan 28, 2026)
- 5c6c9ef: Update and rename test-mlperf-inference-yolo.yml to test-mlperf-infer… (anandhu-eng, Jan 30, 2026)
- 5bbf400: Update test-mlperf-inference-yolo-open-div.yml (anandhu-eng, Jan 30, 2026)
- 71a289c: Create test for closed division yolo submission (anandhu-eng, Jan 30, 2026)
- 73499b3: Merge pull request #800 from mlcommons/anandhu-eng-patch-3 (arjunsuresh, Jan 30, 2026)
- 61c070f: Fixes for Windows compatibility (#801) (amd-arsuresh, Jan 31, 2026)
- 5b8af43: Fix cache meta loading on windows (#802) (amd-arsuresh, Feb 1, 2026)
- b088aed: Make remote_run Windows compatible (#803) (amd-arsuresh, Feb 1, 2026)
- 86dbf9b: Support --pull_changes option for install-gcc, install-llvm (#805) (amd-arsuresh, Feb 4, 2026)
- 0449bd4: [Automated Commit] Document script/install-gcc-src/meta.yaml [skip ci] (github-actions[bot], Feb 4, 2026)
- 7732c25: Fix indentation in console output (#806) (amd-arsuresh, Feb 5, 2026)
- abbf6d7: Fix remembered selection not working (amd-arsuresh, Feb 5, 2026)
- f796b60: [Automated Commit] Document script/get-oneapi/meta.yaml [skip ci] (github-actions[bot], Feb 5, 2026)
- 653c117: Support is_path key for input_description and expanding user home for… (amd-arsuresh, Feb 7, 2026)
- 19993e1: Update MLC_DOWNLOAD_URL for YOLO model (anandhu-eng, Feb 12, 2026)
- f7741e1: [Automated Commit] Document script/get-ml-model-yolov11/meta.yaml [s… (github-actions[bot], Feb 12, 2026)
- 30ed279: Add needs_pat field to YOLOv11 meta.yaml (#809) (anandhu-eng, Feb 12, 2026)
- 27d53d8: Merge branch 'main' into dev (arjunsuresh, Feb 12, 2026)
2 changes: 1 addition & 1 deletion .github/workflows/test-mlperf-inference-retinanet.yml
@@ -2,7 +2,7 @@ name: MLPerf inference retinanet

on:
pull_request_target:
-branches: [ "main", "dev" ]
+branches: [ "main_off", "dev_off" ]
paths:
- '.github/workflows/test-mlperf-inference-retinanet.yml'
- '**'
124 changes: 124 additions & 0 deletions .github/workflows/test-mlperf-inference-yolo-closed-div.yml
@@ -0,0 +1,124 @@
name: MLPerf inference YOLO-v11 Closed Division
permissions:
contents: read

on:
schedule:
- cron: '0 0 * * *' # Runs daily at 12 AM UTC
pull_request_target:
branches: [ "main_off", "dev_off" ]
paths:
- '.github/workflows/test-mlperf-inference-yolo.yml'
- '**'
- '!**.md'

jobs:
mlc-run:
runs-on: ${{ matrix.os }}
env:
MLC_INDEX: "on"
SUBMISSION_DIR: ${{ github.workspace }}/mlperf_inference_results
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest]
python-version: [ "3.13", "3.12" ]
backend: [ "pytorch" ]
division: [ "open" ]

steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0

- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}

- name: Install mlcflow
run: |
pip install mlcflow
pip install tabulate

- name: Pull MLOps repo
shell: bash
env:
REPO: ${{ github.event.pull_request.head.repo.html_url }}
BRANCH: ${{ github.event.pull_request.head.ref }}
run: |
mlc pull repo "$REPO" --branch="$BRANCH"
- name: Test MLPerf Inference YOLO-v11 (Linux/macOS)
run: |
mlcr run-mlperf,inference,_full,_find-performance,_all-scenarios,_r6.0-dev --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name="gh_${{ matrix.os }}x86" --model=yolo-99 --implementation=reference --category=edge --backend=${{ matrix.backend }} --framework=pytorch --device=cpu --execution_mode=test -v --quiet
mlcr run-mlperf,inference,_submission,_full,_all-modes,_all-scenarios,_r6.0-dev --division=closed --submission_dir=${{ env.SUBMISSION_DIR }} --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name="gh_${{ matrix.os }}x86" --model=yolo-99 --implementation=reference --category=edge --backend=${{ matrix.backend }} --framework=pytorch --device=cpu --execution_mode=valid --multistream_target_latency=900 --env.MLC_MLPERF_USE_MAX_DURATION=no -v --quiet

- name: upload results artifact
uses: actions/upload-artifact@v4
with:
name: mlperf-inference-yolo-results-${{ matrix.os }}-py${{ matrix.python-version }}-bk${{ matrix.backend }}
path: ${{ env.SUBMISSION_DIR }}

upload-results-to-github:
needs: mlc-run
runs-on: ubuntu-latest
env:
MLC_INDEX: "on"
SUBMISSION_DIR: ${{ github.workspace }}/mlperf_inference_results
concurrency:
group: upload-results-v6.0
cancel-in-progress: false
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest]
python-version: [ "3.13" ]

steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0

- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}

- name: Install mlcflow
run: |
pip install mlcflow
pip install tabulate

- name: Pull MLOps repo
shell: bash
env:
REPO: ${{ github.event.pull_request.head.repo.html_url }}
BRANCH: ${{ github.event.pull_request.head.ref }}
run: |
mlc pull repo "$REPO" --branch="$BRANCH"

- name: Download benchmark artifacts
uses: actions/download-artifact@v4
with:
path: "${{ env.SUBMISSION_DIR }}/closed"

- name: Load secrets
id: op-load-secrets
uses: 1password/load-secrets-action@v3
env:
OP_SERVICE_ACCOUNT_TOKEN: ${{ secrets.OP_SERVICE_ACCOUNT_TOKEN }}
PAT: op://7basd2jirojjckncf6qnq3azai/bzbaco3uxoqs2rcyu42rvuccga/credential

- name: Push Results
env:
GITHUB_TOKEN: ${{ steps.op-load-secrets.outputs.PAT }}
if: github.repository_owner == 'mlcommons'
run: |
git config --global user.name "mlcommons-bot"
git config --global user.email "mlcommons-bot@users.noreply.github.com"
git config --global credential.https://github.com.helper ""
git config --global credential.https://github.com.helper "!gh auth git-credential"
git config --global credential.https://gist.github.com.helper ""
git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
mlcr push,github,mlperf,inference,submission --submission_dir=${{ env.SUBMISSION_DIR }} --repo_url=https://github.com/mlcommons/mlperf_inference_unofficial_submissions_v5.0/ --repo_branch=v6.0 --commit_message="Results from yolo-v11 GH action on ${{ matrix.os }}" --quiet

@@ -6,7 +6,7 @@ on:
schedule:
- cron: '0 0 * * *' # Runs daily at 12 AM UTC
pull_request_target:
-branches: [ "main", "dev" ]
+branches: [ "main_off", "dev_off" ]
paths:
- '.github/workflows/test-mlperf-inference-yolo.yml'
- '**'
@@ -50,13 +50,13 @@ jobs:
mlc pull repo "$REPO" --branch="$BRANCH"
- name: Test MLPerf Inference YOLO-v11 (Linux/macOS)
run: |
-mlcr run-mlperf,inference,_full,_find-performance,_all-scenarios,_r6.0-dev --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name="gh_${{ matrix.os }}x86" --model=yolo-99 --implementation=reference --category=edge --backend=${{ matrix.backend }} --framework=pytorch --device=cpu --execution_mode=test -adr.inference-src.tags=_branch.anandhu-eng-patch-13 --adr.inference-src-loadgen.tags=_branch.anandhu-eng-patch-13 --adr.inference-src.version=custom --adr.inference-src-loadgen.version=custom --adr.loadgen.version=custom -v --quiet
-mlcr run-mlperf,inference,_submission,_full,_all-modes,_all-scenarios,_r6.0-dev --submission_dir=${{ env.SUBMISSION_DIR }} --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name="gh_${{ matrix.os }}x86" --model=yolo-99 --implementation=reference --category=edge --backend=${{ matrix.backend }} --framework=pytorch --device=cpu --execution_mode=valid -adr.inference-src.tags=_branch.anandhu-eng-patch-13 --adr.inference-src-loadgen.tags=_branch.anandhu-eng-patch-13 --adr.inference-src.version=custom --adr.inference-src-loadgen.version=custom --adr.loadgen.version=custom --multistream_target_latency=900 --env.MLC_MLPERF_USE_MAX_DURATION=no -v --quiet
+mlcr run-mlperf,inference,_full,_find-performance,_all-scenarios,_r6.0-dev --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name="gh_${{ matrix.os }}x86" --model=yolo-99 --implementation=reference --category=edge --backend=${{ matrix.backend }} --framework=pytorch --device=cpu --execution_mode=test -v --quiet
+mlcr run-mlperf,inference,_submission,_full,_all-modes,_all-scenarios,_r6.0-dev --submission_dir=${{ env.SUBMISSION_DIR }} --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --hw_name="gh_${{ matrix.os }}x86" --model=yolo-99 --implementation=reference --category=edge --backend=${{ matrix.backend }} --framework=pytorch --device=cpu --execution_mode=valid --multistream_target_latency=900 --env.MLC_MLPERF_USE_MAX_DURATION=no -v --quiet

- name: upload results artifact
uses: actions/upload-artifact@v4
with:
-name: ${{ matrix.division }}
+name: mlperf-inference-yolo-results-${{ matrix.os }}-py${{ matrix.python-version }}-bk${{ matrix.backend }}
path: ${{ env.SUBMISSION_DIR }}

upload-results-to-github:
@@ -100,7 +100,7 @@ jobs:
- name: Download benchmark artifacts
uses: actions/download-artifact@v4
with:
-path: ${{ env.SUBMISSION_DIR }}
+path: "${{ env.SUBMISSION_DIR }}/open"

- name: Load secrets
id: op-load-secrets
3 changes: 2 additions & 1 deletion README.md
@@ -31,7 +31,7 @@ At its core, MLCFlow relies on a single powerful automation, the Script, which i
## 🀝 Contributing
We welcome contributions from the community! To contribute:
1. Submit pull requests (PRs) to the **`dev`** branch.
-2. Review our [CONTRIBUTORS.md](here) for guidelines and best practices.
+2. See [here](CONTRIBUTORS.md) for guidelines and best practices on contribution.
3. Explore more about MLPerf Inference automation in the official [MLPerf Inference Documentation](https://docs.mlcommons.org/inference/).

Your contributions help drive the project forward!
@@ -61,6 +61,7 @@ This project is made possible through the generous support of:
- [cKnowledge.org](https://cKnowledge.org)
- [cTuning Foundation](https://cTuning.org)
- [GATEOverflow](https://gateoverflow.in)
+- [AMD](https://www.amd.com)
- [MLCommons](https://mlcommons.org)

We appreciate their contributions and sponsorship!
88 changes: 56 additions & 32 deletions automation/script/cache_utils.py
@@ -77,6 +77,7 @@ def prepare_cache_tags(i):
cached_tags.append(x)

explicit_cached_tags = cached_tags.copy()
explicit_cached_tags.append("-tmp")

# explicit variations
if len(i['explicit_variation_tags']) > 0:
@@ -161,12 +162,15 @@ def search_cache(i, explicit_cached_tags):
i['logger'].debug(
i['recursion_spaces'] +
' - Pruning cache list outputs with the following tags: {}'.format(explicit_cached_tags))

cache_list = i['cache_list']

n_tags = [p[1:] for p in explicit_cached_tags if p.startswith("-")]
p_tags = [p for p in explicit_cached_tags if not p.startswith("-")]

pruned_cache_list = [
item
for item in cache_list
-if set(explicit_cached_tags) <= set(item.meta.get('tags', []))
+if set(p_tags) <= set(item.meta.get('tags', [])) and set(n_tags).isdisjoint(set(item.meta.get('tags', [])))
]

return pruned_cache_list
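The new pruning rule above splits the explicit cache tags into positive tags and negative ("-"-prefixed) tags, then keeps only entries that carry every positive tag and none of the negative ones. A minimal standalone sketch of that rule, where the dict-shaped entries are illustrative stand-ins for the real cache objects:

```python
def prune_by_tags(entries, search_tags):
    """Keep entries whose tags include all positive search tags and
    exclude every "-"-prefixed (negative) search tag."""
    n_tags = {t[1:] for t in search_tags if t.startswith("-")}
    p_tags = {t for t in search_tags if not t.startswith("-")}
    return [
        e for e in entries
        if p_tags <= set(e["tags"]) and n_tags.isdisjoint(e["tags"])
    ]

entries = [
    {"name": "final", "tags": ["get", "gcc", "version-12"]},
    {"name": "partial", "tags": ["get", "gcc", "tmp"]},
]
# "-tmp" filters out the half-built entry still carrying the "tmp" tag
print([e["name"] for e in prune_by_tags(entries, ["get", "gcc", "-tmp"])])
# -> ['final']
```

This is why appending "-tmp" to explicit_cached_tags (the one-line addition to prepare_cache_tags above) is enough to hide in-progress cache entries from the search.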
@@ -214,9 +218,19 @@ def validate_cached_scripts(i, found_cached_scripts):
'''
valid = []
if len(found_cached_scripts) > 0:

# We can consider doing quiet here if noise is too much
# import logging
# logger = i['logger']
# logger_level_saved = logger.level
# logger.setLevel(logging.ERROR)
# saved_quiet = i['env'].get('MLC_QUIET', False)
# i['env']['MLC_QUIET'] = True
for cached_script in found_cached_scripts:
if is_cached_entry_valid(i, cached_script):
valid.append(cached_script)
# logger.setLevel(logger_level_saved)
# i['env']['MLC_QUIET'] = saved_quiet

return valid

@@ -417,9 +431,7 @@ def find_cached_script(i):
i, explicit_cached_tags, found_cached_scripts)
found_cached_scripts = validate_cached_scripts(i, found_cached_scripts)

-search_tags = '-tmp'
-if len(explicit_cached_tags) > 0:
-    search_tags += ',' + ','.join(explicit_cached_tags)
+search_tags = ','.join(explicit_cached_tags)

return {'return': 0, 'cached_tags': cached_tags,
'search_tags': search_tags, 'found_cached_scripts': found_cached_scripts}
@@ -430,40 +442,52 @@

def fix_cache_paths(cached_path, env):

-current_cache_path = cached_path
+current_cache_path = os.path.normpath(cached_path)

new_env = env # just a reference

def normalize_and_replace_path(path_str):
"""Helper to normalize and replace cache paths in a string."""
# Normalize the path to use the current OS separators
normalized = os.path.normpath(path_str)

# Check if path contains local/cache or local\cache pattern
path_parts = normalized.split(os.sep)

try:
local_idx = path_parts.index("local")
if local_idx + \
1 < len(path_parts) and path_parts[local_idx + 1] == "cache":
# Extract the loaded cache path (up to and including "cache")
loaded_cache_path = os.sep.join(path_parts[:local_idx + 2])
loaded_cache_path_norm = os.path.normpath(loaded_cache_path)

if loaded_cache_path_norm != current_cache_path and os.path.exists(
current_cache_path):
# Replace old cache path with current cache path
return normalized.replace(
loaded_cache_path_norm, current_cache_path)
except (ValueError, IndexError):
# "local" not in path or malformed path
pass

return normalized

for key, val in new_env.items():
# Check for a path separator in a string and determine the
# separator
if isinstance(val, str) and any(sep in val for sep in [
"/local/cache/", "\\local\\cache\\"]):
sep = "/" if "/local/cache/" in val else "\\"

path_split = val.split(sep)
repo_entry_index = path_split.index("local")
loaded_cache_path = sep.join(
path_split[0:repo_entry_index + 2])
if loaded_cache_path != current_cache_path and os.path.exists(
current_cache_path):
new_env[key] = val.replace(
loaded_cache_path, current_cache_path).replace(sep, "/")
if isinstance(val, str):
# Check if path contains cache directory pattern
normalized_val = val.replace('\\', os.sep).replace('/', os.sep)
if os.sep.join(['local', 'cache']) in normalized_val:
new_env[key] = normalize_and_replace_path(val)

elif isinstance(val, list):
for i, val2 in enumerate(val):
if isinstance(val2, str) and any(sep in val2 for sep in [
"/local/cache/", "\\local\\cache\\"]):
sep = "/" if "/local/cache/" in val2 else "\\"

path_split = val2.split(sep)
repo_entry_index = path_split.index("local")
loaded_cache_path = sep.join(
path_split[0:repo_entry_index + 2])
if loaded_cache_path != current_cache_path and os.path.exists(
current_cache_path):
new_env[key][i] = val2.replace(
loaded_cache_path, current_cache_path).replace(sep, "/")
if isinstance(val2, str):
# Check if path contains cache directory pattern
normalized_val2 = val2.replace(
'\\', os.sep).replace('/', os.sep)
if os.sep.join(['local', 'cache']) in normalized_val2:
new_env[key][i] = normalize_and_replace_path(val2)

return {'return': 0, 'new_env': new_env}
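The rewritten helper above normalizes a stored path to the current OS's separators, then swaps the ".../local/cache" root recorded when the entry was saved for the root in use now. A simplified sketch of just that splice, using hypothetical POSIX paths and omitting the os.path.exists guard present in the real code:

```python
import os

def replace_cache_root(path_str, current_cache_path):
    # Normalize to this OS's separators, then look for the
    # ".../local/cache" prefix recorded when the entry was saved.
    normalized = os.path.normpath(path_str)
    parts = normalized.split(os.sep)
    try:
        local_idx = parts.index("local")
        if local_idx + 1 < len(parts) and parts[local_idx + 1] == "cache":
            loaded_root = os.sep.join(parts[:local_idx + 2])
            # Splice in the cache root of the current machine
            return normalized.replace(
                loaded_root, os.path.normpath(current_cache_path))
    except ValueError:
        pass  # no "local" component; leave the value untouched
    return normalized

# A path saved under another user's cache, remapped to this run's root
old = "/home/other/MLC/repos/local/cache/get-gcc-entry/install/bin"
print(replace_cache_root(old, "/home/me/MLC/repos/local/cache"))
```

Because everything is routed through os.path.normpath and os.sep rather than hard-coded "/" or "\\" separators, the same logic handles env values written on Windows and loaded on Linux or vice versa, which is what the earlier per-separator branching could not do cleanly.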
