Use cached PyTorch wheels on MacOS jobs #9484
Changes from 8 commits
```diff
@@ -62,10 +62,38 @@ install_pytorch_and_domains() {
   git checkout "${TORCH_VERSION}"
   git submodule update --init --recursive
 
-  export USE_DISTRIBUTED=1
-  # Then build and install PyTorch
-  python setup.py bdist_wheel
-  pip install "$(echo dist/*.whl)"
+  SYSTEM_NAME=$(uname)
+  if [[ "${SYSTEM_NAME}" == "Darwin" ]]; then
+    PLATFORM=$(python -c 'import sysconfig; import platform; v=platform.mac_ver()[0].split(".")[0]; platform=sysconfig.get_platform().split("-"); platform[1]=f"{v}_0"; print("_".join(platform))')
+  fi
+  PYTHON_VERSION=$(python -c 'import platform; v=platform.python_version_tuple(); print(f"{v[0]}{v[1]}")')
+  TORCH_RELEASE=$(cat version.txt)
+  TORCH_SHORT_HASH=${TORCH_VERSION:0:7}
+  TORCH_WHEEL_PATH="cached_artifacts/pytorch/executorch/pytorch_wheels/${SYSTEM_NAME}/${PYTHON_VERSION}"
+  TORCH_WHEEL_NAME="torch-${TORCH_RELEASE}%2Bgit${TORCH_SHORT_HASH}-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-${PLATFORM:-}.whl"
+
+  CACHE_TORCH_WHEEL="https://gha-artifacts.s3.us-east-1.amazonaws.com/${TORCH_WHEEL_PATH}/${TORCH_WHEEL_NAME}"
+  # Cache PyTorch wheel is only needed on MacOS, Linux CI already has this as part
+  # of the Docker image
```
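For illustration, the sketch below composes the cached-wheel URL the same way the diff does, with hard-coded sample values standing in for the live `uname`/Python probes (the release, hash, and platform tag are made up, not taken from a real runner):

```shell
#!/usr/bin/env bash
# Illustrative sketch only: all values are samples, not probed from a real macOS runner.
SYSTEM_NAME="Darwin"
PYTHON_VERSION="311"                # e.g. "3" + "11" from python_version_tuple()
TORCH_RELEASE="2.7.0"               # sample contents of version.txt
TORCH_SHORT_HASH="0123abc"          # first 7 characters of the pinned TORCH_VERSION
PLATFORM="macosx_14_0_arm64"        # sysconfig platform with the macOS major version

TORCH_WHEEL_PATH="cached_artifacts/pytorch/executorch/pytorch_wheels/${SYSTEM_NAME}/${PYTHON_VERSION}"
# %2B is a URL-encoded '+', the PEP 440 local-version separator inside the wheel name
TORCH_WHEEL_NAME="torch-${TORCH_RELEASE}%2Bgit${TORCH_SHORT_HASH}-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-${PLATFORM}.whl"
echo "https://gha-artifacts.s3.us-east-1.amazonaws.com/${TORCH_WHEEL_PATH}/${TORCH_WHEEL_NAME}"
```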
Contributor: Don't you need to set a default value for `TORCH_WHEEL_NOT_FOUND` (to handle the non-Darwin case)?

Contributor (Author): True, this function is currently used only on macOS, but I remember reading that we can now build ExecuTorch on Windows too.
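The guard discussed here relies on two pieces: `|| TORCH_WHEEL_NOT_FOUND=1` at the install, and the `${TORCH_WHEEL_NOT_FOUND:-0}` default at the check. A stripped-down sketch of the pattern, with `false` standing in for a failing `pip install`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# 'false' stands in for `pip install "${CACHE_TORCH_WHEEL}"` failing on a cache miss.
# The `|| VAR=1` keeps the failure from aborting the script under `set -e`.
false || TORCH_WHEEL_NOT_FOUND=1

# `:-0` supplies a default when the variable was never set (e.g. the non-Darwin path),
# so the check is safe even under `set -u`.
if [[ "${TORCH_WHEEL_NOT_FOUND:-0}" == "1" ]]; then
  echo "cache miss: build from source"
else
  echo "cache hit: use the cached wheel"
fi
```

With `set -u`, referencing an unset `TORCH_WHEEL_NOT_FOUND` without the `:-0` expansion would itself be an error, which is exactly the non-Darwin case the reviewer is pointing at.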
```diff
+  if [[ "${SYSTEM_NAME}" == "Darwin" ]]; then
+    pip install "${CACHE_TORCH_WHEEL}" || TORCH_WHEEL_NOT_FOUND=1
+  fi
+
+  # Found no such wheel, we will build it from source then
+  if [[ "${TORCH_WHEEL_NOT_FOUND:-0}" == "1" ]]; then
+    USE_DISTRIBUTED=1 python setup.py bdist_wheel
```
Contributor: Log that we're building from source.
```diff
+    pip install "$(echo dist/*.whl)"
+
+    # Only AWS runners have access to S3
+    if command -v aws && [[ -z "${GITHUB_RUNNER:-}" ]]; then
+      for WHEEL_PATH in dist/*.whl; do
+        WHEEL_NAME=$(basename "${WHEEL_PATH}")
+        aws s3 cp "${WHEEL_PATH}" "s3://gha-artifacts/${TORCH_WHEEL_PATH}/${WHEEL_NAME}"
+      done
+    fi
+  else
+    echo "Use cached wheel at ${CACHE_TORCH_WHEEL}"
+  fi
+
   # Grab the pinned audio and vision commits from PyTorch
   TORCHAUDIO_VERSION=$(cat .github/ci_commit_pins/audio.txt)
```
Contributor (Author): We can also cache audio, vision, and other wheels, but the gain is probably smaller because they are fast to build. This can come in subsequent PRs.
Contributor: Can we move this command (cloning all submodules) to run only when we haven't found the cache entry?

Contributor: I think this will reduce the build time even further.

Contributor (Author): Yeah, good catch. I only need the version.txt from PyTorch.
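The reordering suggested above could look roughly like the sketch below. This is not the merged change; `run` is a hypothetical dry-run helper added so the sketch is safe to execute as written (with `DRY_RUN=1` it only prints the commands):

```shell
#!/usr/bin/env bash
# Sketch: fetch submodules lazily, only after the cached wheel turns out to be missing.
# DRY_RUN=1 (the default here) prints commands instead of running them.
DRY_RUN="${DRY_RUN:-1}"
run() { if [[ "${DRY_RUN}" == "1" ]]; then echo "+ $*"; else "$@"; fi; }

run git clone https://github.com/pytorch/pytorch.git
run git checkout "${TORCH_VERSION:-main}"
# version.txt alone is enough to compose the cached-wheel URL; no submodules needed yet
run cat version.txt

if ! run pip install "${CACHE_TORCH_WHEEL:-https://example.invalid/torch.whl}"; then
  # Cache miss: only now pay for the recursive submodule fetch and the source build
  run git submodule update --init --recursive
  run env USE_DISTRIBUTED=1 python setup.py bdist_wheel
fi
```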