Sync some features with anaconda's recipe #318
Changes to `build.sh`:
```
@@ -4,6 +4,10 @@ echo "=== Building ${PKG_NAME} (py: ${PY_VER}) ==="

set -ex

echo "####################################################################"
echo "Building PyTorch using BLAS implementation: $blas_impl "
echo "####################################################################"

# This is used to detect if it's in the process of building pytorch
export IN_PYTORCH_BUILD=1
```
```
@@ -20,9 +24,21 @@ rm -rf pyproject.toml

export USE_CUFILE=0
export USE_NUMA=0
export USE_ITT=0

#################### ADJUST COMPILER AND LINKER FLAGS #####################
# Pytorch's build system doesn't like us setting the c++ standard and will
# issue a warning:
# https://github.com/pytorch/pytorch/blob/3beb7006dd5a415dfa236081ad5d55ae38346324/CMakeLists.txt#L41
export CXXFLAGS="$(echo $CXXFLAGS | sed 's/-std=c++[0-9][0-9]//g')"
# The below three lines expose symbols that would otherwise be hidden or
# optimised away. They were here before, so removing them would potentially
# break users' programs
export CFLAGS="$(echo $CFLAGS | sed 's/-fvisibility-inlines-hidden//g')"
export CXXFLAGS="$(echo $CXXFLAGS | sed 's/-fvisibility-inlines-hidden//g')"
export LDFLAGS="$(echo $LDFLAGS | sed 's/-Wl,--as-needed//g')"
# The default conda LDFLAGS include -Wl,-dead_strip_dylibs, which removes all the
# MKL sequential, core, etc. libraries, resulting in a "Symbol not found: _mkl_blas_caxpy"
# error on osx-64.
export LDFLAGS="$(echo $LDFLAGS | sed 's/-Wl,-dead_strip_dylibs//g')"
export LDFLAGS_LD="$(echo $LDFLAGS_LD | sed 's/-dead_strip_dylibs//g')"
if [[ "$c_compiler" == "clang" ]]; then
```
```
@@ -45,6 +61,7 @@ fi

# can be imported on system without a GPU
LDFLAGS="${LDFLAGS//-Wl,-z,now/-Wl,-z,lazy}"

################ CONFIGURE CMAKE FOR CONDA ENVIRONMENT ###################
export CMAKE_GENERATOR=Ninja
export CMAKE_LIBRARY_PATH=$PREFIX/lib:$PREFIX/include:$CMAKE_LIBRARY_PATH
export CMAKE_PREFIX_PATH=$PREFIX
```
```
@@ -62,6 +79,7 @@ done

CMAKE_FIND_ROOT_PATH+=";$SRC_DIR"
unset CMAKE_INSTALL_PREFIX
export TH_BINARY_BUILD=1
# Use our build version and number for inserting into binaries
export PYTORCH_BUILD_VERSION=$PKG_VERSION
# Always pass 0 to avoid appending ".post" to version string.
# https://github.com/conda-forge/pytorch-cpu-feedstock/issues/315
```
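If it helps during review, a quick sanity check that the build version was picked up as intended (so no ".post" suffix sneaks into the version string) could look like this; it is not part of the recipe:

```sh
# Hypothetical check after installing the built package: the reported
# version should equal PKG_VERSION exactly, with no ".post" suffix.
python -c "import torch; print(torch.__version__)"
```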
```
@@ -74,6 +92,8 @@ export USE_SYSTEM_SLEEF=1

# use our protobuf
export BUILD_CUSTOM_PROTOBUF=OFF
rm -rf $PREFIX/bin/protoc
export USE_SYSTEM_PYBIND11=1
export USE_SYSTEM_EIGEN_INSTALL=1

# prevent six from being downloaded
> third_party/NNPACK/cmake/DownloadSix.cmake
```

Review discussion on lines +92 to +93 (`USE_SYSTEM_PYBIND11` / `USE_SYSTEM_EIGEN_INSTALL`):

- What happens to the PyTorch vendored copies of these when using the system copies?
- We have a patch to prevent this for mkl, but I thought it best to look at the blas/openmp stuff closer and maybe address it in a different PR.
- Are you ok with this?
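Regarding the question about the vendored copies, one hedged way to confirm that the pybind11 and Eigen from `$PREFIX` were picked up rather than the bundled `third_party/` copies is to grep the CMake cache after configuration; the paths and variable names below are illustrative:

```sh
# Hypothetical verification (not part of the recipe): the cached pybind11 and
# Eigen entries should point into $PREFIX rather than into the vendored
# third_party/ tree.
grep -iE 'pybind11|eigen' build/CMakeCache.txt | grep -v third_party
```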
```
@@ -100,15 +120,20 @@ if [[ "${CI}" == "github_actions" ]]; then

    # cirun-openstack-gpu-2xlarge, which has 32GB RAM, 8 CPUs
    export MAX_JOBS=4
else
    export MAX_JOBS=${CPU_COUNT}
    # Leave a spare core for other tasks. This may need to be reduced further
    # if we get out of memory errors.
    export MAX_JOBS=$((CPU_COUNT > 1 ? CPU_COUNT - 1 : 1))
fi

if [[ "$blas_impl" == "generic" ]]; then
    # Fake openblas
    export BLAS=OpenBLAS
    export OpenBLAS_HOME=${PREFIX}
else
elif [[ "$blas_impl" == "mkl" ]]; then
    export BLAS=MKL
else
    echo "[ERROR] Unsupported BLAS implementation '${blas_impl}'" >&2
    exit 1
fi

if [[ "$PKG_NAME" == "pytorch" ]]; then
```

Review discussion on the MAX_JOBS change:

- What happens if […]?
- What CI provider do things run on for anaconda? I think it might need to become specific to that, and not in the overall […]
- It's AWS. Happy to just leave it as-is. I think originally we had it maxed at four cores and then increased it to CPU_COUNT-1 because of common practice and a desire to speed things up rather than concrete technical reasons. We can leave this as-is and if we have issues our end solve them later.
- I've fixed our case in 6074128, so I have no objection to doing what helps you in the […]
- Have made this a default in the else branch.
- Note that […] one can set this in their CI scripts to the intended value and conda-build will use it. This is what conda-forge does. Instead of adding this to the build recipe, would recommend doing this in Anaconda's CI scripts.
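Following the reviewer's suggestion, the CI-side alternative would be to pin CPU_COUNT before invoking conda-build instead of changing the recipe. A minimal sketch, assuming the usual feedstock layout (the path and job count are illustrative):

```sh
# Hypothetical CI snippet (not part of this recipe): conda-build passes
# CPU_COUNT through to the build environment, so capping it here caps
# MAX_JOBS in build.sh without any recipe change.
export CPU_COUNT=4
conda build recipe/   # path assumes the standard feedstock layout
```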
```
@@ -164,12 +189,19 @@ elif [[ ${cuda_compiler_version} != "None" ]]; then

            echo "unknown CUDA arch, edit build.sh"
            exit 1
    esac
    # Warning from pytorch v1.12.1: In the future we will require one to
    # explicitly pass TORCH_CUDA_ARCH_LIST to cmake instead of implicitly
    # setting it as an env variable.
    #
    # See:
    # https://pytorch.org/docs/stable/cpp_extension.html (Compute capabilities)
    # https://github.com/pytorch/pytorch/blob/main/.ci/manywheel/build_cuda.sh
    case ${cuda_compiler_version} in
        12.6)
            export TORCH_CUDA_ARCH_LIST="5.0;6.0;6.1;7.0;7.5;8.0;8.6;8.9;9.0+PTX"
            ;;
        *)
            echo "unsupported cuda version. edit build.sh"
            echo "No CUDA architecture list exists for CUDA v${cuda_compiler_version}. See build.sh for information on adding one."
            exit 1
    esac
    export TORCH_NVCC_FLAGS="-Xfatbin -compress-all"
```
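For completeness, one way to double-check which architectures actually end up embedded in the CUDA binaries is to inspect them with cuobjdump; the library path is an example, not something defined by this PR:

```sh
# Hypothetical inspection of the built library: list the embedded cubins
# (SASS) and PTX, which should match TORCH_CUDA_ARCH_LIST above.
cuobjdump --list-elf "${PREFIX}/lib/libtorch_cuda.so"
cuobjdump --list-ptx "${PREFIX}/lib/libtorch_cuda.so"
```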
```
@@ -204,15 +236,16 @@ case ${PKG_NAME} in

        mv build/lib.*/torch/bin/* ${PREFIX}/bin/
        mv build/lib.*/torch/lib/* ${PREFIX}/lib/
        mv build/lib.*/torch/share/* ${PREFIX}/share/
        # need to merge these now because we're using system pybind11, meaning the destination directory is not empty
        rsync -a build/lib.*/torch/share/* ${PREFIX}/share/
        mv build/lib.*/torch/include/{ATen,caffe2,tensorpipe,torch,c10} ${PREFIX}/include/
        rm ${PREFIX}/lib/libtorch_python.*

        # Keep the original backed up to sed later
        cp build/CMakeCache.txt build/CMakeCache.txt.orig
        ;;
    pytorch)
        $PREFIX/bin/python -m pip install . --no-deps -vvv --no-clean \
        $PREFIX/bin/python -m pip install . --no-deps --no-build-isolation -vvv --no-clean \
            | sed "s,${CXX},\$\{CXX\},g" \
            | sed "s,${PREFIX},\$\{PREFIX\},g"
        # Keep this in ${PREFIX}/lib so that the library can be found by
```

Review discussion on the pip install flags:

- Should we sync these flags across calls and Windows? Seems they vary a bit. Idk whether it is worth it, but we could also consider using […]
- For windows we've got […] and unix […] — so the same AFAICS?
- Any further comments?
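To illustrate the mv to rsync change (all paths below are made up): `mv` cannot merge a directory into an existing non-empty directory of the same name, which is exactly the situation once the system pybind11 has already populated `${PREFIX}/share`, whereas `rsync -a` merges the two trees:

```sh
# Toy reproduction of the failure mode, with hypothetical paths.
mkdir -p src/share/cmake/Torch dst/share/cmake/pybind11
mv src/share/* dst/share/ || echo "mv: cannot merge into an existing non-empty directory"
rsync -a src/share/ dst/share/   # merges the trees without complaint
ls dst/share/cmake               # -> Torch  pybind11
```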
New patch file (+16 lines):

```
Currently inductor doesn't look in conda's includes and libs. This results in errors when it tries to compile, if system versions are being used of dependencies (e.g., sleef).

author: [email protected]

Index: pytorch/torch/_inductor/cpp_builder.py
===================================================================
--- pytorch.orig/torch/_inductor/cpp_builder.py	2024-12-16 15:16:47.074821258 -0600
+++ pytorch/torch/_inductor/cpp_builder.py	2024-12-16 15:17:33.922130106 -0600
@@ -1055,6 +1055,7 @@
             + python_include_dirs
             + torch_include_dirs
             + omp_include_dir_paths
+            + [os.getenv('CONDA_PREFIX') + '/include']
         )
         cflags = sys_libs_cflags + omp_cflags
         ldflags = omp_ldflags
```
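A possible smoke test for this patch, assuming an activated conda environment with the built pytorch installed (none of this is part of the PR): compiling a small function with the inductor CPU backend forces cpp_builder to invoke the C++ toolchain, which should now find headers such as sleef's under `$CONDA_PREFIX/include`.

```sh
# Hypothetical check: torch.compile triggers inductor's C++ compilation path.
python -c "
import torch
f = torch.compile(lambda x: torch.sin(x) + 1)
print(f(torch.randn(8)))
"
```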