Merged
Commits
54 commits
6facae4
Add some comments to avoid duplicated error-finding
danpetry Jan 7, 2025
63aa7dd
Don't override upstream C++ standard
danpetry Jan 7, 2025
81363c0
Use N-1 cores
danpetry Jan 7, 2025
e8d6740
Add fall-through for blas if-else construct
danpetry Jan 7, 2025
e1f50ac
Don't isolate build
danpetry Jan 7, 2025
9fcb3a7
Unvendor pybind11 and eigen
danpetry Jan 7, 2025
ca90fa0
Add and run upstream smoke test
danpetry Jan 7, 2025
ff040de
Use unittest-xml-reporting instead of the historical xmlrunner
danpetry Jan 14, 2025
a2694a9
Add some testing for openmp bug, python api and torch.compile
danpetry Jan 14, 2025
6074128
reinstate `MAX_JOBS=${CPU_COUNT}` on azure
h-vetinari Jan 15, 2025
407a78d
Add description and doc_url
danpetry Jan 14, 2025
393d0d5
correct selector, move comment
danpetry Jan 14, 2025
0f9b587
Revert "Use N-1 cores"
danpetry Jan 17, 2025
4feaee5
Use N-1 cores by default
danpetry Jan 21, 2025
0fe0ba4
Remove CMAKE_ARGS conversion to individual env variables; legacy, rem…
danpetry Jan 21, 2025
14ca3d2
Hand TORCH_CUDA_ARCH_LIST to cmake rather than setting as an env vari…
danpetry Jan 21, 2025
eaaae74
Improve smoke test env variable comment
danpetry Jan 21, 2025
e19af70
Format patch with git format-patch, rename for clarity
danpetry Jan 21, 2025
939dae1
Bump version
danpetry Jan 21, 2025
af73dbe
Correct build number comment
danpetry Jan 21, 2025
65bcd3b
add build dependency on rsync rather than using platform
danpetry Jan 21, 2025
a26ede2
Don't use selectors in jinja expressions [ci skip] ***NO_CI***
danpetry Jan 23, 2025
03b2fc7
Add back disabling hypothesis tests
danpetry Jan 23, 2025
f3bfe5f
Hand contents of CMAKE_ARGS directly to CMake, to remove TORCH_CUDA_A…
danpetry Jan 23, 2025
a811bb2
Temporarily remove smoke test; it's not in the pytorch repo for v2.5.1
danpetry Jan 23, 2025
bd450c9
Revert "Hand contents of CMAKE_ARGS directly to CMake, to remove TORC…
danpetry Jan 24, 2025
6f49c62
Revert "Remove CMAKE_ARGS conversion to individual env variables; leg…
danpetry Jan 24, 2025
10bfd83
Revert "Use unittest-xml-reporting instead of the historical xmlrunner"
danpetry Jan 24, 2025
46f4e8e
appease linter
danpetry Jan 24, 2025
c1c1e6c
Merge branch 'main' into anaconda-sync
h-vetinari Jan 25, 2025
3a34b59
remove obsolete fmt handling in bld.bat
h-vetinari Jan 26, 2025
e19e11c
add pybind11 as a run-dependency of pytorch
h-vetinari Jan 26, 2025
5f19e7f
build non-CUDA builds on CPU agents
h-vetinari Jan 26, 2025
f6bbd00
MNT: Re-rendered with conda-build 25.1.1, conda-smithy 3.45.4, and co…
h-vetinari Jan 26, 2025
9864e70
Combine header messages into one
danpetry Jan 27, 2025
a002741
Change blas_impl if-else block into a case block instead
danpetry Jan 27, 2025
226f526
Correct cpu/gpu build config
danpetry Jan 27, 2025
ad37dec
MNT: Re-rendered with conda-build 25.1.1, conda-smithy 3.45.4, and co…
danpetry Jan 27, 2025
59af084
Revert "Correct cpu/gpu build config"
danpetry Jan 27, 2025
78b5aca
Run all linux builds on a gpu machine
danpetry Jan 27, 2025
db810d0
MNT: Re-rendered with conda-build 25.1.1, conda-smithy 3.45.4, and co…
danpetry Jan 27, 2025
a623264
update comment about `-std=c++<ver>` flag
h-vetinari Jan 27, 2025
e44b0d5
limit torch.backends CUDA availability tests to linux64
h-vetinari Jan 27, 2025
415a628
Revert change to hand TORCH_CUDA_ARCH_LIST as an environment variable…
danpetry Jan 28, 2025
0a2094f
set CMAKE_INSTALL_PREFIX; used in some installation commands
h-vetinari Jan 28, 2025
3889dee
fix install location of DLLs in CMake metadata; add test
h-vetinari Jan 28, 2025
982cadf
unset CMAKE_INSTALL_PREFIX again; instead, patch ATEN_INCLUDE_DIR
h-vetinari Jan 28, 2025
4683abe
distinguish destinations between lib & bin on windows
h-vetinari Jan 28, 2025
e91c713
do not modify destination for `install(FILES`
h-vetinari Jan 28, 2025
46d06c1
back to deleting DESTINATION, but only for TARGETS
h-vetinari Jan 28, 2025
cdabb36
remove Caffee2 from cmake_test; imported through torch anyway
h-vetinari Jan 28, 2025
89d7354
Skip tests failing due to unsupported GPUs [ci skip] ***NO_CI***
danpetry Jan 28, 2025
8a92b36
backport patch for skipping some inducer test failures
h-vetinari Jan 28, 2025
8338fd7
add cuda compiler for CMake tests of CUDA-enabled libtorch
h-vetinari Jan 29, 2025
12 changes: 12 additions & 0 deletions recipe/meta.yaml
@@ -66,6 +66,7 @@ source:
- patches/0014-Fix-FindOpenBLAS.patch
# backport https://github.com/pytorch/pytorch/pull/138095
- patches/0015-CD-Enable-Python-3.13-on-windows-138095.patch
- patches/0016-use-prefix-include-for-inductor.patch

build:
number: {{ build }}
@@ -381,6 +382,15 @@ outputs:
- python -c "import torch; print(torch.__version__)"
- python -c "import torch; assert torch.backends.mkldnn.m.is_available()" # [x86 and cuda_compiler_version == "None"]
- python -c "import torch; torch.tensor(1).to('cpu').numpy(); print('numpy support enabled!!!')"
# We have had issues with openmp .dylibs being doubly loaded in certain cases. These two tests catch the (observed) issue
- python -c "import torch; import numpy"
- python -c "import numpy; import torch"
# distributed support is enabled by default on linux; for mac, we enable it manually in build.sh
- python -c "import torch; assert torch.distributed.is_available()" # [linux or osx]
- python -c "import torch; assert torch.backends.cuda.is_built()" # [(cuda_compiler_version != "None")]
- python -c "import torch; assert torch.backends.cudnn.is_available()" # [(cuda_compiler_version != "None")]
- python -c "import torch; assert torch.cuda.is_available()" # [(cuda_compiler_version != "None")]
- python -c "import torch; assert torch.backends.cudnn.enabled" # [(cuda_compiler_version != "None")]
# At conda-forge, we target versions of OSX that are too old for MPS support
# But if users install a newer version of OSX, they will have MPS support
# https://github.com/conda-forge/pytorch-cpu-feedstock/pull/123#issuecomment-1186355073
@@ -424,6 +434,8 @@ outputs:
"test/test_nn.py",
"test/test_torch.py",
"test/test_xnnpack_integration.py",
# The inductor tests test torch.compile
"test/inductor/test_torchinductor.py", # [cuda_compiler_version != "None"]
"-m \"not hypothesis\"",
]) %}

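For reference, a rough consolidated sketch (not part of the recipe) of the same smoke checks run interactively against an installed build; the CUDA assertions only make sense for CUDA-enabled builds with a visible GPU:

import numpy  # importing numpy and torch into the same process exercises the observed openmp double-load issue
import torch

print(torch.__version__)
torch.tensor(1).to("cpu").numpy()        # numpy interop
assert torch.distributed.is_available()  # enabled by default on linux; enabled manually in build.sh on mac
if torch.backends.cuda.is_built():       # CUDA builds only
    assert torch.backends.cudnn.is_available()
    assert torch.backends.cudnn.enabled
    assert torch.cuda.is_available()     # additionally requires a working GPU at runtime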
16 changes: 16 additions & 0 deletions recipe/patches/0016-use-prefix-include-for-inductor.patch
@@ -0,0 +1,16 @@
Currently inductor doesn't look in conda's includes and libs. This results in errors when it tries to compile if system versions of dependencies (e.g., sleef) are being used.

author: [email protected]

Index: pytorch/torch/_inductor/cpp_builder.py
===================================================================
--- pytorch.orig/torch/_inductor/cpp_builder.py 2024-12-16 15:16:47.074821258 -0600
+++ pytorch/torch/_inductor/cpp_builder.py 2024-12-16 15:17:33.922130106 -0600
@@ -1055,6 +1055,7 @@
+ python_include_dirs
+ torch_include_dirs
+ omp_include_dir_paths
+ + [os.getenv('CONDA_PREFIX') + '/include']
Member

Glancing through the source code, it looks like inductor accepts an include_dirs flag, which comes from a JSON config file

Could we just add our own JSON config file?

Seems this may be needed in other contexts as well

Contributor Author

inductor takes the compile_flags.json file for its AOT mode (handled in package.py), but not for its JIT mode. This is the problem the patch is solving: making it look in prefix/include during JIT (torch.compile) compilation. In AOT mode the user probably wants to specify their own compile flags for their platform, which is what this JSON file is for.

The code you're looking at is where the AOT code initializes the base class (BuildOptionsBase). However, we want to initialize the include directories in the child CppTorchOptions class, which is instantiated in cpu_vec_isa.py. It was a while ago that I wrote this patch, but IIRC that was the path in the stack trace.
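For illustration only (not from the thread or the recipe), a minimal sketch of the JIT path the patch targets: the first call to a torch.compile-d function makes inductor generate and compile C++ on CPU builds, which is when the extra $CONDA_PREFIX/include search path is needed to find headers such as sleef's. The function name and input below are arbitrary.

import torch

@torch.compile
def f(x):
    # arbitrary pointwise math; enough to trigger inductor codegen
    return torch.sin(x) + torch.cos(x)

print(f(torch.randn(8)))  # first call triggers the JIT C++ build described above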

Contributor Author

does this check out ok for you?

)
cflags = sys_libs_cflags + omp_cflags
ldflags = omp_ldflags