Changes from all commits (31 commits)
912c8cd
resync with cf recipe - mainly patch updates
JamesRobertsonGames Sep 10, 2025
6d18b42
remove old patches and repair zip keys
JamesRobertsonGames Sep 10, 2025
6cf6aed
add nccl patch
JamesRobertsonGames Sep 11, 2025
1cb8a81
ensure test is explicitly not being run. They're listed as a known iss…
JamesRobertsonGames Sep 17, 2025
2dbe6b3
megabuild false
JamesRobertsonGames Sep 18, 2025
4b1bbe3
remove header
JamesRobertsonGames Sep 18, 2025
64a7f3a
duplicate patch deleted
JamesRobertsonGames Sep 18, 2025
2471496
remove abs logic and remove cuda depend explicitly as it is getting a…
JamesRobertsonGames Sep 23, 2025
46ffceb
stdlibc for openblas compat
JamesRobertsonGames Sep 29, 2025
f583333
mkl mismatch patch from pytorch repo
JamesRobertsonGames Sep 30, 2025
cdc5b39
simplify patch
JamesRobertsonGames Sep 30, 2025
cc76236
skip test patch for windows testing
JamesRobertsonGames Oct 2, 2025
0cb7f43
mkl_num_threads in test
JamesRobertsonGames Oct 2, 2025
ac7f821
regenerate patch
JamesRobertsonGames Oct 2, 2025
ee0864e
remove patch due to windows mismatch
JamesRobertsonGames Oct 2, 2025
ba8ea8b
add dynamic flag
JamesRobertsonGames Oct 3, 2025
b25bdd0
set mkl to 2023 compatibility mode with flags
JamesRobertsonGames Oct 9, 2025
fc31816
ensure we use entire build target cpu & skip windows ci test fails
JamesRobertsonGames Oct 20, 2025
8fa736e
ensure comments and improve threads for windows
JamesRobertsonGames Oct 21, 2025
d8eb46d
ensure 2025 for both
JamesRobertsonGames Oct 21, 2025
5f58bb4
ensure 2023 version in depends
JamesRobertsonGames Oct 21, 2025
3187c34
ensure mkl pins
JamesRobertsonGames Oct 21, 2025
3ea3033
last intel 2023 test removed
JamesRobertsonGames Oct 21, 2025
6103e50
update patch:
JamesRobertsonGames Oct 21, 2025
e4969fc
formatting
JamesRobertsonGames Oct 21, 2025
404a4ef
ensure c_compiler 20
JamesRobertsonGames Oct 22, 2025
bb992fa
ensure 17 compat
JamesRobertsonGames Oct 22, 2025
8bfcc2f
osx only pin
JamesRobertsonGames Oct 22, 2025
2a7da22
ensure libcxx in test
JamesRobertsonGames Oct 23, 2025
ae84e74
ensure formatting
JamesRobertsonGames Oct 23, 2025
89c1adf
add abs for timeout
JamesRobertsonGames Oct 23, 2025
5 changes: 2 additions & 3 deletions abs.yaml
@@ -1,7 +1,6 @@
build_env_vars:
ANACONDA_ROCKET_ENABLE_PY313 : yes

# macOS 12.3 or above is required for running the GPU variant (MPS support). No way to specify this for only the GPU
# variant, so it's specified for both.
extra_labels_for_os:
osx-arm64: [ventura]

task_timeout: 72000
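
Note: assuming task_timeout is expressed in seconds (an assumption; the unit is not stated in this file), the configured value amounts to a 20-hour budget:

    # assumption: task_timeout is given in seconds
    # 72000 / 3600 = 20 hours
    task_timeout: 72000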
2 changes: 1 addition & 1 deletion recipe/bld.bat
@@ -51,7 +51,7 @@ set DISTUTILS_USE_SDK=1
set BUILD_TEST=0
set INSTALL_TEST=0
:: Don't increase MAX_JOBS to NUMBER_OF_PROCESSORS, as it will run out of heap
set CPU_COUNT=1
set CPU_COUNT=4
set MAX_JOBS=%CPU_COUNT%
:: Use our Pybind11, Eigen
set USE_SYSTEM_PYBIND11=1
10 changes: 5 additions & 5 deletions recipe/conda_build_config.yaml
@@ -1,7 +1,7 @@
gpu_variant:
- cpu
- metal # [(osx and arm64)]
- cuda-12 # [(linux and x86_64)]
# - cuda-12 # [(linux and x86_64)]
c_compiler_version: # [osx]
- 17 # [osx]
cxx_compiler_version: # [osx]
@@ -12,10 +12,10 @@ cxx_compiler_version: # [osx]
# However there's currently a conda-forge bug that prevents this: https://github.com/conda/conda-build/issues/5048
MACOSX_SDK_VERSION: # [(osx and arm64)]
- 11.1 # [(osx and arm64)]
#- 13.3 # [(osx and arm64)]
- 13.3 # [(osx and arm64)]
CONDA_BUILD_SYSROOT: # [(osx and arm64)]
- /Library/Developer/CommandLineTools/SDKs/MacOSX11.1.sdk # [(osx and arm64)]
#- /Library/Developer/CommandLineTools/SDKs/MacOSX13.3.sdk # [(osx and arm64)]
- /Library/Developer/CommandLineTools/SDKs/MacOSX13.3.sdk # [(osx and arm64)]
zip_keys: # [(osx and arm64)]
- gpu_variant # [(osx and arm64)]
- MACOSX_SDK_VERSION # [(osx and arm64)]
@@ -27,8 +27,8 @@ zip_keys: # [(osx and arm64)]
# Conda-forge didn't do a "megabuild" on osx because it pushed their CI runners over their 6-hour limit. We don't have
# such a limit.
megabuild:
- true
#- false # [osx]
# - true
- false

# The version of python to use when building libtorch in a "megabuild"
megabuild_python:
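
For context on the osx-arm64 settings above: zip_keys tells conda-build to vary the listed keys in lockstep, index by index, instead of building every combination. A minimal sketch of that mechanism, using values that mirror this config (illustrative only, not the complete file):

    gpu_variant:
      - cpu
      - metal
    MACOSX_SDK_VERSION:
      - 11.1          # paired with the cpu entry
      - 13.3          # paired with the metal entry
    zip_keys:
      - gpu_variant
      - MACOSX_SDK_VERSION
    # With zip_keys, conda-build renders two variants, (cpu, 11.1) and (metal, 13.3),
    # rather than the four combinations it would otherwise generate.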
59 changes: 36 additions & 23 deletions recipe/meta.yaml
@@ -1,8 +1,8 @@
{% set version = "2.6.0" %}
{% set sha256 = "3005690eb7b083c443a38c7657938af63902f524ad87a6c83f1aca38c77e3b57" %}
{% set version = "2.7.0" %}
{% set sha256 = "ecca266fa2de4235a9fd5a18a33299a9de55ab3babb87f8c297c1c9ab8d436bd" %}
# Set the RC number to build release candidates. Set to None otherwise
{% set rc = None %}
{% set build = 6 %}
{% set build = 0 %}

# Keep this in sync with the release
{% set smoke_test_commit = "1eba9b3aa3c43f86f4a2c807ac8e12c4a7767340" %}
@@ -30,8 +30,14 @@ package:

source:
{% if rc != None %}
- git_url: https://github.com/pytorch/pytorch.git
git_rev: v{{ version }}-rc{{ rc }}
# - git_url: https://github.com/pytorch/pytorch.git
# git_rev: v{{ version.replace(".rc", "-rc") }}
# we cannot apply patches to submodules when checking out with git_url, because
# then conda switches the patch-application to use git, which cannot construct
# a usable ancestor from outside the submodule; the only option then is to
# pull in the submodules separately.
Contributor comment: NB, this bit is to support release-candidate building, which mostly shouldn't be used. Also, the method below won't fetch release candidates anymore.

- url: https://github.com/pytorch/pytorch/archive/refs/tags/v{{ version }}.tar.gz
sha256: 04ae0a8babdc9cb9dfc4f8746b2b8aa0f8ed0f9e92835cc4af0bcb01e3969e51
{% else %}
# The "pytorch-v" tarballs contain submodules; the "pytorch-" ones don't.
- url: https://github.com/pytorch/pytorch/releases/download/v{{ version }}/pytorch-v{{ version }}.tar.gz
@@ -42,18 +48,17 @@ source:
- patches/0003-Force-usage-of-python-3-and-error-without-numpy.patch
# https://github.com/pytorch/pytorch/pull/137084
- patches/0004-Help-find-numpy.patch
# sympy 1.13.2 was reported to result in test failures on Windows and mac - skipping to 1.13.3 for stability
# https://github.com/pytorch/pytorch/pull/133235
- patches/0006-Update-sympy-version.patch
# https://github.com/pytorch/pytorch/issues/150918 - continue tests on failure due to flaky tests
- patches/0007-continue-tests-on-failure.patch
- patches/0008-add-missing-includes.patch
- patches/0009-use-prefix-include-for-inductor.patch
- patches/0010-make-ATEN_INCLUDE_DIR-relative-to-TORCH_INSTALL_PREF.patch
- patches/0011-remove-DESTINATION-lib-from-CMake-install-TARGETS-di.patch # [win]
- patches_submodules/0001-remove-DESTINATION-lib-from-CMake-install-directives.patch # [win]
- patches/0013-simplify-torch.utils.cpp_extension.include_paths-use.patch
- patches/0014-point-include-paths-to-PREFIX-include.patch
- patches/0015-point-lib-paths-to-PREFIX-lib.patch
- patches/0016-Do-not-check-out-nccl-when-not-building-it.patch
- patches/0017-mkl-version-mismatch.patch
{% endif %}
- url: https://raw.githubusercontent.com/pytorch/pytorch/{{ smoke_test_commit }}/.ci/pytorch/smoke_test/smoke_test.py
folder: smoke_test
@@ -64,7 +69,7 @@ build:
number: {{ build }}
string: gpu_cuda{{ cuda_compiler_version | replace('.', '') }}_h{{ PKG_HASH }}_{{ PKG_BUILDNUM }} # [gpu_variant == "cuda-12"]
string: gpu_mps_h{{PKG_HASH}}_{{ PKG_BUILDNUM }} # [gpu_variant == "metal"]
string: cpu_{{ blas_impl }}_h{{ PKG_HASH }}_{{ PKG_BUILDNUM }} # [gpu_variant == "cpu"]
string: cpu_{{ blas_impl }}_h{{ PKG_HASH }}_{{ PKG_BUILDNUM }} # [gpu_variant == "cpu"]
detect_binary_files_with_prefix: false
run_exports:
- {{ pin_subpackage('libtorch', max_pin='x.x') }}
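
To make the templating above concrete, here is a sketch of how the cpu build string and the run_exports pin evaluate, with a hypothetical hash and the 2.7.0 version from this recipe (values are illustrative, not taken from a real build):

    # gpu_variant == "cpu", blas_impl == "mkl", PKG_BUILDNUM == 0 would give roughly:
    #   string: cpu_mkl_h1a2b3c4_0
    # and the run_exports entry
    #   {{ pin_subpackage('libtorch', max_pin='x.x') }}
    # expands to a runtime pin roughly equivalent to:
    #   libtorch >=2.7.0,<2.8
    # so packages built against this libtorch stay within the same minor series.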
@@ -87,10 +92,10 @@ requirements:
- cross-python_{{ target_platform }} # [build_platform != target_platform]
- numpy * # [megabuild and build_platform != target_platform]
- numpy # [not megabuild and build_platform != target_platform]
#- {{ stdlib('c') }}
- {{ stdlib('c') }}
- {{ compiler('c') }}
- {{ compiler('cxx') }}
- {{ compiler('cuda') }} # [(gpu_variant or "").startswith("cuda")]
- {{ compiler('cuda') }} # [(gpu_variant or "").startswith("cuda")]
- nvtx-c # [cuda_compiler_version != "None" and build_platform != target_platform]
{% if cuda_major >= 12 %}
- cuda-driver-dev # [build_platform != target_platform]
@@ -114,6 +119,7 @@ requirements:
# This has a strong run_export so we don't need to put it in `host` or `run`
# We use llvm-openmp for openblas variants on osx.
- llvm-openmp 17 # [osx and not (blas_impl == "mkl")]
- libcxx 17 # [osx]
- libuv # [win]
- cmake
- ninja-base
@@ -158,7 +164,7 @@ requirements:
- requests
- future # [py<313]
- six
- mkl-devel {{ mkl }} # [blas_impl == "mkl"]
- mkl-devel {{ mkl }} # [blas_impl == "mkl"]
- openblas-devel {{ openblas }} # [blas_impl == "openblas"]
# - libcblas * *_mkl # [blas_impl == "mkl"]
# - libcblas # [blas_impl != "mkl"]
@@ -167,7 +173,7 @@ requirements:
# We pull in the same versions of mkl and intel-openmp: intel aligns the versions
# We use intel-openmp for all mkl variants.
# For openblas on win and linux, we don't specify any openmp implementation; it comes from the compiler.
- intel-openmp {{ mkl }} # [blas_impl == "mkl"]
- intel-openmp {{ mkl }} # [blas_impl == "mkl"]
- llvm-openmp 17 # [osx and not (blas_impl == "mkl")]
- libabseil
- libprotobuf {{ libprotobuf }}
@@ -183,8 +189,8 @@ requirements:
# satisfy overlinking checks
run:
- {{ pin_compatible('intel-openmp') }} # [blas_impl == "mkl"]
- libuv # [win]
- {{ pin_compatible('magma') }} # [(gpu_variant or "").startswith("cuda")]
- libuv # [win]
- {{ pin_compatible('magma') }} # [(gpu_variant or "").startswith("cuda")]

# these tests are for the libtorch output below, but due to
# a particularity of conda-build, that output is defined in
@@ -227,7 +233,6 @@ outputs:
- "**/shm.dll" # [win]
- "**/torch_cpu.dll" # [win]
- "**/torch_python.dll" # [win]
- $RPATH/ld64.so.1 # [s390x]
# libcuda.so is the cuda driver API library and is a system library.
- "**/libcuda.so*" # [(gpu_variant or "").startswith("cuda")]
- name: pytorch
@@ -247,7 +252,6 @@ outputs:
- "**/shm.dll" # [win]
- "**/torch_cpu.dll" # [win]
- "**/torch_python.dll" # [win]
- $RPATH/ld64.so.1 # [s390x]
detect_binary_files_with_prefix: false
run_exports:
- {{ pin_subpackage('pytorch', max_pin='x.x') }}
@@ -261,7 +265,7 @@ outputs:
- python # [build_platform != target_platform]
- cross-python_{{ target_platform }} # [build_platform != target_platform]
- numpy # [build_platform != target_platform]
#- {{ stdlib('c') }}
- {{ stdlib('c') }}
- {{ compiler('c') }}
- {{ compiler('cxx') }}
- {{ compiler('cuda') }} # [(gpu_variant or "").startswith("cuda")]
@@ -288,6 +292,7 @@ outputs:
# This has a strong run_export so we don't need to put it in `host` or `run`
# We use llvm-openmp for openblas variants on osx.
- llvm-openmp 17 # [osx and not (blas_impl == "mkl")]
- libcxx 17 # [osx]
- cmake
- ninja-base
# Keep libprotobuf here so that a compatibile version
@@ -327,7 +332,7 @@ outputs:
- requests
- future # [py<313]
- six
- mkl-devel {{ mkl }} # [blas_impl == "mkl"]
- mkl-devel {{ mkl }} # [blas_impl == "mkl"]
- openblas-devel {{ openblas }} # [blas_impl == "openblas"]
# - libcblas * *_mkl # [blas_impl == "mkl"]
# - libcblas # [blas_impl != "mkl"]
@@ -336,7 +341,7 @@ outputs:
# We pull in the same versions of mkl and intel-openmp: intel aligns the versions
# We use intel-openmp for all mkl variants.
# For openblas on win and linux, we don't specify any openmp implementation; it comes from the compiler.
- intel-openmp {{ mkl }} # [blas_impl == "mkl"]
- intel-openmp {{ mkl }} # [blas_impl == "mkl"]
- llvm-openmp 17 # [osx and not (blas_impl == "mkl")]
- libabseil
- libprotobuf {{ libprotobuf }}
@@ -374,7 +379,7 @@ outputs:
- nomkl # [blas_impl != "mkl"]
- fsspec
# Required to support torch.compile. This is tested in smoke_test.py, which is required to pass
- triton 3.1.0 # [(gpu_variant or "").startswith("cuda") and (linux and x86_64)]
- triton 3.3.0 # [(gpu_variant or "").startswith("cuda") and (linux and x86_64)]
# avoid that people without GPUs needlessly download ~0.5-1GB
# The CUDA version constraint is handled in cuda-version as a run_constrained.
# However, that doesn't enforce that the package requires a GPU; that needs to be done here.
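
The dependency this comment points at falls outside the visible part of the hunk. For reference, the usual way a recipe enforces "needs a GPU at install time" is the __cuda virtual package, which conda only exposes when a CUDA driver is present; a sketch of that pattern (the exact spec in this recipe may differ):

    run:
      # __cuda is only satisfiable on machines where conda detects a CUDA driver,
      # so GPU builds cannot be installed on machines without one.
      - __cuda    # [(gpu_variant or "").startswith("cuda")]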
@@ -410,6 +415,7 @@ outputs:
- pybind11
# the inductor "test_aoti_eager..." tests require objcopy
- binutils # [linux]
- libcxx 17 # [osx]
imports:
- torch
source_files:
@@ -452,7 +458,14 @@ outputs:
# Note that the `|| true` expression will make the build continue even if the whole script falls over completely
# (for example, in the case of missing imports). There doesn't seem to be a way of making a script exception return
# non-zero but failing tests return zero.
- python ./test/run_test.py --core --continue-through-error || true
# ------------------------------------------------------------------------------------------------
# Exclude complex tests that are known to be flaky for -k "not (complex and (linalg_vecdot or dot or vdot))"
# https://github.com/pytorch/pytorch/issues/150918
- python ./test/run_test.py --core --continue-through-error -k "not (complex and (linalg_vecdot or dot or vdot))" || true # [not win]
# lgamma or mvlgamma or multigammaln or gammaln all have these issues on a combination of Intel Xeon processors and Windows Server differences.
# enabling these tests on windows will cause numerical differences in the test suite.
# This is a non-deterministic issue where between 80-110 tests fail. This has been observed between Pytorch 2.5 and above.
- python ./test/run_test.py --core --continue-through-error -k "not ((complex and (linalg_vecdot or dot or vdot)) or lgamma or mvlgamma or multigammaln or gammaln)" || exit 0 # [win]
# The inductor tests test the torch.compile backend. Using the options below avoids running distributed tests,
# which would be run if we used the --inductor option. (Distributed tests would only be correctly run on a multi-gpu test platform,
# which we don't have.)
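
For readers parsing the -k expressions above: assuming run_test.py forwards -k to pytest (which is how these commands use it), the string is evaluated as a boolean expression over test names, and a test is deselected when its name matches. An illustrative sketch with hypothetical test names:

    test:
      commands:
        # "not (complex and (linalg_vecdot or dot or vdot))" drops a test only if its name
        # contains "complex" AND one of "linalg_vecdot", "dot" or "vdot":
        #   test_linalg_vecdot_cpu_complex64  -> deselected
        #   test_linalg_vecdot_cpu_float32    -> still runs
        - python ./test/run_test.py --core --continue-through-error -k "not (complex and (linalg_vecdot or dot or vdot))" || true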
12 changes: 7 additions & 5 deletions recipe/patches/0001-windows-FindMKL-add-library-suffix.patch
@@ -8,9 +8,11 @@ This is required because our mdl-devel package contains libraries named like

Index: pytorch/cmake/Modules/FindMKL.cmake
===================================================================
--- pytorch.orig/cmake/Modules/FindMKL.cmake 2024-11-19 16:11:16.666292111 -0600
+++ pytorch/cmake/Modules/FindMKL.cmake 2024-11-19 16:14:15.719045239 -0600
@@ -119,6 +119,9 @@
diff --git cmake/Modules/FindMKL.cmake cmake/Modules/FindMKL.cmake
index a9276de..d340c41 100644
--- a/cmake/Modules/FindMKL.cmake
+++ b/cmake/Modules/FindMKL.cmake
@@ -119,6 +119,9 @@ ELSE(WIN32)
ELSE()
SET(mklthreads "mkl_intel_thread")
SET(mklrtls "iomp5" "guide")
@@ -20,7 +22,7 @@
ENDIF()
SET(mklifaces "intel")
ENDIF (CMAKE_COMPILER_IS_GNUCC)
@@ -268,7 +271,7 @@
@@ -268,7 +271,7 @@ MACRO(CHECK_ALL_LIBRARIES LIBRARIES OPENMP_TYPE OPENMP_LIBRARY _name _list _flag
ENDIF(OPENMP_FOUND)
ELSEIF(${_library} MATCHES "iomp")
SET(_openmp_type "Intel")
Expand All @@ -29,7 +31,7 @@ Index: pytorch/cmake/Modules/FindMKL.cmake
SET(_openmp_library "${${_prefix}_${_library}_LIBRARY}")
ELSE()
MESSAGE(FATAL_ERROR "Unknown OpenMP flavor: ${_library}")
@@ -421,23 +424,23 @@
@@ -421,23 +424,23 @@ IF (MKL_LIBRARIES)
FOREACH(mkl64 ${mkl64s} "_core" "")
FOREACH(mkls ${mklseq} "")
IF (NOT MKL_LAPACK_LIBRARIES)
8 changes: 5 additions & 3 deletions recipe/patches/0002-swap-openmp-search-precedence.patch
@@ -1,8 +1,10 @@
Index: pytorch/cmake/Modules/FindMKL.cmake
===================================================================
--- pytorch.orig/cmake/Modules/FindMKL.cmake 2024-11-19 16:17:57.874552303 -0600
+++ pytorch/cmake/Modules/FindMKL.cmake 2024-11-19 16:18:29.915971379 -0600
@@ -108,8 +108,8 @@
diff --git cmake/Modules/FindMKL.cmake cmake/Modules/FindMKL.cmake
index d340c41..3eab320 100644
--- a/cmake/Modules/FindMKL.cmake
+++ b/cmake/Modules/FindMKL.cmake
@@ -108,8 +108,8 @@ ELSE(WIN32)
SET(mklthreads "mkl_tbb_thread")
SET(mklrtls "tbb")
ELSE()
@@ -9,9 +9,11 @@ Subject: [PATCH] Force usage of python 3 and error without numpy

Index: pytorch/cmake/Dependencies.cmake
===================================================================
--- pytorch.orig/cmake/Dependencies.cmake 2024-11-12 20:18:55.008695666 -0600
+++ pytorch/cmake/Dependencies.cmake 2024-11-19 16:18:44.359469010 -0600
@@ -861,9 +861,9 @@
diff --git cmake/Dependencies.cmake cmake/Dependencies.cmake
index 1813f44..36b507f 100644
--- a/cmake/Dependencies.cmake
+++ b/cmake/Dependencies.cmake
@@ -848,9 +848,9 @@ if(BUILD_PYTHON)
if(USE_NUMPY)
list(APPEND PYTHON_COMPONENTS NumPy)
endif()
@@ -23,7 +25,7 @@
endif()

if(NOT Python_Interpreter_FOUND)
@@ -880,7 +880,7 @@
@@ -867,7 +867,7 @@ if(BUILD_PYTHON)
if(Python_Development.Module_FOUND)
if(USE_NUMPY)
if(NOT Python_NumPy_FOUND)
8 changes: 5 additions & 3 deletions recipe/patches/0004-Help-find-numpy.patch
@@ -9,9 +9,11 @@ Subject: [PATCH] Help find numpy

Index: pytorch/tools/setup_helpers/cmake.py
===================================================================
--- pytorch.orig/tools/setup_helpers/cmake.py 2024-11-12 20:18:55.448001614 -0600
+++ pytorch/tools/setup_helpers/cmake.py 2024-11-19 16:18:54.574972542 -0600
@@ -305,9 +305,15 @@
diff --git tools/setup_helpers/cmake.py tools/setup_helpers/cmake.py
index 84e4dad..8ce7272 100644
--- a/tools/setup_helpers/cmake.py
+++ b/tools/setup_helpers/cmake.py
@@ -306,9 +306,15 @@ class CMake:
sys.exit(1)
build_options.update(cmake__options)

11 changes: 0 additions & 11 deletions recipe/patches/0006-Update-sympy-version.patch

This file was deleted.

15 changes: 0 additions & 15 deletions recipe/patches/0007-Fix-duplicate-linker-script.patch

This file was deleted.

8 changes: 5 additions & 3 deletions recipe/patches/0007-continue-tests-on-failure.patch
@@ -1,8 +1,10 @@
Index: pytorch/test/run_test.py
===================================================================
--- pytorch.orig/test/run_test.py 2024-11-19 16:09:45.970287774 -0600
+++ pytorch/test/run_test.py 2024-11-19 16:19:29.415077042 -0600
@@ -1044,7 +1044,7 @@
diff --git test/run_test.py test/run_test.py
index a508d8d..02d9af9 100755
--- a/test/run_test.py
+++ b/test/run_test.py
@@ -1384,7 +1384,7 @@ def get_pytest_args(options, is_cpp_test=False, is_distributed_test=False):
else:
# When under the normal mode, retry a failed test 2 more times. -x means stop at the first
# failure
6 changes: 4 additions & 2 deletions recipe/patches/0008-add-missing-includes.patch
@@ -4,8 +4,10 @@ It won't compile for osx-arm64 metal; unsure why it can on linux-64, but in any
case, they should be present.
Index: pytorch/torch/csrc/distributed/c10d/control_plane/Handlers.hpp
===================================================================
--- pytorch.orig/torch/csrc/distributed/c10d/control_plane/Handlers.hpp 2024-11-12 20:18:55.762951714 -0600
+++ pytorch/torch/csrc/distributed/c10d/control_plane/Handlers.hpp 2024-11-19 16:23:52.134859846 -0600
diff --git torch/csrc/distributed/c10d/control_plane/Handlers.hpp torch/csrc/distributed/c10d/control_plane/Handlers.hpp
index 70333a3..aa9619e 100644
--- a/torch/csrc/distributed/c10d/control_plane/Handlers.hpp
+++ b/torch/csrc/distributed/c10d/control_plane/Handlers.hpp
@@ -4,6 +4,8 @@
#include <map>
#include <string>