
Commit ea3eaf6

malfet authored and pytorchmergebot committed
Fix AOTI cpp tests (pytorch#153423)
The `Error in dlopen: /lib/x86_64-linux-gnu/libstdc++.so.6: version GLIBCXX_3.4.30 not found` failure was caused by the cmake migration (the conda-based build probably had some extra link rules), while `C++ exception with description "CUDA error: no kernel image is available for execution on the device"` was caused by the tests being built for Maxwell but run on SM_86. The remaining test was failing before and was probably just disabled.

TODOs:
- Move the build to the build step

Pull Request resolved: pytorch#153423
Approved by: https://github.com/huydhn, https://github.com/cyyever
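For context, both failures described above can be diagnosed with standard tooling before rebuilding. The snippet below is a minimal sketch: the libstdc++ path is taken from the error message itself, and the second check assumes a CUDA-enabled torch is importable on the test machine.

```sh
# Does the system libstdc++ named in the dlopen error export GLIBCXX_3.4.30?
strings /lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBCXX_3.4.30

# What compute capability does the test GPU report? An SM_86 card prints (8, 6),
# so kernels built only for Maxwell (sm_5x) cannot run on it.
python -c "import torch; print(torch.cuda.get_device_capability())"
```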
1 parent 6b02e60 commit ea3eaf6

File tree

1 file changed: +9 -2 lines changed

.ci/pytorch/test.sh

Lines changed: 9 additions & 2 deletions
```diff
@@ -408,8 +408,15 @@ test_inductor_aoti() {
     # We need to hipify before building again
     python3 tools/amd_build/build_amd.py
   fi
-  BUILD_AOT_INDUCTOR_TEST=1 python setup.py develop
-  CPP_TESTS_DIR="${BUILD_BIN_DIR}" LD_LIBRARY_PATH="${TORCH_LIB_DIR}" python test/run_test.py --cpp --verbose -i cpp/test_aoti_abi_check cpp/test_aoti_inference
+  if [[ "$BUILD_ENVIRONMENT" == *sm86* ]]; then
+    BUILD_AOT_INDUCTOR_TEST=1 TORCH_CUDA_ARCH_LIST=8.6 USE_FLASH_ATTENTION=OFF python setup.py develop
+    # TODO: Replace me completely, as one should not use conda libstdc++, nor need special path to TORCH_LIB
+    LD_LIBRARY_PATH=/opt/conda/envs/py_3.10/lib/:${TORCH_LIB_DIR}:$LD_LIBRARY_PATH
+    CPP_TESTS_DIR="${BUILD_BIN_DIR}" python test/run_test.py --cpp --verbose -i cpp/test_aoti_abi_check cpp/test_aoti_inference
+  else
+    BUILD_AOT_INDUCTOR_TEST=1 python setup.py develop
+    CPP_TESTS_DIR="${BUILD_BIN_DIR}" LD_LIBRARY_PATH="${TORCH_LIB_DIR}" python test/run_test.py --cpp --verbose -i cpp/test_aoti_abi_check cpp/test_aoti_inference
+  fi
 }
 
 test_inductor_cpp_wrapper_shard() {
```
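After rebuilding via the sm86 branch above, two quick checks confirm the fixes took effect: the rebuilt torch should report `sm_86` among its compiled architectures, and the cpp test binary should resolve a libstdc++ that provides `GLIBCXX_3.4.30`. This is a minimal sketch; the binary name `test_aoti_inference` under `${BUILD_BIN_DIR}` is inferred from the test names in the diff, not verified here.

```sh
# The rebuilt torch should now include sm_86 in its compiled arch list,
# so the "no kernel image is available" error goes away on SM_86 GPUs.
python -c "import torch; print(torch.cuda.get_arch_list())"

# With LD_LIBRARY_PATH set as in the sm86 branch above, the test binary should
# resolve a libstdc++ that exports GLIBCXX_3.4.30 (check with strings if in doubt).
ldd "${BUILD_BIN_DIR}/test_aoti_inference" | grep libstdc++
```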
