qa/L0_backend_python/examples/test.sh (9 changes: 7 additions & 2 deletions)
@@ -38,10 +38,12 @@ rm -fr *.log python_backend/

# Install torch
pip3 uninstall -y torch
+pip3 uninstall -y numpy
+pip3 install "numpy>=2"
if [ "$TEST_JETSON" == "0" ] && [[ ${TEST_WINDOWS} == 0 ]]; then
-pip3 install torch==2.0.0+cu117 -f https://download.pytorch.org/whl/torch_stable.html torchvision==0.15.0+cu117
+pip3 install torch==2.5.0 -f https://download.pytorch.org/whl/torch_stable.html torchvision==0.20.0
else
-pip3 install torch==2.0.0 -f https://download.pytorch.org/whl/torch_stable.html torchvision==0.15.0
+pip3 install torch==2.5.0 -f https://download.pytorch.org/whl/torch_stable.html torchvision==0.20.0
Contributor:

Note that the if branch installed a +cu version and the else branch did not; in your change both branches install the same thing. Please either match the original install pattern with the correct +cuXXX variant, or, if the CUDA build isn't needed, double-check whether this if/else is required at all. CC @fpetrini15, who might know a bit more about the if/else split here.
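
For illustration only, a minimal sketch of the first option (keeping the CUDA-enabled wheel on the non-Jetson/non-Windows path); the +cu121 tag and the --index-url form are assumptions, not values confirmed in this thread:

    # Hypothetical sketch: keep the CUDA-enabled wheel on the non-Jetson/non-Windows
    # path, as the original +cu117 line did. The cu121 tag and index URL are
    # placeholders; use whichever +cuXXX variant matches the CI image.
    if [ "$TEST_JETSON" == "0" ] && [[ ${TEST_WINDOWS} == 0 ]]; then
        pip3 install torch==2.5.0+cu121 torchvision==0.20.0+cu121 \
            --index-url https://download.pytorch.org/whl/cu121
    else
        pip3 install torch==2.5.0 torchvision==0.20.0
    fi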

Contributor:

@rmccorm4 I think this is the code segment we discussed a few months back. It seems Jacky has since fixed my original error here: 23a6c21. I think for non-Jetson Linux tests we want to continue installing the CUDA-enabled version. That said, these tests were passing fine even with my logical error from months ago, so it's unclear to me what we gain by installing the CUDA-enabled version.

Contributor:

If this doesn't break windows/jetson, I'm happy to keep it. If it might, we could also try making the change only on the linux non-jetson block. ex:

if [ "$TEST_JETSON" == "0" ] && [[ ${TEST_WINDOWS} == 0 ]]; then
    pip3 install "numpy>=2"
    pip3 install torch==2.5.0
else  # Windows/Jetson
    pip3 install "numpy<2"
    pip3 install torch==2.0.0
fi

Let me know if you have a preference for minimal blast radius @fpetrini15

Contributor:

I think we should align with the original intention of the script and install the cuda-enabled version.

CC @Tabrizian if you have any insight into why we preferred it originally?

Member:

I believe PyTorch with CUDA was not supported on Jetson. We also don't support GPU tensors on Jetson with Python backend so it could be related to that.

fi

# Install `validators` for Model Instance Kind example
@@ -440,4 +442,7 @@ else
echo -e "\n***\n*** Example verification test FAILED.\n***"
fi

+pip3 uninstall -y numpy
+pip3 install "numpy<2"
Contributor Author (@KrishnanPrash), Oct 31, 2024:

In L0_backend_python, examples/test.sh is called along these lines:

    setup_virtualenv

    set +e
    (cd ${TEST} && bash -ex test.sh)

    if [ $? -ne 0 ]; then
        echo "Subtest ${TEST} FAILED"
        RET=1
    fi
    set -e

    deactivate_virtualenv

So re-installing a lower version of numpy is unnecessary here because the virtualenv created for the examples subtest is not re-used for future subtests.


exit $RET