Description
Version of Singularity:
apptainer version 1.1.9
Expected behavior
Correct help output from the DeepVariant tool, i.e. without the following warnings being printed to stdout:
2024-11-12 15:40:09.008692: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /hpc/apps/cuda/cuda_12.2.2/lib64/:/.singularity.d/libs
2024-11-12 15:40:09.008745: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /hpc/apps/cuda/cuda_12.2.2/lib64/:/.singularity.d/libs
2024-11-12 15:40:09.008752: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
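As a sanity check, one way to confirm whether the TensorRT libraries are visible inside the container is something like the following (assuming ldconfig is available in the image; image path as in the command below):
# Check whether libnvinfer.so.7 resolves inside the container,
# and print the effective LD_LIBRARY_PATH seen by the contained process
singularity exec --nv /hpc/home/ngrmtt1/deepvariant_1.6.0-gpu.sif \
  bash -c 'ldconfig -p | grep -i nvinfer; echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"'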
Actual behavior
The tool runs smoothly until it engages the GPU in the second processing step; at that point the run aborts and prints the message above. In theory, the application should run to completion with the following command:
singularity run --nv -e --bind ${INPUT} --bind ${OUTPUT} --bind ${TMP} /hpc/home/ngrmtt1/deepvariant_1.6.0-gpu.sif \
/opt/deepvariant/bin/run_deepvariant \
--model_type=WGS \
--ref=INLUP_amiga.fa \
--reads=${INPUT}/${BAM} \
--intermediate_results_dir=${TMP} \
--output_vcf=${OUTPUT}/${VCF}.vcf.gz \
--output_gvcf=${OUTPUT}/${VCF}.g.vcf.gz \
--make_examples_extra_args="min_mapping_quality=1,keep_legacy_allele_counter_behavior=true,normalize_reads=true" \
--num_shards=16
However, the container does not seem to see the environment variable export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/hpc/apps/cuda/cuda_12.2.2/lib64/, which I set outside the container and make available at launch with --bind /hpc/apps/cuda/cuda_12.2.2/lib64. On the other hand, running singularity shell and exporting the variable inside the container does work, but I'm unsure whether that approach can be used in a conventional script. Ideally, I'm looking for a way to export an environment variable along with singularity run.
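For reference, a minimal sketch of the kind of solution I am after, assuming Apptainer's documented environment handling (variables prefixed with APPTAINERENV_ or SINGULARITYENV_, and values passed via --env, are injected into the container even when -e/--cleanenv is used):
# Option 1: prefix the variable on the host; Apptainer strips the
# prefix and injects the variable into the container, even with -e
export APPTAINERENV_LD_LIBRARY_PATH=/hpc/apps/cuda/cuda_12.2.2/lib64/

# Option 2: pass the variable explicitly at launch
# (remaining flags as in the full command above)
singularity run --nv -e \
  --env LD_LIBRARY_PATH=/hpc/apps/cuda/cuda_12.2.2/lib64/ \
  --bind /hpc/apps/cuda/cuda_12.2.2/lib64 \
  /hpc/home/ngrmtt1/deepvariant_1.6.0-gpu.sif ...
Note that the -e flag in my command requests a clean environment, so if this is what strips the host LD_LIBRARY_PATH, one of the two mechanisms above should still pass the variable through.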
Steps to reproduce this behavior
Run the singularity command shown under "Actual behavior" above on a GPU node.
What OS/distro are you running
NAME="Rocky Linux"
VERSION="9.2 (Blue Onyx)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.2"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Rocky Linux 9.2 (Blue Onyx)"
ANSI_COLOR="0;32"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:rocky:rocky:9::baseos"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
SUPPORT_END="2032-05-31"
ROCKY_SUPPORT_PRODUCT="Rocky-Linux-9"
ROCKY_SUPPORT_PRODUCT_VERSION="9.2"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.2"
How did you install Singularity
The IT team did it, so I'm not sure about the actual procedure.