- Use the base Anaconda3 image, install all the modules currently in use.
- Code uses a branch/PR that supports Python 3.
- Image copies over the dlib model + snapshot.
- Copy over code and implementations.
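The build steps above could be sketched as a Dockerfile. This is a sketch only: the `continuumio/anaconda3` base image name, the directory layout, and the package list are assumptions based on the commands in these notes, not the actual Dockerfile.

```dockerfile
# Sketch, not the real Dockerfile: base image and paths are assumptions.
FROM continuumio/anaconda3

WORKDIR /home/deep-head-pose

# Copy over the dlib face detector, the trained snapshot, and the python3 code
COPY dlib/ dlib/
COPY models/ models/
COPY code/ code/

# Install the modules currently in use (assumed list)
RUN conda install -y pytorch torchvision -c pytorch && \
    conda install -y opencv dlib -c conda-forge
```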
Link to dockerhub
The Anaconda distribution is installed into /opt/conda, which ensures that the default user has the conda command in their path.
Solution Link: Conda is installed in the 'pytorchNew' environment.
conda activate pytorchNew
conda list -e > requirements.txt
conda create --name <env> --file requirements.txt # Rebuild Environment
docker build -t 16fb/deepheadpose:ZX .
docker push 16fb/deepheadpose:ZX
## Naming Conventions:
:GPU -> GPU Variant
:ZX -> ZX Variant with his code
--gpus all -> enable GPU
-it -> run interactively
-v -> mount new video into the container
conda activate pytorch -> activate the anaconda environment
docker run -it --gpus all -v ${PWD}/toMount:/home/deep-head-pose/mount 16fb/deepheadpose:GPU
conda activate pytorch
I assume "/" is used because it's a Linux filesystem.
python code/test_on_video_dlib.py --snapshot models/hopenet_robust_alpha1.pkl --face_model dlib/mmod_human_face_detector.dat --video conan-cruise.gif --fps 15 --n_frames 10
python code/test_on_video_dlib.py --snapshot models/mysnap_epoch_29.pkl --face_model dlib/mmod_human_face_detector.dat --video conan-cruise.gif --fps 15 --n_frames 100
python code/test_on_video_dlib.py --snapshot models/mysnap_epoch_29.pkl --face_model dlib/mmod_human_face_detector.dat --video mount/Kamala.gif --fps 15 --n_frames 100
Ideally using bind mounts:
-v <Source Directory>:<Container Directory>
Place .gif into "toMount" directory, then bind mount into container as "mount/".
-> Reference the new gif for data.
So files can be read and written between the host and the container:
-v ${PWD}/toMount:/home/deep-head-pose/mount
Copy the output video directory into mount/video
cp -r /home/deep-head-pose/output/video /home/deep-head-pose/mount/video
There's no progress bar:
Save to tar file
docker save --output <FileName> <ImageName>
docker save --output deepheadpose 16fb/deepheadpose:latest
docker save --output deepheadpose 16fb/deepheadpose:ZX
Load from tar file
docker load --input <FileName>
docker load --input deepheadpose
conda activate pytorchNew
python code/test_on_video_dlib.py --snapshot models/mysnap_epoch_29.pkl --face_model dlib/mmod_human_face_detector.dat --video conan-cruise.gif --fps 15 --n_frames 100
Notes:
It's quite slow for video.
conda activate pytorchNew
python code/i.py --snapshot models/mysnap_epoch_29.pkl --face_model dlib/mmod_human_face_detector.dat --video conan-cruise.gif --fps 24 --n_frames 100
Notes:
Had to modify the script to take the face model path from args.
Still quite slow.
It does show obtained tensors.
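The change described above (taking the face model from args instead of a hard-coded path) could look like this. A minimal sketch with hypothetical argument names modeled on the commands in these notes; the actual script's argparse setup may differ.

```python
# Sketch of exposing the dlib face model path as a CLI argument
# (argument names assumed from the commands used in these notes).
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description='Head pose estimation on video.')
    parser.add_argument('--snapshot', help='Path to trained model snapshot (.pkl).')
    parser.add_argument('--face_model', help='Path to dlib CNN face detector (.dat).')
    parser.add_argument('--video', help='Input video or gif.')
    parser.add_argument('--fps', type=int, default=15)
    parser.add_argument('--n_frames', type=int, default=100)
    return parser.parse_args(argv)

args = parse_args(['--snapshot', 'models/mysnap_epoch_29.pkl',
                   '--face_model', 'dlib/mmod_human_face_detector.dat',
                   '--video', 'conan-cruise.gif'])
# The detector is then built from the argument instead of a literal path:
# face_detector = dlib.cnn_face_detection_model_v1(args.face_model)
print(args.face_model)  # dlib/mmod_human_face_detector.dat
```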
conda activate pytorchNew
python code/i_webcam.py --snapshot models/mysnap_epoch_29.pkl --face_model dlib/mmod_human_face_detector.dat --video conan-cruise.gif --fps 15 --n_frames 100
Notes:
Very Very Very Slow.
Removed the wait key at the end of i_webcam.py.
no dice.
conda create --name test
conda activate test
conda install python pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
conda install opencv matplotlib pandas scipy scikit-image cmake dlib -c conda-forge
conda create --name ZX --file conda.txt
conda install torchvision
There's no torchvision... what? No scikit-image either.