a. [Optional] Create a conda virtual environment and activate it:

```shell
conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab
```

b. Install PyTorch and torchvision (CUDA is required):
```shell
# CUDA 9.2
conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=9.2 -c pytorch

# CUDA 10.0
conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch
```

Higher versions are not covered by our tests.
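After installing, a quick generic check (not part of mmskeleton) can confirm that both packages are importable from the active environment:

```python
import importlib.util

def is_installed(package: str) -> bool:
    """Return True if `package` can be imported in this environment."""
    return importlib.util.find_spec(package) is not None

# Report the status of each required package.
for package in ("torch", "torchvision"):
    status = "OK" if is_installed(package) else "MISSING"
    print(f"{package}: {status}")
```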
c. Clone mmskeleton from GitHub:

```shell
git clone https://github.com/open-mmlab/mmskeleton.git
cd mmskeleton
```

d. Install mmskeleton:

```shell
python setup.py develop
```

e. Install nms for person estimation:
```shell
cd mmskeleton/ops/nms/
python setup_linux.py develop
cd ../../../
```

f. [Optional] Install mmdetection for person detection:
```shell
python setup.py develop --mmdet
```

If the installation fails, please install mmdetection manually.
g. To verify that mmskeleton and mmdetection are installed correctly, run:

```shell
python mmskl.py pose_demo [--gpus $GPUS]
# or "python mmskl.py pose_demo_HD [--gpus $GPUS]" for higher accuracy
```

A generated video will be saved under the prompted path.
Any application in mmskeleton is described by a configuration file, and can be started with a uniform command:

```shell
python mmskl.py $CONFIG_FILE [--options $OPTION]
```

which is equivalent to:

```shell
mmskl $CONFIG_FILE [--options $OPTION]
```
The optional arguments are defined in the configuration file.
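As an illustration, a configuration file could declare both the processing parameters and the options that may be overridden on the command line. This is a hypothetical sketch; the actual field names in mmskeleton's configs may differ:

```yaml
# Hypothetical configuration sketch; field names are illustrative
# and may not match mmskeleton's actual schema.
processor_cfg:
  video: path/to/input_video.mp4   # input to process
  gpus: 1                          # number of GPUs to use

argparse_cfg:
  gpus:
    bind_to: processor_cfg.gpus    # exposed as --gpus on the command line
    help: number of GPUs
```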
You can check them via:

```shell
mmskl $CONFIG_FILE -h
```

See START_RECOGNITION.md to learn how to train a model for skeleton-based action recognition.
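The general pattern of turning config-declared options into command-line flags can be sketched generically with `argparse`. This is an illustrative pattern only, not mmskeleton's actual implementation:

```python
import argparse

def build_parser(option_specs):
    """Build a parser from option specs declared in a config file.

    `option_specs` maps an option name to its default and help text,
    mimicking how a configuration file could declare overridable fields.
    """
    parser = argparse.ArgumentParser(description="run an application")
    parser.add_argument("config", help="path to the configuration file")
    for name, spec in option_specs.items():
        parser.add_argument(f"--{name}",
                            default=spec.get("default"),
                            help=spec.get("help", ""))
    return parser

# Options that would normally be read from the configuration file.
specs = {"gpus": {"default": "1", "help": "number of GPUs"}}
parser = build_parser(specs)
args = parser.parse_args(["demo_config.yaml", "--gpus", "2"])
print(args.config, args.gpus)
```

Running with `-h` then lists the config-defined options automatically, which is the behavior the `mmskl $CONFIG_FILE -h` command above relies on.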
See CUSTOM_DATASET for building your own skeleton-based dataset.
See CREATE_APPLICATION for creating your own mmskeleton application.
