Commit 9841b94

Fix docs. (#373)
* (I) Fix Docs
  (1) pycocotools dependency version fixed (important)
  (2) installation document fixed
  (3) quick start document fixed
1 parent dbfb7aa commit 9841b94

File tree: 3 files changed (+59, -51 lines)

docs/markdown/install/training.md

Lines changed: 56 additions & 49 deletions
@@ -7,16 +7,20 @@
 CUDA enviroment is essential to run deep learning neural networks on GPUs. The CUDA installation packages to download should match your system and your NVIDIA Driver version.

 ## Configure environment
-There are two ways to install hyperpose python training library.
+ There are two ways to install hyperpose python training library.

-All the following instructions have been tested on the environments below:<br>
-> Ubuntu 18.04, Tesla V100-DGXStation, Nvidia Driver Version 440.33.01, CUDA Verison=10.2
-> Ubuntu 18.04, Tesla V100-DGXStation, Nvidia Driver Version 410.79, CUDA Verison=10.0
-> Ubuntu 18.04, TITAN RTX, Nvidia Driver Version 430.64, CUDA Version=10.1
-> Ubuntu 18.04, TITAN Xp, Nvidia Driver Version 430.26, CUDA Version=10.2
+&emsp;All the following instructions have been tested on the environments below:<br>
+| OS | NVIDIA Driver | CUDA Toolkit | GPU |
+| ------------ | ------------- | ------------ | -------------- |
+| Ubuntu 18.04 | 410.79 | 10.0 | Tesla V100-DGX |
+| Ubuntu 18.04 | 440.33.01 | 10.2 | Tesla V100-DGX |
+| Ubuntu 18.04 | 430.64 | 10.1 | TITAN RTX |
+| Ubuntu 18.04 | 430.26 | 10.2 | TITAN XP |
+| Ubuntu 16.04 | 430.50 | 10.1 | RTX 2080Ti |

-Before all, we recommend you to create anaconda virtual environment first, which could handle the possible conflicts between the libraries you already have in your computers and the libraries hyperpose need to install, and also handle the dependencies of the cudatoolkit and cudnn library in a very simple way.<br>
-To create the virtual environment, run the following command in bash:
+
+&emsp;Before all, we recommend you to create anaconda virtual environment first, which could handle the possible conflicts between the libraries you already have in your computers and the libraries hyperpose need to install, and also handle the dependencies of the cudatoolkit and cudnn library in a very simple way.<br>
+&emsp;To create the virtual environment, run the following command in bash:
 ```bash
 # >>> create virtual environment (choose yes)
 conda create -n hyperpose python=3.7
@@ -27,44 +31,45 @@ conda install cudatoolkit=10.0.130
 conda install cudnn=7.6.0
 ```

-After configuring and activating conda enviroment, we can then begin to install the hyperpose.<br>
+&emsp;After configuring and activating conda enviroment, we can then begin to install the hyperpose.<br>
+
+### (I)The first method to install is to put hyperpose python module in the working directory.(recommand)<br>
+&emsp;After git-cloning the source [repository](https://github.com/tensorlayer/hyperpose.git), you can directly import hyperpose python library under the root directory of the cloned repository.<br>

-(I)The first method to install is to put hyperpose python module in the working directory.(recommand)<br>
-After git-cloning the source [repository](https://github.com/tensorlayer/hyperpose.git), you can directly import hyperpose python library under the root directory of the cloned repository.<br>
+&emsp;To make importion available, you should install the prerequist dependencies as followed:<br>
+&emsp;you can either install according to the requirements.txt in the [repository](https://github.com/tensorlayer/hyperpose.git)

-To make importion available, you should install the prerequist dependencies as followed:<br>
-you can either install according to the requirements.txt in the [repository](https://github.com/tensorlayer/hyperpose.git)
 ```bash
-# install according to the requirements.txt
-pip install -r requirements.txt
+ # install according to the requirements.txt
+ pip install -r requirements.txt
 ```

-or install libraries one by one
+&emsp;or install libraries one by one

 ```bash
-# >>> install tensorflow of version 2.3.1
-pip install tensorflow-gpu==2.3.1
-# >>> install tensorlayer of version 2.2.3
-pip install tensorlayer==2.2.3
-# >>> install other requirements (numpy<=17.0.0 because it has conflicts with pycocotools)
-pip install opencv-python
-pip install numpy==1.16.4
-pip install pycocotools
-pip install matplotlib
-```
-
-This method of installation use the latest source code and thus is less likely to meet compatibility problems.<br><br>
-
-(II)The second method to install is to use pypi repositories.<br>
-We have already upload hyperpose python library to pypi website so you can install it using pip, which gives you the last stable version.
+ # >>> install tensorflow of version 2.3.1
+ pip install tensorflow-gpu==2.3.1
+ # >>> install tensorlayer of version 2.2.3
+ pip install tensorlayer==2.2.3
+ # >>> install other requirements (numpy<=17.0.0 because it has conflicts with pycocotools)
+ pip install opencv-python
+ pip install numpy==1.16.4
+ pip install pycocotools
+ pip install matplotlib
+```
+
+&emsp;This method of installation use the latest source code and thus is less likely to meet compatibility problems.<br><br>
+
+### (II)The second method to install is to use pypi repositories.<br>
+&emsp;We have already upload hyperpose python library to pypi website so you can install it using pip, which gives you the last stable version.

 ```bash
 pip install hyperpose
 ```

-This will download and install all dependencies automatically.
+&emsp;This will download and install all dependencies automatically.

-Now after installing dependent libraries and hyperpose itself, let's check whether the installation successes.
+&emsp;Now after installing dependent libraries and hyperpose itself, let's check whether the installation successes.
 run following command in bash:
 ```bash
 # >>> now the configuration is done, check whether the GPU is avaliable.
@@ -77,33 +82,35 @@ python
 ```

 ## Extra configuration for exporting model
-The hypeprose python training library handles the whole pipelines for developing the pose estimation system, including training, evaluating and testing. Its goal is to produce a **.npz** file that contains the well-trained model weights.
+&emsp;The hypeprose python training library handles the whole pipelines for developing the pose estimation system, including training, evaluating and testing. Its goal is to produce a **.npz** file that contains the well-trained model weights.
+
+&emsp;For the training platform, the enviroment configuration above is engough. However, most inference engine only accept .pb format or .onnx format model, such as [TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html).
+
+&emsp;Thus, one need to convert the trained model loaded with **.npz** file weight to **.pb** format or **.onnx** format for further deployment, which need extra configuration below:<br>

-For the training platform, the enviroment configuration above is engough. However, most inference engine only accept .pb format or .onnx format model, such as [TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html).
+### (I)Convert to .pb format:<br>
+&emsp;To convert the model into .pb format, we use *@tf.function* to decorate the *infer* function of each model class, so we can use the *get_concrete_function* function from tensorflow to consctruct the frozen model computation graph and then save it in .pb format.

-Thus, one need to convert the trained model loaded with **.npz** file weight to **.pb** format or **.onnx** format for further deployment, which need extra configuration below:<br>
+&emsp;We already provide a script with cli to facilitate conversion, which located at [export_pb.py](https://github.com/tensorlayer/hyperpose/blob/master/export_pb.py). What we need here is only *tensorflow* library that we already installed.

-> **(I)Convert to .pb format:**<br>
-To convert the model into .pb format, we use *@tf.function* to decorate the *infer* function of each model class, so we can use the *get_concrete_function* function from tensorflow to consctruct the frozen model computation graph and then save it in .pb format.
+### (II)Convert to .onnx format:<br>
+&emsp;To convert the model in .onnx format, we need to first convert the model into .pb format, then convert it from .pb format into .onnx format. Two extra library are needed:

-We already provide a script with cli to facilitate conversion, which located at [export_pb.py](https://github.com/tensorlayer/hyperpose/blob/master/export_pb.py). What we need here is only *tensorflow* library that we already installed.
+* [tf2onnx](https://github.com/onnx/tensorflow-onnx):<br>
+*tf2onnx* is used to convert .pb format model into .onnx format model. more information see [here](https://github.com/onnx/tensorflow-onnx).<br>
+install tf2onnx by running:

-> **(II)Convert to .onnx format:**<br>
-To convert the model in .onnx format, we need to first convert the model into .pb format, then convert it from .pb format into .onnx format. Two extra library are needed:
-> **tf2onnx**:<br>
-*tf2onnx* is used to convert .pb format model into .onnx format model, is necessary here. details information see [reference](https://github.com/onnx/tensorflow-onnx).
-install tf2onnx by running:
 ```bash
 pip install -U tf2onnx
 ```

-> **graph_transforms**:<br>
-*graph_transform* is used to check the input and output node of the .pb file if one doesn't know. when convert .pb file into .onnx file using tf2onnx, one is required to provide the input node name and output node name of the computation graph stored in .pb file, so he may need to use *graph_transform* to inspect the .pb file to get node names.<br>
-build graph_transforms according to [tensorflow tools](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#using-the-graph-transform-tool)
+* [graph_transforms](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#using-the-graph-transform-tool):<br>
+*graph_transform* is used to check the input and output node of the .pb file if one doesn't know. when convert .pb file into .onnx file using tf2onnx, one is required to provide the input node name and output node name of the computation graph stored in .pb file, so he may need to use *graph_transform* to inspect the .pb file to get node names.<br>
+build graph_transforms according to [tensorflow tools](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#using-the-graph-transform-tool).

 ## Extra configuration for parallel training
-The hyperpose python training library use the High performance distributed machine learning framework **Kungfu** for parallel training.<br>
-Thus to use the parallel training functionality of hyperpose, please install [Kungfu](https://github.com/lsds/KungFu) according to the official instructon it provides.
+&emsp;The hyperpose python training library use the High performance distributed machine learning framework **Kungfu** for parallel training.<br>
+&emsp;Thus to use the parallel training functionality of hyperpose, please install [Kungfu](https://github.com/lsds/KungFu) according to the official instructon it provides.

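The installation check in the doc above is cut off at the hunk boundary; an equivalent quick check from bash might look like the following (illustrative commands, not the exact ones in the file):

```bash
# Illustrative check: confirm TensorFlow sees the GPU and hyperpose imports
# cleanly inside the activated conda environment.
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
python -c "import hyperpose; print(hyperpose.__file__)"
```
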
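For the .pb export described above, the repository ships export_pb.py; a generic TensorFlow 2 sketch of the same @tf.function / get_concrete_function freezing pattern, using a toy stand-in model rather than hyperpose's actual classes, could look like this:

```python
# Sketch of the freezing pattern described above, with a toy model standing in
# for a hyperpose model class (the real export is done by export_pb.py).
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

class TinyModel(tf.Module):
    def __init__(self):
        super().__init__()
        self.kernel = tf.Variable(tf.random.normal([3, 3, 3, 8]), name="kernel")

    @tf.function  # decorating infer makes it traceable into a graph
    def infer(self, x):
        return tf.nn.conv2d(x, self.kernel, strides=1, padding="SAME")

model = TinyModel()
# Trace a concrete function for a fixed input signature (NHWC image batch).
concrete = model.infer.get_concrete_function(
    tf.TensorSpec([1, 368, 432, 3], tf.float32)
)
# Inline variables as constants and write the frozen graph to a .pb file.
frozen = convert_variables_to_constants_v2(concrete)
tf.io.write_graph(frozen.graph, logdir="exported", name="model.pb", as_text=False)
```
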
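The doc installs tf2onnx but the conversion command itself is outside this diff; a typical invocation on a frozen graph might look roughly like the following, where the file names, node names, and opset are placeholders (the real node names can be read from the .pb file with the graph_transforms tooling mentioned above):

```bash
# Illustrative only: convert a frozen .pb graph to ONNX with the tf2onnx CLI.
# Replace the graph file and the input/output node names with the ones
# reported for your exported model.
python -m tf2onnx.convert \
    --graphdef exported/model.pb \
    --inputs x:0 \
    --outputs Identity:0 \
    --opset 11 \
    --output model.onnx
```
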
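The parallel-training section only points to KungFu's own install instructions; once installed, training scripts are usually launched through its kungfu-run wrapper, roughly as below (the script name and worker count are placeholders, not commands taken from hyperpose's docs):

```bash
# Illustrative only: run a training script on 4 workers through KungFu's
# launcher after installing KungFu per its official instructions.
CUDA_VISIBLE_DEVICES=0,1,2,3 kungfu-run -np 4 python train.py
```
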

docs/markdown/tutorial/training.md

Lines changed: 2 additions & 1 deletion
@@ -1,10 +1,11 @@
 # Tutorial for Training Library
 Up to now, Hyperpose provides:
-* 4 types of preset model architectures:
+* 5 types of preset model architectures:
 > Openpose
 > LightweightOpenpos
 > Poseproposal
 > MobilenetThinOpenpose
+> Pifpaf
 * 10 types of common model backbone for backbone replacement:
 > MobilenetV1, MobilenetV2
 > Vggtiny, Vgg16, Vgg19

requirements.txt

Lines changed: 1 addition & 1 deletion
@@ -4,5 +4,5 @@ easydict>=1.9,<=1.10
 opencv-python>=3.4,<3.5
 tensorflow==2.3.1
 tensorlayer==2.2.3
-pycocotools # must be installed after cython and numpy are installed
+pycocotools==2.0.0 # must be installed after cython and numpy are installed
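
The comment retained in requirements.txt notes that pycocotools must be installed after cython and numpy; when installing by hand instead of via requirements.txt, that ordering might look like this (a sketch consistent with the versions used in the installation doc):

```bash
# pycocotools builds against cython/numpy at install time, so install them first.
pip install cython numpy==1.16.4
pip install pycocotools==2.0.0
```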
