# Fluid Inference User Guide

## Table of Contents

- Python Inference API
- Compiling the Fluid Inference library
- Inference C++ API
- Inference examples
- Inference computation optimization

## Python Inference API **[Work in Progress]**

- Saving an inference model ([link](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/io.py#L295))

```python
def save_inference_model(dirname,
                         feeded_var_names,
                         target_vars,
                         executor,
                         main_program=None,
                         model_filename=None,
                         params_filename=None):
```

After saving, the target directory contains the serialized program and the saved parameters:

```bash
$ ls
__model__  __params__
```
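
For orientation, here is a minimal sketch of how `save_inference_model` might be called. The toy network, the variable names (`image`, `predict`) and the output directory are hypothetical; `params_filename` is set so that all parameters are packed into a single `__params__` file, matching the listing above.

```python
import paddle.fluid as fluid

# Hypothetical toy network so that the example is self-contained.
image = fluid.layers.data(name="image", shape=[784], dtype="float32")
predict = fluid.layers.fc(input=image, size=10, act="softmax")

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())  # initialize the parameters

# Prune the network down to what inference needs and save the program
# (__model__) together with the packed parameters (__params__).
fluid.io.save_inference_model(dirname="./infer_model",
                              feeded_var_names=["image"],
                              target_vars=[predict],
                              executor=exe,
                              params_filename="__params__")
```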

- Loading an inference model ([link](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/io.py#L380))

```python
def load_inference_model(dirname,
                         executor,
                         model_filename=None,
                         params_filename=None):
```
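
A matching usage sketch; the model directory, feed handling, and input shape are hypothetical, and `params_filename` assumes the parameters were packed into a single `__params__` file as in the saving example above.

```python
import numpy
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)

# Returns the pruned inference program plus the names of its feed
# variables and its fetch targets.
[inference_program, feed_names, fetch_targets] = fluid.io.load_inference_model(
    dirname="./infer_model", executor=exe, params_filename="__params__")

data = numpy.random.random((1, 784)).astype("float32")
results = exe.run(inference_program,
                  feed={feed_names[0]: data},
                  fetch_list=fetch_targets)
```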

## Linking the Fluid Inference Library

- Example project ([link](https://github.com/luotao1/fluid_inference_example.git))

- GCC configuration

## C++ Inference API

- Inference workflow ([link](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/inference/tests/test_helper.h#L91))
  - 1. Initialize the device

- **Do not create and destroy variables on every execution ([PR](https://github.com/PaddlePaddle/Paddle/pull/9301))**
  - Run the `inference_program`: create the variables once, then run as many times as needed
  - Within the same `Scope`, variables with the same name share the same memory, which can easily lead to unexpected errors
- **Do not create Ops on every execution ([PR](https://github.com/PaddlePaddle/Paddle/pull/9630))**
  - Run the `inference_program`: prepare the execution context `ctx` once, then run as many times as needed
  - Once the `inference_program` is modified, the `ctx` must be re-created
- **Sharing Parameters across multiple threads ([link](https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/inference/tests/test_multi_thread_helper.h))**
  - Main thread
    - 1. Initialize the device
    - 2. Define the `place`, `executor`, and `scope`
- CPUPlace: the CPU device
- CUDAPlace: the CUDA GPU device
- Neural network representation:
  - [Program](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/fluid/design/concepts/program.md)

For a detailed introduction, please refer to the [**Paddle Fluid Developer's Guide**](https://github.com/lcy-seso/learning_notes/blob/master/Fluid/developer's_guid_for_Fluid/Developer's_Guide_to_Paddle_Fluid.md); a short Python sketch of these concepts follows below.
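
The same `Place` and `Program` concepts are exposed in the Python API as well. Below is a small illustrative sketch of how they fit together; the layer choices and variable names are hypothetical and not taken from this guide.

```python
import paddle.fluid as fluid

# Place selects the device the executor runs on.
place = fluid.CPUPlace()        # CPU device
# place = fluid.CUDAPlace(0)    # CUDA GPU device 0

# A network is described by a Program; program_guard records the layers
# defined below into an explicit main/startup Program pair.
main_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(main_program, startup_program):
    x = fluid.layers.data(name="x", shape=[4], dtype="float32")
    y = fluid.layers.fc(input=x, size=2)

exe = fluid.Executor(place)
exe.run(startup_program)  # initialize the parameters of the Program
```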

## Inference Computation Optimization

- Use the Python inference optimization tool ([inference_transpiler](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/inference_transpiler.py))

```python
class InferenceTranspiler:
    def transpile(self, program, place, scope=None):
        ...
```
- Using `InferenceTranspiler` modifies the values of the parameters, so make sure the parameters of the `program` are in the `scope`.
- Supported optimizations
  - Fusing the computation of the batch_norm op
- Usage example ([link](https://github.com/Xreki/Xreki.github.io/blob/master/fluid/inference/inference_transpiler.py))

```python
import paddle.fluid as fluid
from paddle.fluid.inference_transpiler import InferenceTranspiler

place = fluid.CPUPlace()
# NOTE: Applying the inference transpiler will change the inference_program.
t = InferenceTranspiler()
t.transpile(inference_program, place)
```

## Memory Usage Optimization

- Use the Python memory optimization tool ([memory_optimization_transpiler](https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/memory_optimization_transpiler.py))

```python
fluid.memory_optimize(inference_program)
```
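
As a placement note: `memory_optimize` rewrites the program in place, so it is applied to the `inference_program` (for example, the one returned by `load_inference_model`) before the first run. A minimal sketch, reusing the variables from the hypothetical loading example earlier in this guide:

```python
import paddle.fluid as fluid

# `exe`, `inference_program`, `feed_names`, `fetch_targets` and `data`
# as in the hypothetical load_inference_model sketch above.
fluid.memory_optimize(inference_program)  # reuse memory of variables whose lifetimes do not overlap

results = exe.run(inference_program,
                  feed={feed_names[0]: data},
                  fetch_list=fetch_targets)
```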