Commit d575872 (1 parent: 0062f49)

Fix labelling


chapter_model_deployment/Conversion_to_Inference_Model_and_Model_Optimization.md

5 additions, 3 deletions
```diff
@@ -57,7 +57,10 @@ compilation is complete. However, some optimization operations can only
 be performed in their entirety during the deployment phase.
 
 ![Layered computer storagearchitecture](../img/ch08/ch09-storage.png)
-:label:`ch-deploy/fusion-storage}## Operator Fusion {#sec:ch-deploy/kernel-fusion`
+:label:`ch-deploy/fusion-storage`
+
+## Operator Fusion
+:label:`ch-deploy/kernel-fusion`
 
 Operator fusion involves combining multiple operators in a deep neural
 network (DNN) model into a new operator based on certain rules, reducing
@@ -148,8 +151,7 @@ using MindSpore Lite. We ran the sample network and mobilenet-v2 network
 for inference in dual threads on a Huawei Mate 30 smartphone to compare
 the time of running 3,000 inference epochs before and after the fusion.
 As shown in Table
-[1](#tab:ch09/ch09-conv-bn-fusion){reference-type="ref"
-reference="tab:ch09/ch09-conv-bn-fusion"}, the inference performance of
+`ch09-conv-bn-fusion`, the inference performance of
 the sample network and mobilenet-v2 network is improved considerably
 after the fusion --- by 8.5% and 11.7% respectively. Such improvements
 are achieved without bringing side effects and without requiring
```
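The conv-BN fusion benchmarked in the second hunk works by folding each BatchNorm layer's parameters into the preceding convolution's weights and bias at model-conversion time, so inference executes one operator instead of two. A minimal NumPy sketch of the folding arithmetic (function and variable names here are illustrative, not MindSpore Lite's actual API):

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold per-channel BatchNorm parameters into conv weights and bias.

    W: conv weights, shape (out_channels, in_channels, kH, kW)
    b: conv bias, shape (out_channels,)
    gamma, beta, mean, var: BN parameters, each shape (out_channels,)
    """
    scale = gamma / np.sqrt(var + eps)           # per-output-channel scale
    W_fused = W * scale[:, None, None, None]     # scale each output filter
    b_fused = (b - mean) * scale + beta          # fold the shift into the bias
    return W_fused, b_fused
```

Because `conv(x, W_fused) + b_fused` is mathematically identical to `bn(conv(x, W) + b)`, the fusion changes no outputs, which is consistent with the text's claim that the 8.5% and 11.7% speedups come "without bringing side effects".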
