Commit b0f9884

inference doc fix grammer (#11718)
1 parent a1f1a5e commit b0f9884

1 file changed: 14 additions & 13 deletions
@@ -1,10 +1,10 @@
 # Inference High-level APIs
-This document describes the high-level inference APIs one can use to easily deploy a Paddle model for an application.
+This document describes the high-level inference APIs; one can use them to quickly deploy a Paddle model in an application.
 
-The APIs are described in `paddle_inference_api.h`, just one header file, and two libaries `libpaddle_fluid.so` and `libpaddle_fluid_api.so` are needed.
+The APIs are declared in a single header file, `paddle_inference_api.h`; only two libraries, `libpaddle_fluid.so` and `libpaddle_fluid_api.so`, are needed for a deployment.
 
 ## PaddleTensor
-We provide the `PaddleTensor` data structure is to give a general tensor interface.
+We provide the `PaddleTensor` data structure to give a general tensor interface.
 
 The definition is
 
@@ -17,18 +17,19 @@ struct PaddleTensor {
 };
 ```
 
-The data is stored in a continuous memory `PaddleBuf`, and tensor's data type is specified by a `PaddleDType`.
-The `name` field is used to specify the name of input variable,
-that is important when there are multiple inputs and need to distiuish which variable to set.
+The data is stored in a contiguous block of memory described by `PaddleBuf`, and the tensor's data type is specified by a `PaddleDType`.
+The `name` field specifies the name of an input variable,
+which is important when there are multiple inputs and the framework needs to distinguish which variable to set.
 
 ## engine
-The inference APIs has two different underlying implementation, currently there are two valid engines:
+The inference APIs have two different underlying engines:
 
-- the native engine, which is consists of the native operators and framework,
-- the Anakin engine, which is a Anakin library embeded.
+- the native engine, which consists of the native operators and the framework,
+- the Anakin engine, which has an Anakin library embedded.
 
-The native engine takes a native Paddle model as input, and supports any model that trained by Paddle,
-but the Anakin engine can only take the Anakin model as input(user need to manully transform the format first) and currently not all Paddle models are supported.
+The native engine takes a native Paddle model as input and supports any model trained by Paddle;
+the Anakin engine is faster for some models,
+but it can only take an Anakin model as input (the user needs to transform the format manually first), and currently not all Paddle models are supported.
 
 ```c++
 enum class PaddleEngineKind {
@@ -38,10 +39,10 @@ enum class PaddleEngineKind {
 ```
 
 ## PaddlePredictor and how to create one
-The main interface is `PaddlePredictor`, there are following methods
+The main interface is `PaddlePredictor`; it has the following methods:
 
 - `bool Run(const std::vector<PaddleTensor>& inputs, std::vector<PaddleTensor>* output_data)`
-  - take inputs and output `output_data`
-- `Clone` to clone a predictor from an existing one, with model parameter shared.
+  - takes the inputs and writes the results to `output_data`.
+- `Clone`, to clone a predictor from an existing one, with the model parameters shared.
 
-There is a factory method to help create a predictor, and the user takes the ownership of this object.
+There is a factory method to help create a predictor, and the user takes ownership of this object.
@@ -51,9 +52,9 @@ template <typename ConfigT, PaddleEngineKind engine = PaddleEngineKind::kNative>
 std::unique_ptr<PaddlePredictor> CreatePaddlePredictor(const ConfigT& config);
 ```
 
-By specifying the engine kind and config, one can get an specific implementation.
+By specifying the engine kind and the config, one can get a specific implementation.
 
 ## Reference
 
 - [paddle_inference_api.h](./paddle_inference_api.h)
-- [demos](./demo)
+- [some demos](./demo)
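
To make the `PaddleTensor` usage described in the diff concrete, here is a minimal, self-contained sketch of how a caller might pack input data before calling `Run`. The `PaddleDType`, `PaddleBuf`, and `PaddleTensor` definitions below are simplified stand-ins that mirror only the fields the document mentions (the real definitions live in `paddle_inference_api.h` and may differ), and `MakeFloatTensor` is a hypothetical helper, not part of the API:

```cpp
#include <cstring>
#include <string>
#include <vector>

// Simplified stand-ins mirroring the fields the document describes;
// NOT the real definitions from paddle_inference_api.h.
enum class PaddleDType { FLOAT32, INT64 };

struct PaddleBuf {
  std::vector<char> data;  // contiguous memory holding the tensor's values
};

struct PaddleTensor {
  std::string name;        // names the input variable to set
  std::vector<int> shape;
  PaddleBuf data;
  PaddleDType dtype;
};

// Hypothetical helper: pack a float vector into a PaddleTensor,
// as a caller would before passing inputs to Run().
PaddleTensor MakeFloatTensor(const std::string& name,
                             const std::vector<int>& shape,
                             const std::vector<float>& values) {
  PaddleTensor t;
  t.name = name;
  t.shape = shape;
  t.dtype = PaddleDType::FLOAT32;
  t.data.data.resize(values.size() * sizeof(float));
  std::memcpy(t.data.data.data(), values.data(), t.data.data.size());
  return t;
}
```

The `name` field matters exactly as the doc says: with several inputs, it tells the framework which variable each tensor feeds.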

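
The factory-method and ownership pattern the document describes (a `std::unique_ptr` returned by `CreatePaddlePredictor`, `Run` filling `output_data`, `Clone` sharing parameters) can be sketched with a mock. Everything named `Fake*` below is hypothetical, standing in for the real classes from `paddle_inference_api.h`; the "model" is an identity transform just so the sketch runs:

```cpp
#include <memory>
#include <vector>

// Hypothetical mock illustrating the ownership pattern only;
// the real classes come from paddle_inference_api.h.
struct FakeTensor {
  std::vector<float> data;
};

class FakePredictor {
 public:
  // Mirrors `bool Run(inputs, output_data)`: consumes inputs, fills outputs.
  bool Run(const std::vector<FakeTensor>& inputs,
           std::vector<FakeTensor>* output_data) {
    output_data->clear();
    for (const auto& in : inputs) {
      output_data->push_back(in);  // identity "model" for the demo
    }
    return true;
  }

  // Mirrors `Clone`: a new predictor sharing model parameters (none here).
  std::unique_ptr<FakePredictor> Clone() {
    return std::make_unique<FakePredictor>();
  }
};

// Factory method: the caller takes ownership of the returned predictor,
// just as with CreatePaddlePredictor<ConfigT, engine>(config).
std::unique_ptr<FakePredictor> CreateFakePredictor() {
  return std::make_unique<FakePredictor>();
}
```

The `unique_ptr` return type is what "the user takes ownership of this object" means in practice: the predictor is destroyed when the caller's pointer goes out of scope, while `Clone` lets several predictors share one set of model parameters.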