
Commit 5c56f96

shoumikhin authored and facebook-github-bot committed
Fix typos in docs. (#5613)
Summary: Pull Request resolved: pytorch/executorch#5613
Reviewed By: kirklandsign
Differential Revision: D63355422
fbshipit-source-id: 994db615dc7b9f34d2f0c13cb36fbcdefe4172a7
1 parent 99ee547 commit 5c56f96

File tree: 2 files changed, +29 −29 lines

2 files changed

+29
-29
lines changed

docs/source/extension-module.md

Lines changed: 3 additions & 3 deletions
@@ -60,7 +60,7 @@ const auto error = module.load_method("forward");
 
 assert(module.is_method_loaded("forward"));
 ```
-Note: the `Program` is loaded automatically before any `Method` is loaded. Subsequent attemps to load them have no effect if one of the previous attemps was successful.
+Note: the `Program` is loaded automatically before any `Method` is loaded. Subsequent attempts to load them have no effect if one of the previous attempts was successful.
 
 You can also force-load the "forward" method with a convenience syntax:
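
A minimal sketch of the idempotent loading the corrected note describes, assuming the `Module` API documented in extension-module.md and a placeholder model path (the force-load snippet itself is elided by the hunk):

```cpp
#include <executorch/extension/module/module.h>

using ::executorch::extension::Module;

Module module("/path/to/model.pte"); // placeholder path

// Loading a method implicitly loads the underlying Program first.
const auto error = module.load_method("forward");
assert(module.is_method_loaded("forward"));

// A repeated load is a no-op once a previous attempt succeeded.
const auto error_again = module.load_method("forward");
```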

@@ -72,7 +72,7 @@ assert(module.is_method_loaded("forward"));
 
 ### Querying for Metadata
 
-Get a set of method names that a Module contains udsing the `method_names()` function:
+Get a set of method names that a Module contains using the `method_names()` function:
 
 ```cpp
 const auto method_names = module.method_names();
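
A hedged usage sketch for the result, assuming `method_names()` returns a `Result` wrapping a set of name strings (the hunk truncates before showing this):

```cpp
const auto method_names = module.method_names();

if (method_names.ok()) {
  // Check whether the program exposes a "forward" method.
  const bool has_forward = method_names->count("forward") > 0;
}
```
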
@@ -98,7 +98,7 @@ if (method_meta.ok()) {
 if (input_meta.ok()) {
 assert(input_meta->scalar_type() == ScalarType::Float);
 }
-const auto output_meta = meta->output_tensor_meta(0);
+const auto output_meta = method_meta->output_tensor_meta(0);
 
 if (output_meta.ok()) {
 assert(output_meta->sizes().size() == 1);
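
Reassembled from the hunk's own calls, the corrected line fits into a metadata-query flow like the following; the `module.method_meta("forward")` accessor and the indentation are assumptions:

```cpp
const auto method_meta = module.method_meta("forward");

if (method_meta.ok()) {
  const auto input_meta = method_meta->input_tensor_meta(0);
  if (input_meta.ok()) {
    assert(input_meta->scalar_type() == ScalarType::Float);
  }
  // The fix: query output metadata via method_meta, not a stale `meta`.
  const auto output_meta = method_meta->output_tensor_meta(0);
  if (output_meta.ok()) {
    assert(output_meta->sizes().size() == 1);
  }
}
```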

docs/source/extension-tensor.md

Lines changed: 26 additions & 26 deletions
@@ -2,7 +2,7 @@
 
 **Author:** [Anthony Shoumikhin](https://github.com/shoumikhin)
 
-Tensors are fundamental data structures in ExecuTorch, representing multi-dimensional arrays used in computations for neural networks and other numerical algorithms. In ExecuTorch, the [Tensor](https://github.com/pytorch/executorch/blob/main/runtime/core/portable_type/tensor.h) class doesn’t own its metadata (sizes, strides, dim_order) or data, keeping the runtime lightweight. Users are responsible for supplying all these memory buffers and ensuring that the metadata and data outlive the `Tensor` instance. While this design is lightweight and flexible, especially for tiny embedded systems, it places a significant burden on the user. However, if your environment requires minimal dynamic allocations, a small binary footprint, or limited C++ standard library support, you’ll need to accept that trade-off and stick with the regular `Tensor` type.
+Tensors are fundamental data structures in ExecuTorch, representing multi-dimensional arrays used in computations for neural networks and other numerical algorithms. In ExecuTorch, the [Tensor](https://github.com/pytorch/executorch/blob/main/runtime/core/portable_type/tensor.h) class doesn’t own its metadata (sizes, strides, dim_order) or data, keeping the runtime lightweight. Users are responsible for supplying all these memory buffers and ensuring that the metadata and data outlive the `Tensor` instance. While this design is lightweight and flexible, especially for tiny embedded systems, it places a significant burden on the user. If your environment requires minimal dynamic allocations, a small binary footprint, or limited C++ standard library support, you’ll need to accept that trade-off and stick with the regular `Tensor` type.
 
 Imagine you’re working with a [`Module`](extension-module.md) interface, and you need to pass a `Tensor` to the `forward()` method. You would need to declare and maintain at least the sizes array and data separately, sometimes the strides too, often leading to the following pattern:
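
The "following pattern" is cut off by the hunk; a hedged reconstruction of the kind of boilerplate the paragraph means (array contents, sizes, and the exact `TensorImpl` constructor shape are assumptions, not quoted from the docs):

```cpp
// The caller owns every buffer and must keep all of them alive for as
// long as the Tensor is in use.
float data[] = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f};
TensorImpl::SizesType sizes[] = {2, 3};
TensorImpl::DimOrderType dim_order[] = {0, 1};
TensorImpl::StridesType strides[] = {3, 1};
TensorImpl tensor_impl(
    ScalarType::Float,
    2, // number of dimensions
    sizes,
    data,
    dim_order,
    strides);
module.forward(Tensor(&tensor_impl));
```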

@@ -33,7 +33,7 @@ You must ensure `sizes`, `dim_order`, `strides`, and `data` stay valid. This mak
 
 To alleviate these issues, ExecuTorch provides `TensorPtr` and `TensorImplPtr` via the new [Tensor Extension](https://github.com/pytorch/executorch/tree/main/extension/tensor) that manage the lifecycle of tensors and their implementations. These are essentially smart pointers (`std::unique_ptr<Tensor>` and `std::shared_ptr<TensorImpl>`, respectively) that handle the memory management of both the tensor's data and its dynamic metadata.
 
-Now, users no longer need to worry about metadata lifetimes separately. Data ownership is determined based on whether the it is passed by pointer or moved into the `TensorPtr` as an `std::vector`. Everything is bundled in one place and managed automatically, enabling you to focus on actual computations.
+Now, users no longer need to worry about metadata lifetimes separately. Data ownership is determined based on whether it is passed by pointer or moved into the `TensorPtr` as an `std::vector`. Everything is bundled in one place and managed automatically, enabling you to focus on actual computations.
 
 Here’s how you can use it:
 
@@ -50,24 +50,24 @@ auto tensor = make_tensor_ptr(
 module.forward(tensor);
 ```
 
-The data is now owned by the tensor instance because it's provided as a vector. To create a non-owning `TensorPtr` just pass the data by pointer. The `type` is deduced automatically from the data vector (`float`). `strides` and `dim_order` are computed automatically to the default values based on the `sizes` if not specified explicitly as extra arguments.
+The data is now owned by the tensor instance because it's provided as a vector. To create a non-owning `TensorPtr` just pass the data by pointer. The `type` is deduced automatically based on the data vector (`float`). `strides` and `dim_order` are computed automatically to the default values based on the `sizes` if not specified explicitly as additional arguments.
 
-`EValue` in `Module::forward()` accepts `TensorPtr` directly, ensuring seamless integration. `EValue` can now be constructed implicitly with a smart pointer to any type that it can hold, so `TensorPtr` gets dereferenced implicitly and `EValue` holding a `Tensor` that the `TensorPtr` pointed at is passed to the `forward()`.
+`EValue` in `Module::forward()` accepts `TensorPtr` directly, ensuring seamless integration. `EValue` can now be constructed implicitly with a smart pointer to any type that it can hold. This allows `TensorPtr` to be dereferenced implicitly when passed to `forward()`, and `EValue` will hold the `Tensor` that the `TensorPtr` points to.
 
 ## API Overview
 
 The new API revolves around two main smart pointers:
 
 - `TensorPtr`: `std::unique_ptr` managing a `Tensor` object. Since each `Tensor` instance is unique, `TensorPtr` ensures exclusive ownership.
-- `TensorImplPtr`: `std::shared_ptr` managing a `TensorImpl` object. Multiple `Tensor` instances can share the same `TensorImpl`, so `TensorImplPtr` uses shared ownership.
+- `TensorImplPtr`: `std::shared_ptr` managing a `TensorImpl` object. Multiple `Tensor` instances can share the same `TensorImpl`, so `TensorImplPtr` ensures shared ownership.
 
 ### Creating Tensors
 
 There are several ways to create a `TensorPtr`.
 
-### Creating Scalar Tensors
+#### Creating Scalar Tensors
 
-You can create a scalar tensor, i.e. a tensor with zero dimensions or with one of sizes being zero.
+You can create a scalar tensor, i.e. a tensor with zero dimensions or with one of the sizes being zero.
 
 *Providing A Single Data Value*
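
A hedged illustration of the `EValue` paragraph above, combining the creation call from this hunk's context with the implicit conversion (the exact overloads are assumed):

```cpp
auto tensor = make_tensor_ptr(
    {2, 3},                                // sizes
    {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f}); // data

// TensorPtr converts implicitly; the EValue holds the pointed-to Tensor.
const auto result = module.forward(tensor);
```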

@@ -85,7 +85,7 @@ auto tensor = make_tensor_ptr(42, ScalarType::Float);
 
 Now the integer 42 will be cast to float and the tensor will contain a single value 42 of type float.
 
-#### Owning a Data Vector
+#### Owning Data from a Vector
 
 When you provide sizes and data vectors, `TensorPtr` takes ownership of both the data and the sizes.
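
A minimal sketch of that owning form (values illustrative); the scalar type is deduced as `Float` from the data vector:

```cpp
auto tensor = make_tensor_ptr(
    {2, 3},                                // sizes, owned by the tensor
    {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f}); // data, moved in and owned
```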

@@ -101,19 +101,19 @@ The type is deduced automatically as `ScalarType::Float` from the data vector.
 
 *Providing Data Vector with a Type*
 
-If you provide data of one type but specify a different scalar type, the data will be cast to the specified type.
+If you provide data of one type but specify a different scalar type, the data will be cast to the given type.
 
 ```cpp
 auto tensor = make_tensor_ptr(
 {1, 2, 3, 4, 5, 6}, // data (int)
 ScalarType::Double); // double scalar type
 ```
 
-In this example, even though the data vector contains integers, we specify the scalar type as `Double`. The integers are cast to doubles, and the new data vector is owned by the `TensorPtr`. The `sizes` argument is skipped in this example, so the input data vector's size is used. Note that we forbid the opposite cast, when a floating point type casts to an integral type, because that loses precision. Similarly, casting other types to `Bool` isn't allowed.
+In this example, even though the data vector contains integers, we specify the scalar type as `Double`. The integers are cast to double, and the new data vector is owned by the `TensorPtr`. Since the `sizes` argument is skipped in this example, the tensor is one-dimensional with a size equal to the length of the data vector. Note that the reverse cast, from a floating-point type to an integral type, is not allowed because that loses precision. Similarly, casting other types to `Bool` is disallowed.
 
 *Providing Data Vector as `std::vector<uint8_t>`*
 
-You can also provide raw data as a `std::vector<uint8_t>`, specifying the sizes and scalar type. The data will be reinterpreted according to the provided type.
+You can also provide raw data in the form of a `std::vector<uint8_t>`, specifying the sizes and scalar type. The data will be reinterpreted according to the provided type.
 
 ```cpp
 std::vector<uint8_t> data = /* raw data */;
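
The hunk cuts off before the full call; a hypothetical completion (the sizes, element type, and `std::move` are illustrative, not quoted from the docs):

```cpp
std::vector<uint8_t> data = /* raw data */;
auto tensor = make_tensor_ptr(
    {2, 3},             // sizes
    std::move(data),    // raw bytes, owned by the tensor
    ScalarType::Float); // bytes reinterpreted as 6 floats
```
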
@@ -125,7 +125,7 @@ auto tensor = make_tensor_ptr(
 
 The `data` vector must be large enough to accommodate all the elements according to the provided sizes and scalar type.
 
-#### Non-Owning a Raw Data Pointer
+#### Non-Owning Data from Raw Pointer
 
 You can create a `TensorPtr` that references existing data without taking ownership.
 
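A hedged sketch of the non-owning form described above (the pointer overload is inferred from the prose; buffer and sizes are illustrative):

```cpp
float data[6] = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f};

// The tensor only references `data`; you must keep it valid while in use.
auto tensor = make_tensor_ptr({2, 3}, data);
```
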
@@ -143,7 +143,7 @@ The `TensorPtr` does not own the data, you must ensure the `data` remains valid.
 
 *Providing Raw Data with Custom Deleter*
 
-If you want `TensorPtr` to manage the lifetime of the data, you can provide a custom deleter.
+If you want the `TensorPtr` to manage the lifetime of the data, you can provide a custom deleter.
 
 ```cpp
 auto* data = new double[6]{1.0, 2.0, 3.0, 4.0, 5.0, 6.0};
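
The snippet is truncated by the hunk; a hypothetical continuation showing where the deleter goes (the overload signature is assumed):

```cpp
auto* data = new double[6]{1.0, 2.0, 3.0, 4.0, 5.0, 6.0};
auto tensor = make_tensor_ptr(
    {2, 3},
    data,
    [](void* ptr) { delete[] static_cast<double*>(ptr); });
// The deleter runs once the tensor (and any shared references) go away.
```
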
@@ -159,7 +159,7 @@ The `TensorPtr` will call the custom deleter when it is destroyed, i.e. when the
 
 #### Sharing Existing Tensor
 
-You can create a `TensorPtr` by wrapping an existing `TensorImplPtr`, and the latter can be created with the same collection of APIs as `TensorPtr`. Any changes made to `TensorImplPtr` or any `TensorPtr` sharing the same `TensorImplPtr` get reflected in for all.
+You can create a `TensorPtr` by wrapping an existing `TensorImplPtr`, and the latter can be created with the same collection of APIs as `TensorPtr`. Any changes made to `TensorImplPtr` or any `TensorPtr` sharing the same `TensorImplPtr` are reflected across all.
 
 *Sharing Existing TensorImplPtr*
 
@@ -171,7 +171,7 @@ auto tensor = make_tensor_ptr(tensor_impl);
 auto tensor_copy = make_tensor_ptr(tensor_impl);
 ```
 
-Both `tensor` and `tensor_copy` share the underlying `TensorImplPtr`, reflecting changes in data but not in metadata.
+Both `tensor` and `tensor_copy` share the underlying `TensorImplPtr`, reflecting changes to data but not to metadata.
 
 Also, you can create a new `TensorPtr` that shares the same `TensorImplPtr` as an existing `TensorPtr`.
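
The follow-up snippet is elided here; a hypothetical sketch of sharing via an existing `TensorPtr` (the exact overload is an assumption):

```cpp
auto tensor = make_tensor_ptr(tensor_impl);

// A second TensorPtr sharing the same underlying TensorImplPtr.
auto tensor_copy = make_tensor_ptr(tensor);
```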

@@ -192,7 +192,7 @@ Tensor original_tensor = /* some existing tensor */;
 auto tensor = make_tensor_ptr(original_tensor);
 ```
 
-Now the newly created `TensorPtr` references the same data as the original tensor, but has its own metadata copy, so can interpret or "view" the data differently, but any modifications to the data will be reflected for the original `Tensor` too.
+Now the newly created `TensorPtr` references the same data as the original tensor, but has its own metadata copy, so it can interpret or "view" the data differently, but any modifications to the data will be reflected in the original `Tensor` as well.
 
 ### Cloning Tensors

@@ -211,15 +211,15 @@ auto original_tensor = make_tensor_ptr();
 auto tensor = clone_tensor_ptr(original_tensor);
 ```
 
-Note that regardless of whether the original `TensorPtr` owns the data or not, the newly created `TensorPtr` will own a copy of the data.
+Note that, regardless of whether the original `TensorPtr` owns the data or not, the newly created `TensorPtr` will own a copy of the data.
 
 ### Resizing Tensors
 
 The `TensorShapeDynamism` enum specifies the mutability of a tensor's shape:
 
 - `STATIC`: The tensor's shape cannot be changed.
-- `DYNAMIC_BOUND`: The tensor's shape can be changed, but can never contain more elements than it had at creation based on the initial sizes.
-- `DYNAMIC`: The tensor's shape can be changed arbitrarily. Note that currently `DYNAMIC` is an alias of `DYNAMIC_BOUND`.
+- `DYNAMIC_BOUND`: The tensor's shape can be changed but cannot contain more elements than it originally had at creation based on the initial sizes.
+- `DYNAMIC`: The tensor's shape can be changed arbitrarily. Note that, currently, `DYNAMIC` is an alias for `DYNAMIC_BOUND`.
 
 When resizing a tensor, you must respect its dynamism setting. Resizing is only allowed for tensors with `DYNAMIC` or `DYNAMIC_BOUND` shapes, and you cannot resize `DYNAMIC_BOUND` tensor to contain more elements than it had initially.
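
The creation call that the next hunk's resize comments refer to is partly elided; a hedged sketch of creating a resizable tensor (the position of the dynamism argument is an assumption):

```cpp
auto tensor = make_tensor_ptr(
    {2, 3},                               // sizes
    {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f}, // data
    TensorShapeDynamism::DYNAMIC_BOUND);  // resizable within 6 elements
```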

@@ -233,19 +233,19 @@ auto tensor = make_tensor_ptr(
 // Number of elements: 6
 
 resize_tensor_ptr(tensor, {2, 2});
-// The tensor's sizes are now {2, 2}
+// The tensor sizes are now {2, 2}
 // Number of elements is 4 < initial 6
 
 resize_tensor_ptr(tensor, {1, 3});
-// The tensor's sizes are now {1, 3}
+// The tensor sizes are now {1, 3}
 // Number of elements is 3 < initial 6
 
 resize_tensor_ptr(tensor, {3, 2});
-// The tensor's sizes are now {3, 2}
+// The tensor sizes are now {3, 2}
 // Number of elements is 6 == initial 6
 
 resize_tensor_ptr(tensor, {6, 1});
-// The tensor's sizes are now {6, 1}
+// The tensor sizes are now {6, 1}
 // Number of elements is 6 == initial 6
 ```
 
@@ -378,7 +378,7 @@ This also applies when using functions like `set_input()` or `set_output()` that
 
 ## Interoperability with ATen
 
-If your code is compiled with the preprocessor flag `USE_ATEN_LIB` turned on, all the `TensorPtr` APIs will use `at::` APIs under the hood. E.g. `TensorPtr` becomes a `std::unique_ptr<at::Tensor>` and `TensorImplPtr` becomes `c10::intrusive_ptr<at::TensorImpl>`. This allows for seamless integration with [PyTorch ATen](https://pytorch.org/cppdocs) library.
+If your code is compiled with the preprocessor flag `USE_ATEN_LIB` enabled, all the `TensorPtr` APIs will use `at::` APIs under the hood. E.g. `TensorPtr` becomes a `std::unique_ptr<at::Tensor>` and `TensorImplPtr` becomes `c10::intrusive_ptr<at::TensorImpl>`. This allows for seamless integration with [PyTorch ATen](https://pytorch.org/cppdocs) library.
 
 ### API Equivalence Table
 
@@ -421,6 +421,6 @@ Here's a table matching `TensorPtr` creation functions with their corresponding
 
 ## Conclusion
 
-The `TensorPtr` and `TensorImplPtr` in ExecuTorch simplifies tensor memory management by bundling the data and dynamic metadata into smart pointers. This design eliminates the need for users to manage multiple pieces of data and ensures safer and more maintainable code.
+The `TensorPtr` and `TensorImplPtr` in ExecuTorch simplify tensor memory management by bundling the data and dynamic metadata into smart pointers. This design eliminates the need for users to manage multiple pieces of data and ensures safer and more maintainable code.
 
-By providing interfaces similar to PyTorch's ATen library, ExecuTorch makes it easier for developers to adopt the new API without a steep learning curve.
+By providing interfaces similar to PyTorch's ATen library, ExecuTorch simplifies the adoption of the new API, allowing developers to transition without a steep learning curve.
