
Commit 78e67cc (1 parent: b6bf716)

chore: format changes in documentation

Signed-off-by: Bo Wang <[email protected]>

File tree: 1 file changed (+12 −12 lines)

core/partitioning/README.md: 12 additions & 12 deletions
@@ -4,22 +4,22 @@ TRTorch partitioning phase is developed to support automatic fallback feature in
default until the automatic fallback feature is enabled.

On a high level, the TRTorch partitioning phase does the following:

- `Segmentation`. Go through the set of operators in order and verify that there is a converter for each operator. Then, roughly separate the graph into parts that TRTorch can support and parts it cannot.
- `Dependency Analysis`. For every operator to be compiled there is a "complete dependency graph", which means that every input can be traced back to an input that is a Tensor or TensorList. Go through all segments after segmentation and do dependency analysis to ensure that there are only Tensor/TensorList inputs and outputs for TensorRT segments.
- `Shape Analysis`. For each segment, figure out the input and output shapes starting from the input shape provided by the user. Shapes can be calculated by running the graphs with JIT.
- `Conversion`. Every TensorRT segment will be converted to a TensorRT engine. This part is done in compiler.cpp, but it's still a phase in our partitioning process.
- `Stitching`. Stitch all TensorRT engines with PyTorch nodes together.

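The `Segmentation` step above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the actual implementation (which lives in the C++ sources listed below): `segment` and `is_supported` are stand-in names, with `is_supported` playing the role of TRTorch's converter-registry lookup, and the final demotion of too-small TensorRT blocks mirrors the `min_block_size` setting described later in this README.

```python
def segment(nodes, is_supported, min_block_size=1):
    """Split an ordered list of ops into alternating TensorRT/PyTorch blocks."""
    blocks = []  # list of (kind, ops) pairs, kind in {"tensorrt", "pytorch"}
    for node in nodes:
        kind = "tensorrt" if is_supported(node) else "pytorch"
        if blocks and blocks[-1][0] == kind:
            blocks[-1][1].append(node)  # extend the current run of same-kind ops
        else:
            blocks.append((kind, [node]))  # start a new block on a kind change
    # Demote TensorRT blocks with fewer than min_block_size consecutive
    # supported ops back to PyTorch, as the min_block_size attribute requires.
    return [("pytorch", ops) if kind == "tensorrt" and len(ops) < min_block_size
            else (kind, ops)
            for kind, ops in blocks]

# Toy example: op names are illustrative, not a statement of converter coverage.
supported = {"aten::conv2d", "aten::relu"}
ops = ["aten::conv2d", "aten::relu", "aten::grid_sampler", "aten::relu"]
print(segment(ops, lambda op: op in supported, min_block_size=2))
# The trailing lone "aten::relu" run is too short, so it falls back to PyTorch.
```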
Here is a brief description of the functionality of each file:
- `PartitionInfo.h/cpp`: The automatic fallback APIs that are used for partitioning.
- `SegmentedBlock.h/cpp`: The main data structures that are used to maintain information for each segment after segmentation.
- `shape_analysis.h/cpp`: Code implementation to get the shapes for each segment by running them in JIT.
- `partitioning.h/cpp`: APIs and main code implementation for the partitioning phase.

### Automatic Fallback

To enable the automatic fallback feature, you can set the following attributes in Python:
@@ -39,10 +39,10 @@ To enable automatic fallback feature, you can set following attributes in Python
  }
})
```
- `enabled`: Automatic fallback is off by default; it is enabled by setting this to True.
- `min_block_size`: The minimum number of consecutive supported operations required for a segment to be converted to TensorRT. For example, if it's set to 3, then there must be at least 3 consecutive supported operators for the segment to be converted.
- `forced_fallback_ops`: A list of strings naming the operations that the user explicitly wants to run in PyTorch nodes.
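Since the diff only shows the tail of the Python snippet, here is a hedged sketch of a complete fallback spec built from the three attributes documented above. The surrounding key name `"torch_fallback"` and the choice of `"aten::max_pool2d"` as a forced-fallback op are assumptions for illustration and may differ across TRTorch versions; only the three attribute names come from this README.

```python
# Assemble the fallback settings documented above into a compile spec.
torch_fallback = {
    "enabled": True,               # automatic fallback is off unless set to True
    "min_block_size": 3,           # require >= 3 consecutive supported ops
    "forced_fallback_ops": ["aten::max_pool2d"],  # hypothetical op choice
}

# Assumed top-level key; a spec like this would then be passed to
# trtorch.compile(script_module, compile_spec) along with input shapes.
compile_spec = {"torch_fallback": torch_fallback}
print(compile_spec["torch_fallback"]["min_block_size"])
# → 3
```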

To enable the automatic fallback feature in C++, the following APIs can be used:
