Commit f92c587

[docs] Refresh add_ops.md (llvm#3939)
- removes section regarding Turbine Camp
  - Each line of detail either:
    - already existed internally in Confluence OR
    - was severely out of date
    - was beyond the concerns of Torch-MLIR
- adjusts link to LLVM style guide
  - was directed to a specific style guide rule rather than the start of the style guide in general
- adds missing h2
- cleans up style using markdown linter
  - prefers formatted links over intentionally bare URLs
  - enforces explicitly-defined language in code blocks
  - prefers implicitly-ordered, 1-based lists
    - avoids less-common 0-based lists since that would require deviation from the default linter config
  - wraps bare urls/emails
  - enforces unordered list nested indentation
  - enforces space around headers
  - enforces space around code fence blocks
  - removes extraneous blank lines
  - enforces space around list blocks
1 parent bf594b0 commit f92c587

File tree

1 file changed: +30 additions, -54 deletions


docs/add_ops.md

@@ -2,72 +2,49 @@
 
 Collected links and contacts for how to add ops to torch-mlir.
 
-<details>
-<summary>Turbine Camp: Start Here</summary>
-This document was previously known as `turbine-camp.md` to Nod.ai. "Turbine Camp" is part of Nod.ai's onboarding process. Welcome to turbine camp. This document originated at Nod.ai as a part of onboardding process, where new nod-ai folks learn about the architecture of our work by adding support for 2 ops to torch-mlir. I decided to put this into torch mlir because a lot of this is about torch-mlir.
+## [How to Add a Torch Operator](https://github.com/llvm/torch-mlir/blob/main/docs/Torch-ops-E2E-implementation.md)
 
-Written & maintained by @renxida
-
-Guides by other folks that were used during the creation of this document:
-- [Chi Liu](https://gist.github.com/AmosLewis/dd31ab37517977b1c499d06495b4adc2)
-- [Sunsoon](https://docs.google.com/document/d/1H79DwW_wnVzUU81EogwY5ueXgnl-QzKet1p2lnqPar4/edit?pli=1)
-
-## Before you begin...
-
-Nod-ai maintains the pipeline below, which allows us to take a ML model from e.g. huggingface, and compile it to a variety of devices including llvm-cpu, rocm and cuda and more as an optimized `vmfb` binary.
-
-1. The pipeline begins with a huggingface model, or some other supported source like llama.cpp.
-2. [nod-ai/SHARK-Turbine](https://github.com/nod-ai/SHARK-Turbine) takes a huggingface model and exports a `.mlir` file.
-3. **[llvm/torch-mlir](https://github.com/llvm/torch-mlir)**, which you will be working on in turbine-camp, will lower torchscript, torch dialect, and torch aten ops further into a mixture `linalg` or `math` MLIR dialects (with occasionally other dialects in the mix)
-4. [IREE](https://github.com/openxla/iree) converts the final `.mlir` file into a binary (typically `.vmfb`) for running on a device (llvm-cpu, rocm, vulcan, cuda, etc).
-
-The details of how we do it and helpful commands to help you set up each repo is in [Sungsoon's Shark Getting Started Google Doc](https://docs.google.com/document/d/1H79DwW_wnVzUU81EogwY5ueXgnl-QzKet1p2lnqPar4/edit?pli=1)
-
-PS: IREE is pronounced Eerie, and hence the ghost icon.
-
-## How to begin
-0. Set up torch-mlir according to the instructions here: https://github.com/llvm/torch-mlir/blob/main/docs/development.md
-1. You will start by adding support for 2 ops in torch-mlir, to get you familiar with the center of our pipeline. Begin by reading [torch-mlir's documentation on how to implement a new torch op](https://github.com/llvm/torch-mlir/blob/main/docs/Torch-ops-E2E-implementation.md), and set up `llvm/torch_mlir` using https://github.com/llvm/torch-mlir/blob/main/docs/development.md
-2. Pick 1 of the yet-unimplemented from the following. You should choose something that looks easy to you. **Make sure you create an issue by clicking the little "target" icon to the right of the op, thereby marking the op as yours**
-- [TorchToLinalg ops tracking issue](https://github.com/nod-ai/SHARK-Turbine/issues/347)
-- [TorchOnnnxToTorch ops tracking issue](https://github.com/nod-ai/SHARK-Turbine/issues/215)
-3. Implement it. For torch -> linalg, see the how to torchop section below. For Onnx ops, see how to onnx below.
-5. Make a pull request and reference your issue. When the pull request is closed, also close your issue to mark the op as done
-
-</details>
+## How to Add a Conversion for an Operator
 
 ### How to TorchToLinalg
 
 You will need to do 5 things:
+
 - make sure -DTORCH_MLIR_ENABLE_JIT_IR_IMPORTER=ON is added during build. This is to enable the python file used in `build_tools/update_torch_ods.sh` and `build_tools/update_abstract_interp_lib.sh`
 - make sure the op exists in `torch_ods_gen.py`, and then run `build_tools/update_torch_ods.sh`, and then build. This generates `GeneratedTorchOps.td`, which is used to generate the cpp and h files where ops function signatures are defined.
-- Reference [torch op registry](https://github.com/pytorch/pytorch/blob/7451dd058564b5398af79bfc1e2669d75f9ecfa2/torch/csrc/jit/passes/utils/op_registry.cpp#L21)
+  - Reference [torch op registry](https://github.com/pytorch/pytorch/blob/7451dd058564b5398af79bfc1e2669d75f9ecfa2/torch/csrc/jit/passes/utils/op_registry.cpp#L21)
 - make sure the op exists in `abstract_interp_lib_gen.py`, and then run `build_tools/update_abstract_interp_lib.sh`, and then build. This generates `AbstractInterpLib.cpp`, which is used to generate the cpp and h files where ops function signatures are defined.
-- Reference [torch shape functions](https://github.com/pytorch/pytorch/blob/7451dd058564b5398af79bfc1e2669d75f9ecfa2/torch/jit/_shape_functions.py#L1311)
+  - Reference [torch shape functions](https://github.com/pytorch/pytorch/blob/7451dd058564b5398af79bfc1e2669d75f9ecfa2/torch/jit/_shape_functions.py#L1311)
 - write test cases. They live in `projects/pt1`. See the [Dec 2023 example](https://github.com/llvm/torch-mlir/pull/2640/files).
 - implement the op in one of the `lib/Conversion/TorchToLinalg/*.cpp` files
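The `abstract_interp_lib_gen.py` step above amounts to giving each op a Python shape function that maps input shapes to the output shape. As a standalone illustrative sketch (not torch-mlir's actual API — the real functions follow special naming conventions and import upstream helpers, which are omitted here), a unary elementwise op's shape function simply returns the input shape unchanged:

```python
from typing import List

# Hypothetical standalone sketch: shape functions compute an op's output
# shape from its input shapes. For a unary elementwise op (e.g. relu),
# the output shape equals the input shape.
def unary_elementwise_shape(self: List[int]) -> List[int]:
    return list(self)

print(unary_elementwise_shape([3, 4]))  # prints [3, 4]
```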
 
 Reference Examples
+
 - [A Dec 2023 example with the most up to date lowering](https://github.com/llvm/torch-mlir/pull/2640/files)
 - [Chi's simple example of adding op lowering](https://github.com/llvm/torch-mlir/pull/1454) useful instructions and referring links for you to understand the op lowering pipeline in torch-mlir in the comments
 
 Resources:
-- how to set up torch-mlir: [https://github.com/llvm/torch-mlir/blob/main/docs/development.md](https://github.com/llvm/torch-mlir/blob/main/docs/development.md#checkout-and-build-from-source)
-- torch-mlir doc on how to debug and test: [ttps://github.com/llvm/torch-mlir/blob/main/docs/development.md#testing](https://github.com/llvm/torch-mlir/blob/main/docs/development.md#testing)
+
+- [how to set up torch-mlir](https://github.com/llvm/torch-mlir/blob/main/docs/development.md)
+- [torch-mlir doc on how to debug and test](https://github.com/llvm/torch-mlir/blob/main/docs/development.md#testing)
 - [torch op registry](https://github.com/pytorch/pytorch/blob/7451dd058564b5398af79bfc1e2669d75f9ecfa2/torch/csrc/jit/passes/utils/op_registry.cpp#L21)
 - [torch shape functions](https://github.com/pytorch/pytorch/blob/7451dd058564b5398af79bfc1e2669d75f9ecfa2/torch/jit/_shape_functions.py#L1311)
 
6033
### How to TorchOnnxToTorch
61-
0. Generate the big folder of ONNX IR. Use https://github.com/llvm/torch-mlir/blob/main/test/python/onnx_importer/import_smoke_test.py . Alternatively, if you're trying to support a certain model, convert that model to onnx IR with
62-
```
34+
35+
1. Generate the big folder of ONNX IR. Use [this Python script](https://github.com/llvm/torch-mlir/blob/main/test/python/onnx_importer/import_smoke_test.py). Alternatively, if you're trying to support a certain model, convert that model to onnx IR with
36+
37+
```shell
6338
optimum-cli export onnx --model facebook/opt-125M fb-opt
6439
python -m torch_mlir.tools.import_onnx fb-opt/model.onnx -o fb-opt-125m.onnx.mlir
6540
```
66-
2. Find an instance of the Op that you're trying to implement inside the smoke tests folder or the generated model IR, and write a test case. Later you will save it to one of the files in `torch-mlir/test/Conversion/TorchOnnxToTorch`, but for now feel free to put it anywhere.
67-
3. Implement the op in `lib/Conversion/TorchOnnxToTorch/something.cpp`.
68-
4. Test the conversion by running `./build/bin/torch-mlir-opt -split-input-file -verify-diagnostics -convert-torch-onnx-to-torch your_mlir_file.mlir`. For more details, see https://github.com/llvm/torch-mlir/blob/main/docs/development.md#testing . Xida usually creates a separate MLIR file to test it to his satisfaction before integrating it into one of the files at `torch-mlir/test/Conversion/TorchOnnxToTorch`.
41+
42+
1. Find an instance of the Op that you're trying to implement inside the smoke tests folder or the generated model IR, and write a test case. Later you will save it to one of the files in `torch-mlir/test/Conversion/TorchOnnxToTorch`, but for now feel free to put it anywhere.
43+
1. Implement the op in `lib/Conversion/TorchOnnxToTorch/something.cpp`.
44+
1. Test the conversion by running `./build/bin/torch-mlir-opt -split-input-file -verify-diagnostics -convert-torch-onnx-to-torch your_mlir_file.mlir`. For more details, see [the testing section of the doc on development](https://github.com/llvm/torch-mlir/blob/main/docs/development.md#testing). Xida usually creates a separate MLIR file to test it to his satisfaction before integrating it into one of the files at `torch-mlir/test/Conversion/TorchOnnxToTorch`.
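For the test-case steps above, a lit test for the `-convert-torch-onnx-to-torch` pass conventionally pairs a `torch.operator "onnx.*"` input with FileCheck expectations for the lowered torch-dialect output. The op, shapes, and CHECK lines below are an illustrative sketch under that convention, not a test copied from the repo:

```mlir
// RUN: torch-mlir-opt -split-input-file -convert-torch-onnx-to-torch %s | FileCheck %s

// CHECK-LABEL: func.func @test_relu
func.func @test_relu(%arg0: !torch.vtensor<[3,4],f32>) -> !torch.vtensor<[3,4],f32>
    attributes {torch.onnx_meta.opset_version = 17 : si64} {
  // CHECK: torch.aten.relu
  %0 = torch.operator "onnx.Relu"(%arg0) : (!torch.vtensor<[3,4],f32>) -> !torch.vtensor<[3,4],f32>
  return %0 : !torch.vtensor<[3,4],f32>
}
```

During development you can keep a file like this anywhere and feed it to `torch-mlir-opt` by hand before moving it into `torch-mlir/test/Conversion/TorchOnnxToTorch`.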
 
 Helpful examples:
+
 - [A Dec 2023 example where an ONNX op is implemented](https://github.com/llvm/torch-mlir/pull/2641/files#diff-b584b152020af6d2e5dbf62a08b2f25ed5afc2c299228383b9651d22d44b5af4R493)
 - [Vivek's example of ONNX op lowering](https://github.com/llvm/torch-mlir/commit/dc9ea08db5ac295b4b3f91fc776fef6a702900b9)
 
@@ -77,16 +54,20 @@ Helpful examples:
 `. Please don't just paste the generated tests - reference them to write your own
 
 ## Contacts
+
 People who've worked on this for a while
+
 - Vivek (@vivek97 on discord)
-
+- [Chi Liu](mailto:Chi[email protected])
 
 Recent Turbine Camp Attendees, from recent to less recent
-- [email protected] (@xida_ren on discord)
-
+
+- [Xida Ren](mailto:[email protected]) (@xida_ren on discord)
+- [Sungsoon Cho](mailto:[email protected])
 
 ## Links
-- IMPORTANT: read the LLVM style guide: https://llvm.org/docs/CodingStandards.html#use-early-exits-and-continue-to-simplify-code
+
+- IMPORTANT: read [the LLVM style guide](https://llvm.org/docs/CodingStandards.html#style-issues)
 - Tutorials
   - [Sungsoon's Shark Getting Started Google Doc](https://docs.google.com/document/d/1H79DwW_wnVzUU81EogwY5ueXgnl-QzKet1p2lnqPar4/edit?pli=1)
     - This document contains commands that would help you set up shark and run demos
@@ -105,18 +86,12 @@ Recent Turbine Camp Attendees, from recent to less recent
 - [Model and Op Support](https://github.com/nod-ai/SHARK-Turbine/issues/119)
 - [ONNX op support](https://github.com/nod-ai/SHARK-Turbine/issues/215)
 
+## [Chi's useful commands for debugging torch mlir](https://gist.github.com/AmosLewis/dd31ab37517977b1c499d06495b4adc2)
 
-## Chi's useful commands for debugging torch mlir
-
-https://gist.github.com/AmosLewis/dd31ab37517977b1c499d06495b4adc2
-
-## How to write test cases and test your new op
-
-https://github.com/llvm/torch-mlir/blob/main/docs/development.md#testing
-
-
+
+## [How to write test cases and test your new op](https://github.com/llvm/torch-mlir/blob/main/docs/development.md#testing)
 
 ## How to set up vs code and intellisence for [torch-mlir]
+
 Xida: This is optional. If you're using VS code like me, you might want to set it up so you can use the jump to definition / references, auto fix, and other features.
 
 Feel free to contact me on discord if you have trouble figuring this out.
@@ -162,4 +137,5 @@ under `torch-mlir`
 "cmake.cmakePath": "/home/xida/miniconda/envs/torch-mlir/bin/cmake", // make sure this is a cmake that knows where your python is
 }
 ```
+
 The important things to note are the `cmake.configureArgs`, which specify the location of your torch mlir, and the `cmake.sourceDirectory`, which indicates that CMAKE should not build from the current directory and should instead build from `externals/llvm-project/llvm`
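Tying those keys together, a minimal `.vscode/settings.json` along the lines this section describes might look like the following. The flag values and paths here are illustrative assumptions, not taken from the original file; adjust them to your checkout and environment:

```json
{
  // Build LLVM from the in-tree external, not the repo root (assumption:
  // torch-mlir checked out at the workspace root).
  "cmake.sourceDirectory": "${workspaceFolder}/externals/llvm-project/llvm",
  "cmake.configureArgs": [
    "-DLLVM_EXTERNAL_PROJECTS=torch-mlir",
    "-DLLVM_EXTERNAL_TORCH_MLIR_SOURCE_DIR=${workspaceFolder}"
  ],
  // Use a cmake that knows where your python is.
  "cmake.cmakePath": "/path/to/your/conda/envs/torch-mlir/bin/cmake"
}
```

With this in place, the CMake Tools extension configures the LLVM superproject with torch-mlir as an external project, which is what makes jump-to-definition and cross-references work across both trees.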

0 commit comments

Comments
 (0)