README.md (7 additions, 7 deletions)
@@ -3,14 +3,14 @@ Tensor Compute Primitives

Mid-level intermediate representation for machine learning programs.

-[](https://github.com/cruise-automation/mlir-tcp/actions/workflows/bazelBuildAndTestTcp.yml)
+[](https://github.com/llvm/mlir-tcp/actions/workflows/bazelBuildAndTestTcp.yml)

:construction:**This project is under active development (WIP).**

## Project Communication

- For general discussion use `#mlir-tcp` channel on the [LLVM Discord](https://discord.gg/xS7Z362)
-- For feature request or bug report file a detailed [issue on GitHub](https://github.com/cruise-automation/mlir-tcp/issues)
+- For feature request or bug report file a detailed [issue on GitHub](https://github.com/llvm/mlir-tcp/issues)

## Developer Guide
16
16
@@ -31,7 +31,7 @@ bazel build //:tcp-opt
bazel test //...
```

-We welcome contributions to `mlir-tcp`. When authoring new TCP ops with dialect conversions from/to Torch and Linalg, please include lit tests for dialect and conversions, as well as [aot_compile](https://github.com/cruise-automation/mlir-tcp/blob/main/tools/aot/README.md) generated e2e integration tests. Lastly, please finalize your PR with clang-format, black and bazel buildifier to ensure the C++/python sources and BUILD files are formatted consistently:
+We welcome contributions to `mlir-tcp`. When authoring new TCP ops with dialect conversions from/to Torch and Linalg, please include lit tests for dialect and conversions, as well as [aot_compile](https://github.com/llvm/mlir-tcp/blob/main/tools/aot/README.md) generated e2e integration tests. Lastly, please finalize your PR with clang-format, black and bazel buildifier to ensure the C++/python sources and BUILD files are formatted consistently:
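For context (not part of the diff): a formatting pass along these lines is what the sentence above asks for. The file glob and the assumption that `clang-format`, `black`, and `buildifier` are run directly from the repo root are illustrative, not taken from the repo:

```shell
# Format C++ sources in place (glob is illustrative)
find . -name '*.cpp' -o -name '*.h' | xargs clang-format -i

# Format Python sources in place
black .

# Recursively format BUILD/WORKSPACE/*.bzl files (requires buildifier on PATH)
buildifier -r .
```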
The following CI workflows are automatically triggered anytime upstream dependencies (`deps.bzl`) are updated:

--[](https://github.com/cruise-automation/mlir-tcp/actions/workflows/bazelBuildAndTestLlvm.yml)
--[](https://github.com/cruise-automation/mlir-tcp/actions/workflows/bazelBuildAndTestTorchmlir.yml)
--[](https://github.com/cruise-automation/mlir-tcp/actions/workflows/bazelBuildAndTestStablehlo.yml)
+-[](https://github.com/llvm/mlir-tcp/actions/workflows/bazelBuildAndTestLlvm.yml)
+-[](https://github.com/llvm/mlir-tcp/actions/workflows/bazelBuildAndTestTorchmlir.yml)
+-[](https://github.com/llvm/mlir-tcp/actions/workflows/bazelBuildAndTestStablehlo.yml)

To use newer `torch-mlir` and/or `torch` python packages in our hermetic python sandbox, just regenerate `requirements_lock.txt` as follows:

```shell
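# Editorial sketch, not part of the diff: the commands in this block are
# truncated by the diff view. One common way to regenerate the lock file is
# pip-tools' pip-compile, assuming the unpinned spec lives in requirements.txt;
# the repo may instead wrap this step in a Bazel target.
python -m pip install pip-tools
pip-compile requirements.txt --output-file requirements_lock.txt
```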
@@ -113,7 +113,7 @@ For help with gdb commands please refer to [gdb cheat sheet](https://gist.github.

### `aot_compile` debugging

-Refer this [README](https://github.com/cruise-automation/mlir-tcp/blob/main/tools/aot/README.md) for a step-by-step guide to debugging an end-to-end compilation pipeline using the AOT Compile framework.
+Refer this [README](https://github.com/llvm/mlir-tcp/blob/main/tools/aot/README.md) for a step-by-step guide to debugging an end-to-end compilation pipeline using the AOT Compile framework.
tools/aot/README.md (4 additions, 4 deletions)
@@ -1,15 +1,15 @@
AOT Compile (Developer Guide)
=============================

-The [`aot_compile`](https://github.com/cruise-automation/mlir-tcp/blob/main/tools/aot/aot_compile.bzl) bazel macro implements an end-to-end framework to compile PyTorch (or TCP) programs to a CPU library, execute it and test for functional correctness of the generated code. It comprises starting with TorchDynamo export of PyTorch programs, conversion and lowerings through {Torch, TCP, Linalg, LLVM} MLIR dialects, translation to LLVM assembly, compilation to assembly source for the host architecture (CPU), and lastly generation of shared object that could be dynamically linked into an executable/test at runtime. It leverages a series of genrules to stitch the compilation pipeline together, and an unsophisticated meta-programming trick for auto-generating C++ tests (specialized to the input program's function signature) that execute the compiled code and validate its numerics against reference PyTorch.
+The [`aot_compile`](https://github.com/llvm/mlir-tcp/blob/main/tools/aot/aot_compile.bzl) bazel macro implements an end-to-end framework to compile PyTorch (or TCP) programs to a CPU library, execute it and test for functional correctness of the generated code. It comprises starting with TorchDynamo export of PyTorch programs, conversion and lowerings through {Torch, TCP, Linalg, LLVM} MLIR dialects, translation to LLVM assembly, compilation to assembly source for the host architecture (CPU), and lastly generation of shared object that could be dynamically linked into an executable/test at runtime. It leverages a series of genrules to stitch the compilation pipeline together, and an unsophisticated meta-programming trick for auto-generating C++ tests (specialized to the input program's function signature) that execute the compiled code and validate its numerics against reference PyTorch.

When authoring new TCP ops with dialect conversions from/to Torch and Linalg, adding an `aot_compile` target is a fast, automated and standardized way to test the e2e compilation and validate that the op lowerings are implemented consistent with PyTorch semantics.
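Not part of the diff: for illustration, the generated e2e test is an ordinary Bazel test target. Assuming the `<name>_compile_execute_test` naming convention referenced later in this diff also applies to the `broadcast_add_mixed_ranks` example, it would be run as:

```shell
bazel test //test/AotCompile:broadcast_add_mixed_ranks_compile_execute_test
```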
Caveat: The AOT compile framework's primary objective is to serve as an end-to-end `compile -> execute -> test` harness for functional correctness, and *not* as an optimizing compiler for production usecases. In the future we might be interested in reusing pieces of infrastructure here to construct an optimizing compiler, but it entails more work to get there (such as a runtime and performance benchmark apparatus).

## Compile PyTorch programs

-Onboarding to the `aot_compile` macro is quite easy (examples [here](https://github.com/cruise-automation/mlir-tcp/blob/main/test/AotCompile/BUILD)). Start by adding the following line to the `BUILD` to load the macro:
+Onboarding to the `aot_compile` macro is quite easy (examples [here](https://github.com/llvm/mlir-tcp/blob/main/test/AotCompile/BUILD)). Start by adding the following line to the `BUILD` to load the macro:

-An invocation of `aot_compile(name="foo", ...)` generates a bunch of targets (see [here](https://github.com/cruise-automation/mlir-tcp/blob/main/tools/aot/aot_compile.bzl#L43) for the list) that can be helpful in debugging the intermediate steps in the compilation process.
+An invocation of `aot_compile(name="foo", ...)` generates a bunch of targets (see [here](https://github.com/llvm/mlir-tcp/blob/main/tools/aot/aot_compile.bzl#L43) for the list) that can be helpful in debugging the intermediate steps in the compilation process.

To get the full list of `aot_compile` macro generated targets for `broadcast_add_mixed_ranks`, run the query:
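The query itself falls outside the hunks shown here; a filter along these lines (the exact expression is an assumption, not quoted from the README) would list the generated targets:

```shell
bazel query 'attr(name, "broadcast_add_mixed_ranks", //test/AotCompile/...)'
```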

-Note we're missing the `//test/AotCompile:basic_tcp_ops_compile_execute_test` target. As there is no access to PyTorch reference implementation, the `aot_compile` macro does not auto-generate C++ execute tests but they can be manually written (example [here](https://github.com/cruise-automation/mlir-tcp/blob/main/test/AotCompile/test_aot_compiled_basic_tcp_ops.cpp)). These tests should include `extern "C"` function declarations with the same name and for every function in the input TCP source.
+Note we're missing the `//test/AotCompile:basic_tcp_ops_compile_execute_test` target. As there is no access to PyTorch reference implementation, the `aot_compile` macro does not auto-generate C++ execute tests but they can be manually written (example [here](https://github.com/llvm/mlir-tcp/blob/main/test/AotCompile/test_aot_compiled_basic_tcp_ops.cpp)). These tests should include `extern "C"` function declarations with the same name and for every function in the input TCP source.
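Not part of the diff: one way to check which execute tests actually exist under the package (and confirm the auto-generated `basic_tcp_ops_compile_execute_test` target is indeed absent) is a `tests()` query:

```shell
bazel query 'tests(//test/AotCompile/...)'
```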
The rest of the steps to debug the e2e compilation pipeline are pretty much the same.