Commit f919efe

Remove components that have been upstreamed. (#655)

* Revises README to explain what is still here.
* Updates CI so that we can use it to run integration tests.

1 parent: 4a01c40
File tree

159 files changed: +53 −20,515 lines


.github/workflows/test.yml

Lines changed: 0 additions & 56 deletions
This file was deleted.

.github/workflows/test_models.yml

Lines changed: 13 additions & 6 deletions
```diff
@@ -30,8 +30,15 @@ jobs:
       #   with:
       #     python-version: ${{matrix.version}}

-      - name: "Checkout Code"
-        uses: actions/checkout@v2
+      - name: "Checkout This Repo"
+        uses: actions/checkout@v4
+
+      - name: "Checkout iree-turbine"
+        uses: actions/checkout@v4
+        with:
+          repository: iree-org/iree-turbine
+          # TODO: Let the ref be passed as a parameter to run integration tests.
+          path: iree-turbine

       - name: Sync source deps
         # build IREE from source with -DIREE_BUILD_TRACY=ON if getting tracy profile
@@ -42,10 +49,10 @@ jobs:
           # Note: We install in three steps in order to satisfy requirements
           # from non default locations first. Installing the PyTorch CPU
           # wheels saves multiple minutes and a lot of bandwidth on runner setup.
-          pip install -r core/pytorch-cpu-requirements.txt
-          pip install --pre --upgrade -r core/requirements.txt
-          pip install --pre -e core[testing]
-          pip install --pre --upgrade -e models -r models/requirements.txt
+          pip install --no-compile -r ${{ github.workspace }}/iree-turbine/pytorch-cpu-requirements.txt
+          pip install --no-compile --pre --upgrade -r ${{ github.workspace }}/iree-turbine/requirements.txt
+          pip install --no-compile --pre -e ${{ github.workspace }}/iree-turbine[testing]
+          pip install --no-compile --pre --upgrade -e models -r models/requirements.txt

       - name: Show current free memory
         run: |
```

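The `--no-compile` flag added above tells pip to skip byte-compiling installed `.py` files to `.pyc`, which saves time on throwaway CI environments at the cost of a slightly slower first import. As a minimal sketch (hypothetical module name, standard library only), this is roughly the post-install step being skipped:

```python
# Demonstrates the bytecode pre-compilation step that pip performs by default
# and that `pip install --no-compile` omits. Module name is hypothetical.
import compileall
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as d:
    pkg = pathlib.Path(d)
    (pkg / "example_mod.py").write_text("VALUE = 42\n")

    # Roughly what pip runs after copying files; --no-compile skips this.
    compileall.compile_dir(d, quiet=1)

    pycs = list((pkg / "__pycache__").glob("*.pyc"))
    print(len(pycs))  # 1
```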
.github/workflows/test_sdxl.yml

Lines changed: 14 additions & 7 deletions
```diff
@@ -19,23 +19,30 @@ jobs:
           python-version: ${{matrix.version}}

       - name: "Checkout Code"
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
         with:
           ref: ean-sd-fp16

+      - name: "Checkout iree-turbine"
+        uses: actions/checkout@v4
+        with:
+          repository: iree-org/iree-turbine
+          # TODO: Let the ref be passed as a parameter to run integration tests.
+          path: iree-turbine
+
       - name: Sync source deps
         # build IREE from source with -DIREE_BUILD_TRACY=ON if getting tracy profile
         run: |
           python -m pip install --upgrade pip
           # Note: We install in three steps in order to satisfy requirements
           # from non default locations first. Installing the PyTorch CPU
           # wheels saves multiple minutes and a lot of bandwidth on runner setup.
-          pip install --index-url https://download.pytorch.org/whl/cpu \
-            -r core/pytorch-cpu-requirements.txt
-          pip install --upgrade -r core/requirements.txt
-          pip install -e core[testing,torch-cpu-nightly]
-          pip install --upgrade -r models/requirements.txt
-          pip install -e models
+          pip install --no-compile --index-url https://download.pytorch.org/whl/cpu \
+            -r ${{ github.workspace }}/iree-turbine//pytorch-cpu-requirements.txt
+          pip install --no-compile --upgrade -r ${{ github.workspace }}/iree-turbine/requirements.txt
+          pip install --no-compile -e ${{ github.workspace }}/iree-turbine/[testing,torch-cpu-nightly]
+          pip install --no-compile --upgrade -r models/requirements.txt
+          pip install --no-compile -e models

       - name: Show current free memory
         run: |
```

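The `[testing,torch-cpu-nightly]` suffix on the editable install selects optional dependency groups ("extras") declared by the package. A toy model of how pip expands such a request (the extras mapping below is hypothetical, not the actual iree-turbine metadata, which lives in its packaging config):

```python
# Toy model of extras resolution for `pip install -e pkg[testing,torch-cpu-nightly]`.
# Real packages declare this mapping via `extras_require` in setup.py or
# `[project.optional-dependencies]` in pyproject.toml; these lists are hypothetical.
extras_require = {
    "testing": ["pytest", "parameterized"],
    "torch-cpu-nightly": ["torch"],
}

requested = "testing,torch-cpu-nightly".split(",")
extra_deps = [dep for extra in requested for dep in extras_require[extra]]
print(extra_deps)  # ['pytest', 'parameterized', 'torch']
```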
.github/workflows/test_shark.yml

Lines changed: 7 additions & 0 deletions
```diff
@@ -36,6 +36,13 @@ jobs:
           path: SHARK
           ref: "main"

+      - name: "Checkout iree-turbine"
+        uses: actions/checkout@v4
+        with:
+          repository: iree-org/iree-turbine
+          # TODO: Let the ref be passed as a parameter to run integration tests.
+          path: iree-turbine
+
       # TODO: Replace with a sh script from shark repo
       - name: "Install SHARK"
         run: |
```

README.md

Lines changed: 19 additions & 118 deletions
````diff
@@ -1,129 +1,30 @@
 # SHARK Turbine

-![image](https://netl.doe.gov/sites/default/files/2020-11/Turbine-8412270026_83cfc8ee8f_c.jpg)
+This repo is Nod-AI's integration repository for various model bringup
+activities and CI. In 2023 and early 2024, it played a different role
+by being the place where FX/Dynamo based torch-mlir and IREE toolsets
+were developed, including:

-Turbine is the set of development tools that the [SHARK Team](https://github.com/nod-ai/SHARK)
-is building for deploying all of our models for deployment to the cloud and devices. We
-are building it as we transition from our TorchScript-era 1-off export and compilation
-to a unified approach based on PyTorch 2 and Dynamo. While we use it heavily ourselves, it
-is intended to be a general purpose model compilation and execution tool.
+* [Torch-MLIR FxImporter](https://github.com/llvm/torch-mlir/blob/main/python/torch_mlir/extras/fx_importer.py)
+* [Torch-MLIR ONNX Importer](https://github.com/llvm/torch-mlir/blob/main/python/torch_mlir/extras/onnx_importer.py)
+* [Torch-MLIR's ONNX C Importer](https://github.com/llvm/torch-mlir/tree/main/projects/onnx_c_importer)
+* [IREE Turbine](https://github.com/iree-org/iree-turbine)
+* [Sharktank and Shortfin](https://github.com/nod-ai/sharktank)

-Turbine provides a collection of tools:
+As these have all found upstream homes, this repo is a bit bare. We will
+continue to use it as a staging ground for things that don't have a
+more defined spot and as a way to drive certain kinds of upstreaming
+activities.

-* *AOT Export*: For compiling one or more `nn.Module`s to compiled, deployment
-  ready artifacts. This operates via both a simple one-shot export API (Already upstreamed to [torch-mlir](https://github.com/llvm/torch-mlir/blob/main/python/torch_mlir/extras/fx_importer.py))
-  for simple models and an underlying [advanced API](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/shark_turbine/aot/compiled_module.py) for complicated models
-  and accessing the full features of the runtime.
-* *Eager Execution*: A `torch.compile` backend is provided and a Turbine Tensor/Device
-  is available for more native, interactive use within a PyTorch session.
-* *Turbine Kernels*: (coming soon) A union of the [Triton](https://github.com/openai/triton) approach and
-  [Pallas](https://jax.readthedocs.io/en/latest/pallas/index.html) but based on
-  native PyTorch constructs and tracing. It is intended to complement for simple
-  cases where direct emission to the underlying, cross platform, vector programming model
-  is desirable.
-* *Turbine-LLM*: a repository of layers, model recipes, and conversion tools
-  from popular Large Language Model (LLM) quantization tooling.

-Under the covers, Turbine is based heavily on [IREE](https://github.com/openxla/iree) and
-[torch-mlir](https://github.com/llvm/torch-mlir) and we use it to drive evolution
-of both, upstreaming infrastructure as it becomes timely to do so.
+## Current Projects

-See [the roadmap](docs/roadmap.md) for upcoming work and places to contribute.
+### turbine-models

-## Contact Us
+The `turbine-models` project (under models/) contains ports and adaptations
+of various (mostly HF) models that we use in various ways.

-Turbine is under active development. If you would like to participate as it comes online,
-please reach out to us on the `#turbine` channel of the
-[nod-ai Discord server](https://discord.gg/QMmR6f8rGb).
+### CI

-## Quick Start for Users
+Integration CI for a variety of projects is rooted in this repo.

-1. Install from source:
-
-   ```
-   pip install shark-turbine
-   # Or for editable: see instructions under developers
-   ```
-
-The above does install some unecessary cuda/cudnn packages for cpu use. To avoid this you
-can specify pytorch-cpu and install via:
-   ```
-   pip install -r core/pytorch-cpu-requirements.txt
-   pip install shark-turbine
-   ```
-
-(or follow the "Developers" instructions below for installing from head/nightly)
-
-2. Try one of the samples:
-
-   Generally, we use Turbine to produce valid, dynamic shaped Torch IR (from the
-   [`torch-mlir torch` dialect](https://github.com/llvm/torch-mlir/tree/main/include/torch-mlir/Dialect/Torch/IR)
-   with various approaches to handling globals). Depending on the use-case and status of the
-   compiler, these should be compilable via IREE with `--iree-input-type=torch` for
-   end to end execution. Dynamic shape support in torch-mlir is a work in progress,
-   and not everything works at head with release binaries at present.
-
-   * [AOT MLP With Static Shapes](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/examples/aot_mlp/mlp_export_simple.py)
-   * [AOT MLP with a dynamic batch size](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/examples/aot_mlp/mlp_export_dynamic.py)
-   * [AOT llama2](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/examples/llama2_inference/llama2.ipynb):
-     Dynamic sequence length custom compiled module with state management internal to the model.
-   * [Eager MNIST with `torch.compile`](https://github.com/nod-ai/SHARK-Turbine/blob/main/core/examples/eager_mlp/mlp_eager_simple.py)
-
-## Developers
-
-### Getting Up and Running
-
-If only looking to develop against this project, then you need to install Python
-deps for the following:
-
-* PyTorch
-* iree-compiler (with Torch input support)
-* iree-runtime
-
-The pinned deps at HEAD require pre-release versions of all of the above, and
-therefore require additional pip flags to install. Therefore, to satisfy
-development, we provide a `requirements.txt` file which installs precise
-versions and has all flags. This can be installed prior to the package:
-
-Installing into a venv is highly recommended.
-
-```
-pip install -r core/pytorch-cpu-requirements.txt
-pip install --upgrade -r core/requirements.txt
-pip install --upgrade -e "core[torch-cpu-nightly,testing]"
-```
-
-Run tests:
-
-```
-pytest core/
-```
-
-### Using a development compiler
-
-If doing native development of the compiler, it can be useful to switch to
-source builds for iree-compiler and iree-runtime.
-
-In order to do this, check out [IREE](https://github.com/openxla/iree) and
-follow the instructions to [build from source](https://iree.dev/building-from-source/getting-started/), making
-sure to specify [additional options for the Python bindings](https://iree.dev/building-from-source/getting-started/#building-with-cmake):
-
-```
--DIREE_BUILD_PYTHON_BINDINGS=ON -DPython3_EXECUTABLE="$(which python)"
-```
-
-#### Configuring Python
-
-Uninstall existing packages:
-
-```
-pip uninstall iree-compiler
-pip uninstall iree-runtime
-```
-
-Copy the `.env` file from `iree/` to this source directory to get IDE
-support and add to your path for use from your shell:
-
-```
-source .env && export PYTHONPATH
-```
````