## Run the optimized pipeline
```sh
python gen_image.py --prompt "An astronaut standing next to a giant lemon" --output-file output.png --use-cached-model
```
This runs the pipeline with all optimizations enabled and attempts to use pre-cached binary
models generated via `torch.export` + AOTI. To generate these binaries for subsequent runs, run
the above command without the `--use-cached-model` flag.
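
In practice this is a two-step workflow: the first run compiles the models and writes the cached
binaries, and later runs reuse them:

```sh
# First run: apply the optimizations and generate the binary artifacts via torch.export + AOTI.
python gen_image.py --prompt "An astronaut standing next to a giant lemon" --output-file output.png

# Subsequent runs: skip recompilation and load the pre-cached binaries.
python gen_image.py --prompt "An astronaut standing next to a giant lemon" --output-file output.png --use-cached-model
```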
> [!IMPORTANT]
> The binaries won't work on hardware that is sufficiently different from the hardware they were
> obtained on. For example, binaries obtained on an H100 won't work on an A100.
> Further, the binaries are currently Linux-only and depend on specific versions of system
> libraries such as libstdc++; they will not work if they were generated in a sufficiently
> different environment from the one present at runtime. The PyTorch Compiler team is working
> on solutions for more portable binaries / artifact caching.
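
For context on what these cached binaries are, below is a minimal sketch of the `torch.export` +
AOTI flow that produces and reloads such an artifact. It assumes PyTorch >= 2.6; the `Toy` module
and the `toy.pt2` file name are hypothetical stand-ins, and `gen_image.py` handles this for the
actual pipeline components:

```python
import torch

# Hypothetical stand-in for a pipeline component; the real pipeline exports its own modules.
class Toy(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

model = Toy().eval()
example_inputs = (torch.randn(8, 16),)

# torch.export traces the module into a graph; AOTInductor then compiles it ahead of
# time into a binary package tied to the current GPU, OS, and system libraries --
# which is why the caveats above about hardware and environment apply.
exported = torch.export.export(model, example_inputs)
package_path = torch._inductor.aoti_compile_and_package(exported, package_path="toy.pt2")

# A later run loads the cached package instead of recompiling.
compiled = torch._inductor.aoti_load_package(package_path)
out = compiled(*example_inputs)
```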
## Benchmarking
[`run_benchmark.py`](./run_benchmark.py) is the main script for benchmarking the different optimization techniques.
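
The script's flags are not documented here; assuming it exposes a standard argparse-style CLI
(an assumption, not confirmed by this README), the supported options can be listed with:

```sh
python run_benchmark.py --help
```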