
Commit be26d3c

feat: add web ui for core ml stable diffusion (#56)
1 parent c90b705 commit be26d3c

3 files changed (+153, −0 lines)


README.md

Lines changed: 45 additions & 0 deletions
@@ -220,6 +220,51 @@ Differences may be less or more pronounced for different inputs. Please see the
</details>

## <a name="play-with-simple-web-ui"></a> Play with simple Web UI

<details>
<summary> Click to expand </summary>

<img src="assets/webui.jpg">
Once you have converted the models as described above, you can start a simple Web UI with the following command:

```bash
python -m python_coreml_stable_diffusion.web -i <output-mlpackages-directory> --compute-unit ALL
```
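For example, assuming the converted `.mlpackage` files were written to `./models` (a placeholder path; substitute your own output directory), the invocation would look like:

```bash
python -m python_coreml_stable_diffusion.web -i ./models --compute-unit ALL
```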
Once the command is running, you should see log output similar to the following:

```bash
WARNING:coremltools:Torch version 1.13.0 has not been tested with coremltools. You may run into unexpected errors. Torch 1.12.1 is the most recent version that has been tested.
INFO:python_coreml_stable_diffusion.pipeline:Initializing PyTorch pipe for reference configuration
...
...
INFO:python_coreml_stable_diffusion.pipeline:Done.
INFO:python_coreml_stable_diffusion.pipeline:Initializing Core ML pipe for image generation
INFO:python_coreml_stable_diffusion.pipeline:Stable Diffusion configured to generate 512x512 images
INFO:python_coreml_stable_diffusion.pipeline:Done.
Running on local URL: http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
```
Open `http://0.0.0.0:7860` in your browser to start your Core ML Stable Diffusion adventure.
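The log also notes that a public link can be requested with `share=True`. That change is not part of this commit, but as a rough sketch, the `demo.launch(...)` call at the end of `python_coreml_stable_diffusion/web.py` (shown below) could be adjusted using gradio's standard launch options:

```python
# Sketch only (not part of this commit): in addition to binding all interfaces,
# pin the port explicitly and ask gradio for a temporary public share link.
demo.launch(debug=True,
            server_name="0.0.0.0",
            server_port=7860,  # gradio's default port, made explicit here
            share=True)        # prints a temporary public URL in the log
```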
The Web UI relies on gradio, a great interface framework. If you have not installed it, the script will try to install it automatically when you run the command above.

If the automatic installation fails, you can install the dependency manually:

```bash
pip install gradio
```
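Optionally, before relaunching, you can quickly confirm that the package is importable:

```bash
python -c "import gradio; print(gradio.__version__)"
```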
When the installation is complete, re-run the command above to start the Web UI.
</details>

## <a name="faq"></a> FAQ

<details>

assets/webui.jpg

147 KB

python_coreml_stable_diffusion/web.py

Lines changed: 108 additions & 0 deletions
@@ -0,0 +1,108 @@
try:
    import gradio as gr
    import python_coreml_stable_diffusion.pipeline as pipeline

    from diffusers import StableDiffusionPipeline

    def init(args):
        # Load the reference PyTorch pipeline (tokenizer and model configuration).
        pipeline.logger.info("Initializing PyTorch pipe for reference configuration")
        pytorch_pipe = StableDiffusionPipeline.from_pretrained(args.model_version,
                                                               use_auth_token=True)

        user_specified_scheduler = None
        if args.scheduler is not None:
            user_specified_scheduler = pipeline.SCHEDULER_MAP[
                args.scheduler].from_config(pytorch_pipe.scheduler.config)

        # Build the Core ML pipeline from the converted .mlpackage files.
        coreml_pipe = pipeline.get_coreml_pipe(pytorch_pipe=pytorch_pipe,
                                               mlpackages_dir=args.i,
                                               model_version=args.model_version,
                                               compute_unit=args.compute_unit,
                                               scheduler_override=user_specified_scheduler)

        def infer(prompt, steps):
            pipeline.logger.info("Beginning image generation.")
            image = coreml_pipe(
                prompt=prompt,
                height=coreml_pipe.height,
                width=coreml_pipe.width,
                num_inference_steps=steps,
            )
            images = []
            images.append(image["images"][0])
            return images

        # Gradio UI: prompt box, generate button and step slider on the left,
        # output gallery on the right.
        demo = gr.Blocks()

        with demo:
            gr.Markdown(
                "<center><h1>Core ML Stable Diffusion</h1>Run Stable Diffusion on Apple Silicon with Core ML</center>")
            with gr.Group():
                with gr.Box():
                    with gr.Row():
                        with gr.Column():
                            with gr.Row():
                                text = gr.Textbox(
                                    label="Prompt",
                                    lines=11,
                                    placeholder="Enter your prompt",
                                )
                            with gr.Row():
                                btn = gr.Button("Generate image")
                            with gr.Row():
                                steps = gr.Slider(label="Steps", minimum=1,
                                                  maximum=50, value=10, step=1)
                        with gr.Column():
                            gallery = gr.Gallery(
                                label="Generated image", elem_id="gallery"
                            )

            text.submit(infer, inputs=[text, steps], outputs=gallery)
            btn.click(infer, inputs=[text, steps], outputs=gallery)

        demo.launch(debug=True, server_name="0.0.0.0")

    if __name__ == "__main__":
        parser = pipeline.argparse.ArgumentParser()

        parser.add_argument(
            "-i",
            required=True,
            help=("Path to input directory with the .mlpackage files generated by "
                  "python_coreml_stable_diffusion.torch2coreml"))
        parser.add_argument(
            "--model-version",
            default="CompVis/stable-diffusion-v1-4",
            help=("The pre-trained model checkpoint and configuration to restore. "
                  "For available versions: https://huggingface.co/models?search=stable-diffusion"))
        parser.add_argument(
            "--compute-unit",
            choices=pipeline.get_available_compute_units(),
            default="ALL",
            help=("The compute units to be used when executing Core ML models. "
                  f"Options: {pipeline.get_available_compute_units()}"))
        parser.add_argument(
            "--scheduler",
            choices=tuple(pipeline.SCHEDULER_MAP.keys()),
            default=None,
            help=("The scheduler to use for running the reverse diffusion process. "
                  "If not specified, the default scheduler from the diffusers pipeline is utilized"))

        args = parser.parse_args()
        init(args)

except ModuleNotFoundError:
    # An import above failed (the script assumes `gradio`, which is not part of the
    # package requirements, is the missing module): try to install it automatically.
    print('`gradio` is not installed, trying to install it automatically')
    try:
        import subprocess
        import sys

        subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'gradio'])
        print('Successfully installed missing package `gradio`.')
        print('Now re-execute the command :D')
    except subprocess.CalledProcessError:
        print('Automatic package installation failed, try manually executing `pip install gradio`, then retry the command again.')
