Commit d242581

Add logging functionality to app.py and text2img.py, update HTML response generation, and enhance Dockerfile with LOG_LEVEL environment variable. Expand README with GitHub Actions details for Docker image publishing.

1 parent 911779c · commit d242581

7 files changed: +189 −11 lines
.github/workflows/create-release.yml

Lines changed: 26 additions & 0 deletions

```yaml
name: Create Release

on:
  push:
    tags:
      - 'v*'

permissions:
  contents: write

jobs:
  create-release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Create GitHub Release
        uses: softprops/action-gh-release@v2
        with:
          tag_name: ${{ github.ref_name }}
          name: Release ${{ github.ref_name }}
          draft: false
          prerelease: false
          generate_release_notes: true
```

.github/workflows/docker-publish.yml

Lines changed: 75 additions & 0 deletions

```diff
@@ -1,3 +1,78 @@
+name: Build and Publish Docker Image
+
+on:
+  push:
+    branches: [ main ]
+    tags:
+      - 'v*'
+
+permissions:
+  contents: read
+  packages: write
+
+jobs:
+  build-and-push:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+
+      - name: Log in to GHCR
+        uses: docker/login-action@v3
+        with:
+          registry: ghcr.io
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+
+      - name: Extract Docker metadata
+        id: meta
+        uses: docker/metadata-action@v5
+        with:
+          images: ghcr.io/${{ github.repository }}
+          tags: |
+            type=ref,event=branch
+            type=ref,event=tag
+            type=raw,value=latest,enable={{is_default_branch}}
+            type=sha
+
+      - name: Set up Buildx
+        uses: docker/setup-buildx-action@v3
+
+      - name: Build and push (CPU)
+        uses: docker/build-push-action@v5
+        with:
+          context: .
+          push: true
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
+          build-args: |
+            TORCH_INDEX_URL=https://download.pytorch.org/whl/cpu
+
+      - name: Build and push (GPU)
+        uses: docker/build-push-action@v5
+        with:
+          context: .
+          push: true
+          tags: ghcr.io/${{ github.repository }}:gpu
+          labels: ${{ steps.meta.outputs.labels }}
+          build-args: |
+            TORCH_INDEX_URL=https://download.pytorch.org/whl/cu121
+
+  release-nightly:
+    if: github.ref == 'refs/heads/main'
+    runs-on: ubuntu-latest
+    needs: build-and-push
+    permissions:
+      contents: write
+    steps:
+      - name: Create Nightly Release
+        uses: softprops/action-gh-release@v2
+        with:
+          tag_name: nightly-${{ github.run_number }}
+          name: Nightly ${{ github.run_number }}
+          draft: false
+          prerelease: true
+          generate_release_notes: true
 name: Docker
 
 # This workflow uses actions that are not certified by GitHub.
```
.github/workflows/release-notes.yml

Lines changed: 26 additions & 0 deletions

```yaml
name: Generate Release Notes

on:
  release:
    types: [published]

permissions:
  contents: write

jobs:
  generate-release-notes:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Generate Release Notes
        uses: actions/github-script@v7
        with:
          script: |
            const { owner, repo } = context.repo;
            const tag = context.payload.release.tag_name;
            const { data: commits } = await github.rest.repos.listCommits({ owner, repo, per_page: 100 });
            const notes = commits.map(c => `- ${c.commit.message}`).join('\n');
            await github.rest.repos.updateRelease({ owner, repo, release_id: context.payload.release.id, body: notes });
```
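The `github-script` step above turns recent commit messages into a bulleted release body. The same transformation can be sketched standalone in Python (the commit dicts mirror the shape of the GitHub REST "list commits" response; the sample messages are hypothetical):

```python
# Hypothetical sample of the GitHub REST "list commits" response shape.
commits = [
    {"commit": {"message": "Add logging functionality"}},
    {"commit": {"message": "Update HTML response generation"}},
]

# One bullet per commit message, matching `- ${c.commit.message}` in the workflow.
notes = "\n".join(f"- {c['commit']['message']}" for c in commits)
print(notes)
```

Note that the workflow lists the 100 most recent commits on the default branch, not just those since the previous tag, so the generated body can include commits already covered by earlier releases.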

Dockerfile

Lines changed: 1 addition & 0 deletions

```diff
@@ -65,4 +65,5 @@ HEALTHCHECK --interval=30s --timeout=5s --start-period=30s --retries=3 CMD wget
 
 VOLUME ["/data"]
 
+ENV LOG_LEVEL=INFO
 CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```
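The `LOG_LEVEL` default set here is consumed by `app.py` and `text2img.py` through `getattr(logging, LOG_LEVEL, logging.INFO)`. A minimal sketch of that lookup, including the fallback for unrecognized values (the helper name `resolve_log_level` is illustrative, not part of the repo):

```python
import logging
import os
from typing import Optional

def resolve_log_level(raw: Optional[str]) -> int:
    # Same lookup as in app.py / text2img.py: unknown names fall back to INFO.
    name = (raw or "INFO").upper()
    return getattr(logging, name, logging.INFO)

# Honors the Dockerfile default unless LOG_LEVEL is overridden at runtime,
# e.g. `docker run -e LOG_LEVEL=DEBUG ...`.
level = resolve_log_level(os.getenv("LOG_LEVEL", "INFO"))
logging.basicConfig(level=level, format="%(asctime)s %(levelname)s %(name)s %(message)s")
```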

README.md

Lines changed: 19 additions & 0 deletions

````diff
@@ -138,4 +138,23 @@ By enabling the safety checker, you add an extra layer of content moderation to
 - Caching optimized by installing dependencies before copying source
 - `.dockerignore` recommended for smaller builds
 
+## GitHub Actions and Releases
+
+This repo ships with GitHub Actions to:
+
+- Build and push Docker images to GHCR on pushes to `main` (`.github/workflows/docker-publish.yml`).
+  - CPU image tags follow branch/tag; `latest` on default branch, plus a `gpu` tag for CUDA wheels.
+- Create GitHub Releases when pushing tags like `v1.0.0` (`.github/workflows/create-release.yml`).
+  - Uses GitHub’s automatic release notes.
+- Update release notes content on publish (`.github/workflows/release-notes.yml`).
+
+After a `main` push, pull with:
+
+```bash
+docker pull ghcr.io/<owner>/<repo>:latest
+docker pull ghcr.io/<owner>/<repo>:gpu
+```
+
+Make sure your repo visibility allows GHCR pulls, or authenticate: `docker login ghcr.io`.
+
 ---
````

app.py

Lines changed: 29 additions & 7 deletions

```diff
@@ -1,8 +1,8 @@
+# flake8: noqa
 import os
-import io
-import base64
 import threading
 import datetime
+import logging
 from typing import List, Optional
 
 import torch
@@ -13,6 +13,13 @@
 from diffusers import DiffusionPipeline
 
 
+LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO").upper()
+logging.basicConfig(
+    level=getattr(logging, LOG_LEVEL, logging.INFO),
+    format="%(asctime)s %(levelname)s %(name)s %(message)s",
+)
+logger = logging.getLogger("lcm_app")
+
 app = FastAPI(title="LCM Text2Image")
 
 
@@ -47,6 +54,8 @@ def _create_pipeline() -> DiffusionPipeline:
     use_cuda = torch.cuda.is_available()
     torch_dtype = torch.float16 if use_cuda else torch.float32
 
+    logger.info("Initializing diffusion pipeline: model_id=%s, device=%s, dtype=%s, safety=%s",
+                MODEL_ID, "cuda" if use_cuda else "cpu", str(torch_dtype), "enabled" if SAFETY_CHECKER != "disabled" else "disabled")
     pipe = DiffusionPipeline.from_pretrained(
         MODEL_ID,
         custom_pipeline=LCM_CUSTOM_PIPELINE,
@@ -58,6 +67,7 @@ def _create_pipeline() -> DiffusionPipeline:
 
     device = "cuda" if use_cuda else "cpu"
     pipe = pipe.to(device)
+    logger.info("Pipeline ready on device=%s", device)
     return pipe
 
 
@@ -87,6 +97,7 @@ def _generate_filename(prompt: str, timestamp: str, index: int) -> str:
 def _on_startup():
     # Optionally preload model to avoid first-request latency
     if PRELOAD_MODEL:
+        logger.info("Preloading model on startup")
         _get_pipeline()
 
 
@@ -157,17 +168,17 @@ def healthz():
       </div>
       <div>
         <label>Steps</label>
-        <input type=\"number\" name=\"num_inference_steps\" min=\"1\" max=\"20\" value=\""" + str(DEFAULT_STEPS) + """ />
+        <input type=\"number\" name=\"num_inference_steps\" min=\"1\" max=\"20\" value=\"{DEFAULT_STEPS}\" />
       </div>
       <div>
         <label>Guidance</label>
-        <input type=\"number\" step=\"0.5\" name=\"guidance_scale\" value=\""" + str(DEFAULT_GUIDANCE) + """ />
+        <input type=\"number\" step=\"0.5\" name=\"guidance_scale\" value=\"{DEFAULT_GUIDANCE}\" />
       </div>
     </div>
     <div class=\"row\">
       <div>
         <label>LCM Origin Steps</label>
-        <input type=\"number\" name=\"lcm_origin_steps\" min=\"1\" max=\"20\" value=\""" + str(DEFAULT_LCM_ORIGIN_STEPS) + """ />
+        <input type=\"number\" name=\"lcm_origin_steps\" min=\"1\" max=\"20\" value=\"{DEFAULT_LCM_ORIGIN_STEPS}\" />
       </div>
     </div>
     <button id=\"go\" type=\"submit\">Generate</button>
@@ -181,7 +192,13 @@ def healthz():
 
 @app.get("/", response_class=HTMLResponse)
 def index():
-    return HTMLResponse(INDEX_HTML)
+    html = (
+        INDEX_HTML
+        .replace("{DEFAULT_STEPS}", str(DEFAULT_STEPS))
+        .replace("{DEFAULT_GUIDANCE}", str(DEFAULT_GUIDANCE))
+        .replace("{DEFAULT_LCM_ORIGIN_STEPS}", str(DEFAULT_LCM_ORIGIN_STEPS))
+    )
+    return HTMLResponse(html)
 
 
 @app.post("/api/generate")
@@ -199,16 +216,20 @@ def api_generate(
     guidance = guidance_scale or DEFAULT_GUIDANCE
     origin_steps = lcm_origin_steps or DEFAULT_LCM_ORIGIN_STEPS
 
+    logger.info("Generation request: prompt=%r, num_images=%s, steps=%s, guidance=%s, origin_steps=%s",
+                prompt[:80], num_images, steps, guidance, origin_steps)
     try:
         pipe = _get_pipeline()
         images = pipe(
             prompt=prompt,
             num_inference_steps=steps,
             guidance_scale=guidance,
             lcm_origin_steps=origin_steps,
+            num_images_per_prompt=max(1, num_images),
             output_type="pil",
         ).images
     except Exception as e:
+        logger.exception("Generation failed: %s", e)
         raise HTTPException(status_code=500, detail=f"Generation failed: {e}")
 
     ts = datetime.datetime.now().strftime("%m-%d-%H-%M-%S")
@@ -221,13 +242,14 @@ def api_generate(
         try:
             _save_image(image, filepath, metadata)
         except Exception as e:
+            logger.exception("Failed to save image: %s", e)
             raise HTTPException(status_code=500, detail=f"Failed to save image: {e}")
         files.append({
             "name": filename,
             "path": filepath,
             "url": f"/outputs/(unknown)",
         })
-
+    logger.info("Generated %d images", len(files))
     return JSONResponse({"files": files})
```
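The rewritten `index()` handler substitutes defaults into the page via chained `str.replace` calls on `{...}` placeholder tokens. That substitution can be exercised standalone; a minimal sketch with an abbreviated template and illustrative default values (not the app's actual ones):

```python
# Simplified stand-in for INDEX_HTML; the real template and defaults live in app.py.
INDEX_HTML = (
    '<input name="num_inference_steps" value="{DEFAULT_STEPS}" />'
    '<input name="guidance_scale" value="{DEFAULT_GUIDANCE}" />'
    '<input name="lcm_origin_steps" value="{DEFAULT_LCM_ORIGIN_STEPS}" />'
)
DEFAULT_STEPS = 4          # illustrative
DEFAULT_GUIDANCE = 8.0     # illustrative
DEFAULT_LCM_ORIGIN_STEPS = 50  # illustrative

# Same chained replacement as in index(); the token names don't overlap,
# so the order of the .replace calls doesn't matter here.
html = (
    INDEX_HTML
    .replace("{DEFAULT_STEPS}", str(DEFAULT_STEPS))
    .replace("{DEFAULT_GUIDANCE}", str(DEFAULT_GUIDANCE))
    .replace("{DEFAULT_LCM_ORIGIN_STEPS}", str(DEFAULT_LCM_ORIGIN_STEPS))
)
```

Performing the substitution per-request keeps the module-level template constant while still reflecting env-configured defaults.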
text2img.py

Lines changed: 13 additions & 4 deletions

```diff
@@ -1,5 +1,8 @@
+# flake8: noqa
 import os
 import datetime
+import logging
+
 import torch
 from tqdm import tqdm
 from PIL import PngImagePlugin
@@ -29,11 +32,17 @@ def generate_filename(prompt, timestamp, index):
 default_lcm_origin_steps = int(os.getenv("LCM_ORIGIN_STEPS", "8"))
 safety_checker = os.getenv("SAFETY_CHECKER", "disabled").lower()  # "disabled" or "default"
 
+LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO").upper()
+logging.basicConfig(level=getattr(logging, LOG_LEVEL, logging.INFO),
+                    format="%(asctime)s %(levelname)s %(name)s %(message)s")
+logger = logging.getLogger("lcm_cli")
+
 os.makedirs(save_path, exist_ok=True)
 
 args = {} if safety_checker != "disabled" else {"safety_checker": None}
 
 # Initialize the pipeline
+logger.info("Initializing diffusion pipeline: model_id=%s", model_id)
 pipe = DiffusionPipeline.from_pretrained(
     model_id,
     custom_pipeline=lcm_custom_pipeline,
@@ -45,7 +54,7 @@ def generate_filename(prompt, timestamp, index):
 # Check if CUDA (GPU support) is available, else use CPU
 device = "cuda" if torch.cuda.is_available() else "cpu"
 torch_dtype = torch.float16 if device == "cuda" else None
-print(f"Using device: {device}")
+logger.info("Using device: %s; dtype: %s", device, str(torch_dtype))
 
 # Set the device and dtype for the pipeline
 pipe.to(torch_device=device, torch_dtype=torch_dtype)
@@ -77,7 +86,7 @@ def generate_filename(prompt, timestamp, index):
             output_type="pil",
         ).images
     except Exception as e:
-        print(f"Error generating image: {e}")
+        logger.exception("Error generating image: %s", e)
         continue
 
     metadata = {"prompt": prompt, "num_steps": num_inference_steps}
@@ -88,6 +97,6 @@ def generate_filename(prompt, timestamp, index):
     output_path = os.path.join(save_path, filename)
     save_image(image, output_path, metadata)
 
-print(f"Images saved to {save_path}")
+logger.info("Images saved to %s", save_path)
 
-print("Image generation completed.")
+logger.info("Image generation completed.")
```
