Once that's done, we can give the project another test to make sure everything is still working as expected.
If Nitric isn't still running, you can start it again with:
```bash
nitric start
```
First, we'll make sure our new model download code is working by running:
```bash
curl -X POST http://localhost:4001/download-audio-model
```
Then we can test the audio generation again with:
```bash
curl -X POST http://localhost:4001/audio/test -d "Okay this is cool, but let's wait and see what comes next"
```
You should get a similar result to before. The main difference is that the model will be downloaded and cached in a Nitric bucket before the audio generation starts.
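
For reference, the caching pattern looks roughly like this. This is a simplified sketch rather than the exact code from earlier in the guide: the bucket name, file key, and `fetch_model_weights` helper are placeholders, and the bucket calls reflect Nitric's Python SDK as we'd expect it to be used here.

```python
import urllib.request

from nitric.resources import api, bucket
from nitric.application import Nitric

main_api = api("main")

# Bucket used to cache the model weights between runs (the name is illustrative)
models = bucket("models").allow("read", "write")

MODEL_KEY = "tts-model.bin"  # placeholder key for the cached weights


def fetch_model_weights() -> bytes:
    # Placeholder download; the real guide fetches its specific model files
    with urllib.request.urlopen("https://example.com/tts-model.bin") as resp:
        return resp.read()


@main_api.post("/download-audio-model")
async def download_model(ctx):
    # Skip the download when the weights are already cached in the bucket
    cached = [f.key for f in await models.files()]
    if MODEL_KEY in cached:
        ctx.res.body = "model already cached"
        return ctx

    await models.file(MODEL_KEY).write(fetch_model_weights())
    ctx.res.body = "model downloaded and cached"
    return ctx


Nitric.run()
```

The point of the check-before-download step is that subsequent calls (and the audio generation itself) can read the weights straight from the bucket instead of pulling them from the model hub again.
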
## Defining our service Docker images
So that the AI workload can use GPUs in the cloud, we'll need to make sure it ships with the drivers and libraries to support them. We can do this by specifying a custom Dockerfile for our batch service under `docker/torch.dockerfile`.
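
As a rough preview of what that Dockerfile needs to provide (the base image tag and entry point below are placeholders, not the final file), the key idea is to start from a CUDA-enabled PyTorch image so the GPU drivers and libraries come pre-installed:

```dockerfile
# Illustrative sketch only; the base image tag and entry point are placeholders.
# A CUDA-enabled PyTorch base image bundles the GPU libraries the workload needs.
FROM pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime

WORKDIR /app

# Install the service's Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code
COPY . .

# Start the batch service (path is a placeholder)
CMD ["python", "batches/podcast.py"]
```
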