This repository was archived by the owner on May 20, 2025. It is now read-only.

Commit 4f6eaac

Add additional instructions for re-testing.
1 parent 78c5ab6 commit 4f6eaac

File tree

1 file changed (+16, -0)


docs/guides/python/ai-podcast-part-1.mdx

Lines changed: 16 additions & 0 deletions
@@ -456,10 +456,26 @@ Nitric.run()

Once that's done, we can give the project another test, just to make sure everything is still working as expected.

If Nitric isn't still running, you can start it again with:
```bash
nitric start
```

First, we'll make sure our new model download code is working by running:
```bash
curl -X POST http://localhost:4001/download-audio-model
```
Then we can test the audio generation again with:
```bash
curl -X POST http://localhost:4001/audio/test -d "Okay this is cool, but let's wait and see what comes next"
```
You should get a similar result to before. The main difference is that the model will be downloaded and cached in a Nitric bucket before the audio generation starts.
## Defining our service docker images

So that the AI workload can use GPUs in the cloud, we'll need to make sure it ships with the drivers and libraries to support them. We can do this by specifying a custom Dockerfile for our batch service under `docker/torch.dockerfile`.
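A GPU-ready image of this kind typically starts from an NVIDIA CUDA base and installs the Python dependencies on top. As a rough sketch only (the base image tag and file layout are assumptions, not the guide's actual `torch.dockerfile`):

```dockerfile
# Sketch of a GPU-capable image — base tag and paths are assumptions
FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04

# Python toolchain for the batch service
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3 python3-pip && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt

COPY . .
```

The important part is the CUDA base image: it supplies the driver-facing libraries PyTorch needs in order to see the GPU at runtime.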
