This repository was archived by the owner on May 20, 2025. It is now read-only.

Commit b3b5f54

spellcheck

1 parent 13e6cb7

2 files changed: 8 additions & 7 deletions

dictionary.txt

Lines changed: 1 addition & 0 deletions

```diff
@@ -12,6 +12,7 @@ CDKs
 CORS
 ECR
 GCP
+VRAM
 GCR
 HPC
 IAM
```

docs/guides/python/podcast-transcription.mdx

Lines changed: 7 additions & 7 deletions

````diff
@@ -172,11 +172,11 @@ We'll then create our Job and set the required memory to `12000`. This is a safe
 
 | Size   | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
 | ------ | ---------- | ------------------ | ------------------ | ------------- | -------------- |
-| tiny   | 39 M       | tiny.en            | tiny               | ~1 GB         | ~32x           |
-| base   | 74 M       | base.en            | base               | ~1 GB         | ~16x           |
-| small  | 244 M      | small.en           | small              | ~2 GB         | ~6x            |
-| medium | 769 M      | medium.en          | medium             | ~5 GB         | ~2x            |
-| large  | 1550 M     | N/A                | large              | ~10 GB        | 1x             |
+| tiny   | 39 M       | tiny.en            | tiny               | `~1 GB`       | `~32x`         |
+| base   | 74 M       | base.en            | base               | `~1 GB`       | `~16x`         |
+| small  | 244 M      | small.en           | small              | `~2 GB`       | `~6x`          |
+| medium | 769 M      | medium.en          | medium             | `~5 GB`       | `~2x`          |
+| large  | 1550 M     | N/A                | large              | `~10 GB`      | `1x`           |
 
 ```python title:src/job/transcribe.py
 # !collapse(1:7) collapsed
@@ -222,7 +222,7 @@ async def transcribe_podcast(ctx: JobContext):
 Nitric.run()
 ```
 
-We'll then load our model and transcribe the audio. This is where we can choose the model based on balancing speed, size, and accuracy. We can turn off FP16 with `fp16=False` which will use FP32 instead. This will depend on what is supported on your CPU when testing locally, however, FP16 and FP32 are supported on Lambda.
+We'll then load our model and transcribe the audio. This is where we can choose the model based on balancing speed, size, and accuracy. We can turn off `FP16` with `fp16=False`, which will use `FP32` instead. This will depend on what is supported by your CPU when testing locally; however, both `FP16` and `FP32` are supported on Lambda.
 
 ```python title:src/job/transcribe.py
 # !collapse(1:7) collapsed
@@ -492,6 +492,6 @@ You can destroy the project once it is finished using `nitric down`.
 
 ## Summary
 
-In this guide, we've created a podcast transcription service using OpenAI Whisper and Nitric's Python SDK. We showed how to use batch jobs to run long-running workloads and connect these jobs to buckets to store generated transcripts. We also demonstrated how to expose buckets using simple CRUD routes on a cloud API. Finally, we were able to create dockerfiles with GPU support to optimise the generation speeds on the cloud.
+In this guide, we've created a podcast transcription service using OpenAI Whisper and Nitric's Python SDK. We showed how to use batch jobs to run long-running workloads and connect these jobs to buckets to store generated transcripts. We also demonstrated how to expose buckets using simple CRUD routes on a cloud API. Finally, we were able to create Dockerfiles with GPU support to optimize generation speeds in the cloud.
 
 For more information and advanced usage, refer to the [Nitric documentation](/docs).
````
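For context on the `fp16=False` flag touched by the second hunk: it is a parameter of Whisper's `transcribe()` call. Below is a minimal sketch of that step on its own, outside the guide's Nitric job wrapper; the `medium` model choice and the `podcast_episode.mp3` path are illustrative placeholders, not taken from the guide's source.

```python
import whisper  # pip install openai-whisper

# Load one of the model sizes from the table above; "medium" is an
# illustrative pick balancing speed, size, and accuracy.
model = whisper.load_model("medium")

# fp16=False falls back to FP32 inference, the safe choice on CPUs without
# FP16 support when testing locally; both precisions work on Lambda.
result = model.transcribe("podcast_episode.mp3", fp16=False)

print(result["text"])
```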
