3 changes: 3 additions & 0 deletions dictionary.txt
@@ -228,9 +228,12 @@ EC2
preflight
nav
MacOS
quantized
[0-9]+px
^.+[-:_]\w+$
[a-z]+([A-Z0-9]|[A-Z0-9]\w+)
([A-Z][a-z0-9]+)((\d)|([A-Z0-9][a-z0-9]+))*([A-Z])?
([a-zA-Z0-9]+\.[a-zA-Z0-9]+)+
^([0-9]+)
^\d+B$
\..+️
7 changes: 5 additions & 2 deletions docs/guides/python/ai-podcast-part-1.mdx
@@ -382,6 +382,9 @@ async def do_download_audio_model(ctx: MessageContext):
@main_api.post("/download-model")
async def download_audio(ctx: HttpContext):
    model_id = ctx.req.query.get("model", audio_model_id)

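    # the query value may be parsed as a list of strings; use the first entry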
    if isinstance(model_id, list):
        model_id = model_id[0]
    # asynchronously download the model
    await download_audio_model.publish({ "model_id": model_id })

@@ -662,7 +665,7 @@ nitric stack new test aws
This will generate a Nitric stack file called `test`, which defines how we want to deploy a stack to AWS. We can update this stack file with settings to configure our batch service and the AWS compute environment it will run in.

```yaml title: nitric.test.yaml
-provider: nitric/aws@1.14.2
+provider: nitric/aws@1.15.4
# The target aws region to deploy to
# See available regions:
# https://docs.aws.amazon.com/general/latest/gr/lambda-service.html
@@ -747,4 +750,4 @@ You can see the status of your batch jobs in the [AWS Batch console](https://con

## Next steps

-In part two of this guide we'll look at adding an LLM agent to our project to automatically generate scripts for our podcasts from small prompts.
+In [part two](./ai-podcast-part-2) of this guide we'll look at adding an LLM agent to our project to automatically generate scripts for our podcasts from small prompts.