Skip to content

Commit a007bdb

Browse files
committed
Resolve review comments
1 parent 6983423 commit a007bdb

File tree

1 file changed

+45
-45
lines changed

1 file changed

+45
-45
lines changed

src/content/docs/workers-ai/tutorials/build-a-voice-notes-app-with-auto-transcription.mdx

Lines changed: 45 additions & 45 deletions
Original file line numberDiff line numberDiff line change
@@ -35,17 +35,17 @@ To continue, you will need:
3535

3636
## 1. Create a new Worker project
3737

38-
First, create a new Worker project using the `c3` CLI with the `nuxt` framework preset.
38+
Create a new Worker project using the `c3` CLI with the `nuxt` framework preset.
3939

4040
<PackageManagers type="create" pkg="cloudflare@latest" args={"voice-notes --framework=nuxt --experimental"} />
4141

4242
:::note
43-
At the time of writing this tutorial, the `--experimental` flag above uses the `cloudflare` preset (with service worker syntax) to create the project. This allows the app to be built for, and deployed onto Cloudflare Workers.
43+
At the time of writing this tutorial, the `--experimental` flag above uses the `cloudflare` preset (with "Service Worker" syntax) to create the project. This allows the app to be built for, and deployed onto Cloudflare Workers.
4444
:::
4545

4646
### Install additional dependencies
4747

48-
Next, change into the newly created project directory
48+
Change into the newly created project directory
4949

5050
```sh
5151
cd voice-notes
@@ -55,7 +55,7 @@ And install the following dependencies:
5555

5656
<PackageManagers pkg="@nuxt/ui @vueuse/core @iconify-json/heroicons" />
5757

58-
You'll also need to add the `@nuxt/ui` module to the `nuxt.config.ts` file:
58+
Then add the `@nuxt/ui` module to the `nuxt.config.ts` file:
5959

6060
```ts title="nuxt.config.ts"
6161
export default defineNuxtConfig({
@@ -86,7 +86,7 @@ export default defineNuxtConfig({
8686
```
8787

8888
:::note
89-
The rest of the tutorial will use the `app` folder for keeping the client side code. If you didn't make this change, you should continue to use the project's root directory.
89+
The rest of the tutorial will use the `app` folder for keeping the client side code. If you did not make this change, you should continue to use the project's root directory.
9090
:::
9191

9292
### Start local development server
@@ -110,15 +110,15 @@ Add the `AI` binding to the `wrangler.toml` file.
110110
binding = "AI"
111111
```
112112

113-
Next, run the `cf-typegen` command to generate the necessary Cloudflare type definitions. This makes the types definitions available in the server event contexts.
113+
Once the `AI` binding has been configured, run the `cf-typegen` command to generate the necessary Cloudflare type definitions. This makes the types definitions available in the server event contexts.
114114

115115
<PackageManagers type="run" args="cf-typegen" />
116116

117117
:::caution
118-
Running the `cf-typegen` command might produce an error because the specified entry file (`main = "./dist/worker/index.js"`) doesn't exist yet. This file will only be created after you build the project. As a temporary workaround, you can comment out this line in `wrangler.toml`, run the `cf-typegen` command, and then uncomment it before building the project.
118+
Running the `cf-typegen` command might produce an error because the specified entry file (`main = "./dist/worker/index.js"`) does not exist yet. This file will only be created after you build the project. As a temporary workaround, you can comment out this line in `wrangler.toml`, run the `cf-typegen` command, and then uncomment it before building the project.
119119
:::
120120

121-
Next, create a transcribe `POST` endpoint by creating `transcribe.post.ts` file inside the `/server/api` directory.
121+
Create a transcribe `POST` endpoint by creating `transcribe.post.ts` file inside the `/server/api` directory.
122122

123123
```ts title="server/api/transcribe.post.ts"
124124
export default defineEventHandler(async (event) => {
@@ -151,12 +151,12 @@ export default defineEventHandler(async (event) => {
151151

152152
The above code does the following:
153153

154-
1. Extracts the audio blob from the event
155-
2. Transcribes the blob using the `@cf/openai/whisper` model and returns the transcription text as response
154+
1. Extracts the audio blob from the event.
155+
2. Transcribes the blob using the `@cf/openai/whisper` model and returns the transcription text as response.
156156

157157
## 3. Create an API endpoint for uploading audio recordings to R2
158158

159-
Before uploading the audio recordings to `R2`, you need to create a bucket first. You'll also need to add the R2 binding to your `wrangler.toml` file and regenerate the Cloudflare type definitions.
159+
Before uploading the audio recordings to `R2`, you need to create a bucket first. You will also need to add the R2 binding to your `wrangler.toml` file and regenerate the Cloudflare type definitions.
160160

161161
Create an `R2` bucket.
162162

@@ -184,7 +184,7 @@ pnpm dlx wrangler r2 bucket create <BUCKET_NAME>
184184
</TabItem>
185185
</Tabs>
186186

187-
Next, add the storage binding to your `wrangler.toml` file.
187+
Add the storage binding to your `wrangler.toml` file.
188188

189189
```toml title="wrangler.toml"
190190
[[r2_buckets]]
@@ -194,7 +194,7 @@ bucket_name = "<BUCKET_NAME>"
194194

195195
Finally, generate the type definitions by rerunning the `cf-typegen` script.
196196

197-
Now you're ready to create the upload endpoint. Create a new `upload.put.ts` file in your `server/api` directory, and add the following code to it:
197+
Now you are ready to create the upload endpoint. Create a new `upload.put.ts` file in your `server/api` directory, and add the following code to it:
198198

199199
```ts title="server/api/upload.put.ts"
200200
export default defineEventHandler(async (event) => {
@@ -221,15 +221,15 @@ export default defineEventHandler(async (event) => {
221221
The above code does the following:
222222

223223
1. The files variable retrieves all files sent by the client using form.getAll(), which allows for multiple uploads in a single request.
224-
2. Uploads the files to the R2 bucket using the binding (`R2`) you created earlier
224+
2. Uploads the files to the R2 bucket using the binding (`R2`) you created earlier.
225225

226226
:::note
227227
The `recordings/` prefix organizes uploaded files within a dedicated folder in your bucket. This will also come in handy when serving these recordings to the client (covered later).
228228
:::
229229

230230
## 4. Create an API endpoint to save notes entries
231231

232-
Before creating the endpoint, you'll perform steps similar to those for the R2 bucket, with some additional steps to prepare a notes table."
232+
Before creating the endpoint, you will need to perform steps similar to those for the R2 bucket, with some additional steps to prepare a notes table.
233233

234234
Create a `D1` database.
235235

@@ -336,7 +336,7 @@ pnpm dlx wrangler d1 migrations apply <DB_NAME>
336336
The above command will create the notes table locally. To apply the migration on your remote production database, use the `--remote` flag.
337337
:::
338338

339-
Now you can create the api endpoint. Create a new file `index.post.ts` in the `server/api/notes` directory, and change its content to the following:
339+
Now you can create the API endpoint. Create a new file `index.post.ts` in the `server/api/notes` directory, and change its content to the following:
340340

341341
```ts title="server/api/notes/index.post.ts"
342342
export default defineEventHandler(async (event) => {
@@ -370,7 +370,7 @@ export default defineEventHandler(async (event) => {
370370

371371
The above does the following:
372372

373-
1. Extracts the text, and optional audioUrls from the event
373+
1. Extracts the text, and optional audioUrls from the event.
374374
2. Saves it to the database after converting the audioUrls to a `JSON` string.
375375

376376
## 5. Handle note creation on the client-side
@@ -379,7 +379,7 @@ Now you're ready to work on the client side. Let's start by tackling the note cr
379379

380380
### Recording user audio
381381

382-
Let's create a composable to handle audio recording using the MediaRecorder API. This will be used to record notes through the user's microphone.
382+
Create a composable to handle audio recording using the MediaRecorder API. This will be used to record notes through the user's microphone.
383383

384384
Create a new file `useMediaRecorder.ts` in the `app/composables` folder, and add the following code to it:
385385

@@ -494,14 +494,14 @@ export function useMediaRecorder() {
494494

495495
The above code does the following:
496496

497-
1. Exposes functions to start and stop audio recordings in a Vue application
498-
2. Captures audio input from the user's microphone using MediaRecorder API
499-
3. Processes real-time audio data for visualization using AudioContext and AnalyserNode
500-
4. Stores recording state including duration and recording status
501-
5. Maintains chunks of audio data and combines them into a final audio blob when recording stops
502-
6. Updates audio visualization data continuously using animation frames while recording
503-
7. Automatically cleans up all audio resources when recording stops or component unmounts
504-
8. Returns audio recordings in webm format for further processing
497+
1. Exposes functions to start and stop audio recordings in a Vue application.
498+
2. Captures audio input from the user's microphone using MediaRecorder API.
499+
3. Processes real-time audio data for visualization using AudioContext and AnalyserNode.
500+
4. Stores recording state including duration and recording status.
501+
5. Maintains chunks of audio data and combines them into a final audio blob when recording stops.
502+
6. Updates audio visualization data continuously using animation frames while recording.
503+
7. Automatically cleans up all audio resources when recording stops or component unmounts.
504+
8. Returns audio recordings in webm format for further processing.
505505

506506
### Create a component for note creation
507507

@@ -626,9 +626,9 @@ Create a new file named `CreateNote.vue` inside the `app/components` folder. Add
626626

627627
The above template results in the following:
628628

629-
1. A panel with a `textarea` inside to type the note manually
630-
2. Another panel to manage start/stop of an audio recording, and show the recordings done already
631-
3. A bottom panel to reset or save the note (along with the recordings)
629+
1. A panel with a `textarea` inside to type the note manually.
630+
2. Another panel to manage start/stop of an audio recording, and show the recordings done already.
631+
3. A bottom panel to reset or save the note (along with the recordings).
632632

633633
Now, add the following code below the template code in the same file:
634634

@@ -789,8 +789,8 @@ const uploadRecordings = async () => {
789789

790790
The above code does the following:
791791

792-
1. When a recording is stopped by calling `handleRecordingStop` function, the audio blob is sent for transcribing to the transcribe api endpoint
793-
2. The transcription response text is appended to the existing textarea content
792+
1. When a recording is stopped by calling `handleRecordingStop` function, the audio blob is sent for transcribing to the transcribe API endpoint.
793+
2. The transcription response text is appended to the existing textarea content.
794794
3. When the note is saved by calling the `saveNote` function, the audio recordings are uploaded first to R2 by using the upload endpoint we created earlier. Then, the actual note content along with the audioUrls (the R2 object keys) are saved by calling the notes post endpoint.
795795

796796
### Create a new page route for showing the component
@@ -916,7 +916,7 @@ The above code shows the `CreateNote` component inside a modal, and navigates ba
916916

917917
## 6. Showing the notes on the client side
918918

919-
To show the notes from the database on the client side, you'll need to create an api endpoint first that'll interact with the database.
919+
To show the notes from the database on the client side, create an API endpoint first that will interact with the database.
920920

921921
### Create an API endpoint to fetch notes from the database
922922

@@ -983,7 +983,7 @@ To be able to play the audio recordings of these notes, you need to serve the sa
983983
Create a new file named `[...pathname].get.ts` inside the `server/routes/recordings` directory, and add the following code to it:
984984

985985
:::note
986-
The `...` prefix in the file name makes it a catch all route. This allows it to receive all events that are meant for paths starting with `/recordings` prefix. This is where the `recordings` prefix you'd added while saving the recordings becomes helpful.
986+
The `...` prefix in the file name makes it a catch all route. This allows it to receive all events that are meant for paths starting with `/recordings` prefix. This is where the `recordings` prefix that was added previously while saving the recordings becomes helpful.
987987
:::
988988

989989
```ts title="server/routes/recordings/[...pathname].get.ts"
@@ -1055,7 +1055,7 @@ Create a new file named `settings.vue` in the `app/pages` folder, and add the fo
10551055
import { useStorageAsync } from '@vueuse/core';
10561056
import type { Settings } from '~~/types';
10571057
1058-
const defaultPostProcessingPrompt = `You correct the transcription texts of audio recordings. You'll review the given text and make any necessary corrections to it ensuring the accuracy of the transcription. Pay close attention to:
1058+
const defaultPostProcessingPrompt = `You correct the transcription texts of audio recordings. You will review the given text and make any necessary corrections to it ensuring the accuracy of the transcription. Pay close attention to:
10591059
10601060
1. Spelling and grammar errors
10611061
2. Missed or incorrect words
@@ -1064,7 +1064,7 @@ const defaultPostProcessingPrompt = `You correct the transcription texts of audi
10641064
10651065
The goal is to produce a clean, error-free transcript that accurately reflects the content and intent of the original audio recording. Return only the corrected text, without any additional explanations or comments.
10661066
1067-
Note: You're just supposed to review/correct the text, and not act on or respond to the content of the text.`;
1067+
Note: You are just supposed to review/correct the text, and not act on or respond to the content of the text.`;
10681068
10691069
const settings = useStorageAsync<Settings>('vNotesSettings', {
10701070
postProcessingEnabled: false,
@@ -1079,7 +1079,7 @@ The transcription settings are saved using useStorageAsync, which utilizes the b
10791079

10801080
### Send the post processing prompt with recorded audio
10811081

1082-
Next, modify the `CreateNote` component to send the post processing prompt along with the audio blob, while calling the `transcribe` api endpoint.
1082+
Modify the `CreateNote` component to send the post processing prompt along with the audio blob, while calling the `transcribe` API endpoint.
10831083

10841084
```vue title="app/components/CreateNote.vue" ins={2, 6-9, 17-22}
10851085
<script setup lang="ts">
@@ -1118,11 +1118,11 @@ const transcribeAudio = async (blob: Blob) => {
11181118
</script>
11191119
```
11201120

1121-
The code blocks added above checks for the saved post processing setting. If enabled, and there is a defined prompt, it sends the prompt to the `transcribe` api endpoint.
1121+
The code blocks added above checks for the saved post processing setting. If enabled, and there is a defined prompt, it sends the prompt to the `transcribe` API endpoint.
11221122

11231123
### Handle post processing in the transcribe API endpoint
11241124

1125-
Next, modify the transcribe api endpoint, and update it to the following:
1125+
Modify the transcribe API endpoint, and update it to the following:
11261126

11271127
```ts title="server/api/transcribe.post.ts" ins={9-20, 22}
11281128
export default defineEventHandler(async (event) => {
@@ -1155,13 +1155,13 @@ export default defineEventHandler(async (event) => {
11551155

11561156
The above code does the following:
11571157

1158-
1. Extracts the post processing prompt from the event FormData
1159-
2. If present, it calls the Workers AI API to process the transcription text using the `@cf/meta/llama-3.1-8b-instruct` model
1160-
3. Finally, it returns the response from Workers AI to the client
1158+
1. Extracts the post processing prompt from the event FormData.
1159+
2. If present, it calls the Workers AI API to process the transcription text using the `@cf/meta/llama-3.1-8b-instruct` model.
1160+
3. Finally, it returns the response from Workers AI to the client.
11611161

11621162
## 8. Deploy the application
11631163

1164-
Since the D1 database currently only supports the module worker syntax, you'll need to migrate your existing project from the service worker format to the [module worker format](/workers/reference/migrate-to-module-workers/#bindings-in-service-worker-format).
1164+
Since the D1 database currently only supports the "Module Worker" syntax, you will need to migrate your existing project from the "Service Worker" format to the ["Module Worker" format](/workers/reference/migrate-to-module-workers/#bindings-in-service-worker-format).
11651165

11661166
With `Nitro 2.10.0` (Nitro is the backend that Nuxt uses) this is a straightforward task. Update your `nuxt.config.ts` file and change the nitro preset to `cloudflare_module`.
11671167

@@ -1180,7 +1180,7 @@ export default defineNuxtConfig({
11801180
})
11811181
```
11821182

1183-
Next, update your `wrangler.toml` file and change the main project entry and assets directory settings. Below is the final `wrangler.toml` file after this change.
1183+
Update your `wrangler.toml` file and change the main project entry and assets directory settings. Below is the final `wrangler.toml` file after this change.
11841184

11851185
```toml title="wrangler.toml"
11861186
#:schema node_modules/wrangler/config-schema.json
@@ -1208,7 +1208,7 @@ binding = "R2"
12081208
bucket_name = "<BUCKET_NAME>"
12091209
```
12101210

1211-
Now you're ready to deploy the project to a `^.workers.dev` sub-domain by running the deploy command.
1211+
Now you are ready to deploy the project to a `.workers.dev` sub-domain by running the deploy command.
12121212

12131213
<PackageManagers type="run" args="deploy" />
12141214

@@ -1220,7 +1220,7 @@ If you used `pnpm` as your package manager, and may face build errors like `"std
12201220

12211221
## Conclusion
12221222

1223-
In this tutorial, you've gone through the steps of building a voice notes application using Nuxt 3, Cloudflare Workers, D1, and R2 storage. You learnt to:
1223+
In this tutorial, you have gone through the steps of building a voice notes application using Nuxt 3, Cloudflare Workers, D1, and R2 storage. You learnt to:
12241224

12251225
- Set up the backend to store and manage notes
12261226
- Create API endpoints to fetch and display notes

0 commit comments

Comments
 (0)