
Commit eff4fbe

match order
1 parent 7192946 commit eff4fbe

1 file changed: +3, -3 lines


src/content/docs/workers-ai/features/batch-api/get-started.mdx

Lines changed: 3 additions & 3 deletions
@@ -112,8 +112,8 @@ After sending your batch request, you will receive a response similar to:
 ```json output
 {
   "status": "queued",
-  "request_id": "000-000-000",
-  "model": "@cf/meta/llama-3.3-70b-instruct-fp8-fast"
+  "model": "@cf/meta/llama-3.3-70b-instruct-fp8-fast",
+  "request_id": "000-000-000"
 }
 ```

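As an aside, here is a minimal TypeScript shape for the queued response shown in the hunk above. The interface name is hypothetical, and only the three fields visible in the JSON example are modeled; any other fields the Batch API may return are left out.

```typescript
// Sketch of the queued-batch response, limited to the three fields visible in
// the JSON example above; other fields the Batch API may return are not modeled.
interface BatchQueueResponse {
  status: string;     // "queued" or "running" while the batch is still processing
  model: string;      // e.g. "@cf/meta/llama-3.3-70b-instruct-fp8-fast"
  request_id: string; // used to poll for the batch result
}
```
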
@@ -125,7 +125,7 @@ After sending your batch request, you will receive a response similar to:
 
 Once your batch request is queued, use the `request_id` to poll for its status. During processing, the API returns a status "queued" or "running" indicating that the request is still in the queue or being processed.
 
-```typescript title=example
+```typescript title=src/index.ts
 // Polling the status of the batch request using the request_id
 const status = env.AI.run("@cf/meta/llama-3.3-70b-instruct-fp8-fast", {
   request_id: "000-000-000",