diff --git a/src/content/docs/workers-ai/function-calling/embedded/api-reference.mdx b/src/content/docs/workers-ai/function-calling/embedded/api-reference.mdx
index 1a9a417124dcde6..00ef13dff952c39 100644
--- a/src/content/docs/workers-ai/function-calling/embedded/api-reference.mdx
+++ b/src/content/docs/workers-ai/function-calling/embedded/api-reference.mdx
@@ -19,16 +19,16 @@ This wrapper method enables you to do embedded function calling. You pass it the
 * `AI Binding`Ai
   * The AI binding, such as `env.AI`.
-* `model`BaseAiTextGenerationModels 
+* `model`BaseAiTextGenerationModels
   * The ID of the model that supports function calling. For example, `@hf/nousresearch/hermes-2-pro-mistral-7b`.
-* `input`Object 
-  * `messages`RoleScopedChatInput\[] 
-  * `tools`AiTextGenerationToolInputWithFunction\[] 
-* `config`Object 
-  * `streamFinalResponse`boolean optional 
-  * `maxRecursiveToolRuns`number optional 
-  * `strictValidation`boolean optional 
-  * `verbose`boolean optional 
+* `input`Object
+  * `messages`RoleScopedChatInput\[]
+  * `tools`AiTextGenerationToolInputWithFunction\[]
+* `config`Object
+  * `streamFinalResponse`boolean optional
+  * `maxRecursiveToolRuns`number optional
+  * `strictValidation`boolean optional
+  * `verbose`boolean optional
   * `trimFunction`boolean optional
     - For the `trimFunction`, you can pass it `autoTrimTools`, which is another helper method we've devised to automatically choose the correct tools (using an LLM) before sending it off for inference. This means that your final inference call will have fewer input tokens.
@@ -40,11 +40,11 @@ This method lets you automatically create tool schemas based on OpenAPI specs, s
 
-* `spec`string 
+* `spec`string
   * The OpenAPI specification in either JSON or YAML format, or a URL to a remote OpenAPI specification.
 * `config`Config optional - Configuration options for the createToolsFromOpenAPISpec function
-  * `overrides`ConfigRule\[] optional 
-  * `matchPatterns`RegExp\[] optional 
+  * `overrides`ConfigRule\[] optional
+  * `matchPatterns`RegExp\[] optional
 * `options` Object optional \{ `verbose` boolean optional \}
diff --git a/src/content/docs/workers-ai/index.mdx b/src/content/docs/workers-ai/index.mdx
index 982ec8fd9060f29..3386835af607f40 100644
--- a/src/content/docs/workers-ai/index.mdx
+++ b/src/content/docs/workers-ai/index.mdx
@@ -15,16 +15,16 @@ import { CardGrid, Description, Feature, LinkTitleCard, Plan, RelatedProduct, Re
 
-Run machine learning models, powered by serverless GPUs, on Cloudflare's global network. 
+Run machine learning models, powered by serverless GPUs, on Cloudflare's global network.
 
 Workers AI allows you to run AI models in a serverless way, without having to worry about scaling, maintaining, or paying for unused infrastructure. You can invoke models running on GPUs on Cloudflare's network from your own code — from [Workers](/workers/), [Pages](/pages/), or anywhere via [the Cloudflare API](/api/resources/ai/methods/run/).
 
-Workers AI gives you access to: 
+Workers AI gives you access to:
 
 - **50+ [open-source models](/workers-ai/models/)**, available as a part of our model catalog
-- Serverless, **pay-for-what-you-use** [pricing model](/workers-ai/platform/pricing/) 
+- Serverless, **pay-for-what-you-use** [pricing model](/workers-ai/platform/pricing/)
 - All as part of a **fully-featured developer platform**, including [AI Gateway](/ai-gateway/), [Vectorize](/vectorize/), [Workers](/workers/) and more...
diff --git a/src/content/docs/workers-ai/tutorials/fine-tune-models-with-autotrain.mdx b/src/content/docs/workers-ai/tutorials/fine-tune-models-with-autotrain.mdx
index f425a8a1433f2b9..91e61e9ab7e8b76 100644
--- a/src/content/docs/workers-ai/tutorials/fine-tune-models-with-autotrain.mdx
+++ b/src/content/docs/workers-ai/tutorials/fine-tune-models-with-autotrain.mdx
@@ -47,7 +47,7 @@ In order to give your AutoTrain ample memory, you will need to choose a
 
 :::note
 
-These GPUs will cost money. A typical AutoTrain session typically costs less than $1 USD. 
+These GPUs will cost money. A typical AutoTrain session costs less than $1 USD.
 
 :::
 
 The notebook contains a few interactive sections that we will need to change.
@@ -82,7 +82,7 @@ We only need to change a few of these fields to ensure things work on Cloudflare
 
 At the time of this writing, changing the quantization field breaks the code generation. You may need to edit the code and put quotes around the value.
 
-Change the line that says `quantization = none` to `quantization = "none"`. 
+Change the line that says `quantization = none` to `quantization = "none"`.
 
 :::
 
 ## 3. Upload your CSV file to the Notebook
diff --git a/src/content/docs/workers-ai/tutorials/image-generation-playground/image-generator-flux-newmodels.mdx b/src/content/docs/workers-ai/tutorials/image-generation-playground/image-generator-flux-newmodels.mdx
index 45601f0543dabf2..98f98faf7291077 100644
--- a/src/content/docs/workers-ai/tutorials/image-generation-playground/image-generator-flux-newmodels.mdx
+++ b/src/content/docs/workers-ai/tutorials/image-generation-playground/image-generator-flux-newmodels.mdx
@@ -20,7 +20,7 @@ next: true
 
 import { Details, DirectoryListing, Stream } from "~/components"
 
-In part 2, Kristian expands upon the existing environment built in part 1, by showing you how to integrate new AI models and introduce new parameters that allow you to customize how images are generated. 
+In part 2, Kristian expands on the environment built in part 1, showing you how to integrate new AI models and introduce new parameters that let you customize how images are generated.