diff --git a/src/content/docs/ai-gateway/providers/workersai.mdx b/src/content/docs/ai-gateway/providers/workersai.mdx
index 79422005b577acd..5d8cc4334f40979 100644
--- a/src/content/docs/ai-gateway/providers/workersai.mdx
+++ b/src/content/docs/ai-gateway/providers/workersai.mdx
@@ -114,6 +114,6 @@ Workers AI supports the following parameters for AI gateways:
 - `id` string
   - Name of your existing [AI Gateway](/ai-gateway/get-started/#create-gateway). Must be in the same account as your Worker.
 - `skipCache` boolean(default: false)
-  - Controls whether the request should [skip the cache](/ai-gateway/configuration/caching/#skip-cache-cf-skip-cache).
+  - Controls whether the request should [skip the cache](/ai-gateway/configuration/caching/#skip-cache-cf-aig-skip-cache).
 - `cacheTtl` number
-  - Controls the [Cache TTL](/ai-gateway/configuration/caching/#cache-ttl-cf-cache-ttl).
+  - Controls the [Cache TTL](/ai-gateway/configuration/caching/#cache-ttl-cf-aig-cache-ttl).
diff --git a/src/content/docs/developer-spotlight/tutorials/creating-a-recommendation-api.mdx b/src/content/docs/developer-spotlight/tutorials/creating-a-recommendation-api.mdx
index 4de7150c02e40b7..43287c5a2e1ddb5 100644
--- a/src/content/docs/developer-spotlight/tutorials/creating-a-recommendation-api.mdx
+++ b/src/content/docs/developer-spotlight/tutorials/creating-a-recommendation-api.mdx
@@ -173,7 +173,7 @@ Let's start implementing step-by-step.
 
 ### Bind Workers AI and Vectorize to your Worker
 
-This API requires the use of Workers AI and Vectorize. To use these resources from a Worker, you will need to first create the resources then [bind](/workers/runtime-apis/bindings/#what-is-a-binding) them to a Worker. First, let's create a Vectorize index with Wrangler using the command `wrangler vectorize create {index_name} --dimensions={number_of_dimensions} --metric={similarity_metric}`. The values for `dimensions` and `metric` depend on the type of [Text Embedding Model](/workers-ai/models/#text-embeddings) you are using for data vectorization (Embedding). For example, if you are using the `bge-large-en-v1.5` model, the command is:
+This API requires the use of Workers AI and Vectorize. To use these resources from a Worker, you will need to first create the resources and then [bind](/workers/runtime-apis/bindings/#what-is-a-binding) them to a Worker. First, let's create a Vectorize index with Wrangler using the command `wrangler vectorize create {index_name} --dimensions={number_of_dimensions} --metric={similarity_metric}`. The values for `dimensions` and `metric` depend on the type of [Text Embedding Model](/workers-ai/models/) you are using for data vectorization (Embedding). For example, if you are using the `bge-large-en-v1.5` model, the command is:
 
 ```sh
 npx wrangler vectorize create stripe-products --dimensions=1024 --metric=cosine
diff --git a/src/content/docs/reference-architecture/diagrams/ai/ai-asset-creation.mdx b/src/content/docs/reference-architecture/diagrams/ai/ai-asset-creation.mdx
index 1223203503a6fcc..198999947776b9f 100644
--- a/src/content/docs/reference-architecture/diagrams/ai/ai-asset-creation.mdx
+++ b/src/content/docs/reference-architecture/diagrams/ai/ai-asset-creation.mdx
@@ -34,13 +34,13 @@ Example uses of such compositions of AI models can be employed to generation vis
 ![Figure 1:Content-based asset generation](~/assets/images/reference-architecture/ai-asset-generation-diagrams/ai-asset-generation.svg "Figure 1: Content-based asset generation")
 
 1. **Client upload**: Send POST request with content to API endpoint.
-2. **Prompt generation**: Generate prompt for later-stage text-to-image model by calling [Workers AI](/workers-ai/) [text generation models](/workers-ai/models/#text-generation) with content as input.
+2. **Prompt generation**: Generate prompt for later-stage text-to-image model by calling [Workers AI](/workers-ai/) [text generation models](/workers-ai/models/) with content as input.
 3. **Safety check**: Check for compliance with safety guidelines by calling [Workers AI](/workers-ai/) [text classification models](/workers-ai/models/#text-classification) with the previously generated prompt as input.
 4. **Image generation**: Generate image by calling [Workers AI](/workers-ai/) [text-to-image models](/workers-ai/models/#text-to-image) previously generated prompt.
 
 ## Related resources
 
 - [Community project: content-based asset creation demo](https://auto-asset.pages.dev/)
-- [Workers AI: Text generation models](/workers-ai/models/#text-generation)
+- [Workers AI: Text generation models](/workers-ai/models/)
 - [Workers AI: Text-to-image models](/workers-ai/models/#text-to-image)
 - [Workers AI: llamaguard-7b-awq](/workers-ai/models/llamaguard-7b-awq/)
diff --git a/src/content/docs/workers-ai/tutorials/how-to-choose-the-right-text-generation-model.mdx b/src/content/docs/workers-ai/tutorials/how-to-choose-the-right-text-generation-model.mdx
index 7754b72e65121d4..f7f9b825b1faa6c 100644
--- a/src/content/docs/workers-ai/tutorials/how-to-choose-the-right-text-generation-model.mdx
+++ b/src/content/docs/workers-ai/tutorials/how-to-choose-the-right-text-generation-model.mdx
@@ -36,7 +36,7 @@ You can [download the Workers AI Text Generation Exploration notebook](/workers-
 
 Models come in different shapes and sizes, and choosing the right one for the task, can cause analysis paralysis.
 
-The good news is that on the [Workers AI Text Generation](/workers-ai/models/#text-generation) interface is always the same, no matter which model you choose.
+The good news is that the [Workers AI Text Generation](/workers-ai/models/) interface is always the same, no matter which model you choose.
 
 In an effort to aid you in your journey of finding the right model, this notebook will help you get to know your options in a speed dating type of scenario.
 
@@ -130,7 +130,7 @@ def speed_date(models, questions):
 
 Who better to tell you about the specific models than themselves?!
 
-The timing here is specific to the entire completion, but remember all Text Generation models on [Workers AI support streaming](/workers-ai/models/#text-generation).
+The timing here is specific to the entire completion, but remember that all Text Generation models on [Workers AI support streaming](/workers-ai/models/).
 
 ```python
 models = [
diff --git a/src/content/partials/workers-ai/openai-compatibility.mdx b/src/content/partials/workers-ai/openai-compatibility.mdx
index 94cc34e116a0f5c..64bc69da90146c0 100644
--- a/src/content/partials/workers-ai/openai-compatibility.mdx
+++ b/src/content/partials/workers-ai/openai-compatibility.mdx
@@ -3,4 +3,4 @@
 
 ---
 
-Workers AI supports OpenAI compatible endpoints for [text generation](/workers-ai/models/#text-generation) (`/v1/chat/completions`) and [text embedding models](/workers-ai/models/#text-embeddings) (`/v1/embeddings`). This allows you to use the same code as you would for your OpenAI commands, but swap in Workers AI easily.
+Workers AI supports OpenAI-compatible endpoints for [text generation](/workers-ai/models/) (`/v1/chat/completions`) and [text embedding models](/workers-ai/models/) (`/v1/embeddings`). This allows you to use the same code as you would for OpenAI, but easily swap in Workers AI.
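
The gateway parameters documented in the `workersai.mdx` hunk above (`id`, `skipCache`, `cacheTtl`) are passed as options on a Workers AI binding call. The sketch below is illustrative only and not part of this diff; it assumes a Workers AI binding named `AI` and an existing gateway named `my-gateway`, and the model name is a placeholder.

```ts
// Illustrative sketch (not part of this diff): routing a Workers AI call
// through AI Gateway using the parameters documented above.
export interface Env {
  AI: Ai; // Workers AI binding configured in the Wrangler config (assumed name)
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    const result = await env.AI.run(
      "@cf/meta/llama-3.1-8b-instruct", // placeholder text generation model
      { prompt: "Explain response caching in one sentence." },
      {
        gateway: {
          id: "my-gateway", // existing AI Gateway in the same account as the Worker
          skipCache: false, // reuse a cached response when one is available
          cacheTtl: 3600,   // keep cached responses for one hour
        },
      },
    );

    return Response.json(result);
  },
};
```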
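For the OpenAI-compatible routes touched in the `openai-compatibility.mdx` hunk, the sketch below shows one way to point the `openai` SDK at Workers AI. It is an illustrative sketch rather than part of this change: the base URL pattern follows the compatibility docs, and the account ID, API token, and model names are placeholders to replace with your own values.

```ts
// Illustrative sketch (not part of this diff): using the openai SDK with
// Workers AI's OpenAI-compatible /v1/chat/completions and /v1/embeddings routes.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.CLOUDFLARE_API_TOKEN, // Cloudflare API token (placeholder)
  baseURL: `https://api.cloudflare.com/client/v4/accounts/${process.env.CLOUDFLARE_ACCOUNT_ID}/ai/v1`,
});

async function main() {
  // Text generation via the chat completions route.
  const chat = await client.chat.completions.create({
    model: "@cf/meta/llama-3.1-8b-instruct", // placeholder model name
    messages: [{ role: "user", content: "What is Workers AI?" }],
  });
  console.log(chat.choices[0].message.content);

  // Text embeddings via the embeddings route.
  const embeddings = await client.embeddings.create({
    model: "@cf/baai/bge-large-en-v1.5", // placeholder embedding model
    input: "I love matcha",
  });
  console.log(embeddings.data[0].embedding.length);
}

main();
```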