src/content/changelog/workers-ai/2025-02-25-json-mode.mdx
Lines changed: 26 additions & 23 deletions
@@ -6,7 +6,7 @@ date: 2025-02-25T15:00:00Z
  import { TypeScriptExample } from "~/components";

- Workers AI now supports structured JSON outputs with [JSON mode](/workers-ai/json-mode/), which allows you to request a structured output response when interacting with AI models.
+ Workers AI now supports structured JSON outputs with [JSON mode](/workers-ai/features/json-mode/), which allows you to request a structured output response when interacting with AI models.

  This makes it much easier to retrieve structured data from your AI models, and avoids the (error-prone!) need to parse large unstructured text responses to extract your data.
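As a rough sketch of the JSON mode feature this changelog entry announces, the request asks the model to conform its output to a JSON schema via a `response_format` field. The exact request shape here is an assumption based on the JSON mode docs, not a verbatim copy of the Workers AI API; `buildJsonModeRequest` is an illustrative helper, not a library function.

```typescript
// Illustrative helper: builds a chat request body that asks for
// schema-constrained JSON output (field names are an assumption
// based on the Workers AI JSON mode documentation).
type JsonSchema = Record<string, unknown>;

function buildJsonModeRequest(prompt: string, schema: JsonSchema) {
  return {
    messages: [{ role: "user" as const, content: prompt }],
    // Requests output conforming to the supplied JSON schema,
    // instead of free-form text that must be parsed after the fact.
    response_format: { type: "json_schema" as const, json_schema: schema },
  };
}

const body = buildJsonModeRequest("List three colors.", {
  type: "object",
  properties: { colors: { type: "array", items: { type: "string" } } },
});
```

The payload would then be passed to a model binding (for example `env.AI.run(model, body)` inside a Worker), and the response parsed with `JSON.parse` rather than ad-hoc text extraction.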
src/content/docs/reference-architecture/diagrams/ai/bigquery-workers-ai.mdx
Lines changed: 9 additions & 10 deletions
@@ -28,7 +28,7 @@ This version of the integration is aimed at workflows that require interaction w
  1. A user makes a request to a [Worker](https://workers.cloudflare.com/) endpoint. (Which can optionally incorporate [Access](/cloudflare-one/policies/access/) in front of it to authenticate users.)
  2. Worker fetches [securely stored](/workers/configuration/secrets/) Google Cloud Platform service account information such as service key and generates a JSON Web Token to issue an authenticated API request to BigQuery.
- 3. Worker receives the data from BigQuery and [transforms it into a format](/workers-ai/tutorials/using-bigquery-with-workers-ai/#6-format-results-from-the-query) that will make it easier to iterate when interacting with Workers AI.
+ 3. Worker receives the data from BigQuery and [transforms it into a format](/workers-ai/guides/tutorials/using-bigquery-with-workers-ai/#6-format-results-from-the-query) that will make it easier to iterate when interacting with Workers AI.
  4. Using its [native integration](/workers-ai/configuration/bindings/) with Workers AI, the Worker forwards the data from BigQuery which is then run against one of Cloudflare's hosted AI models.
  5. The original data retrieved from BigQuery alongside the AI-generated information is returned to the user as a response to the request initiated in step 1.
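Step 2 of the flow above involves building a JWT claim set for a Google Cloud service account. A minimal sketch of the claims, assuming Google's OAuth 2.0 service-account flow (the scope value is illustrative, and the RS256 signing with the service key is omitted):

```typescript
// Sketch of the JWT claim set a Worker would build before signing it
// with the service account's private key. Field values follow Google's
// OAuth 2.0 service-account flow; the scope shown is illustrative.
function buildBigQueryClaims(clientEmail: string, nowSeconds: number) {
  return {
    iss: clientEmail, // the service account's email identifies the issuer
    scope: "https://www.googleapis.com/auth/bigquery.readonly",
    aud: "https://oauth2.googleapis.com/token", // Google's token endpoint
    iat: nowSeconds,
    exp: nowSeconds + 3600, // token valid for one hour
  };
}
```

The signed JWT is then exchanged at the `aud` endpoint for an access token used on the BigQuery API request.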
@@ -40,20 +40,20 @@ For periodic or longer workflows, you may opt for a batch approach. This diagram
  1. [A Cron Trigger](/workers/configuration/cron-triggers/) invokes the Worker without any user interaction.
  2. Worker fetches [securely stored](/workers/configuration/secrets/) Google Cloud Platform service account information such as service key and generates a JSON Web Token to issue an authenticated API request to BigQuery.
- 3. Worker receives the data from BigQuery and [transforms it into a format](/workers-ai/tutorials/using-bigquery-with-workers-ai/#6-format-results-from-the-query) that will make it easier to iterate when interacting with Workers AI.
+ 3. Worker receives the data from BigQuery and [transforms it into a format](/workers-ai/guides/tutorials/using-bigquery-with-workers-ai/#6-format-results-from-the-query) that will make it easier to iterate when interacting with Workers AI.
  4. Using its [native integration](/workers-ai/configuration/bindings/) with Workers AI, the Worker forwards the data from BigQuery to generate some content related to it.
  5. Optionally, you can store the BigQuery data and the AI-generated data in a variety of different Cloudflare services.
-    * Into [D1](/d1/), a SQL database.
-    * If in step four you used Workers AI to generate embeddings, you can store them in [Vectorize](/vectorize/). To learn more about this type of solution, please consider reviewing the reference architecture diagram on [Retrieval Augmented Generation](/reference-architecture/diagrams/ai/ai-rag/).
-    * To [Workers KV](/kv/) if the output of your data will be stored and consumed in a key/value fashion.
-    * If you prefer to save the data fetched from BigQuery and Workers AI into objects (such as images, files, JSONs), you can use [R2](/r2/), our egress-free object storage to do so.
+    - Into [D1](/d1/), a SQL database.
+    - If in step four you used Workers AI to generate embeddings, you can store them in [Vectorize](/vectorize/). To learn more about this type of solution, please consider reviewing the reference architecture diagram on [Retrieval Augmented Generation](/reference-architecture/diagrams/ai/ai-rag/).
+    - To [Workers KV](/kv/) if the output of your data will be stored and consumed in a key/value fashion.
+    - If you prefer to save the data fetched from BigQuery and Workers AI into objects (such as images, files, JSONs), you can use [R2](/r2/), our egress-free object storage to do so.
  6. You can set up an integration so a system or a user gets notified whenever a new result is available or if an error occurs. It's also worth mentioning that Workers by themselves can already provide additional [observability](/workers/observability/).
-    * Sending an email with all the data retrieved and generated in the previous step is possible using [Email Routing](/email-routing/email-workers/send-email-workers/).
-    * Since Workers allows you to issue HTTP requests, you can notify a webhook or API endpoint once the process finishes or if there's an error.
+    - Sending an email with all the data retrieved and generated in the previous step is possible using [Email Routing](/email-routing/email-workers/send-email-workers/).
+    - Since Workers allows you to issue HTTP requests, you can notify a webhook or API endpoint once the process finishes or if there's an error.
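The batch flow this diagram describes can be sketched as an orchestration function with injected dependencies, so the sequence (query, generate, store, notify) is visible without real BigQuery or Workers AI bindings. Every name below is illustrative, not part of any API:

```typescript
// Sketch of the batch flow with injected stand-ins for external services.
type BatchDeps = {
  fetchRows: () => Promise<string[]>;                  // steps 2-3: query BigQuery, format rows
  generate: (row: string) => Promise<string>;          // step 4: Workers AI inference
  store: (row: string, out: string) => Promise<void>;  // step 5: D1 / KV / R2 / Vectorize
  notify: (message: string) => Promise<void>;          // step 6: email or webhook
};

async function runBatch(deps: BatchDeps): Promise<number> {
  const rows = await deps.fetchRows();
  for (const row of rows) {
    const out = await deps.generate(row);
    await deps.store(row, out);
  }
  await deps.notify(`processed ${rows.length} rows`);
  return rows.length;
}
```

In a real Worker, `runBatch` would be called from the `scheduled` handler that the Cron Trigger in step 1 invokes.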
## Related resources
- - [Tutorial: Using BigQuery with Workers AI](/workers-ai/tutorials/using-bigquery-with-workers-ai/)
+ - [Tutorial: Using BigQuery with Workers AI](/workers-ai/guides/tutorials/using-bigquery-with-workers-ai/)
  - [Workers AI: Get Started](/workers-ai/get-started/workers-wrangler/)
src/content/docs/vectorize/reference/what-is-a-vector-database.mdx
Lines changed: 12 additions & 12 deletions
@@ -3,7 +3,6 @@ title: Vector databases
  pcx_content_type: concept
  sidebar:
    order: 2
-
  ---

  Vector databases are a key part of building scalable AI-powered applications. Vector databases provide long-term memory on top of an existing machine learning model.
@@ -14,10 +13,10 @@ Without a vector database, you would need to train your model (or models) or re-
  A vector database determines what other data (represented as vectors) is near your input query. This allows you to build different use-cases on top of a vector database, including:

- * Semantic search, used to return results similar to the input of the query.
- * Classification, used to return the grouping (or groupings) closest to the input query.
- * Recommendation engines, used to return content similar to the input based on different criteria (for example previous product sales, or user history).
- * Anomaly detection, used to identify whether specific data points are similar to existing data, or different.
+ - Semantic search, used to return results similar to the input of the query.
+ - Classification, used to return the grouping (or groupings) closest to the input query.
+ - Recommendation engines, used to return content similar to the input based on different criteria (for example previous product sales, or user history).
+ - Anomaly detection, used to identify whether specific data points are similar to existing data, or different.

  Vector databases can also power [Retrieval Augmented Generation](https://arxiv.org/abs/2005.11401) (RAG) tasks, which allow you to bring additional context to LLMs (Large Language Models) by using the context from a vector search to augment the user prompt.
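The core operation behind all of the use-cases in the hunk above, finding which stored vectors are nearest to a query vector, can be sketched as a toy in-memory search (a real vector database uses approximate indexes rather than this linear scan):

```typescript
// Toy nearest-neighbor search over an in-memory set of vectors.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function cosineSimilarity(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Returns the id of the stored vector most similar to the query.
function nearest(
  query: number[],
  index: { id: string; vector: number[] }[],
): string {
  let best = index[0];
  for (const entry of index) {
    if (cosineSimilarity(query, entry.vector) > cosineSimilarity(query, best.vector)) {
      best = entry;
    }
  }
  return best.id;
}
```

Semantic search, classification, and recommendations all reduce to this lookup, differing only in what the returned neighbors are used for.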
@@ -44,16 +43,17 @@ Instead of passing the prompt directly to the LLM, in the RAG approach you:
  1. Generate vector embeddings from an existing dataset or corpus (for example, the dataset you want to use to add additional context to the LLM's response). An existing dataset or corpus could be product documentation, research data, technical specifications, or your product catalog and descriptions.
  2. Store the output embeddings in a Vectorize database index.

- When a user initiates a prompt, instead of passing it (without additional context) to the LLM, you *augment* it with additional context:
+ When a user initiates a prompt, instead of passing it (without additional context) to the LLM, you _augment_ it with additional context:

  1. The user prompt is passed into the same ML model used for your dataset, returning a vector embedding representation of the query.
  2. This embedding is used as the query (semantic search) against the vector database, which returns similar vectors.
  3. These vectors are used to look up the content they relate to (if not embedded directly alongside the vectors as metadata).
  4. This content is provided as context alongside the original user prompt, providing additional context to the LLM and allowing it to return an answer that is likely to be far more contextual than the standalone prompt.

- Refer to the [RAG using Workers AI tutorial](/workers-ai/tutorials/build-a-retrieval-augmented-generation-ai/) to learn how to combine Workers AI and Vectorize for generative AI use-cases.
+ Refer to the [RAG using Workers AI tutorial](/workers-ai/guides/tutorials/build-a-retrieval-augmented-generation-ai/) to learn how to combine Workers AI and Vectorize for generative AI use-cases.

- <sup>1</sup> You can learn more about the theory behind RAG by reading the [RAG paper](https://arxiv.org/abs/2005.11401).
+ <sup>1</sup> You can learn more about the theory behind RAG by reading the [RAG
+ paper](https://arxiv.org/abs/2005.11401).
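Step 4 of the RAG flow above, combining retrieved content with the original prompt, can be sketched as a simple template function. The prompt wording is illustrative; real applications tune this format to their model:

```typescript
// Sketch of prompt augmentation: retrieved chunks are numbered and
// prepended to the user's question before it reaches the LLM.
function augmentPrompt(userPrompt: string, contextChunks: string[]): string {
  const context = contextChunks.map((c, i) => `[${i + 1}] ${c}`).join("\n");
  return `Answer using the context below.\n\nContext:\n${context}\n\nQuestion: ${userPrompt}`;
}
```

The augmented string, rather than the bare user prompt, is what gets sent to the model, which is why RAG answers tend to be grounded in the retrieved documents.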
## Terminology
@@ -85,9 +85,9 @@ Refer to the [dimensions](/vectorize/best-practices/create-indexes/#dimensions)
  The distance metric is an index used for vector search. It defines how the index determines how close your query vector is to other vectors within the index.

- * Distance metrics determine how the vector search engine assesses similarity between vectors.
- * Cosine, Euclidean (L2), and Dot Product are the most commonly used distance metrics in vector search.
- * The machine learning model and type of embedding you use will determine which distance metric is best suited for your use-case.
- * Different metrics determine different scoring characteristics. For example, the `cosine` distance metric is well suited to text, sentence similarity and/or document search use-cases. `euclidean` can be better suited for image or speech recognition use-cases.
+ - Distance metrics determine how the vector search engine assesses similarity between vectors.
+ - Cosine, Euclidean (L2), and Dot Product are the most commonly used distance metrics in vector search.
+ - The machine learning model and type of embedding you use will determine which distance metric is best suited for your use-case.
+ - Different metrics determine different scoring characteristics. For example, the `cosine` distance metric is well suited to text, sentence similarity and/or document search use-cases. `euclidean` can be better suited for image or speech recognition use-cases.
Refer to the [distance metrics](/vectorize/best-practices/create-indexes/#distance-metrics) documentation to learn how to configure a distance metric when creating a Vectorize index.
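The difference between metrics in the hunk above is concrete: cosine scores only the angle between vectors and ignores magnitude, while Euclidean (L2) distance does not. A small sketch:

```typescript
// Toy comparison of two distance metrics on the same pair of vectors.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

// Cosine similarity: 1 means "pointing the same way", regardless of length.
function cosineSimilarity(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Euclidean (L2) distance: straight-line distance, sensitive to magnitude.
function euclideanDistance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0));
}

// [1, 0] and [10, 0] point the same way, so cosine treats them as identical,
const sameDirection = cosineSimilarity([1, 0], [10, 0]); // 1
// but Euclidean distance sees them as far apart.
const farApart = euclideanDistance([1, 0], [10, 0]); // 9
```

This is why the choice of metric must match how the embedding model was trained: an index scored with the wrong metric will rank neighbors differently than the model intended.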
src/content/docs/workers-ai/features/fine-tunes/index.mdx
Lines changed: 4 additions & 6 deletions
@@ -3,24 +3,22 @@ pcx_content_type: navigation
  title: Fine-tunes
  sidebar:
    order: 3
-
  ---

- import { Feature } from "~/components"
+ import { Feature } from "~/components";

  Learn how to use Workers AI to get fine-tuned inference.

- <Feature header="Fine-tuned inference with LoRAs" href="/workers-ai/fine-tunes/loras/" cta="Run inference with LoRAs">
+ <Feature header="Fine-tuned inference with LoRAs" href="/workers-ai/features/fine-tunes/loras/" cta="Run inference with LoRAs">

  Upload a LoRA adapter and run fine-tuned inference with one of our base models.

-
  </Feature>

- ***
+ ---

  ## What is fine-tuning?

  Fine-tuning is a general term for modifying an AI model by continuing to train it with additional data. The goal of fine-tuning is to increase the probability that a generation is similar to your dataset. Training a model from scratch is not practical for many use cases, given how expensive and time-consuming it can be. By fine-tuning an existing pre-trained model, you benefit from its capabilities while also accomplishing your desired task.

- [Low-Rank Adaptation](https://arxiv.org/abs/2106.09685) (LoRA) is a specific fine-tuning method that can be applied to various model architectures, not just LLMs. In traditional fine-tuning methods, the pre-trained model weights are often directly modified or fused with additional fine-tune weights. LoRA, on the other hand, allows the fine-tune weights and the pre-trained model to remain separate, and the pre-trained model to remain unchanged. The end result is that you can train models to be more accurate at specific tasks, such as generating code, having a specific personality, or generating images in a specific style.
+ [Low-Rank Adaptation](https://arxiv.org/abs/2106.09685) (LoRA) is a specific fine-tuning method that can be applied to various model architectures, not just LLMs. In traditional fine-tuning methods, the pre-trained model weights are often directly modified or fused with additional fine-tune weights. LoRA, on the other hand, allows the fine-tune weights and the pre-trained model to remain separate, and the pre-trained model to remain unchanged. The end result is that you can train models to be more accurate at specific tasks, such as generating code, having a specific personality, or generating images in a specific style.
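The "weights remain separate" idea in the LoRA paragraph can be illustrated with tiny matrices: the frozen pre-trained weights `W` are applied as usual, and a low-rank update `B·A` is added alongside, giving `y = W·x + B·(A·x)`. This is a toy numeric sketch of the concept, not an implementation:

```typescript
// Toy LoRA forward pass: the frozen weight matrix W is untouched,
// and a separate rank-1 adapter (B: 2x1, A: 1x2) adds its correction.
function matVec(m: number[][], v: number[]): number[] {
  return m.map((row) => row.reduce((sum, w, i) => sum + w * v[i], 0));
}

function addVec(a: number[], b: number[]): number[] {
  return a.map((x, i) => x + b[i]);
}

const W = [[1, 0], [0, 1]]; // frozen pre-trained weights (identity here)
const A = [[1, 1]];         // projects the input down to rank 1
const B = [[0.5], [0.5]];   // projects the rank-1 value back up

// y = W·x + B·(A·x): only A and B would be trained during fine-tuning.
function loraForward(x: number[]): number[] {
  return addVec(matVec(W, x), matVec(B, matVec(A, x)));
}
```

Because only the small `A` and `B` matrices are trained, the adapter can be swapped in and out, which is what lets Workers AI run many fine-tunes against one shared base model.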
0 commit comments