diff --git a/src/content/docs/1.1.1.1/faq.mdx b/src/content/docs/1.1.1.1/faq.mdx
index a79c373a4b4ddd6..822b6830c8f4a72 100644
--- a/src/content/docs/1.1.1.1/faq.mdx
+++ b/src/content/docs/1.1.1.1/faq.mdx
@@ -15,7 +15,7 @@ Below you will find answers to our most commonly asked questions. If you cannot
## What is 1.1.1.1?
-1.1.1.1 is Cloudflare's fast and secure DNS resolver. When you request to visit an application like `cloudflare.com`, your computer needs to know which server to connect you to so that it can load the application. Computers don’t know how to do this name to address translation, so they ask a specialized server to do it for them.
+1.1.1.1 is Cloudflare's fast and secure DNS resolver. When you request to visit an application like `cloudflare.com`, your computer needs to know which server to connect you to so that it can load the application. Computers don't know how to do this name-to-address translation, so they ask a specialized server to do it for them.
This specialized server is called a DNS recursive resolver. The resolver’s job is to find the address for a given name, like `2400:cb00:2048:1::c629:d7a2` for `cloudflare.com`, and return it to the computer that asked for it.
@@ -30,13 +30,13 @@ Visit [1.1.1.1/help](https://one.one.one.one/help) to make sure your system is c
## What do DNS resolvers do?
-DNS resolvers are like address books for the Internet. They translate the name of places to addresses so that your browser can figure out how to get there. DNS resolvers do this by working backwards from the top until they find the website your are looking for.
+DNS resolvers are like address books for the Internet. They translate the name of places to addresses so that your browser can figure out how to get there. DNS resolvers do this by working backwards from the top until they find the website you are looking for.
Every resolver knows how to find the invisible `.` at the end of domain names (for example, `cloudflare.com.`). There are [hundreds of root servers](http://www.root-servers.org/) all over the world that host the `.` file, and resolvers are [hard coded to know the IP addresses](http://www.internic.net/domain/named.root) of those servers. Cloudflare itself hosts [that file](http://www.internic.net/domain/root.zone) on all of its servers around the world through a [partnership with ISC](https://blog.cloudflare.com/f-root/).
The resolver asks one of the root servers where to find the next link in the chain — the top-level domain (abbreviated to TLD) or domain ending. An example of a TLD is `.com` or `.org`. Luckily, the root servers store the locations of all the TLD servers, so they can return which IP address the DNS resolver should go ask next.
-The resolver then asks the TLD’s servers where it can find the domain it is looking for. For example, a resolver might ask `.com` where to find `cloudflare.com`. TLDs host a file containing the location of every domain using the TLD.
+The resolver then asks the TLD's servers where it can find the domain it is looking for. For example, a resolver might ask `.com` where to find `cloudflare.com`. TLDs host a file containing the location of every domain using the TLD.
Once the resolver has the final IP address, it returns the answer to the computer that asked.
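The chain above can be sketched as a walk over a toy delegation table. This is a simplification with invented data; a real resolver sends DNS queries to the servers at each step:

```python
# Toy delegation chain (invented data): root -> TLD -> authoritative server.
CHAIN = {
    ".": "com.",                # the root servers point at the .com TLD servers
    "com.": "cloudflare.com.",  # the .com servers point at cloudflare.com's servers
}
ADDRESSES = {"cloudflare.com.": "2400:cb00:2048:1::c629:d7a2"}

def resolve(name: str) -> list[str]:
    """Return the sequence of zones a resolver walks, ending at the name itself."""
    steps, zone = [], "."
    while zone != name:
        steps.append(zone)
        zone = CHAIN[zone]  # each server returns the next link in the chain
    steps.append(name)
    return steps

path = resolve("cloudflare.com.")
print(path)                 # ['.', 'com.', 'cloudflare.com.']
print(ADDRESSES[path[-1]])  # 2400:cb00:2048:1::c629:d7a2
```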
@@ -70,7 +70,7 @@ Cloudflare [stopped supporting the ANY query](https://blog.cloudflare.com/deprec
## What is query name minimization?
-Cloudflare minimizes privacy leakage by only sending minimal query name to authoritative DNS servers. For example, if a client is looking for foo.bar.example.com, the only part of the query 1.1.1.1 discloses to .com is that we want to know who’s responsible for example.com and the zone internals stay hidden.
+Cloudflare minimizes privacy leakage by sending only the minimal query name to authoritative DNS servers. For example, if a client is looking for `foo.bar.example.com`, the only part of the query 1.1.1.1 discloses to `.com` is that we want to know who's responsible for `example.com`; the zone internals stay hidden.
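In code terms, minimization is a pure function of the name's label structure: each upstream server is asked about one more label than the zone it serves. A sketch (the helper name is invented for illustration):

```python
def minimized_queries(name: str) -> list[tuple[str, str]]:
    """For each server in the resolution chain, the minimal name disclosed.

    For foo.bar.example.com the root only ever sees 'com', and the .com
    servers only see 'example.com'; the full name is never sent upstream.
    """
    labels = name.rstrip(".").split(".")
    queries = []
    for i in range(len(labels) - 1, -1, -1):
        zone = ".".join(labels[i + 1:]) or "."  # the server being asked
        disclosed = ".".join(labels[i:])        # the most it gets to see
        queries.append((zone, disclosed))
    return queries

for zone, q in minimized_queries("foo.bar.example.com"):
    print(f"ask {zone!r} about {q!r}")
```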
## What are root hints?
diff --git a/src/content/docs/agents/api-reference/rag.mdx b/src/content/docs/agents/api-reference/rag.mdx
index b29014c20452f39..52900437c267d3f 100644
--- a/src/content/docs/agents/api-reference/rag.mdx
+++ b/src/content/docs/agents/api-reference/rag.mdx
@@ -22,7 +22,7 @@ If you're brand-new to vector databases and Vectorize, visit the [Vectorize tuto
You can query a vector index (or indexes) from any method on your Agent: any Vectorize index you attach is available on `this.env` within your Agent. If you've [associated metadata](/vectorize/best-practices/insert-vectors/#metadata) with your vectors that maps back to data stored in your Agent, you can then look up the data directly within your Agent using `this.sql`.
-Here's an example of how to give an Agent retrieval capabilties:
+Here's an example of how to give an Agent retrieval capabilities:
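The pattern above (vector match first, then join the match ID back to stored data) can be sketched outside of Workers entirely; everything here, from the index contents to the table, is invented stand-in data:

```python
import math

# Stand-ins for a Vectorize index and the Agent's SQL-backed storage.
VECTOR_INDEX = {
    "doc-1": [1.0, 0.0],
    "doc-2": [0.0, 1.0],
}
SQL_TABLE = {
    "doc-1": "Refund policy: 30 days.",
    "doc-2": "Shipping policy: 5 business days.",
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec, top_k=1):
    """Rank by vector similarity, then look each match ID up in the table."""
    ranked = sorted(
        VECTOR_INDEX,
        key=lambda doc_id: cosine(query_vec, VECTOR_INDEX[doc_id]),
        reverse=True,
    )
    return [SQL_TABLE[doc_id] for doc_id in ranked[:top_k]]

print(retrieve([0.9, 0.1]))  # ['Refund policy: 30 days.']
```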
diff --git a/src/content/docs/agents/api-reference/run-workflows.mdx b/src/content/docs/agents/api-reference/run-workflows.mdx
index 442ba327df4ca26..8653145a76ed314 100644
--- a/src/content/docs/agents/api-reference/run-workflows.mdx
+++ b/src/content/docs/agents/api-reference/run-workflows.mdx
@@ -97,7 +97,7 @@ You can also call a Workflow that is defined in a different Workers script from
// Required:
"name": "EMAIL_WORKFLOW",
"class_name": "MyWorkflow",
- // Optional: set tthe script_name field if your Workflow is defined in a
+ // Optional: set the script_name field if your Workflow is defined in a
// different project from your Agent
"script_name": "email-workflows"
}
diff --git a/src/content/docs/agents/api-reference/using-ai-models.mdx b/src/content/docs/agents/api-reference/using-ai-models.mdx
index ee5950231f75afc..c9a60db00a46857 100644
--- a/src/content/docs/agents/api-reference/using-ai-models.mdx
+++ b/src/content/docs/agents/api-reference/using-ai-models.mdx
@@ -32,7 +32,7 @@ Importantly, Agents can call AI models on their own — autonomously — and can
-Modern [reasoning models](https://platform.openai.com/docs/guides/reasoning) or "thinking" model can take some time to both generate a response _and_ stream the response back to the client.
+Modern [reasoning models](https://platform.openai.com/docs/guides/reasoning) or "thinking" models can take some time to both generate a response _and_ stream the response back to the client.
-Instead of buffering the entire response, or risking the client disconecting, you can stream the response back to the client by using the [WebSocket API](/agents/api-reference/websockets/).
+Instead of buffering the entire response, or risking the client disconnecting, you can stream the response back to the client by using the [WebSocket API](/agents/api-reference/websockets/).
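The buffering-versus-streaming tradeoff can be sketched with a plain generator; the token source and delay are invented, and in a real Agent each token would be forwarded over the WebSocket connection:

```python
import time

def model_tokens():
    """Stand-in for a slow reasoning model emitting tokens over time."""
    for token in ["Thinking...", "The", "answer", "is", "42."]:
        time.sleep(0.01)  # simulated generation latency
        yield token

# Buffered: the client sees nothing until the whole response is assembled,
# and risks disconnecting while it waits.
buffered = " ".join(model_tokens())

# Streamed: each token is forwarded the moment it exists, so the client
# starts seeing output immediately.
streamed = []
for token in model_tokens():
    streamed.append(token)  # in a Worker this would be a WebSocket send

print(streamed[0])  # the first token arrives without waiting for the rest
```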
@@ -121,7 +121,7 @@ export class MyAgent extends Agent {
-Your wrangler configuration will need an `ai` binding added:
+Your Wrangler configuration will need an `ai` binding added:
@@ -174,7 +174,7 @@ export class MyAgent extends Agent {
-Your wrangler configuration will need an `ai` binding added. This is shared across both Workers AI and AI Gateway.
+Your Wrangler configuration will need an `ai` binding added. This is shared across both Workers AI and AI Gateway.
```toml
diff --git a/src/content/docs/agents/model-context-protocol/authorization.mdx b/src/content/docs/agents/model-context-protocol/authorization.mdx
index eced8674c664697..1392cd370296164 100644
--- a/src/content/docs/agents/model-context-protocol/authorization.mdx
+++ b/src/content/docs/agents/model-context-protocol/authorization.mdx
@@ -124,7 +124,7 @@ Read the docs for the [Workers oAuth Provider Library](https://github.com/cloudf
### (3) Bring your own OAuth Provider
-If your application already implements an Oauth Provider itself, or you use [Stytch](https://stytch.com/), [Auth0](https://auth0.com/), [WorkOS](https://workos.com/), or authorization-as-a-service provider, you can use this in the same way that you would use a third-party OAuth provider, described above in (2).
+If your application already implements an OAuth Provider itself, or you use [Stytch](https://stytch.com/), [Auth0](https://auth0.com/), [WorkOS](https://workos.com/), or another authorization-as-a-service provider, you can use this in the same way that you would use a third-party OAuth provider, described above in (2).
You can use the auth provider to:
- Allow users to authenticate to your MCP server through email, social logins, SSO (single sign-on), and MFA (multi-factor authentication).
diff --git a/src/content/docs/ai-gateway/guardrails/set-up-guardrail.mdx b/src/content/docs/ai-gateway/guardrails/set-up-guardrail.mdx
index ca721bace853af6..bf202fc0c4dacc0 100644
--- a/src/content/docs/ai-gateway/guardrails/set-up-guardrail.mdx
+++ b/src/content/docs/ai-gateway/guardrails/set-up-guardrail.mdx
@@ -30,7 +30,7 @@ After enabling Guardrails, you can monitor results through **AI Gateway Logs** i
When a request is blocked by guardrails, you will receive a structured error response. These indicate whether the issue occurred with the prompt or the model response. Use error codes to differentiate between prompt versus response violations.
-- **Prompt blocked**
+- **Prompt blocked**
- `"code": 2016`
- `"message": "Prompt blocked due to security configurations"`
diff --git a/src/content/docs/ai-gateway/guardrails/usage-considerations.mdx b/src/content/docs/ai-gateway/guardrails/usage-considerations.mdx
index f1510e8172491f4..60a8ef82cba26d7 100644
--- a/src/content/docs/ai-gateway/guardrails/usage-considerations.mdx
+++ b/src/content/docs/ai-gateway/guardrails/usage-considerations.mdx
@@ -14,7 +14,7 @@ Since Guardrails runs on Workers AI, enabling it incurs usage on Workers AI. You
- **Model availability**: If at least one hazard category is set to `block`, but AI Gateway is unable to receive a response from Workers AI, the request will be blocked. Conversely, if a hazard category is set to `flag` and AI Gateway cannot obtain a response from Workers AI, the request will proceed without evaluation. This approach prioritizes availability, allowing requests to continue even when content evaluation is not possible.
-- **Latency impact**: Enabling Guardrails adds some latency. Enabling Guardrails introduces additional latency to requests. Typically, evaluations using Llama Guard 3 8B on Workers AI add approximately 500 milliseconds per request. However, larger requests may experience increased latency, though this increase is not linear. Consider this when balancing safety and performance.
+- **Latency impact**: Enabling Guardrails introduces additional latency to requests. Typically, evaluations using Llama Guard 3 8B on Workers AI add approximately 500 milliseconds per request. However, larger requests may experience increased latency, though this increase is not linear. Consider this when balancing safety and performance.
- **Handling long content**: When evaluating long prompts or responses, Guardrails automatically segments the content into smaller chunks, processing each through separate Guardrail requests. This approach ensures comprehensive moderation but may result in increased latency for longer inputs.
-- **Supported languages**: Llama Guard 3.3 8B supports content safety classification in the following languages: English, French, German, Hindi, Italian, Portuguese, Spanish, and Thai.
+- **Supported languages**: Llama Guard 3.3 8B supports content safety classification in the following languages: English, French, German, Hindi, Italian, Portuguese, Spanish, and Thai.
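The long-content behavior above amounts to fixed-size chunking plus per-chunk evaluation. A rough sketch, where the chunk size and the safety check are both stand-ins rather than the service's actual parameters:

```python
def moderate(text: str, max_chunk: int = 2000) -> bool:
    """Return True only if every chunk passes a (stubbed) safety check.

    Guardrails-style behavior: long input is split into chunks, each chunk
    is evaluated separately, and one flagged chunk blocks the whole text.
    """
    def is_safe(chunk: str) -> bool:
        # Stand-in for a real model call (e.g. Llama Guard on Workers AI).
        return "FORBIDDEN" not in chunk

    chunks = [text[i:i + max_chunk] for i in range(0, len(text), max_chunk)]
    return all(is_safe(c) for c in chunks)

print(moderate("hello " * 1000))             # True: every chunk passes
print(moderate("ok " * 999 + "FORBIDDEN"))   # False: one chunk is flagged
```

Note that naive fixed-width splitting can cut a phrase across a chunk boundary, which is one reason per-chunk moderation of long inputs costs extra latency.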
:::note
diff --git a/src/content/docs/ai-gateway/index.mdx b/src/content/docs/ai-gateway/index.mdx
index f75d406618f0cd3..cc458a347f62d6d 100644
--- a/src/content/docs/ai-gateway/index.mdx
+++ b/src/content/docs/ai-gateway/index.mdx
@@ -79,7 +79,7 @@ Run machine learning models, powered by serverless GPUs, on Cloudflare’s globa
-Build full-stack AI applications with Vectorize, Cloudflare’s vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, anomaly detection or can be used to provide context and memory to an LLM.
+Build full-stack AI applications with Vectorize, Cloudflare's vector database. Adding Vectorize enables you to perform tasks such as semantic search, recommendations, and anomaly detection, or to provide context and memory to an LLM.
diff --git a/src/content/docs/ai-gateway/observability/logging/index.mdx b/src/content/docs/ai-gateway/observability/logging/index.mdx
index 007c534f96dc7fd..6a5ff291dfd599b 100644
--- a/src/content/docs/ai-gateway/observability/logging/index.mdx
+++ b/src/content/docs/ai-gateway/observability/logging/index.mdx
@@ -51,7 +51,7 @@ curl https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_id}/openai/chat/
To manage your log storage effectively, you can:
- Set Storage Limits: Configure a limit on the number of logs stored per gateway in your gateway settings to ensure you only pay for what you need.
-- Enable Automatic Log Deletion: Activate the Automatic Log Deletion feature in your gateway settings to automatically delete the oldest logs once the log limit you’ve set or the default storage limit of 10 million logs is reached. This ensures new logs are always saved without manual intervention.
+- Enable Automatic Log Deletion: Activate the Automatic Log Deletion feature in your gateway settings to automatically delete the oldest logs once the log limit you've set or the default storage limit of 10 million logs is reached. This ensures new logs are always saved without manual intervention.
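Automatic Log Deletion behaves like a bounded, drop-oldest queue. A minimal sketch with a toy limit of 3 in place of the real default of 10 million:

```python
from collections import deque

class GatewayLogs:
    """Drop-oldest log store, mirroring Automatic Log Deletion behavior."""

    def __init__(self, limit: int):
        # A bounded deque evicts the oldest entry once the limit is reached.
        self.entries = deque(maxlen=limit)

    def append(self, entry: str) -> None:
        self.entries.append(entry)  # new logs are always saved

logs = GatewayLogs(limit=3)
for i in range(5):
    logs.append(f"request-{i}")

print(list(logs.entries))  # ['request-2', 'request-3', 'request-4']
```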
## How to delete logs