diff --git a/public/__redirects b/public/__redirects
index c8acd29682dfc4..812548da6500dc 100644
--- a/public/__redirects
+++ b/public/__redirects
@@ -1591,6 +1591,8 @@
/workers/testing/vitest-integration/get-started/migrate-from-miniflare-2/ /workers/testing/vitest-integration/migration-guides/migrate-from-miniflare-2/ 301
/workers/testing/vitest-integration/get-started/migrate-from-unstable-dev/ /workers/testing/vitest-integration/migration-guides/migrate-from-unstable-dev/ 301
/workers/testing/vitest-integration/get-started/write-your-first-test/ /workers/testing/vitest-integration/write-your-first-test/ 301
+/workers/databases/native-integrations/fauna/ /workers/databases/native-integrations/ 301
+/workers/tutorials/store-data-with-fauna/ https://fauna.com/blog/the-future-of-fauna 301
# workers ai
/workers-ai/models/llm/ /workers-ai/models/#text-generation 301
diff --git a/src/content/docs/workers/databases/connecting-to-databases.mdx b/src/content/docs/workers/databases/connecting-to-databases.mdx
index 265f7fbe85feba..674fd21b3e944f 100644
--- a/src/content/docs/workers/databases/connecting-to-databases.mdx
+++ b/src/content/docs/workers/databases/connecting-to-databases.mdx
@@ -11,7 +11,7 @@ Cloudflare Workers can connect to and query your data in both SQL and NoSQL data
- Cloudflare's own [D1](/d1/), a serverless SQL-based database.
- Traditional hosted relational databases, including Postgres and MySQL, using [Hyperdrive](/hyperdrive/) (recommended) to significantly speed up access.
-- Serverless databases, including Supabase, MongoDB Atlas, PlanetScale, FaunaDB, and Prisma.
+- Serverless databases, including Supabase, MongoDB Atlas, PlanetScale, and Prisma.
### D1 SQL database
@@ -49,16 +49,15 @@ Serverless databases provide HTTP-based proxies and drivers, also known as serve
By providing a way to query your database with HTTP, these serverless databases and drivers eliminate several roundtrips needed to establish a secure connection.
-| Database | Integration | Library or Driver | Connection Method |
-| --------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------- | ---------------------------------------------------------------------------------- | -------------------------- |
-| [Fauna](https://docs.fauna.com/fauna/current/build/integration/cloudflare/) | [Yes](/workers/databases/native-integrations/fauna/) | [fauna](https://github.com/fauna/fauna-js) | API through client library |
-| [PlanetScale](https://planetscale.com/blog/introducing-the-planetscale-serverless-driver-for-javascript) | [Yes](/workers/databases/native-integrations/planetscale/) | [@planetscale/database](https://github.com/planetscale/database-js) | API via client library |
-| [Supabase](https://github.com/supabase/supabase/tree/master/examples/with-cloudflare-workers) | [Yes](/workers/databases/native-integrations/supabase/) | [@supabase/supabase-js](https://github.com/supabase/supabase-js) | API via client library |
-| [Prisma](https://www.prisma.io/docs/guides/deployment/deployment-guides/deploying-to-cloudflare-workers) | No | [prisma](https://github.com/prisma/prisma) | API via client library |
-| [Neon](https://blog.cloudflare.com/neon-postgres-database-from-workers/) | [Yes](/workers/databases/native-integrations/neon/) | [@neondatabase/serverless](https://neon.tech/blog/serverless-driver-for-postgres/) | API via client library |
-| [Hasura](https://hasura.io/blog/building-applications-with-cloudflare-workers-and-hasura-graphql-engine/) | No | API | GraphQL API via fetch() |
-| [Upstash Redis](https://blog.cloudflare.com/cloudflare-workers-database-integration-with-upstash/) | [Yes](/workers/databases/native-integrations/upstash/) | [@upstash/redis](https://github.com/upstash/upstash-redis) | API via client library |
-| [TiDB Cloud](https://docs.pingcap.com/tidbcloud/integrate-tidbcloud-with-cloudflare) | No | [@tidbcloud/serverless](https://github.com/tidbcloud/serverless-js) | API via client library |
+| Database | Integration | Library or Driver | Connection Method |
+| --------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------- | ---------------------------------------------------------------------------------- | ----------------------- |
+| [PlanetScale](https://planetscale.com/blog/introducing-the-planetscale-serverless-driver-for-javascript) | [Yes](/workers/databases/native-integrations/planetscale/) | [@planetscale/database](https://github.com/planetscale/database-js) | API via client library |
+| [Supabase](https://github.com/supabase/supabase/tree/master/examples/with-cloudflare-workers) | [Yes](/workers/databases/native-integrations/supabase/) | [@supabase/supabase-js](https://github.com/supabase/supabase-js) | API via client library |
+| [Prisma](https://www.prisma.io/docs/guides/deployment/deployment-guides/deploying-to-cloudflare-workers) | No | [prisma](https://github.com/prisma/prisma) | API via client library |
+| [Neon](https://blog.cloudflare.com/neon-postgres-database-from-workers/) | [Yes](/workers/databases/native-integrations/neon/) | [@neondatabase/serverless](https://neon.tech/blog/serverless-driver-for-postgres/) | API via client library |
+| [Hasura](https://hasura.io/blog/building-applications-with-cloudflare-workers-and-hasura-graphql-engine/) | No | API | GraphQL API via fetch() |
+| [Upstash Redis](https://blog.cloudflare.com/cloudflare-workers-database-integration-with-upstash/) | [Yes](/workers/databases/native-integrations/upstash/) | [@upstash/redis](https://github.com/upstash/upstash-redis) | API via client library |
+| [TiDB Cloud](https://docs.pingcap.com/tidbcloud/integrate-tidbcloud-with-cloudflare) | No | [@tidbcloud/serverless](https://github.com/tidbcloud/serverless-js) | API via client library |
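+
+For example, a Worker can query one of these databases over HTTP using only the client library and a connection string. The following is a minimal sketch (not tied to a specific integration) that assumes a Neon Postgres database and a `DATABASE_URL` secret configured on the Worker:
+
+```ts
+import { neon } from "@neondatabase/serverless";
+
+export default {
+	async fetch(request, env) {
+		// The driver sends the query over HTTP, avoiding the TCP and TLS
+		// round trips required to establish a traditional database connection.
+		const sql = neon(env.DATABASE_URL);
+		const rows = await sql`SELECT 1 AS ok`;
+		return Response.json(rows);
+	},
+};
+```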
:::note[Easier setup with database integrations]
diff --git a/src/content/docs/workers/databases/native-integrations/fauna.mdx b/src/content/docs/workers/databases/native-integrations/fauna.mdx
deleted file mode 100644
index ca470741350891..00000000000000
--- a/src/content/docs/workers/databases/native-integrations/fauna.mdx
+++ /dev/null
@@ -1,71 +0,0 @@
----
-pcx_content_type: configuration
-title: Fauna
----
-
-import { Render } from "~/components";
-
-[Fauna](https://fauna.com/) is a true serverless database that combines document flexibility with native relational capabilities, offering auto-scaling, multi-active replication, and HTTPS connectivity.
-
-
-
-## Set up an integration with Fauna
-
-To set up an integration with Fauna:
-
-1. You need to have an existing Fauna database to connect to. [Create a Fauna database with demo data](https://docs.fauna.com/fauna/current/get-started/quick-start/?lang=javascript#create-a-database).
-
-2. Once your database is created with demo data, you can query it directly using the Shell tab in the Fauna dashboard:
-
-   ```js
- Customer.all()
- ```
-
-3. Add the Fauna database integration to your Worker:
-
- 1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com) and select your account.
- 2. In **Account Home**, select **Workers & Pages**.
- 3. In **Overview**, select your Worker.
- 4. Select **Integrations** > **Fauna**.
- 5. Follow the setup flow, selecting the database created in step 1.
-
-4. In your Worker, install the `fauna` driver to connect to your database and start manipulating data:
-
- ```sh
- npm install fauna
- ```
-
-5. The following example shows how to make a query to your Fauna database in a Worker. The credentials needed to connect to Fauna have been automatically added as secrets to your Worker through the integration.
-
- ```javascript
- import { Client, fql } from "fauna";
-
- export default {
- async fetch(request, env) {
- const fauna = new Client({
- secret: env.FAUNA_SECRET,
- });
- const query = fql`Customer.all()`;
- const result = await fauna.query(query);
- return Response.json(result.data);
- },
- };
- ```
-
-6. You can manage the Cloudflare Fauna integration from the [Fauna Dashboard](https://dashboard.fauna.com/):
-
- - To view Fauna keys for an integrated Cloudflare Worker, select your database and click the **Keys** tab.
-
- Keys for a Cloudflare Worker integration are prepended with `_cloudflare_key_`.
-
- You can delete the key to disable the integration.
-
- - When you connect a Cloudflare Worker to your database, Fauna creates an OAuth client app in your Fauna account.
-
- To view your account's OAuth apps, go to **Account Settings > OAuth Apps** in the Fauna Dashboard.
-
- 
-
- You can delete the app to disable the integration.
-
-To learn more about Fauna, refer to [Fauna's official documentation](https://docs.fauna.com/).
diff --git a/src/content/docs/workers/get-started/quickstarts.mdx b/src/content/docs/workers/get-started/quickstarts.mdx
index 89edac9a9cecbb..63b39fe5515c2b 100644
--- a/src/content/docs/workers/get-started/quickstarts.mdx
+++ b/src/content/docs/workers/get-started/quickstarts.mdx
@@ -56,12 +56,6 @@ npm create cloudflare@latest -- --template
description="Use Vite to render pages on Cloudflare's global network with great DX. Includes i18n, markdown support and more."
/>
-
-
---
## Frameworks
diff --git a/src/content/docs/workers/tutorials/store-data-with-fauna/index.mdx b/src/content/docs/workers/tutorials/store-data-with-fauna/index.mdx
deleted file mode 100644
index 234ad2485d343e..00000000000000
--- a/src/content/docs/workers/tutorials/store-data-with-fauna/index.mdx
+++ /dev/null
@@ -1,530 +0,0 @@
----
-updated: 2024-09-05
-difficulty: Beginner
-pcx_content_type: tutorial
-title: Create a serverless, globally distributed REST API with Fauna
-tags:
- - Hono
-languages:
- - TypeScript
----
-
-import { Render, TabItem, Tabs, PackageManagers, WranglerConfig } from "~/components";
-
-In this tutorial, you learn how to store and retrieve data in your Cloudflare Workers applications by building a REST API that manages an inventory catalog using [Fauna](https://fauna.com/) as its data layer.
-
-## Learning goals
-
-- How to store and retrieve data from Fauna in Workers.
-- How to use Wrangler to store secrets securely.
-- How to use [Hono](https://hono.dev) as a web framework for your Workers.
-
-Building with Fauna, Workers, and Hono enables you to create a globally distributed, strongly consistent, fully serverless REST API in a single repository.
-
-Fauna is a document-based database with a flexible schema. This allows you to define the structure of your data – whatever it may be – and store documents that adhere to that structure. In this tutorial, you will build a product inventory, where each `product` document must contain the following properties:
-
-- **title** - A human-friendly string that represents the title or name of a product.
-- **serialNumber** - A machine-friendly string that uniquely identifies the product.
-- **weightLbs** - A floating point number that represents the weight in pounds of the product.
-- **quantity** - A non-negative integer that represents how many items of a particular product there are in the inventory.
-
-Documents are stored in a [collection](https://docs.fauna.com/fauna/current/reference/schema_entities/collection/). Collections in document databases are groups of related documents.
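-
-For illustration, a document that fits this structure might look like the following sketch (the field values are the same sample data used later in this tutorial):
-
-```ts
-// A hypothetical product object matching the structure described above.
-const exampleProduct = {
-	title: "Gaming Console", // human-friendly product name
-	serialNumber: "A48432348", // machine-friendly unique identifier
-	weightLbs: 5.0, // weight in pounds
-	quantity: 0, // non-negative inventory count
-};
-```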
-
-For this tutorial, all API endpoints are public. However, Fauna also offers multiple avenues for securing endpoints and collections. Refer to [Choosing an authentication strategy with Fauna](https://fauna.com/blog/choosing-an-authentication-strategy-with-fauna) for more information on authenticating users to your applications with Fauna.
-
-
-
-## Set up Fauna
-
-### Create your database
-
-To create a database, log in to the [Fauna Dashboard](https://dashboard.fauna.com/) and click **Create Database**. When prompted, select your preferred [Fauna region group](https://docs.fauna.com/fauna/current/manage/region-groups/) and other database settings.
-
-:::note[Fauna Account]
-
-If you do not have a Fauna account, [sign up](https://dashboard.fauna.com/register) and deploy this template using the free tier.
-
-:::
-
-### Create a collection
-
-Create a `Products` collection for the database with the following query. To run the query in the Fauna Dashboard, select your database and click the **Shell** tab:
-
-```js title="Create a new collection"
-Collection.create({ name: "Products" });
-```
-
-The query outputs a result similar to the following:
-
-```js title="Output"
-{
- name: "Products",
- coll: Collection,
- ts: Time("2099-08-28T15:03:53.773Z"),
- history_days: 0,
- indexes: {},
- constraints: []
-}
-```
-
-### Create a secret key
-
-In production, the Worker will use the Cloudflare Fauna integration to automatically connect to Fauna. The integration creates any credentials needed for authentication with Fauna.
-
-For local development, you must manually create a [Fauna authentication key](https://docs.fauna.com/fauna/current/learn/security/keys/) and pass the key's secret to your Worker as a [development secret](#add-your-fauna-database-key-for-local-development).
-
-To create a Fauna authentication key:
-
-1. In the upper left pane of Fauna Dashboard’s Explorer page, select your database, and click the **Keys** tab.
-
-2. Click **Create Key**.
-
-3. Choose a **Role** of **Server**.
-
-4. Click **Save**.
-
-5. Copy the **Key Secret**. The secret is scoped to the database.
-
-:::caution[Protect your keys]
-
-Server keys can read and write all documents in all collections and can call all [user-defined functions](https://docs.fauna.com/fauna/current/cookbook/data_model/user_defined_functions) (UDFs). Protect server keys and do not commit them to source control repositories.
-
-:::
-
-## Manage your inventory with Workers
-
-### Create a new Worker project
-
-Create a new project by using [C3](https://github.com/cloudflare/workers-sdk/tree/main/packages/create-cloudflare).
-
-
-
-To continue with this guide:
-
-- For _What would you like to start with?_, select `Framework Starter`.
-- For _Which development framework do you want to use?_, select `Hono`.
-- For _Do you want to deploy your application?_, select `No`.
-
-Then, move into your newly created directory:
-
-```sh
-cd fauna-workers
-```
-
-Update the Wrangler file to set the name for the Worker.
-
-
-
-```toml title="wrangler.toml"
-name = "fauna-workers"
-```
-
-
-
-### Add your Fauna database key for local development
-
-For local development, add a `.dev.vars` file on the project root and add your Fauna key's secret as a [development secret](/workers/configuration/secrets/#local-development-with-secrets):
-
-```plain title=".dev.vars"
-DATABASE_KEY=
-```
-
-### Add the Fauna integration
-
-Deploy your Worker to Cloudflare to ensure that everything is set up correctly:
-
-```sh
-npm run deploy
-```
-
-1. Log in to your [Cloudflare dashboard](https://dash.cloudflare.com/).
-
-2. Select the **Integrations** tab and click on the **Fauna** integration.
-
- 
-
-3. Log in to your Fauna account.
-
-4. Select the Fauna database you created earlier.
-
-5. Select the `server` role as your database role.
-
-6. Enter `DATABASE_KEY` as the **Secret Name**.
-
-7. Select **Finish**.
-
-8. Navigate to the **Settings** tab and select **Variables**. Notice that a new variable, `DATABASE_KEY`,
-   is added to your Worker.
-
-The integration creates a new Fauna authentication key and stores the key's secret in the Worker's `DATABASE_KEY` secret. The deployed Worker uses this key.
-
-:::note
-
-You can manage the generated Fauna key in the Fauna Dashboard. See the [Cloudflare Fauna integration docs](/workers/databases/native-integrations/fauna).
-
-:::
-
-### Install dependencies
-
-Install [the Fauna JavaScript driver](https://github.com/fauna/fauna-js) in your newly created Worker project.
-
-
-
-```sh title="Install the Fauna driver"
-npm install fauna
-```
-
-
-
-```sh title="Install the Fauna driver"
-yarn add fauna
-```
-
-
-
-### Base inventory logic
-
-Replace the contents of your `src/index.ts` file with the skeleton of your API:
-
-```ts title="src/index.ts"
-import { Hono } from "hono";
-import { Client, fql, ServiceError } from "fauna";
-
-type Bindings = {
- DATABASE_KEY: string;
-};
-
-type Variables = {
- faunaClient: Client;
-};
-
-type Product = {
- id: string;
-	serialNumber: string;
- title: string;
- weightLbs: number;
- quantity: number;
-};
-
-const app = new Hono<{ Bindings: Bindings; Variables: Variables }>();
-
-app.use("*", async (c, next) => {
- const faunaClient = new Client({
- secret: c.env.DATABASE_KEY,
- });
- c.set("faunaClient", faunaClient);
- await next();
-});
-
-app.get("/", (c) => {
- return c.text("Hello World");
-});
-
-export default app;
-```
-
-The following custom middleware initializes the Fauna client and stores the instance with `c.set()` so that later handlers can access it:
-
-```js title="Custom middleware for the Fauna Client"
-app.use("*", async (c, next) => {
- const faunaClient = new Client({
- secret: c.env.DATABASE_KEY,
- });
- c.set("faunaClient", faunaClient);
- await next();
-});
-```
-
-You can access the `DATABASE_KEY` environment variable from `c.env.DATABASE_KEY`. Workers run on a [custom JavaScript runtime](/workers/runtime-apis/) instead of Node.js, so you cannot use `process.env` to access your environment variables.
-
-### Create product documents
-
-Add your first Hono handler to the `src/index.ts` file. This route accepts `POST` requests to the `/products` endpoint:
-
-```ts title="Create product documents"
-app.post("/products", async (c) => {
-	const { serialNumber, title, weightLbs } =
-		await c.req.json<Omit<Product, "id" | "quantity">>();
- const query = fql`Products.create({
- serialNumber: ${serialNumber},
- title: ${title},
- weightLbs: ${weightLbs},
- quantity: 0
- })`;
- const result = await c.var.faunaClient.query(query);
- return c.json(result.data);
-});
-```
-
-:::caution[Handler order]
-
-In Hono, you should place your handler below the custom middleware.
-This is because middleware and handlers are executed in sequence from top to bottom.
-If you place the handler first, you cannot retrieve the instance of the Fauna client using `c.var.faunaClient`.
-
-:::
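-
-As a short illustrative sketch (not part of the tutorial code, and reusing the `app` instance and types defined earlier), a route registered above the middleware would not have access to the client:
-
-```ts
-// Registered before app.use("*", ...), so the middleware has not run for this
-// route and c.var.faunaClient is undefined at runtime.
-app.get("/too-early", (c) => {
-	const client = c.var.faunaClient;
-	return c.text(client ? "client available" : "client missing"); // responds with "client missing"
-});
-```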
-
-This route uses the `fql` tagged template to build an FQL query that creates a new document in the **Products** collection:
-
-```js title="Create query in FQL inside JavaScript"
-fql`Products.create({
- serialNumber: ${serialNumber},
- title: ${title},
- weightLbs: ${weightLbs},
- quantity: 0
-})`;
-```
-
-To review what a document looks like, run the following query. In the Fauna dashboard, go to **Explorer** > your region group > your database (for example, `cloudflare_rest_api`) > the **Shell** window:
-
-```js title="Create query in pure FQL"
-Products.create({
- serialNumber: "A48432348",
- title: "Gaming Console",
- weightLbs: 5,
- quantity: 0,
-});
-```
-
-Fauna returns the created document:
-
-```js title="Newly created document"
-{
- id: "",
- coll: Products,
- ts: "",
- serialNumber: "A48432348",
- title: "Gaming Console",
- weightLbs: 5,
- quantity: 0
-}
-```
-
-Looking back at the route you created, when the query succeeds, the newly created document's data is returned in the response body:
-
-```js title="Return the new document data"
-return c.json({
- productId: result.data,
-});
-```
-
-### Error handling
-
-If Fauna returns an error, the client raises an exception. You can catch this exception in `app.onError()`, then retrieve details from the `ServiceError` instance and include them in the response.
-
-```ts title="Handle errors"
-app.onError((e, c) => {
- if (e instanceof ServiceError) {
- return c.json(
- {
- status: e.httpStatus,
- code: e.code,
- message: e.message,
- },
- e.httpStatus,
- );
- }
- console.trace(e);
- return c.text("Internal Server Error", 500);
-});
-```
-
-### Retrieve product documents
-
-Next, create a route that reads a single document from the **Products** collection.
-
-Add the following handler to your `src/index.ts` file. This route accepts `GET` requests at the `/products/:productId` endpoint:
-
-```ts title="Retrieve product documents"
-app.get("/products/:productId", async (c) => {
- const productId = c.req.param("productId");
- const query = fql`Products.byId(${productId})`;
- const result = await c.var.faunaClient.query(query);
- return c.json(result.data);
-});
-```
-
-The FQL query uses the [`byId()`](https://docs.fauna.com/fauna/current/reference/schema_entities/collection/instance-byid) method to retrieve a full document from the **Products** collection:
-
-```js title="Retrieve a document by ID in FQL inside JavaScript"
-fql`Products.byId(${productId})`;
-```
-
-If the document exists, return it in the response body:
-
-```ts title="Return the document in the response body"
-return c.json(result.data);
-```
-
-If not, an error is returned.
-
-### Delete product documents
-
-The logic to delete product documents is similar to the logic for retrieving products. Add the following route to your `src/index.ts` file:
-
-```ts title="Delete product documents"
-app.delete("/products/:productId", async (c) => {
- const productId = c.req.param("productId");
- const query = fql`Products.byId(${productId})!.delete()`;
- const result = await c.var.faunaClient.query(query);
- return c.json(result.data);
-});
-```
-
-The only difference from the previous route is that you use the [`delete()`](https://docs.fauna.com/fauna/current/reference/auth/key/delete) method, combined with the `byId()` method, to delete a document.
-
-When the delete operation is successful, Fauna returns the deleted document and the route forwards the deleted document in the response's body. If not, an error is returned.
-
-## Test and deploy your Worker
-
-Before deploying your Worker, test it locally by using Wrangler's [`dev`](/workers/wrangler/commands/#dev) command:
-
-
-
-```sh title="Develop your Worker"
-npm run dev
-```
-
-
-
-```sh title="Develop your Worker"
-yarn dev
-```
-
-
-
-Once the development server is up and running, start making HTTP requests to your Worker.
-
-First, create a new product:
-
-```sh title="Create a new product"
-curl \
- --data '{"serialNumber": "H56N33834", "title": "Bluetooth Headphones", "weightLbs": 0.5}' \
- --header 'Content-Type: application/json' \
- --request POST \
- http://127.0.0.1:8787/products
-```
-
-You should receive a `200` response similar to the following:
-
-```json title="Create product response"
-{
-	"productId": "<document_id>"
-}
-```
-
-:::note
-
-Copy the `productId` value for use in the remaining test queries.
-
-:::
-
-Next, read the document you created:
-
-```sh title="Read a document"
-curl \
- --header 'Content-Type: application/json' \
- --request GET \
-  http://127.0.0.1:8787/products/<productId>
-```
-
-The response should be the new document serialized to JSON:
-
-```json title="Read product response"
-{
- "coll": {
- "name": "Products"
- },
- "id": "",
- "ts": {
- "isoString": ""
- },
- "serialNumber": "H56N33834",
- "title": "Bluetooth Headphones",
- "weightLbs": 0.5,
- "quantity": 0
-}
-```
-
-Finally, deploy your Worker using the [`wrangler deploy`](/workers/wrangler/commands/#deploy) command:
-
-
-
-```sh title="Deploy your Worker"
-npm run deploy
-```
-
-
-
-```sh title="Deploy your Worker"
-yarn deploy
-```
-
-
-
-This publishes the Worker to your `*.workers.dev` subdomain.
-
-## Update inventory quantity
-
-As the last step, implement a route to update the quantity of a product in your inventory, which is `0` by default.
-
-This presents a problem: to calculate the new total quantity of a product, you first need to know how many items are currently in your inventory. If you solve this with two separate queries, first reading the quantity and then updating it, the underlying data might change between the two operations.
-
-Add the following route to your `src/index.ts` file. This route responds to HTTP `PATCH` requests on the `/products/:productId/add-quantity` URL endpoint:
-
-```ts title="Update inventory quantity"
-app.patch("/products/:productId/add-quantity", async (c) => {
- const productId = c.req.param("productId");
-	const { quantity } = await c.req.json<Pick<Product, "quantity">>();
- const query = fql`Products.byId(${productId}){ quantity : .quantity + ${quantity}}`;
-	const result =
-		await c.var.faunaClient.query<Pick<Product, "quantity">>(query);
- return c.json(result.data);
-});
-```
-
-Examine the FQL query in more detail:
-
-```js title="Update query in FQL inside JavaScript"
-fql`Products.byId(${productId}){ quantity : .quantity + ${quantity}}`;
-```
-
-:::note[Consistency guarantees in Fauna]
-
-Even if multiple Workers update this quantity from different parts of the world, Fauna guarantees the consistency of the data across all Fauna regions. This article on [consistency](https://fauna.com/blog/consistency-without-clocks-faunadb-transaction-protocol?utm_source=Cloudflare&utm_medium=referral&utm_campaign=Q4_CF_2021) explains how Fauna's distributed protocol works without the need for atomic clocks.
-
-:::
-
-Test your update route:
-
-```sh title="Update product inventory"
-curl \
- --data '{"quantity": 5}' \
- --header 'Content-Type: application/json' \
- --request PATCH \
-  http://127.0.0.1:8787/products/<productId>/add-quantity
-```
-
-The response should contain the updated `quantity` value, increased by five items:
-
-```json title="Update product response"
-{
- "quantity": 5
-}
-```
-
-Update your Worker by deploying it to Cloudflare.
-
-
-
-```sh title="Update your Worker in Cloudflare"
-npm run deploy
-```
-
-
-
-```sh title="Update your Worker in Cloudflare"
-yarn deploy
-```
-
-