diff --git a/src/content/docs/workers/get-started/guide.mdx b/src/content/docs/workers/get-started/cli.mdx
similarity index 95%
rename from src/content/docs/workers/get-started/guide.mdx
rename to src/content/docs/workers/get-started/cli.mdx
index fce8efe6d0daa7c..5a49f429350d983 100644
--- a/src/content/docs/workers/get-started/guide.mdx
+++ b/src/content/docs/workers/get-started/cli.mdx
@@ -2,7 +2,7 @@
title: CLI
pcx_content_type: get-started
sidebar:
- order: 1
+ order: 2
head:
- tag: title
content: Get started - CLI
@@ -48,11 +48,11 @@ cd my-first-worker
In your project directory, C3 will have generated the following:
-* `wrangler.jsonc`: Your [Wrangler](/workers/wrangler/configuration/#sample-wrangler-configuration) configuration file.
-* `index.js` (in `/src`): A minimal `'Hello World!'` Worker written in [ES module](/workers/reference/migrate-to-module-workers/) syntax.
-* `package.json`: A minimal Node dependencies configuration file.
-* `package-lock.json`: Refer to [`npm` documentation on `package-lock.json`](https://docs.npmjs.com/cli/v9/configuring-npm/package-lock-json).
-* `node_modules`: Refer to [`npm` documentation `node_modules`](https://docs.npmjs.com/cli/v7/configuring-npm/folders#node-modules).
+- `wrangler.jsonc`: Your [Wrangler](/workers/wrangler/configuration/#sample-wrangler-configuration) configuration file.
+- `index.js` (in `/src`): A minimal `'Hello World!'` Worker written in [ES module](/workers/reference/migrate-to-module-workers/) syntax.
+- `package.json`: A minimal Node dependencies configuration file.
+- `package-lock.json`: Refer to [`npm` documentation on `package-lock.json`](https://docs.npmjs.com/cli/v9/configuring-npm/package-lock-json).
+- `node_modules`: Refer to [`npm` documentation on `node_modules`](https://docs.npmjs.com/cli/v7/configuring-npm/folders#node-modules).
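For reference, the generated Wrangler configuration is small. A minimal sketch (the project name and compatibility date are placeholders; C3 fills in your actual values) looks something like:

```jsonc
{
	// The Worker's name, used when deploying.
	"name": "my-first-worker",
	// The entry point C3 generates.
	"main": "src/index.js",
	// Pins runtime behavior to a dated snapshot; C3 sets the current date.
	"compatibility_date": "2025-01-01"
}
```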
diff --git a/src/content/docs/workers/get-started/index.mdx b/src/content/docs/workers/get-started/index.mdx
index a0625b2cbe2954c..7e0c2f26675f71f 100644
--- a/src/content/docs/workers/get-started/index.mdx
+++ b/src/content/docs/workers/get-started/index.mdx
@@ -2,13 +2,57 @@
pcx_content_type: navigation
title: Get started
sidebar:
- order: 2
+ order: 1
group:
- hideIndex: true
+ hideIndex: false
---
-import { DirectoryListing, Render } from "~/components";
+import {
+ DirectoryListing,
+ Render,
+ CardGrid,
+ Card,
+ LinkCard,
+} from "~/components";
-Build your first Worker.
+## What are Cloudflare Workers?
-
+Cloudflare Workers let you deploy and run code on [Cloudflare’s global network of data centers](https://www.cloudflare.com/network/). You can think of each Worker as its own server: it accepts incoming HTTP requests, processes them, and returns a response. Unlike traditional servers, you do not have to manually scale resources up or down — Cloudflare automatically spins up and shuts down Workers as traffic fluctuates, and you pay only for the time your code is actually running rather than for idle or [wall-clock time](https://blog.cloudflare.com/workers-pricing-scale-to-zero/).
+
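The request/response model above can be sketched in a few lines: a Worker is a module whose default export provides a `fetch()` handler that receives each incoming HTTP request and returns a `Response`.

```javascript
// A minimal Worker: the default export's fetch() handler is invoked
// once per incoming request and must return a Response.
const worker = {
	async fetch(request) {
		const url = new URL(request.url);
		return new Response(`Hello from ${url.pathname}!`);
	},
};

export default worker;
```

Deploying this (for example with `npx wrangler deploy`) runs the handler in every Cloudflare data center, with no servers to provision or scale.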
+## How Workers fit into a modern web stack
+
+In many traditional setups, frontend (HTML, CSS, and JavaScript) and backend logic (APIs, authentication, or data fetching) are deployed separately — sometimes even on different platforms. While some providers allow you to host both your frontend and serverless functions in one place, they typically run these functions in a limited set of regions. With Cloudflare Workers, you deploy your entire application — static assets and dynamic logic — to data centers **worldwide**.
+
+This allows you to manage everything in a single project, without needing to think about regions or how to synchronize deployments. The platform supports [popular frameworks](/workers/frameworks/), so you can keep using your desired framework for your frontend. The key difference is that your server-side code runs alongside your frontend code on Cloudflare’s network. This design minimizes latency on every request, and cuts down the number of moving parts by combining hosting, routing, and server-side execution in one platform.
+
+## When your app needs to persist data
+
+Beyond compute, most applications need a way to store and retrieve data. Cloudflare offers native, cost-effective storage services that run on the same global network as Workers, allowing you to run entire applications in a single platform — without managing central servers. These storage products ([Workers KV](/kv), [R2](/r2), [Durable Objects](/durable-objects/), [D1](/d1/)) integrate directly with Workers via [bindings](/workers/runtime-apis/bindings), so that requests to read or write data can stay on Cloudflare’s internal network. Since Cloudflare runs both the compute (your Worker) and the storage, the Worker doesn’t have to make a round trip over the public Internet to fetch data. To learn which storage product is right for your project, read [our guide](/workers/platform/storage-options/).
+
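As a rough sketch of how a binding surfaces in code (the binding name `MY_KV` is illustrative; the actual name comes from your Wrangler configuration), reading from Workers KV inside a Worker looks like this:

```javascript
// Sketch: a KV namespace bound to the Worker appears as a property on `env`.
// Reads stay on Cloudflare's network rather than crossing the public Internet.
const worker = {
	async fetch(request, env) {
		// get() resolves to the stored string, or null if the key is absent.
		const greeting = (await env.MY_KV.get("greeting")) ?? "Hello!";
		return new Response(greeting);
	},
};

export default worker;
```

The same pattern applies to the other storage bindings: D1, R2, and Durable Objects each expose their own API on `env`.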
+## Choose your path to get started
+
+
+
+
+
+
+
+
+
+
diff --git a/src/content/docs/workers/get-started/prompting.mdx b/src/content/docs/workers/get-started/prompting.mdx
index ca440aa98fc0a49..0b8520fc464e012 100644
--- a/src/content/docs/workers/get-started/prompting.mdx
+++ b/src/content/docs/workers/get-started/prompting.mdx
@@ -2,12 +2,19 @@
title: Prompting
pcx_content_type: concept
sidebar:
- order: 3
+ order: 4
---
-import { Tabs, TabItem, GlossaryTooltip, Type, Badge, TypeScriptExample } from "~/components";
+import {
+ Tabs,
+ TabItem,
+ GlossaryTooltip,
+ Type,
+ Badge,
+ TypeScriptExample,
+} from "~/components";
import { Code } from "@astrojs/starlight/components";
-import BasePrompt from '~/content/partials/prompts/base-prompt.txt?raw';
+import BasePrompt from "~/content/partials/prompts/base-prompt.txt?raw";
One of the fastest ways to build an application is by using AI to assist with writing the boilerplate code. When building, iterating on, or debugging applications using AI tools and Large Language Models (LLMs), a well-structured and extensive prompt helps provide the model with clearer guidelines and examples that can dramatically improve output.
@@ -16,30 +23,32 @@ Below is an extensive example prompt that can help you build applications using
### Getting started with Workers using a prompt
To use the prompt:
+
1. Use the click-to-copy button at the top right of the code block below to copy the full prompt to your clipboard.
2. Paste it into your AI tool of choice (for example, OpenAI's ChatGPT or Anthropic's Claude).
3. Make sure to enter your part of the prompt at the end, between the `` and `` tags.
Base prompt:
+
The prompt above adopts several best practices, including:
-* Using `` tags to structure the prompt
-* API and usage examples for products and use-cases
-* Guidance on how to generate configuration (e.g. `wrangler.jsonc`) as part of the models response.
-* Recommendations on Cloudflare products to use for specific storage or state needs
+- Using `` tags to structure the prompt
+- API and usage examples for products and use-cases
+- Guidance on how to generate configuration (e.g. `wrangler.jsonc`) as part of the model's response.
+- Recommendations on Cloudflare products to use for specific storage or state needs
### Additional uses
You can use the prompt in several ways:
-* Within the user context window, with your own user prompt inserted between the `` tags (**easiest**)
-* As the `system` prompt for models that support system prompts
-* Adding it to the prompt library and/or file context within your preferred IDE:
- * Cursor: add the prompt to [your Project Rules](https://docs.cursor.com/context/rules-for-ai)
- * Zed: use [the `/file` command](https://zed.dev/docs/assistant/assistant-panel) to add the prompt to the Assistant context.
- * Windsurf: use [the `@-mention` command](https://docs.codeium.com/chat/overview) to include a file containing the prompt to your Chat.
+- Within the user context window, with your own user prompt inserted between the `` tags (**easiest**)
+- As the `system` prompt for models that support system prompts
+- Adding it to the prompt library and/or file context within your preferred IDE:
+ - Cursor: add the prompt to [your Project Rules](https://docs.cursor.com/context/rules-for-ai)
+ - Zed: use [the `/file` command](https://zed.dev/docs/assistant/assistant-panel) to add the prompt to the Assistant context.
+ - Windsurf: use [the `@-mention` command](https://docs.codeium.com/chat/overview) to include a file containing the prompt to your Chat.
:::note
@@ -56,15 +65,15 @@ If you are building an AI application that will itself generate code, you can ad
```ts
-import workersPrompt from "./workersPrompt.md"
+import workersPrompt from "./workersPrompt.md";
// Llama 3.3 from Workers AI
-const PREFERRED_MODEL = "@cf/meta/llama-3.3-70b-instruct-fp8-fast"
+const PREFERRED_MODEL = "@cf/meta/llama-3.3-70b-instruct-fp8-fast";
export default {
async fetch(req: Request, env: Env, ctx: ExecutionContext) {
const openai = new OpenAI({
- apiKey: env.WORKERS_AI_API_KEY
+ apiKey: env.WORKERS_AI_API_KEY,
});
const stream = await openai.chat.completions.create({
@@ -76,8 +85,9 @@ export default {
{
role: "user",
// Imagine something big!
- content: "Build an AI Agent using Workflows. The Workflow should be triggered by a GitHub webhook on a pull request, and ..."
- }
+ content:
+ "Build an AI Agent using Workflows. The Workflow should be triggered by a GitHub webhook on a pull request, and ...",
+ },
],
model: PREFERRED_MODEL,
stream: true,
@@ -92,7 +102,7 @@ export default {
(async () => {
try {
for await (const chunk of stream) {
- const content = chunk.choices[0]?.delta?.content || '';
+ const content = chunk.choices[0]?.delta?.content || "";
await writer.write(encoder.encode(content));
}
} finally {
@@ -102,24 +112,22 @@ export default {
return new Response(transformStream.readable, {
headers: {
- 'Content-Type': 'text/plain; charset=utf-8',
- 'Transfer-Encoding': 'chunked'
- }
+ "Content-Type": "text/plain; charset=utf-8",
+ "Transfer-Encoding": "chunked",
+ },
});
- }
-}
-
+ },
+};
```
-
## Additional resources
To get the most out of AI models and tools, we recommend reading the following guides on prompt engineering and structure:
-* OpenAI's [prompt engineering](https://platform.openai.com/docs/guides/prompt-engineering) guide and [best practices](https://platform.openai.com/docs/guides/reasoning-best-practices) for using reasoning models.
-* The [prompt engineering](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) guide from Anthropic
-* Google's [quick start guide](https://services.google.com/fh/files/misc/gemini-for-google-workspace-prompting-guide-101.pdf) for writing effective prompts
-* Meta's [prompting documentation](https://www.llama.com/docs/how-to-guides/prompting/) for their Llama model family.
-* GitHub's guide for [prompt engineering](https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/prompt-engineering-for-copilot-chat) when using Copilot Chat.
+- OpenAI's [prompt engineering](https://platform.openai.com/docs/guides/prompt-engineering) guide and [best practices](https://platform.openai.com/docs/guides/reasoning-best-practices) for using reasoning models.
+- The [prompt engineering](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) guide from Anthropic
+- Google's [quick start guide](https://services.google.com/fh/files/misc/gemini-for-google-workspace-prompting-guide-101.pdf) for writing effective prompts
+- Meta's [prompting documentation](https://www.llama.com/docs/how-to-guides/prompting/) for their Llama model family.
+- GitHub's guide for [prompt engineering](https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/prompt-engineering-for-copilot-chat) when using Copilot Chat.
diff --git a/src/content/docs/workers/get-started/quickstarts.mdx b/src/content/docs/workers/get-started/quickstarts.mdx
index 89edac9a9cecbb1..2119e4b3d41c28b 100644
--- a/src/content/docs/workers/get-started/quickstarts.mdx
+++ b/src/content/docs/workers/get-started/quickstarts.mdx
@@ -3,7 +3,7 @@ type: overview
pcx_content_type: get-started
title: Quickstarts
sidebar:
- order: 3
+ order: 5
head: []
description: GitHub repositories that are designed to be a starting point for
building a new Cloudflare Workers project.