@@ -2,7 +2,7 @@
title: CLI
pcx_content_type: get-started
sidebar:
order: 1
order: 2
head:
- tag: title
content: Get started - CLI
@@ -48,11 +48,11 @@ cd my-first-worker

In your project directory, C3 will have generated the following:

* `wrangler.jsonc`: Your [Wrangler](/workers/wrangler/configuration/#sample-wrangler-configuration) configuration file.
* `index.js` (in `/src`): A minimal `'Hello World!'` Worker written in [ES module](/workers/reference/migrate-to-module-workers/) syntax.
* `package.json`: A minimal Node dependencies configuration file.
* `package-lock.json`: Refer to [`npm` documentation on `package-lock.json`](https://docs.npmjs.com/cli/v9/configuring-npm/package-lock-json).
* `node_modules`: Refer to [`npm` documentation `node_modules`](https://docs.npmjs.com/cli/v7/configuring-npm/folders#node-modules).
- `wrangler.jsonc`: Your [Wrangler](/workers/wrangler/configuration/#sample-wrangler-configuration) configuration file.
- `index.js` (in `/src`): A minimal `'Hello World!'` Worker written in [ES module](/workers/reference/migrate-to-module-workers/) syntax.
- `package.json`: A minimal Node dependencies configuration file.
- `package-lock.json`: Refer to [`npm` documentation on `package-lock.json`](https://docs.npmjs.com/cli/v9/configuring-npm/package-lock-json).
- `node_modules`: Refer to [`npm` documentation `node_modules`](https://docs.npmjs.com/cli/v7/configuring-npm/folders#node-modules).
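
For orientation, a freshly generated `wrangler.jsonc` typically looks something like this — a minimal sketch; the exact fields and values vary by template and C3 version:

```jsonc
{
	"name": "my-first-worker",
	"main": "src/index.js",
	"compatibility_date": "2025-01-01"
}
```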

</Details>

54 changes: 49 additions & 5 deletions src/content/docs/workers/get-started/index.mdx
@@ -2,13 +2,57 @@
pcx_content_type: navigation
title: Get started
sidebar:
order: 2
order: 1
group:
hideIndex: true
hideIndex: false
---

import { DirectoryListing, Render } from "~/components";
import {
DirectoryListing,
Render,
CardGrid,
Card,
LinkCard,
} from "~/components";

Build your first Worker.
## What are Cloudflare Workers?

<DirectoryListing />
Cloudflare Workers let you deploy and run code on [Cloudflare’s global network of data centers](https://www.cloudflare.com/network/). You can think of each Worker as its own server: it accepts incoming HTTP requests, processes them, and returns a response. Unlike traditional servers, you do not have to manually scale resources up or down — Cloudflare automatically spins up and shuts down Workers as traffic fluctuates, and you pay only for the time your code is actually running rather than for idle or [wall-clock time](https://blog.cloudflare.com/workers-pricing-scale-to-zero/).
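
To make that concrete, here is a minimal sketch of a Worker in module syntax; the greeting text is just an illustration:

```ts
export default {
	async fetch(request: Request): Promise<Response> {
		// Every incoming HTTP request is handed to this function,
		// which must produce a Response.
		return new Response("Hello from a Worker!");
	},
};
```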

## How Workers fit into a modern web stack

In many traditional setups, frontend (HTML, CSS, and JavaScript) and backend logic (APIs, authentication, or data fetching) are deployed separately — sometimes even on different platforms. While some providers allow you to host both your frontend and serverless functions in one place, they typically run these functions in a limited set of regions. With Cloudflare Workers, you deploy your entire application — static assets and dynamic logic — to data centers **worldwide**.

This allows you to manage everything in a single project, without needing to think about regions or how to synchronize deployments. The platform supports [popular frameworks](/workers/frameworks/), so you can keep using your desired framework for your frontend. The key difference is that your server-side code runs alongside your frontend code on Cloudflare’s network. This design minimizes latency on every request, and cuts down the number of moving parts by combining hosting, routing, and server-side execution in one platform.

## When your app needs to persist data

Beyond compute, most applications need a way to store and retrieve data. Cloudflare offers native, cost-effective storage services that run on the same global network as Workers, allowing you to run entire applications on a single platform — without managing central servers. These storage products ([Workers KV](/kv), [R2](/r2), [Durable Objects](/durable-objects/), and [D1](/d1/)) integrate directly with Workers via [bindings](/workers/runtime-apis/bindings), so requests to read or write data can stay on Cloudflare’s internal network. Since Cloudflare runs both the compute (your Worker) and the storage, the Worker doesn’t have to make a round trip over the public Internet to fetch data. To learn which storage product is right for your project, read [our guide](/workers/platform/storage-options/).
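
As a sketch of what a binding looks like in code, here is a Worker that reads from a hypothetical KV namespace exposed on `env` as `MY_KV` (the binding name and key are assumptions for the example):

```ts
// `KVNamespace` is provided by @cloudflare/workers-types.
interface Env {
	MY_KV: KVNamespace;
}

export default {
	async fetch(request: Request, env: Env): Promise<Response> {
		// The read travels over Cloudflare's internal network via the
		// binding rather than over the public Internet.
		const greeting = await env.MY_KV.get("greeting");
		return new Response(greeting ?? "No greeting stored yet");
	},
};
```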

## Choose your path to get started

<CardGrid>
<LinkCard
title="CLI"
href="/workers/get-started/cli/"
description="Follow our getting started guide using our CLI, Wrangler, to deploy your first Worker project."
/>

<LinkCard
title="Dashboard"
href="/workers/get-started/dashboard/"
description="Quickly create and edit Workers in your browser. Perfect for smaller scripts, rapid prototyping, or when you prefer not to install extra tools locally."
/>

<LinkCard
title="Prompting"
href="/workers/get-started/prompting/"
description="Use AI tools or large language models (LLMs) by providing a specialized prompt that can generate or refactor Worker code."
/>

<LinkCard
title="Quickstarts"
href="/workers/get-started/quickstarts/"
description="Start from curated GitHub templates for common scenarios and adapt them to your needs."
/>
</CardGrid>
70 changes: 39 additions & 31 deletions src/content/docs/workers/get-started/prompting.mdx
@@ -2,12 +2,19 @@
title: Prompting
pcx_content_type: concept
sidebar:
order: 3
order: 4
---

import { Tabs, TabItem, GlossaryTooltip, Type, Badge, TypeScriptExample } from "~/components";
import {
Tabs,
TabItem,
GlossaryTooltip,
Type,
Badge,
TypeScriptExample,
} from "~/components";
import { Code } from "@astrojs/starlight/components";
import BasePrompt from '~/content/partials/prompts/base-prompt.txt?raw';
import BasePrompt from "~/content/partials/prompts/base-prompt.txt?raw";

One of the fastest ways to build an application is by using AI to assist with writing the boilerplate code. When building, iterating on, or debugging applications using AI tools and Large Language Models (LLMs), a well-structured and extensive prompt provides the model with clearer guidelines and examples that can dramatically improve output.

@@ -16,30 +23,32 @@ Below is an extensive example prompt that can help you build applications using
### Getting started with Workers using a prompt <Badge text="Beta" variant="caution" size="small" />

To use the prompt:

1. Use the click-to-copy button at the top right of the code block below to copy the full prompt to your clipboard.
2. Paste it into your AI tool of choice (for example, OpenAI's ChatGPT or Anthropic's Claude).
3. Enter your part of the prompt at the end, between the `<user_prompt>` and `</user_prompt>` tags (as shown after the base prompt below).

Base prompt:

<Code code={BasePrompt} collapse={"30-10000"} lang="md" />
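
For example, the end of the pasted prompt might look like this, with a hypothetical request of your own between the tags:

```md
<user_prompt>
Build a Worker that returns the visitor's country from `request.cf` as JSON.
</user_prompt>
```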

The prompt above adopts several best practices, including:

* Using `<xml>` tags to structure the prompt
* API and usage examples for products and use-cases
* Guidance on how to generate configuration (e.g. `wrangler.jsonc`) as part of the models response.
* Recommendations on Cloudflare products to use for specific storage or state needs
- Using `<xml>` tags to structure the prompt
- API and usage examples for products and use-cases
- Guidance on how to generate configuration (e.g. `wrangler.jsonc`) as part of the models response.
Contributor

Identified issues

  • Vale Style Guide - (cloudflare.LatinTerms-warning) Use 'for example' instead of 'e.g.', but consider rewriting the sentence.

Proposed fix

Suggested change
- Guidance on how to generate configuration (e.g. `wrangler.jsonc`) as part of the models response.
- Guidance on how to generate configuration (for example, `wrangler.jsonc`) as part of the models response.

I replaced 'e.g.' with 'for example' to align with the style guide recommendation. The term 'e.g.' was not in a code reference, so the correction is applicable.

- Recommendations on Cloudflare products to use for specific storage or state needs

### Additional uses

You can use the prompt in several ways:

* Within the user context window, with your own user prompt inserted between the `<user_prompt>` tags (**easiest**)
* As the `system` prompt for models that support system prompts
* Adding it to the prompt library and/or file context within your preferred IDE:
* Cursor: add the prompt to [your Project Rules](https://docs.cursor.com/context/rules-for-ai)
* Zed: use [the `/file` command](https://zed.dev/docs/assistant/assistant-panel) to add the prompt to the Assistant context.
* Windsurf: use [the `@-mention` command](https://docs.codeium.com/chat/overview) to include a file containing the prompt to your Chat.
- Within the user context window, with your own user prompt inserted between the `<user_prompt>` tags (**easiest**)
- As the `system` prompt for models that support system prompts
- Adding it to the prompt library and/or file context within your preferred IDE:
- Cursor: add the prompt to [your Project Rules](https://docs.cursor.com/context/rules-for-ai)
- Zed: use [the `/file` command](https://zed.dev/docs/assistant/assistant-panel) to add the prompt to the Assistant context.
- Windsurf: use [the `@-mention` command](https://docs.codeium.com/chat/overview) to include a file containing the prompt in your Chat.

:::note

@@ -56,15 +65,15 @@ If you are building an AI application that will itself generate code, you can ad
<TypeScriptExample filename="index.ts">

```ts
import workersPrompt from "./workersPrompt.md"
import workersPrompt from "./workersPrompt.md";
// Assumed import: the example calls `new OpenAI(...)` below, which needs the
// `openai` npm package's default-export client in scope.
import OpenAI from "openai";

// Llama 3.3 from Workers AI
const PREFERRED_MODEL = "@cf/meta/llama-3.3-70b-instruct-fp8-fast"
const PREFERRED_MODEL = "@cf/meta/llama-3.3-70b-instruct-fp8-fast";

export default {
	async fetch(req: Request, env: Env, ctx: ExecutionContext) {
		const openai = new OpenAI({
			apiKey: env.WORKERS_AI_API_KEY
			apiKey: env.WORKERS_AI_API_KEY,
		});

		const stream = await openai.chat.completions.create({
@@ -76,8 +85,9 @@ export default {
				{
					role: "user",
					// Imagine something big!
					content: "Build an AI Agent using Workflows. The Workflow should be triggered by a GitHub webhook on a pull request, and ..."
				}
					content:
						"Build an AI Agent using Workflows. The Workflow should be triggered by a GitHub webhook on a pull request, and ...",
				},
			],
			model: PREFERRED_MODEL,
			stream: true,
@@ -92,7 +102,7 @@ export default {
		(async () => {
			try {
				for await (const chunk of stream) {
					const content = chunk.choices[0]?.delta?.content || '';
					const content = chunk.choices[0]?.delta?.content || "";
					await writer.write(encoder.encode(content));
				}
			} finally {
@@ -102,24 +112,22 @@

		return new Response(transformStream.readable, {
			headers: {
				'Content-Type': 'text/plain; charset=utf-8',
				'Transfer-Encoding': 'chunked'
			}
				"Content-Type": "text/plain; charset=utf-8",
				"Transfer-Encoding": "chunked",
			},
		});
	}
}

	},
};
```

</TypeScriptExample>


## Additional resources

To get the most out of AI models and tools, we recommend reading the following guides on prompt engineering and structure:

* OpenAI's [prompt engineering](https://platform.openai.com/docs/guides/prompt-engineering) guide and [best practices](https://platform.openai.com/docs/guides/reasoning-best-practices) for using reasoning models.
* The [prompt engineering](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) guide from Anthropic
* Google's [quick start guide](https://services.google.com/fh/files/misc/gemini-for-google-workspace-prompting-guide-101.pdf) for writing effective prompts
* Meta's [prompting documentation](https://www.llama.com/docs/how-to-guides/prompting/) for their Llama model family.
* GitHub's guide for [prompt engineering](https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/prompt-engineering-for-copilot-chat) when using Copilot Chat.
- OpenAI's [prompt engineering](https://platform.openai.com/docs/guides/prompt-engineering) guide and [best practices](https://platform.openai.com/docs/guides/reasoning-best-practices) for using reasoning models.
- The [prompt engineering](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) guide from Anthropic
- Google's [quick start guide](https://services.google.com/fh/files/misc/gemini-for-google-workspace-prompting-guide-101.pdf) for writing effective prompts
- Meta's [prompting documentation](https://www.llama.com/docs/how-to-guides/prompting/) for their Llama model family.
- GitHub's guide for [prompt engineering](https://docs.github.com/en/copilot/using-github-copilot/copilot-chat/prompt-engineering-for-copilot-chat) when using Copilot Chat.
2 changes: 1 addition & 1 deletion src/content/docs/workers/get-started/quickstarts.mdx
@@ -3,7 +3,7 @@ type: overview
pcx_content_type: get-started
title: Quickstarts
sidebar:
order: 3
order: 5
head: []
description: GitHub repositories that are designed to be a starting point for
building a new Cloudflare Workers project.