
Commit 9e0f736

Added Type component

1 parent c8ad009

1 file changed: 12 additions, 9 deletions

src/content/docs/browser-rendering/rest-api/crawl-endpoint.mdx

````diff
@@ -5,7 +5,7 @@ sidebar:
 order: 11
 ---
 
-import { Render } from "~/components";
+import { Type, MetaInfo, Render } from "~/components";
 
 The `/crawl` endpoint automates the process of scraping content from webpages starting with a single URL and crawling to a specified number or depth of links. The response can be returned in either HTML, Markdown, or JSON.
 
@@ -39,14 +39,14 @@ If you are on a Workers Free plan, your crawl may fail if it hits the [limit of
 ### Initiate the crawl job
 
 Here are the basic parameters you can use to initiate your crawl job:
-- `url` — (Required) Starts crawling from this URL
-- `limit` — (Optional) Maximum number of pages to crawl (default is 10, maximum is 100,000)
-- `depth` — (Optional) Maximum link depth to crawl from the starting URL
-- `formats` — (Optional) Response format (default is HTML, other options are Markdown and JSON)
-
-The API will respond immediately with a job `id` you will use to retrieve the status and results of the crawl job.
-
-See the [advanced usage section below](/browser-rendering/rest-api/crawl-endpoint/#initiate-the-crawl-job) for additional parameters.
+- `url` <Type text="String" /> <MetaInfo text="Required" />
+  - Starts crawling from this URL
+- `limit` <Type text="Number" /> <MetaInfo text="Optional" />
+  - Maximum number of pages to crawl (default is 10, maximum is 100,000)
+- `depth` <Type text="Number" /> <MetaInfo text="Optional" />
+  - Maximum link depth to crawl from the starting URL
+- `formats` <Type text="Array of strings" /> <MetaInfo text="Optional" />
+  - Response format (default is HTML, other options are Markdown and JSON)
 
 Here is an example that uses the basic parameters:
 ```bash
@@ -65,6 +65,8 @@ curl -X POST 'https://api.cloudflare.com/client/v4/accounts/{account_id}/browser
 }'
 ```
 
+The API will respond immediately with a job `id` you will use to retrieve the status and results of the crawl job.
+
 Here is an example of the response, which includes a job `id`:
 
 ```json output
@@ -75,6 +77,7 @@ Here is an example of the response, which includes a job `id`:
 "success": true
 }
 ```
+See the [advanced usage section below](/browser-rendering/rest-api/crawl-endpoint/#initiate-the-crawl-job) for additional parameters.
 
 ### Request results of the crawl job
 
````
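For context, the parameters documented in this change map directly onto the JSON body sent to the `/crawl` endpoint. As a minimal sketch (not part of the commit, and using a hypothetical helper name), the body could be assembled like this, with only `url` required:

```python
import json

def build_crawl_request(url, limit=None, depth=None, formats=None):
    """Assemble a /crawl request body from the documented parameters.

    Only `url` is required. Per the docs above, `limit` defaults to 10
    server-side (maximum 100,000) and the response format defaults to
    HTML when `formats` is omitted.
    """
    body = {"url": url}
    if limit is not None:
        body["limit"] = limit
    if depth is not None:
        body["depth"] = depth
    if formats is not None:
        body["formats"] = formats  # e.g. ["markdown"] or ["json"]
    return json.dumps(body)

# Prints: {"url": "https://example.com/", "limit": 5, "formats": ["markdown"]}
print(build_crawl_request("https://example.com/", limit=5, formats=["markdown"]))
```

The resulting string is what the `curl` example in the diff passes as its request body; the account id and API token in the endpoint URL are supplied separately.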