- **[Git](src/git)** - Tools to read, search, and manipulate Git repositories
- **[Google Drive](src/gdrive)** - File access and search capabilities for Google Drive
- **[PostgreSQL](src/postgres)** - Read-only database access with schema inspection
- **[Sqlite](src/sqlite)** - Database interaction and business intelligence capabilities
- **[Slack](src/slack)** - Channel management and messaging capabilities
- **[Sentry](src/sentry)** - Retrieving and analyzing issues from Sentry.io
- **[Memory](src/memory)** - Knowledge graph-based persistent memory system
- **[Puppeteer](src/puppeteer)** - Browser automation and web scraping
- **[Brave Search](src/brave-search)** - Web and local search using Brave's Search API
- **[Google Maps](src/google-maps)** - Location services, directions, and place details
- **[Fetch](src/fetch)** - Web content fetching and conversion for efficient LLM usage
## 🌎 Community Servers
- **[Cloudflare](https://github.com/cloudflare/mcp-server-cloudflare)** - Deploy, configure & interrogate your resources on the Cloudflare developer platform (e.g. Workers/KV/R2/D1)
- **[Raygun](https://github.com/MindscapeHQ/mcp-server-raygun)** - Interact with your crash reporting and real user monitoring data on your Raygun account
## 🚀 Getting Started
### Using MCP Servers in this Repository
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## src/fetch/README.md
A Model Context Protocol server that provides web content fetching capabilities. This server enables LLMs to retrieve and process content from web pages, converting HTML to markdown for easier consumption.
Presently the server only supports fetching HTML content.

The fetch tool will truncate the response, but by using the `start_index` argument, you can specify where to start the content extraction. This lets models read a webpage in chunks, until they find the information they need.
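The chunked-reading behaviour can be sketched with plain string slicing (a minimal illustration of the truncation semantics, not the server's actual implementation; the function name is hypothetical):

```python
def fetch_chunk(content: str, start_index: int = 0, max_length: int = 5000) -> str:
    """Return the window of content a truncating fetch would hand back."""
    return content[start_index:start_index + max_length]

# A model reads a 12,000-character page in successive calls,
# advancing start_index until nothing is left.
page = "x" * 12000
chunks, start = [], 0
while True:
    chunk = fetch_chunk(page, start_index=start)
    if not chunk:
        break
    chunks.append(chunk)
    start += len(chunk)
```

Here the page comes back in three pieces (5000, 5000, and 2000 characters) that reassemble the full content.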
### Available Tools
- `fetch` - Fetches a URL from the internet and extracts its contents as markdown.
    - `url` (string, required): URL to fetch
    - `max_length` (integer, optional): Maximum number of characters to return (default: 5000)
    - `start_index` (integer, optional): Start content from this character index (default: 0)
    - `raw` (boolean, optional): Get raw content without markdown conversion (default: false)
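As an illustration, a single tool call that skips the first 5,000 characters of a page might carry arguments like these (the URL is a placeholder):

```json
{
  "url": "https://example.com/long-article",
  "max_length": 5000,
  "start_index": 5000,
  "raw": false
}
```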
### Prompts
- **fetch**
  - Fetch a URL and extract its contents as markdown
  - Arguments:
    - `url` (string, required): URL to fetch
## Installation
Optionally: install Node.js; this will cause the fetch server to use a different HTML simplifier that is more robust.
### Using uv (recommended)
When using [`uv`](https://docs.astral.sh/uv/) no specific installation is needed. We will
### Configure for Zed

Add to your Zed settings.json:

<details>
<summary>Using uvx</summary>

```json
"context_servers": {
  "mcp-server-fetch": {
    "command": "uvx",
    "args": ["mcp-server-fetch"]
  }
},
```
</details>

<details>
<summary>Using pip installation</summary>

```json
"context_servers": {
  "mcp-server-fetch": {
    "command": "python",
    "args": ["-m", "mcp_server_fetch"]
  }
},
```
</details>
### Customization - robots.txt
By default, the server will obey a website's robots.txt file if the request came from the model (via a tool), but not if the request was user initiated (via a prompt). This can be disabled by adding the argument `--ignore-robots-txt` to the `args` list in the configuration.
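For example, in a uvx-based configuration, the `--ignore-robots-txt` flag is appended to the `args` list:

```json
"args": ["mcp-server-fetch", "--ignore-robots-txt"]
```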
### Customization - User-agent
By default, depending on whether the request came from the model (via a tool) or was user initiated (via a prompt), the server will use a different user-agent string. This can be customized by adding the argument `--user-agent=YourUserAgent` to the `args` list in the configuration.
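In a uvx-based configuration, a custom user-agent can be supplied like this (the value is a placeholder):

```json
"args": ["mcp-server-fetch", "--user-agent=YourApp/1.0"]
```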