
Commit fb8429a

Merge branch 'main' into add-elevenlabs
2 parents: 30de909 + 49489d4

File tree

30 files changed: +1616 -1555 lines


README.md

Lines changed: 5 additions & 0 deletions
@@ -79,7 +79,9 @@ A growing set of community-developed and maintained servers demonstrates various
 - **[coin_api_mcp](https://github.com/longmans/coin_api_mcp)** - Provides access to [coinmarketcap](https://coinmarketcap.com/) cryptocurrency data.
 - **[Contentful-mcp](https://github.com/ivo-toby/contentful-mcp)** - Read, update, delete, publish content in your [Contentful](https://contentful.com) space(s) from this MCP Server.
 - **[Data Exploration](https://github.com/reading-plus-ai/mcp-server-data-exploration)** - MCP server for autonomous data exploration on .csv-based datasets, providing intelligent insights with minimal effort. NOTE: Will execute arbitrary Python code on your machine, please use with caution!
+- **[Dataset Viewer](https://github.com/privetin/dataset-viewer)** - Browse and analyze Hugging Face datasets with features like search, filtering, statistics, and data export
 - **[DevRev](https://github.com/kpsunil97/devrev-mcp-server)** - An MCP server to integrate with DevRev APIs to search through your DevRev Knowledge Graph where objects can be imported from diff. sources listed [here](https://devrev.ai/docs/import#available-sources).
+- **[Dify](https://github.com/YanxingLiu/dify-mcp-server)** - A simple implementation of an MCP server for dify workflows.
 - **[Docker](https://github.com/ckreiling/mcp-server-docker)** - Integrate with Docker to manage containers, images, volumes, and networks.
 - **[Elasticsearch](https://github.com/cr7258/elasticsearch-mcp-server)** - MCP server implementation that provides Elasticsearch interaction.
 - **[ElevenLabs](https://github.com/mamertofabian/elevenlabs-mcp-server)** - A server that integrates with ElevenLabs text-to-speech API capable of generating full voiceovers with multiple voices.
@@ -111,10 +113,13 @@ A growing set of community-developed and maintained servers demonstrates various
 - **[oatpp-mcp](https://github.com/oatpp/oatpp-mcp)** - C++ MCP integration for Oat++. Use [Oat++](https://oatpp.io) to build MCP servers.
 - **[Obsidian Markdown Notes](https://github.com/calclavia/mcp-obsidian)** - Read and search through your Obsidian vault or any directory containing Markdown notes
 - **[OpenAPI](https://github.com/snaggle-ai/openapi-mcp-server)** - Interact with [OpenAPI](https://www.openapis.org/) APIs.
+- **[OpenCTI](https://github.com/Spathodea-Network/opencti-mcp)** - Interact with OpenCTI platform to retrieve threat intelligence data including reports, indicators, malware and threat actors.
 - **[OpenRPC](https://github.com/shanejonas/openrpc-mpc-server)** - Interact with and discover JSON-RPC APIs via [OpenRPC](https://open-rpc.org).
 - **[Pandoc](https://github.com/vivekVells/mcp-pandoc)** - MCP server for seamless document format conversion using Pandoc, supporting Markdown, HTML, and plain text, with other formats like PDF, csv and docx in development.
 - **[Pinecone](https://github.com/sirmews/mcp-pinecone)** - MCP server for searching and uploading records to Pinecone. Allows for simple RAG features, leveraging Pinecone's Inference API.
+- **[Placid.app](https://github.com/felores/placid-mcp-server)** - Generate image and video creatives using Placid.app templates
 - **[Playwright](https://github.com/executeautomation/mcp-playwright)** - This MCP Server will help you run browser automation and webscraping using Playwright
+- **[Postman](https://github.com/shannonlal/mcp-postman)** - MCP server for running Postman Collections locally via Newman. Allows for simple execution of Postman Server and returns the results of whether the collection passed all the tests.
 - **[RAG Web Browser](https://github.com/apify/mcp-server-rag-web-browser)** An MCP server for Apify's RAG Web Browser Actor to perform web searches, scrape URLs, and return content in Markdown.
 - **[Rememberizer AI](https://github.com/skydeckai/mcp-server-rememberizer)** - An MCP server designed for interacting with the Rememberizer data source, facilitating enhanced knowledge retrieval.
 - **[Salesforce MCP](https://github.com/smn2gnt/MCP-Salesforce)** - Interact with Salesforce Data and Metadata

package-lock.json

Lines changed: 13 additions & 7 deletions
Some generated files are not rendered by default.

src/aws-kb-retrieval-server/Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-FROM node:22.12-alpine as builder
+FROM node:22.12-alpine AS builder

 COPY src/aws-kb-retrieval-server /app
 COPY tsconfig.json /tsconfig.json
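
The same one-line change repeats in every Dockerfile touched by this commit: the build-stage keyword is uppercased so that `AS` matches the casing of `FROM`, which silences BuildKit's casing lint (FromAsCasing) without changing build behavior. For context, a minimal multi-stage sketch of the pattern these Dockerfiles share; the stage names, paths, and commands here are illustrative, not taken from this repo:

FROM node:22.12-alpine AS builder
WORKDIR /app
COPY . .
RUN npm install && npm run build

FROM node:22.12-alpine AS release
# Copy only the built output out of the builder stage.
COPY --from=builder /app/dist /app/dist
ENTRYPOINT ["node", "/app/dist/index.js"]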

src/brave-search/Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-FROM node:22.12-alpine as builder
+FROM node:22.12-alpine AS builder

 # Must be entire project because `prepare` script is run during `npm install` and requires all files.
 COPY src/brave-search /app

src/everart/Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-FROM node:22.12-alpine as builder
+FROM node:22.12-alpine AS builder

 COPY src/everart /app
 COPY tsconfig.json /tsconfig.json

src/everything/Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-FROM node:22.12-alpine as builder
+FROM node:22.12-alpine AS builder

 COPY src/everything /app
 COPY tsconfig.json /tsconfig.json

src/fetch/pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ classifiers = [
 ]
 dependencies = [
     "markdownify>=0.13.1",
-    "mcp>=1.0.0",
+    "mcp>=1.1.3",
     "protego>=0.3.1",
     "pydantic>=2.0.0",
     "readabilipy>=0.2.0",

src/fetch/src/mcp_server_fetch/server.py

Lines changed: 13 additions & 12 deletions
@@ -7,6 +7,7 @@
 from mcp.server import Server
 from mcp.server.stdio import stdio_server
 from mcp.types import (
+    ErrorData,
     GetPromptResult,
     Prompt,
     PromptArgument,
@@ -79,15 +80,15 @@ async def check_may_autonomously_fetch_url(url: str, user_agent: str) -> None:
                 headers={"User-Agent": user_agent},
             )
         except HTTPError:
-            raise McpError(
+            raise McpError(ErrorData(
                 INTERNAL_ERROR,
                 f"Failed to fetch robots.txt {robot_txt_url} due to a connection issue",
-            )
+            ))
         if response.status_code in (401, 403):
-            raise McpError(
+            raise McpError(ErrorData(
                 INTERNAL_ERROR,
                 f"When fetching robots.txt ({robot_txt_url}), received status {response.status_code} so assuming that autonomous fetching is not allowed, the user can try manually fetching by using the fetch prompt",
-            )
+            ))
         elif 400 <= response.status_code < 500:
             return
         robot_txt = response.text
@@ -96,15 +97,15 @@ async def check_may_autonomously_fetch_url(url: str, user_agent: str) -> None:
     )
     robot_parser = Protego.parse(processed_robot_txt)
     if not robot_parser.can_fetch(str(url), user_agent):
-        raise McpError(
+        raise McpError(ErrorData(
             INTERNAL_ERROR,
             f"The sites robots.txt ({robot_txt_url}), specifies that autonomous fetching of this page is not allowed, "
             f"<useragent>{user_agent}</useragent>\n"
             f"<url>{url}</url>"
             f"<robots>\n{robot_txt}\n</robots>\n"
             f"The assistant must let the user know that it failed to view the page. The assistant may provide further guidance based on the above information.\n"
             f"The assistant can tell the user that they can try manually fetching the page by using the fetch prompt within their UI.",
-        )
+        ))


 async def fetch_url(
@@ -124,12 +125,12 @@ async def fetch_url(
             timeout=30,
         )
     except HTTPError as e:
-        raise McpError(INTERNAL_ERROR, f"Failed to fetch {url}: {e!r}")
+        raise McpError(ErrorData(INTERNAL_ERROR, f"Failed to fetch {url}: {e!r}"))
     if response.status_code >= 400:
-        raise McpError(
+        raise McpError(ErrorData(
             INTERNAL_ERROR,
             f"Failed to fetch {url} - status code {response.status_code}",
-        )
+        ))

     page_raw = response.text

@@ -221,11 +222,11 @@ async def call_tool(name, arguments: dict) -> list[TextContent]:
     try:
         args = Fetch(**arguments)
     except ValueError as e:
-        raise McpError(INVALID_PARAMS, str(e))
+        raise McpError(ErrorData(INVALID_PARAMS, str(e)))

     url = str(args.url)
     if not url:
-        raise McpError(INVALID_PARAMS, "URL is required")
+        raise McpError(ErrorData(INVALID_PARAMS, "URL is required"))

     if not ignore_robots_txt:
         await check_may_autonomously_fetch_url(url, user_agent_autonomous)
@@ -253,7 +254,7 @@ async def call_tool(name, arguments: dict) -> list[TextContent]:
 @server.get_prompt()
 async def get_prompt(name: str, arguments: dict | None) -> GetPromptResult:
     if not arguments or "url" not in arguments:
-        raise McpError(INVALID_PARAMS, "URL is required")
+        raise McpError(ErrorData(INVALID_PARAMS, "URL is required"))

     url = arguments["url"]
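
Taken together with the pyproject.toml change above, every hunk in this file makes the same adaptation: McpError is now constructed from an ErrorData value (newly imported from mcp.types) rather than a bare code/message pair, and the mcp>=1.1.3 bump pins a version with that signature. A minimal standalone sketch of the convention, with a hypothetical require_url helper; the McpError import path is assumed, and keyword arguments are shown where the diff passes them positionally:

# Hypothetical example of the error-raising convention used above.
from mcp import McpError  # import path assumed; the server may import it elsewhere
from mcp.types import INVALID_PARAMS, ErrorData


def require_url(arguments: dict | None) -> str:
    # Mirrors the validation in call_tool/get_prompt: wrap the JSON-RPC
    # error code and message in ErrorData before raising McpError.
    if not arguments or "url" not in arguments:
        raise McpError(ErrorData(code=INVALID_PARAMS, message="URL is required"))
    return str(arguments["url"])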

src/filesystem/Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-FROM node:22.12-alpine as builder
+FROM node:22.12-alpine AS builder

 WORKDIR /app

src/gdrive/Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-FROM node:22.12-alpine as builder
+FROM node:22.12-alpine AS builder

 COPY src/gdrive /app
 COPY tsconfig.json /tsconfig.json

Comments (0)