MCP server that exposes PaperBanana's diagram and plot generation as tools for Claude Code, Cursor, or any MCP-compatible client.
| Tool | Description |
|---|---|
| `generate_diagram` | Generate a methodology diagram from text context + caption |
| `generate_plot` | Generate a statistical plot from JSON data + intent description |
| `evaluate_diagram` | Compare a generated diagram against a human reference (4 dimensions) |
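Once the server is connected, an MCP client invokes these tools over JSON-RPC. A minimal sketch of a `tools/call` request for `generate_plot` — note the argument names `data` and `intent` here are assumptions based on the tool descriptions above, not confirmed parameter names:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "generate_plot",
    "arguments": {
      "data": "{\"models\": [\"GPT-4\", \"Claude\"], \"accuracy\": [0.92, 0.94]}",
      "intent": "Bar chart comparing model accuracy"
    }
  }
}
```

In practice your MCP client (Claude Code, Cursor) constructs these requests for you; you only describe what you want in natural language.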
No local clone needed. Add the config below to your MCP client.

Add to `.claude/claude_code_config.json` (or a project-level config):
```json
{
  "mcpServers": {
    "paperbanana": {
      "command": "uvx",
      "args": ["--from", "paperbanana[mcp]", "paperbanana-mcp"],
      "env": { "GOOGLE_API_KEY": "your-google-api-key" }
    }
  }
}
```

Add to `.cursor/mcp.json` in your project:
```json
{
  "mcpServers": {
    "paperbanana": {
      "command": "uvx",
      "args": ["--from", "paperbanana[mcp]", "paperbanana-mcp"],
      "env": { "GOOGLE_API_KEY": "your-google-api-key" }
    }
  }
}
```

For contributors or local development:
```bash
pip install -e ".[mcp]"
```

This installs `fastmcp` and registers the `paperbanana-mcp` console script. Then use the same MCP config as above, but replace the `uvx` invocation with a direct call:
```json
{
  "mcpServers": {
    "paperbanana": {
      "command": "paperbanana-mcp",
      "env": { "GOOGLE_API_KEY": "your-google-api-key" }
    }
  }
}
```

This repo ships with three Claude Code skills in `.claude/skills/`:
| Skill | Description |
|---|---|
| `/generate-diagram <file> [caption]` | Generate a methodology diagram from a text file |
| `/generate-plot <data-file> [intent]` | Generate a statistical plot from CSV or JSON data |
| `/evaluate-diagram <generated> <reference>` | Evaluate a diagram against a human reference |
Skills are available automatically when you clone the repo and use Claude Code.
```
User: Generate a diagram for this methodology:
  "Our framework uses a two-phase pipeline: first a linear planning
  phase with Retriever, Planner, and Stylist agents, followed by
  an iterative refinement phase with Visualizer and Critic agents."
Caption: "Overview of the PaperBanana multi-agent framework"
```

```
User: Create a bar chart from this data:
  {"models": ["GPT-4", "Claude", "Gemini"], "accuracy": [0.92, 0.94, 0.91]}
Intent: "Bar chart comparing model accuracy on benchmark"
```

```
User: Evaluate the diagram at ./output.png against the reference at ./reference.png
Context: [methodology text]
Caption: "System architecture overview"
```
The server reads configuration from environment variables and `.env` files.

| Variable | Default | Description |
|---|---|---|
| `GOOGLE_API_KEY` | (none) | Google API key (required) |
| `SKIP_SSL_VERIFICATION` | `false` | Disable SSL verification for proxied environments |
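As a rough sketch of the precedence in the table above, environment-variable resolution might look like the following. The `load_settings` helper is hypothetical, written only to illustrate the required/default semantics; PaperBanana's actual loader (which also reads `.env` files) may differ:

```python
import os


def load_settings(env: dict) -> dict:
    """Hypothetical helper illustrating the table above: GOOGLE_API_KEY is
    required, SKIP_SSL_VERIFICATION defaults to false."""
    api_key = env.get("GOOGLE_API_KEY")
    if not api_key:
        raise ValueError("GOOGLE_API_KEY is required")
    skip_ssl = env.get("SKIP_SSL_VERIFICATION", "false").strip().lower() in ("1", "true", "yes")
    return {"google_api_key": api_key, "skip_ssl_verification": skip_ssl}


print(load_settings({"GOOGLE_API_KEY": "abc"}))
# {'google_api_key': 'abc', 'skip_ssl_verification': False}
```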
After publishing to PyPI, you can submit PaperBanana to MCP directories for discoverability:
- **Official MCP Registry** - uses the `mcp-publisher` CLI; see their docs for the current submission process
- **Smithery.ai** - submit through their website
- **Glama.ai** - community listing submission
- **mcp.so** - community-driven, submit via their GitHub