The most impactful contribution right now is improving the reference dataset. Output quality scales directly with reference quality, so even a single well-chosen example helps. That said, we also welcome contributions elsewhere in the project, from improvements to existing components to new features — support is always welcome.
PaperBanana uses a curated set of methodology diagrams for in-context learning. We need more diverse, high-quality samples across different diagram styles and research domains. A good candidate meets all of the following criteria:
- The diagram clearly illustrates system architecture, pipeline flow, or framework structure
- Landscape layout with aspect ratio roughly between 1.5 and 2.5 (width / height)
- The methodology section in the paper is self-contained enough to describe the approach
- The diagram is not a results plot, ablation table, t-SNE visualization, or data sample
To suggest a paper, open a Discussion post or an Issue with:
- arXiv link (or other public paper URL)
- Figure number of the methodology diagram
- (Optional) Which category it falls under: Agent & Reasoning, Vision & Perception, Generative & Learning, or Science & Applications
We'll handle extraction and curation from there.
Add a complete reference example via pull request. Each example is a directory under data/reference_sets/ containing three files:
```
data/reference_sets/your_example_name/
├── methodology.txt   # Extracted methodology section text
├── diagram.png       # Methodology diagram image
└── metadata.json     # Caption and metadata
```
`metadata.json` format:

```json
{
  "paper_title": "Full paper title",
  "arxiv_id": "2601.23265",
  "figure_number": 2,
  "caption": "Original figure caption from the paper",
  "category": "agent_reasoning",
  "source_url": "https://arxiv.org/abs/2601.23265",
  "aspect_ratio": 1.85
}
```

Valid categories: `agent_reasoning`, `vision_perception`, `generative_learning`, `science_applications`
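A quick sanity check against the fields above can catch mistakes before you open a PR. This validator is a sketch written from the format shown here, not a tool that ships with the repo:

```python
import json

REQUIRED_KEYS = {"paper_title", "arxiv_id", "figure_number", "caption",
                 "category", "source_url", "aspect_ratio"}
VALID_CATEGORIES = {"agent_reasoning", "vision_perception",
                    "generative_learning", "science_applications"}


def validate_metadata(raw):
    """Return a list of problems found in a metadata.json payload (empty if OK)."""
    meta = json.loads(raw) if isinstance(raw, str) else raw
    problems = []
    missing = REQUIRED_KEYS - meta.keys()
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if meta.get("category") not in VALID_CATEGORIES:
        problems.append(f"invalid category: {meta.get('category')!r}")
    ratio = meta.get("aspect_ratio")
    if not isinstance(ratio, (int, float)) or not 1.5 <= ratio <= 2.5:
        problems.append(f"aspect_ratio out of range [1.5, 2.5]: {ratio!r}")
    return problems
```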
Before submitting, verify:
- The methodology text matches what the diagram actually depicts
- The diagram image is clean (no scan artifacts, readable at 800px width)
- The aspect ratio is between 1.5 and 2.5
- The paper is publicly available
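One way to verify the aspect ratio without extra dependencies is to read the width and height directly from the PNG's IHDR chunk, which always follows the 8-byte signature. This is a standalone sketch, not part of the repo's tooling:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"


def png_aspect_ratio(data):
    """Return width / height for a PNG given its raw bytes.

    Layout: 8-byte signature, 4-byte chunk length, 4-byte type b"IHDR",
    then big-endian unsigned width and height.
    """
    if data[:8] != PNG_SIGNATURE or data[12:16] != b"IHDR":
        raise ValueError("not a PNG file")
    width, height = struct.unpack(">II", data[16:24])
    return width / height


def within_reference_range(ratio, lo=1.5, hi=2.5):
    """Check the reference-set constraint on landscape aspect ratios."""
    return lo <= ratio <= hi
```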
Category notes:
- Vision & Perception: detection, segmentation, multimodal pipelines
- Science & Applications: domain-specific architectures outside core ML
```
git clone https://github.com/llmsresearch/paperbanana.git
cd paperbanana
pip install -e ".[dev,google]"
```

Run the test suite:

```
pytest tests/ -v
```

We use ruff for linting and formatting:

```
ruff check paperbanana/ mcp_server/ tests/ scripts/
ruff format paperbanana/ mcp_server/ tests/ scripts/
```

- Fork the repo and create a branch from `main`
- Make your changes with clear, descriptive commit messages
- Add or update tests if applicable
- Ensure `pytest` and `ruff check` pass
- Open a PR with a brief description of what changed and why
- Provider support: Adding backends beyond Gemini (OpenAI, Anthropic, local models via Ollama)
- Reference set tooling: Improving the automated extraction pipeline in `scripts/`
- Evaluation: Expanding the VLM-as-Judge evaluation with additional metrics or human correlation studies
- MCP server: Additional tools, better error handling, support for more IDE clients
- Documentation: Usage examples, tutorials, edge case documentation
When opening an issue, include:
- What you were trying to do
- The input you used (methodology text and caption)
- The output you got (attach the generated image if possible)
- Python version and OS
- Any error messages or tracebacks
For diagram quality issues specifically, attaching both the generated output and the expected result (or a reference from the paper) helps us diagnose whether the issue is in retrieval, planning, or rendering.
Use GitHub Discussions for questions, ideas, and general conversation. Issues are for bugs and concrete feature requests.