davydeng1/brainstorm-track2
BrainStorm 2026 - Track 2: The "Compass" Challenge

Build a real-time visualization tool to guide neurosurgeons in placing a brain-computer interface array over the optimal region of the motor cortex.

🎯 The Challenge

Design and build a web application that:

  • Processes a live stream of neural data from a 1024-channel micro-ECoG array
  • Identifies areas of functionally tuned neural activity (not just individual transient hotspots)
  • Visualizes tuned regions relative to the array position
  • Provides clear, intuitive visual guidance for array placement optimization
  • Works in the high-pressure environment of an operating room

Key insight: Neural activity over tuned regions is not uniform — it depends on cursor movement direction. A good solution identifies coherent areas of interest rather than chasing individual activation spikes.
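One common way to capture that insight is a cosine tuning model: a channel is "tuned" if its activity varies systematically with movement direction, not just in overall level. The sketch below is illustrative only (the model, the `tuning_depth` name, and the synthetic data are assumptions, not part of the challenge spec) and shows how a per-channel modulation depth could separate tuned channels from transient hotspots.

```python
# Illustrative sketch: fit power ~ b0 + bx*cos(angle) + by*sin(angle)
# per channel; sqrt(bx^2 + by^2) measures directional modulation.
# The cosine model and all names here are assumptions for demonstration.
import numpy as np

def tuning_depth(power, angles):
    """power: (n_samples, n_channels); angles: (n_samples,) in radians.
    Returns per-channel modulation depth -- high depth means activity
    varies coherently with cursor direction (a tuned channel)."""
    X = np.column_stack([np.ones_like(angles), np.cos(angles), np.sin(angles)])
    coef, *_ = np.linalg.lstsq(X, power, rcond=None)  # coef: (3, n_channels)
    return np.hypot(coef[1], coef[2])

# Synthetic check: channel 0 is direction-tuned, channel 1 is flat noise.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 2000)
tuned = 5 + 3 * np.cos(angles - 1.0) + rng.normal(0, 0.5, 2000)
flat = 5 + rng.normal(0, 0.5, 2000)
depth = tuning_depth(np.column_stack([tuned, flat]), angles)
```

A channel that merely spikes loudly but uniformly across directions gets a low depth score under this model, which is exactly the distinction the challenge asks for.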

📚 Documentation

| Document | Purpose |
|---|---|
| Overview | Challenge description, requirements, judging criteria |
| Installation | Setup instructions |
| Getting Started | Development workflow and signal processing hints |
| Data | Dataset formats, signal content, and processing guidance |
| Data Stream | WebSocket protocol reference |
| User Persona | Understanding your target user |
| Submissions | Live evaluation and how to submit |
| FAQ | Common questions and rules |

🚀 Quick Start

# 1. Install UV (if needed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# 2. Setup
make install

# 3. Download data (start with super_easy, develop with hard)
uv run python -m scripts.download super_easy
uv run python -m scripts.download hard

# 4. Stream data (Terminal 1)
uv run brainstorm-stream --from-file data/hard/

# 5. Run example app (Terminal 2)
uv run brainstorm-serve
# Open http://localhost:8000

The example app shows a basic heatmap. Your solution should go far beyond this!

See Installation for detailed setup and Getting Started for development guidance.

📊 The Data

Four difficulty levels on HuggingFace:

| Difficulty | Description | Use Case |
|---|---|---|
| super_easy | Crystal-clear signals | Understanding the signal |
| easy | Clean signals, minimal noise | Initial development |
| medium | Moderate noise | Testing robustness |
| hard | Challenging conditions | Final testing & live evaluation |

  • Array: 1024 channels (32×32 grid)
  • Sampling Rate: 500 Hz
  • Protocol: WebSocket (JSON batches)

See Data for detailed format and signal processing guidance.
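The exact JSON schema is defined in the Data Stream docs; as a rough sketch of the shape of the problem, suppose each batch carries a `"samples"` list of 1024-element rows (an assumed field name, for illustration only). Parsing then reduces to reshaping each row onto the 32×32 grid, independent of the WebSocket transport:

```python
# Hedged sketch: the real message schema is in the Data Stream docs.
# Here we ASSUME a {"samples": [[1024 floats], ...]} JSON batch and
# show only the reshape onto the 32x32 electrode grid.
import json
import numpy as np

N_CHANNELS, GRID = 1024, 32  # 32x32 micro-ECoG array

def batch_to_grids(message: str) -> np.ndarray:
    """Parse one JSON batch into an (n_samples, 32, 32) array."""
    samples = np.asarray(json.loads(message)["samples"], dtype=float)
    if samples.shape[1] != N_CHANNELS:
        raise ValueError(f"expected {N_CHANNELS} channels, got {samples.shape[1]}")
    return samples.reshape(-1, GRID, GRID)

# Demo with a fabricated 4-sample batch (channel index as value).
demo = json.dumps({"samples": [list(range(N_CHANNELS))] * 4})
grids = batch_to_grids(demo)
```

Keeping parsing separate from transport like this makes it easy to replay recorded files through the same pipeline you use live.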

🎨 What Makes a Great Solution?

User Experience (40%)

  • Instantly interpretable (< 1 second to understand)
  • Readable from 6 feet away (high contrast, large indicators)
  • Visualizes tuned areas relative to the array
  • Provides directional guidance for movement optimization
  • Unambiguous "found it" signal when positioned correctly

Technical Execution (40%)

  • Accurate identification of tuned regions
  • Real-time performance (low latency, smooth updates)
  • Robust to noise and bad channels
  • Aggregates signal over time to identify stable areas
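The last two points above can be combined in a single running estimate. As one possible sketch (the class name, the MAD-based bad-channel rule, and all thresholds are illustrative assumptions, not the reference implementation), an exponential moving average of per-channel power smooths out transient spikes while a median rule holds bad channels at their last stable estimate:

```python
# Illustrative sketch of "aggregate over time + reject bad channels".
# Alpha and the outlier threshold are untuned placeholders.
import numpy as np

class StableActivityMap:
    def __init__(self, n_channels=1024, alpha=0.05, outlier_mads=6.0):
        self.ema = np.zeros(n_channels)   # running per-channel estimate
        self.alpha = alpha                # EMA smoothing factor
        self.outlier_mads = outlier_mads  # bad-channel threshold

    def update(self, sample_power):
        # Flag channels far from the array-wide median (likely broken).
        med = np.median(sample_power)
        mad = np.median(np.abs(sample_power - med)) + 1e-12
        bad = np.abs(sample_power - med) > self.outlier_mads * mad
        # Bad channels keep their previous estimate instead of the spike.
        clean = np.where(bad, self.ema, sample_power)
        self.ema = (1 - self.alpha) * self.ema + self.alpha * clean
        return self.ema.reshape(32, 32)  # assumes the 32x32 grid layout

agg = StableActivityMap()
power = np.ones(1024)
power[5] = 1000.0  # simulated broken channel
for _ in range(3):
    grid = agg.update(power)
```

The smoothing factor trades responsiveness against stability; in an operating room a slightly laggier but steadier map is usually easier to read than one that flickers.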

Innovation (20%)

  • Novel visualization approaches beyond simple heatmaps
  • Creative signal processing
  • Compelling video demonstration

🛠️ What You Can Modify

You CAN:

  • Modify or replace the example app completely
  • Build a custom backend/middleware (Python, Node, etc.)
  • Use any signal processing or visualization approach
  • Add any dependencies or frameworks

You CANNOT:

  • Modify the data streaming protocol
  • Change how data is transmitted during evaluation

📦 Key Files

brainstorm2026-track2/
├── scripts/
│   ├── download.py       # Download datasets from HuggingFace
│   ├── stream_data.py    # Stream data locally
│   ├── serve.py          # Static file server for example app
│   └── control_client.py # Send keyboard controls (live eval)
├── example_app/          # Minimal reference implementation
├── data/                 # Downloaded datasets (gitignored)
└── docs/                 # Full documentation

🎥 Deliverables

⚠️ Your submission is a VIDEO — recorded during live evaluation and uploaded to YouTube.

  1. Video demo (3-5 minutes) — Screen recording during live evaluation with voice narration
  2. SUBMISSION.YAML — Updated with your YouTube link and pushed to main
  3. Your application (code repository with documentation)

See Submissions for detailed instructions.

🏥 Design for the Operating Room

  • User: Clinical operator with neuroscience PhD (see User Persona)
  • Environment: Crowded, high-stress operating room
  • Viewing Distance: Up to 6 feet from screen
  • Cognitive Load: Must be immediately interpretable
  • Stakes: This guides permanent array placement in a patient's brain

💡 Tips

  • Start simple and iterate
  • Test with streaming data early
  • Develop with the hard dataset — this matches final evaluation
  • Prioritize clarity over complexity
  • Think like a surgeon, not a researcher
  • Make it readable from 6 feet away
