A web-based zoomable platform that lets users explore massive NASA images, annotate features, and interact with AI to learn more about celestial objects.
infiniscope/
├── backend/
│   ├── app/
│   │   ├── main.py                   # FastAPI entrypoint
│   │   ├── routes/
│   │   │   ├── tiles.py              # Serve image tiles
│   │   │   ├── features.py           # CRUD for features
│   │   │   └── chat.py               # AI Q&A endpoints
│   │   ├── db/
│   │   │   └── mongo.py              # MongoDB connection
│   │   ├── services/
│   │   │   ├── pinecone_service.py   # Vector DB queries
│   │   │   ├── openai_service.py     # Embeddings + chat
│   │   │   └── tiling_service.py     # Preprocessing / serving tiles
│   │   ├── models/
│   │   │   └── feature.py            # Pydantic model for features
│   │   └── utils/
│   │       └── helpers.py            # Misc helpers
│   └── requirements.txt              # FastAPI, motor, pinecone-client, openai
│
├── frontend/
│   ├── src/
│   │   ├── components/
│   │   │   ├── ImageViewer.tsx       # OpenSeadragon integration
│   │   │   ├── AnnotationLayer.tsx   # Overlay for labels
│   │   │   ├── ChatPanel.tsx         # Chat interface with AI
│   │   │   └── Navbar.tsx
│   │   ├── pages/
│   │   │   └── Home.tsx              # Main page
│   │   ├── api/
│   │   │   └── backend.ts            # Axios calls to FastAPI
│   │   └── App.tsx
│   ├── package.json
│   └── vite.config.ts
│
└── README.md
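
For reference, a minimal backend/requirements.txt matching the comment in the tree above might look like the following; uvicorn is an assumed addition for actually running the FastAPI app and is not named in the plan:

fastapi
uvicorn            # assumed: ASGI server for main.py
motor              # async MongoDB driver
pinecone-client    # Pinecone vector DB client
openai             # embeddings + chat completions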
The goal is to build an interactive exploration platform where users can:
- Explore massive NASA images with smooth pan/zoom.
- Label known features (craters, galaxies, storms, etc.).
- Ask AI-powered questions about features via a chat interface.
- Discover new patterns using AI-assisted search and annotations.
Frontend Responsibilities:
- Display massive image datasets using tiling.
- Handle smooth zoom/pan without loading full images.
- Overlay labels & annotations for known features.
- Provide a chat sidebar for AI interaction.
- Allow user contributions (new labels, feature notes).
- Persist session state (viewport, selected feature, chat history).
Key Components:
- Image Viewer → Integrates OpenSeadragon; fetches tiles from /tiles/{zoom}/{x}/{y} (tile preprocessing is sketched after this list).
- Annotation Layer → Displays points, polygons, or bounding boxes linked to coordinates.
- Chat Panel → Enables Q&A with AI, contextualised by the selected feature.
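
OpenSeadragon consumes Deep Zoom (DZI) pyramids natively, which is what the backend's preprocessing pipeline produces (see Key APIs below). A minimal sketch of backend/app/services/tiling_service.py, assuming pyvips is used for tiling (pyvips is not listed in requirements.txt):

import pyvips

def build_deepzoom_pyramid(source_path: str, output_basename: str) -> None:
    """Slice one large source image into a Deep Zoom (DZI) tile pyramid.

    Writes <output_basename>.dzi plus an <output_basename>_files/ directory
    of tiles that the /tiles endpoint (or OpenSeadragon directly) can serve.
    """
    image = pyvips.Image.new_from_file(source_path, access="sequential")
    # 256 px tiles with 1 px overlap are OpenSeadragon-friendly defaults.
    image.dzsave(output_basename, tile_size=256, overlap=1)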
Backend Responsibilities:
- Serve image tiles for frontend.
- Manage feature metadata (coordinates + descriptions).
- Handle AI chat requests (via OpenAI + Pinecone).
- Support user-contributed labels (store in MongoDB).
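
A minimal sketch of backend/app/models/feature.py for that feature metadata, assuming the {id, dataset, coords, description} shape described under the AI chat pipeline below; anything beyond those four fields (and the example values in comments) is illustrative:

from typing import Optional
from pydantic import BaseModel, Field

class Feature(BaseModel):
    """A labelled feature on a dataset (crater, galaxy, storm, ...)."""
    id: Optional[str] = Field(default=None, description="MongoDB document id")
    dataset: str          # which image/dataset the feature belongs to
    coords: list[float]   # e.g. [x, y] in image coordinates (illustrative)
    description: str      # human-readable label or notes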
Key APIs:
- Image Tile Server → /tiles/{dataset}/{z}/{x}/{y} (route sketch after this list)
  - Returns small image tiles.
  - Preprocessing pipeline creates Deep Zoom tiles.
- Feature Store API
  - GET /features/{dataset} → Returns the features list.
  - POST /features → Allows user-submitted labels.
- AI Chat API
  - POST /chat → Takes the user query + context (dataset, viewport, feature).
  - Uses Pinecone for feature lookup + OpenAI for contextual answers.
- Search/Pattern Detection (optional)
  - AI clustering/anomaly detection on tiles.
  - Suggests possible new features.
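
A minimal sketch of the tile and feature routes (backend/app/routes/tiles.py and features.py, combined here for brevity), assuming tiles are pre-generated on disk and features live in a MongoDB collection accessed through motor; the TILE_ROOT path, file layout, and db handle are illustrative assumptions:

from pathlib import Path

from fastapi import APIRouter, HTTPException
from fastapi.responses import FileResponse

from app.db.mongo import db              # assumed motor database handle
from app.models.feature import Feature   # Pydantic model sketched above

router = APIRouter()

TILE_ROOT = Path("data/tiles")            # assumed location of pre-generated tiles

@router.get("/tiles/{dataset}/{z}/{x}/{y}")
async def get_tile(dataset: str, z: int, x: int, y: int):
    """Return one small pre-generated tile for the requested zoom level."""
    tile = TILE_ROOT / dataset / str(z) / str(x) / f"{y}.jpg"
    if not tile.exists():
        raise HTTPException(status_code=404, detail="Tile not found")
    return FileResponse(tile)

@router.get("/features/{dataset}")
async def list_features(dataset: str) -> list[Feature]:
    """Return the list of labelled features for a dataset."""
    docs = await db.features.find({"dataset": dataset}).to_list(length=1000)
    return [Feature(**doc) for doc in docs]

@router.post("/features")
async def create_feature(feature: Feature) -> Feature:
    """Store a user-submitted label in MongoDB."""
    await db.features.insert_one(feature.model_dump())
    return feature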
AI Chat Pipeline:
- Data Preparation
  - Each labelled feature is stored with {id, dataset, coords, description}.
  - Embeddings are generated using OpenAI and stored in Pinecone.
- Query Handling (see the sketch after this list)
  - User selects a feature or viewport.
  - Backend queries Pinecone for the nearest features.
  - Retrieved context is passed into OpenAI for natural answers.
- Example
  - User: “What’s this crater’s age?”
  - AI: “This is Gale Crater, about 154 km wide, where the Curiosity rover landed in 2012.”
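
A minimal sketch of the data-preparation and query-handling steps above (backend/app/services/openai_service.py and pinecone_service.py, shown together here), assuming the current openai and pinecone client SDKs; the index name and embedding model are illustrative:

import os

from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                          # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("infiniscope-features")          # assumed index name

def embed(text: str) -> list[float]:
    """Embed a feature description or user query with OpenAI."""
    resp = openai_client.embeddings.create(
        model="text-embedding-3-small",           # assumed embedding model
        input=text,
    )
    return resp.data[0].embedding

def index_feature(feature: dict) -> None:
    """Data preparation: store one labelled feature's embedding in Pinecone."""
    index.upsert(vectors=[{
        "id": feature["id"],
        "values": embed(feature["description"]),
        "metadata": {"dataset": feature["dataset"], "description": feature["description"]},
    }])

def nearest_features(query: str, top_k: int = 5):
    """Query handling: return the features most relevant to the user's question."""
    return index.query(vector=embed(query), top_k=top_k, include_metadata=True)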
User (Frontend)
        │
        ▼
[React + OpenSeadragon] ←→ /tiles/{z}/{x}/{y} ←→ [FastAPI Tile Server]
        │
        ├─> /features → Feature Store (MongoDB)
        │
        ├─> /chat → [FastAPI AI API]
        │              ├─> Pinecone (vector search)
        │              └─> OpenAI (chat model)
        │
        ▼
UI updates with AI response + highlighted feature
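
Tying the diagram together, a minimal sketch of the /chat endpoint (backend/app/routes/chat.py), reusing the helpers sketched above; the request fields and chat model name are illustrative:

from typing import Optional

from fastapi import APIRouter
from pydantic import BaseModel

from app.services.openai_service import openai_client        # from the sketch above
from app.services.pinecone_service import nearest_features   # from the sketch above

router = APIRouter()

class ChatRequest(BaseModel):
    query: str
    dataset: str
    feature_id: Optional[str] = None   # currently selected feature, if any

@router.post("/chat")
async def chat(req: ChatRequest) -> dict:
    """Answer a user question with nearby-feature context (Pinecone + OpenAI)."""
    results = nearest_features(req.query)
    context = "\n".join(m.metadata.get("description", "") for m in results.matches)
    completion = openai_client.chat.completions.create(
        model="gpt-4o-mini",                                  # assumed chat model
        messages=[
            {"role": "system",
             "content": "You are an assistant for exploring NASA imagery. "
                        f"Known nearby features:\n{context}"},
            {"role": "user", "content": req.query},
        ],
    )
    return {"answer": completion.choices[0].message.content}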