An agentic financial research assistant that reasons through complex queries, autonomously selects from 22+ specialized data tools, and delivers real-time streamed answers backed by hard numbers.
Ask it "Compare Apple and Microsoft's free cash flow over the last 3 years" and watch it resolve tickers, pull financial statements for both companies, and synthesize the analysis — all in one turn.
Trader Jim runs a multi-step reasoning loop: an LLM examines your query, decides which financial data tools to call, reviews the results, and iterates until it has enough information to give a comprehensive answer. It isn't just a simple chatbot with a database lookup or shallow prompting — it's an autonomous agent that plans and executes research strategies.
"What's happening with NVDA?"
│
├─ Agent decides: need price data + recent news + analyst estimates
│
├─ Calls get_price_snapshot(NVDA) → $142.50, +3.2% today
├─ Calls get_analyst_estimates(NVDA) → Consensus EPS $0.89
├─ Calls web_search("NVDA news") → Earnings beat, new GPU launch
│
├─ Reviews results, decides: enough data to answer
│
└─ Streams final analysis with specific data points
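The loop above can be sketched in a few lines of Python. This is a simplified illustration with hypothetical names (`llm_decide`, `tools`), not the project's actual implementation:

```python
def run_agent(query, llm_decide, tools, max_iterations=10):
    """Minimal think → act → observe loop.

    llm_decide(query, observations) is assumed to return either
    ("call", tool_name, args) or ("answer", text).
    """
    observations = []
    for _ in range(max_iterations):
        action = llm_decide(query, observations)      # think
        if action[0] == "answer":
            return action[1]
        _, tool_name, args = action
        result = tools[tool_name](**args)             # act
        observations.append((tool_name, result))      # observe
    return "Reached iteration limit without a final answer."
```

The real agent layers streaming, retries, and context compaction on top of this skeleton.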
| Category | Tools | What You Get |
|---|---|---|
| Prices | Stock & crypto snapshots, historical OHLCV | Real-time quotes, price history, crypto markets |
| Financials | Income, balance sheet, cash flow, segmented revenue | Quarterly and annual financial statements |
| Valuation | Metrics snapshot, historical metrics, analyst estimates | P/E, market cap, EPS forecasts, consensus targets |
| SEC Filings | 10-K, 10-Q, 8-K parsers | Read specific sections of regulatory filings |
| Market Intel | Top movers, related tickers, insider trades, news | Market-wide signals and company-specific events |
| Web Search | Tavily / xAI search | Current events, earnings calls, macro context |
- Agentic reasoning — Up to 10 iterations of think → act → observe cycles per query
- Meta-routing — A router LLM intelligently selects which financial tools to call, handling multi-company and multi-metric queries in a single pass
- Context compaction — Tool results are summarized during iteration to stay within context limits, then full data is loaded for the final answer
- Streaming responses — Server-Sent Events deliver thinking steps, tool activity, and the final answer in real time
- Session memory — Multi-turn conversations with LLM-powered relevance selection from chat history
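Context compaction can be sketched as follows: raw tool payloads are swapped for short summaries in the working context, while the originals are kept aside for the final-answer prompt. The names here (`compact`, `summarize`) are illustrative, not the project's real API:

```python
def compact(tool_results, summarize, max_chars=200):
    """Replace each raw text result with a summary in the working context.

    Full payloads are retained separately so the final-answer prompt can
    still be built from complete data.
    """
    full_store = {}
    working_context = []
    for i, (tool, raw) in enumerate(tool_results):
        full_store[i] = raw
        summary = raw if len(raw) <= max_chars else summarize(raw, max_chars)
        working_context.append((tool, summary))
    return working_context, full_store
```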
Beyond interactive chat, Trader Jim runs an autonomous trading pipeline on a schedule — no human in the loop.
Every weekday at 9:45 AM ET, the agent wakes up and runs a full research cycle:
```
9:45 AM ET — Morning Analysis Job
 │
 ├─ 1. Scans market movers via get_full_market_snapshot
 ├─ 2. Filters to 3-5 candidates (>$100M cap, unusual volume, catalysts)
 ├─ 3. Deep-dives each: financials, price history, news, technicals
 ├─ 4. Compares risk/reward, scores confidence, selects best trade
 ├─ 5. Verifies current price with latest 15-min candle data
 └─ 6. Outputs: ticker, entry price, take profit, stop loss + full rationale
```
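The job's output (step 6) maps naturally onto a small record type. The field names below are assumptions based on the description, not the actual schema:

```python
from dataclasses import dataclass

@dataclass
class TradeIdea:
    """Hypothetical shape of a morning-job trade idea."""
    ticker: str
    entry_price: float
    take_profit: float
    stop_loss: float
    rationale: str

    @property
    def risk_reward(self) -> float:
        """Reward per unit of risk implied by the entry/TP/SL levels."""
        return (self.take_profit - self.entry_price) / (self.entry_price - self.stop_loss)
```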
```
Every 4 hours — Price Tracker
 │
 ├─ Fetches latest 15-min candles for all open positions
 ├─ Checks if take profit or stop loss was hit (market hours only)
 └─ Auto-closes positions when exit conditions are met
```
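The exit check can be approximated per candle like this. The candle keys and the stop-first tie-break (when both levels fall inside one candle's range) are assumptions, not the project's actual logic:

```python
def check_exit(candle, take_profit, stop_loss):
    """Return 'tp_hit', 'sl_hit', or None for one OHLC candle of a long position.

    If both levels fall inside the candle's range, treat the stop as hit
    first (a conservative assumption, since intra-candle order is unknown).
    """
    if candle["low"] <= stop_loss:
        return "sl_hit"
    if candle["high"] >= take_profit:
        return "tp_hit"
    return None
```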
```
4:00 PM ET — Post-Trade Analysis
 │
 ├─ Reviews all newly closed trades
 ├─ Calculates P&L and determines win/loss
 ├─ LLM analyzes what went right or wrong
 └─ Generates lessons learned and strategy improvements
```
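The P&L step reduces to simple arithmetic for a long position. A minimal sketch with assumed names:

```python
def trade_result(entry_price, exit_price, shares):
    """Compute dollar P&L, percentage return, and win/loss for a long trade."""
    pnl = (exit_price - entry_price) * shares
    pct = (exit_price - entry_price) / entry_price * 100
    return {"pnl": round(pnl, 2), "pct": round(pct, 2), "win": pnl > 0}
```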
The agent can also recommend no trade if nothing looks compelling — a skipped trade is better than a losing trade.
The dashboard tracks all trade ideas in a hypothetical portfolio that starts with $10,000 and allocates $1,000 per position. The performance page shows:
- Equity curve — Portfolio value over time with area chart
- Win rate, avg win/loss, best/worst trade — Key statistics
- Open and closed positions — With entry, exit, P&L, and status (TP hit, SL hit, open)
- Individual trade analysis — Click any trade to see the full research, price chart with TP/SL levels, and post-trade LLM review
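The headline statistics can be computed directly from the closed trades' P&L values. A minimal sketch (function name and output shape assumed):

```python
def performance_stats(closed_pnls):
    """Summarize a list of per-trade dollar P&L values."""
    wins = [p for p in closed_pnls if p > 0]
    losses = [p for p in closed_pnls if p <= 0]
    return {
        "win_rate": len(wins) / len(closed_pnls) if closed_pnls else 0.0,
        "avg_win": sum(wins) / len(wins) if wins else 0.0,
        "avg_loss": sum(losses) / len(losses) if losses else 0.0,
        "best": max(closed_pnls, default=0.0),
        "worst": min(closed_pnls, default=0.0),
    }
```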
```bash
cd server
python run_job.py morning_stock_analysis  # Generate a trade idea now
python run_job.py update_trade_prices     # Update open positions
python run_job.py analyze_closed_trades   # Analyze completed trades
```

Server (`/server`) — Python FastAPI backend. The agent loop orchestrates Gemini, tool execution, context summarization, and streaming. Tools are registered through a singleton registry with category filtering and API key validation.
Dashboard (/dashboard) — Next.js 15 frontend with React 19. Consumes SSE streams to render thinking steps, tool calls, and the final answer progressively. Four views: interactive chat, trade ideas log with price charts, portfolio performance with equity curve, and chat history.
Scheduler — APScheduler runs three cron jobs: morning trade analysis (9:45 AM ET), price tracking (every 4 hours), and post-trade review (4:00 PM ET). All job output is persisted to Supabase.
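For illustration, the three schedules can be approximated in plain Python. The real system uses APScheduler cron triggers with proper timezone handling; this sketch ignores time zones and simply checks a wall-clock datetime:

```python
from datetime import datetime

def due_jobs(now: datetime) -> list:
    """Return the job names that would fire at the given minute (ET assumed)."""
    jobs = []
    if now.weekday() < 5 and (now.hour, now.minute) == (9, 45):
        jobs.append("morning_stock_analysis")
    if now.minute == 0 and now.hour % 4 == 0:
        jobs.append("update_trade_prices")
    if now.weekday() < 5 and (now.hour, now.minute) == (16, 0):
        jobs.append("analyze_closed_trades")
    return jobs
```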
- Python 3.12+
- Node.js 20+
- API keys: Google AI (Gemini), Financial Datasets
```bash
cd server
uv venv && source .venv/bin/activate && uv pip install -r requirements.txt
cp .env.example .env   # Then fill in your API keys
python main.py         # Runs at localhost:8000
```

```bash
cd dashboard
npm install
cp .env.example .env
npm run dev            # Runs at localhost:3000
```

```bash
# Copy and configure environment files first
cp server/.env.example server/.env
cp dashboard/.env.example dashboard/.env
docker compose up --build
```

Server at localhost:8000, dashboard at localhost:3000.
| Variable | Required | Description |
|---|---|---|
| `GOOGLE_API_KEY` | Yes | Google AI (Gemini) API key |
| `FINANCIAL_DATASETS_API_KEY` | Yes | Financial Datasets API key |
| `ADMIN_USERNAME` | Yes | HTTP Basic Auth username |
| `ADMIN_PASSWORD` | Yes | HTTP Basic Auth password |
| `TAVILY_API_KEY` | No | Enables web search via Tavily |
| `XAI_API_KEY` | No | Alternative web search via xAI |
| `LLM_MODEL` | No | Gemini model name (default: `gemini-3-flash-preview`) |
| `TOOL_MODE` | No | `meta` (router, default) or `direct` (all tools exposed) |
| `ENABLE_CHAT_HISTORY_DB` | No | Enable Supabase persistence |
| `SUPABASE_URL` | No | Supabase project URL |
| `SUPABASE_SERVICE_ROLE_KEY` | No | Supabase service role key |
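The required variables above need to be present at startup. The real server uses Pydantic settings (see `config.py`); this is a stdlib-only sketch of the same validation idea:

```python
# Required server environment variables, per the table above
REQUIRED = ["GOOGLE_API_KEY", "FINANCIAL_DATASETS_API_KEY", "ADMIN_USERNAME", "ADMIN_PASSWORD"]

def missing_vars(env: dict) -> list:
    """Return the required variables that are absent or empty in the given mapping."""
    return [name for name in REQUIRED if not env.get(name)]
```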
| Variable | Required | Description |
|---|---|---|
| `API_URL` | No | Backend URL for server-side API routes (default: `http://localhost:8000`) |
- Compare AAPL and MSFT revenue growth over the last 4 quarters
- What does Tesla's latest 10-K say about risk factors?
- Show me the top market movers today and explain what's driving them
- What's Bitcoin's price action this month vs Ethereum?
- Break down Amazon's revenue by segment for the last 2 years
- Who's been buying or selling NVDA stock recently? (insider trades)
- What are analysts estimating for Google's next quarter EPS?
| Layer | Technology |
|---|---|
| LLM | Google Gemini via LangChain |
| Backend | Python 3.12, FastAPI, Uvicorn |
| Frontend | Next.js 15, React 19, TypeScript |
| Styling | Tailwind CSS |
| Data | Financial Datasets API, Tavily, xAI |
| Persistence | Supabase (optional) |
| Deployment | Docker, Docker Compose |
| Observability | OpenTelemetry, Grafana (optional) |
```
├── server/
│   ├── agent/        # Reasoning loop, context management, prompts
│   ├── api/          # HTTP endpoints (chat SSE, health, history)
│   ├── auth/         # HTTP Basic Auth
│   ├── db/           # Supabase client and repository
│   ├── llm/          # Gemini wrapper with streaming and retries
│   ├── scheduler/    # APScheduler background jobs
│   ├── session/      # Chat history and memory management
│   ├── tools/
│   │   ├── finance/  # 22 financial data tools
│   │   └── search/   # Web search (Tavily + xAI)
│   ├── config.py     # Pydantic settings
│   └── main.py       # FastAPI app entry point
│
├── dashboard/
│   ├── app/          # Next.js App Router pages and API routes
│   ├── components/   # Chat, layout, history, and UI components
│   ├── hooks/        # Custom React hooks
│   ├── lib/          # API client, auth, utilities
│   └── types/        # TypeScript type definitions
│
├── docker-compose.yml # Multi-container orchestration
└── CLAUDE.md          # AI coding assistant context
```


