CallQuest is both a working agent and a reusable blueprint for building modern agentic systems. It ships end‑to‑end patterns (LLM‑first extraction, receipts/verification, discovery, ranking, notifications, observability) you can adapt to other domains (jobs, grants, CFPs, tenders, events, etc.).
- Finds relevant open calls (Israel + international) and filters out past/irrelevant ones.
- Extracts the key facts you actually need: title, organizer, deadline, apply link, fees/funding.
- Sends actionable alerts to Telegram and can export calendar entries (ICS).
- Every item links back to the original source; we don’t guess — we cite.
How it works (in short):
- The agent discovers reputable sources (search + quality scoring).
- Visits pages, extracts a clean opportunity JSON, verifies it, deduplicates.
- Ranks for your profile and notifies you only when it’s worth your time.
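For orientation, the rough shape of that loop in TypeScript. The function names are illustrative only; the real orchestration lives in `root/src/agent/orchestrator.ts` (see the code map further down).

```ts
// Illustrative shape of one agent pass; names do not mirror the real API.
type Deps = {
  discoverSources(): Promise<string[]>;                  // search + quality scoring
  extractOpportunity(url: string): Promise<unknown>;     // clean opportunity JSON
  verifyAndDedup(raw: unknown): Promise<unknown | null>; // receipts, dedup, status
  scoreForProfile(item: unknown): Promise<number>;       // relevance + quality + actionability
  notify(item: unknown): Promise<void>;                  // Telegram / ICS
};

async function runOnce(deps: Deps, threshold: number) {
  for (const url of await deps.discoverSources()) {
    const raw = await deps.extractOpportunity(url);
    const item = await deps.verifyAndDedup(raw);
    if (item && (await deps.scoreForProfile(item)) >= threshold) {
      await deps.notify(item); // only when it's worth your time
    }
  }
}
```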
Getting started as an artist (dev-style for now):
- Start the server (see Quickstart). Create a profile, then send a test alert.
- POST /artists — JSON body example:
  { "name":"You", "languages":["en"], "disciplines":["residency","grant"], "telegram_chat_id":"<your_chat_id>" }
- POST /alerts/test?artist_id= — confirms Telegram is wired.
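A minimal sketch of those two calls using Node's built-in fetch. The base URL and the shape of the POST /artists response (an object with an `id` field) are assumptions; adjust to your deployment.

```ts
// Sketch: create an artist profile, then trigger a test Telegram alert.
// Assumptions: server on http://localhost:3000; POST /artists returns JSON containing an `id`.
const BASE = "http://localhost:3000";

const created = await fetch(`${BASE}/artists`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    name: "You",
    languages: ["en"],
    disciplines: ["residency", "grant"],
    telegram_chat_id: "<your_chat_id>",
  }),
}).then((r) => r.json());

// Confirm Telegram is wired for this profile.
await fetch(`${BASE}/alerts/test?artist_id=${created.id}`, { method: "POST" });
```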
Roadmap for artists: self-serve onboarding, email digests, better profile UI.
- LLM-first with STRICT JSON: prompts in `root/src/prompts`, loaded via `PROMPTS_DIR`.
- Verifiable by design: receipts persisted at each step; `/why` and `/receipts` never recompute.
- Web tooling: MCP Playwright for navigation; `fetchWithTimeout` with robots.txt + host allowlist.
- Extraction → normalization → verification: all via LLM + Zod schemas (no regex, no hardcoded domain rules); see the schema sketch after this list.
- Dedup: pgvector similarity + deadline proximity, status lifecycle (`active`/`expiring_soon`/`expired`).
- Dynamic source discovery: search API + LLM quality/legitimacy/freshness scoring.
- Personalization and multi‑objective ranking: relevance to profile + source quality + actionability.
- Actions: Telegram and ICS; apply‑URL validation; notification ledger.
- Observability: Prometheus + optional Langfuse traces.
- Durable workflows: Step Functions specs (infra/stepfunctions) as production‑ready skeleton.
- Resilience: retries, timeouts, circuit‑breaker wrappers around search; temporary blocklist for failing hosts.
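To make the STRICT JSON + Zod idea concrete, here is an illustrative schema covering the facts listed above (title, organizer, deadline, apply link, fees/funding). Field names are an example, not the actual schema in `root/src/schemas/*`.

```ts
// Illustrative only; the real schemas live in root/src/schemas/*.
import { z } from "zod";

export const OpportunitySchema = z.object({
  title: z.string(),
  organizer: z.string(),
  deadline: z.string().nullable(),   // ISO date if stated, null otherwise
  apply_url: z.string().url(),
  fees: z.string().nullable(),       // entry fee, if any
  funding: z.string().nullable(),    // grant/stipend details, if any
  source_url: z.string().url(),      // every item links back to its source
});

// The LLM is prompted for STRICT JSON; anything that drifts fails validation.
export function parseOpportunity(llmOutput: string) {
  return OpportunitySchema.safeParse(JSON.parse(llmOutput));
}
```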
Requirements: Node.js 20+, PostgreSQL 14+ with the vector (pgvector) and pgcrypto extensions.
- Install deps: `npm i`
- Copy `.env.example` → `.env` and set the minimal variables (see the example `.env` after this list):
  - `DATABASE_URL` — Postgres connection string
  - `LLM_API_KEY` — your LLM key (OpenAI/OpenRouter)
  - `LLM_PROVIDER_BASEURL` — e.g. https://api.openai.com/v1 or https://openrouter.ai/api/v1
  - Optional: `MCP_PLAYWRIGHT_URL` (external MCP), or use spawned mode via docker-compose
  - Optional: `SEARCH_PROVIDER`, `SEARCH_API_KEY`, `SEARCH_ENDPOINT` (Serper/Tavily)
  - Optional: `TELEGRAM_BOT_TOKEN` (create via @BotFather)
- Migrate DB: `npm run db:migrate`
- Discover sources (optional): `npm run discover`
- Start agent + server: `npm start` (HTTP: `/healthz`, `/metrics`, `/receipts`, `/why?url=...`)
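An example `.env` sketch with placeholder values (database name, credentials, and keys are placeholders; optional lines are commented out):

```
DATABASE_URL=postgres://user:password@localhost:5432/callquest
LLM_API_KEY=sk-...
LLM_PROVIDER_BASEURL=https://api.openai.com/v1
# Optional:
# MCP_PLAYWRIGHT_URL=...
# SEARCH_PROVIDER=serper
# SEARCH_API_KEY=...
# SEARCH_ENDPOINT=https://google.serper.dev/search
# TELEGRAM_BOT_TOKEN=...
```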
Try it:
- Extract JSON from a URL: `npm run extract -- --url https://example.com/open-call`
- Ingest + dedup: `npm run ingest -- --url https://example.com/open-call`
- Update statuses by deadlines: `npm run expire`
- Send daily digest (Telegram): `npm run digest`
Where to get keys (one‑liners): OpenAI — platform.openai.com, OpenRouter — openrouter.ai, Telegram bot — @BotFather, Serper/Tavily — their developer dashboards.
Provider notes:
- Serper: set `SEARCH_PROVIDER=serper`, `SEARCH_ENDPOINT=https://google.serper.dev/search`; the key goes in the `X-API-KEY` header (from `SEARCH_API_KEY`).
- Tavily: set `SEARCH_PROVIDER=tavily`, `SEARCH_ENDPOINT=https://api.tavily.com/search`; Tavily expects `api_key` in the JSON body (we handle this automatically).
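For reference, a minimal sketch of how the two request shapes differ (the real client is `root/src/tools/search.ts`; body field names follow each provider's public docs: `q` for Serper, `query` for Tavily):

```ts
// Illustrative sketch of the provider-specific request shapes.
async function webSearch(query: string) {
  const provider = process.env.SEARCH_PROVIDER;
  const endpoint = process.env.SEARCH_ENDPOINT!;
  const apiKey = process.env.SEARCH_API_KEY!;

  if (provider === "serper") {
    // Serper: the key travels in the X-API-KEY header.
    const res = await fetch(endpoint, {
      method: "POST",
      headers: { "X-API-KEY": apiKey, "Content-Type": "application/json" },
      body: JSON.stringify({ q: query }),
    });
    return res.json();
  }

  // Tavily: the key travels in the JSON body as api_key.
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ api_key: apiKey, query }),
  });
  return res.json();
}
```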
Telegram chat_id quick tip:
- DM @userinfobot to get your numeric `chat_id`, or add your bot to a chat and call `https://api.telegram.org/bot<token>/getUpdates` after sending a message to fetch the `chat.id`.
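If you prefer a script over the browser, a small sketch of the getUpdates route (send the bot a message first so there is something to read):

```ts
// Print the chat ids visible to the bot via getUpdates.
const token = process.env.TELEGRAM_BOT_TOKEN!;

const data = await fetch(`https://api.telegram.org/bot${token}/getUpdates`).then((r) => r.json());

for (const update of data.result ?? []) {
  const chat = update.message?.chat;
  if (chat) console.log(chat.id, chat.type, chat.username ?? chat.title ?? "");
}
```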
- Replace prompts for your schema in `root/src/prompts/*`.
- Swap Zod schemas in `root/src/schemas/*` (an example swap is sketched after this list).
- Keep receipts and verification discipline — that’s the core trust pattern.
- Provide your sources via static list or search + LLM assessment (`root/src/agent/discovery.ts`).
- Personalization/ranking in `root/src/agent/ranking.ts`; wire your delivery channels in `root/src/notify/*`.
- For durability, port steps into Step Functions or keep cron flows.
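As a concrete example of the "swap the schema" step, here is a hypothetical variant for a jobs domain (names invented for illustration; pair it with matching prompts in `root/src/prompts/*`):

```ts
// Hypothetical schema for a jobs domain, as a drop-in under root/src/schemas/*.
import { z } from "zod";

export const JobPostingSchema = z.object({
  title: z.string(),
  company: z.string(),
  location: z.string().nullable(),
  apply_url: z.string().url(),
  deadline: z.string().nullable(),   // ISO date if stated
  source_url: z.string().url(),      // keep the citation/receipts discipline
});
```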
- Orchestrator: `root/src/agent/orchestrator.ts`
- Extraction/Normalization/Verification: `root/src/agent/extract.ts`, `root/src/agent/verify.ts`
- Discovery: `root/src/agent/discovery.ts`, `root/src/tools/search.ts`
- Ranking/Personalization: `root/src/agent/ranking.ts`, `root/src/db/artists.ts`
- Notifications: `root/src/notify/telegram.ts`, `root/src/notify/ics.ts`, `root/src/notify/compose.ts`
- Metrics: `root/src/metrics.ts`; Server: `root/src/server.ts`
- Migrations: `root/migrations/*`
- Removed deterministic normalizers and any ad-hoc heuristics — normalization is LLM-based (`normalizeOpportunityLLM`).
- Deleted unused `util/metrics.ts`; all metrics live in `root/src/metrics.ts`.
- Added HTML-hash caching to avoid pointless re-extraction calls (sketched after this list).
- Step Functions are optional skeletons, not required for local runs.
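The HTML-hash caching idea, sketched with invented helpers (the real implementation may be shaped differently):

```ts
// Skip re-extraction when the fetched HTML is byte-identical to the last run.
import { createHash } from "node:crypto";

// Hypothetical collaborators, typed here so the sketch stays self-contained.
type HashStore = {
  get(url: string): Promise<string | null>;
  set(url: string, hash: string): Promise<void>;
};
type ExtractFn = (html: string) => Promise<unknown>;

export async function extractIfChanged(url: string, html: string, store: HashStore, extract: ExtractFn) {
  const htmlHash = createHash("sha256").update(html).digest("hex");
  if ((await store.get(url)) === htmlHash) return null; // unchanged page: no LLM call
  const opportunity = await extract(html);
  await store.set(url, htmlHash);
  return opportunity;
}
```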
- Crawling is guarded by robots.txt and a host allowlist. Set `ALLOWED_HOSTS` as a comma-separated list (or query `/allowed-hosts`). MCP navigation also respects the allowlist.
- Failing hosts are temporarily blocked (env `BLOCKED_HOST_TTL_MS`, default 15m) to avoid noisy retries.
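A sketch of how that guard can be shaped, with invented names (robots.txt checking is omitted here); in CallQuest the checks sit around `fetchWithTimeout` and the MCP navigation wrapper:

```ts
// Illustrative host allowlist + temporary blocklist guard.
const allowedHosts = new Set(
  (process.env.ALLOWED_HOSTS ?? "").split(",").map((h) => h.trim()).filter(Boolean)
);
const blockedUntil = new Map<string, number>(); // host -> epoch ms when the block expires
const BLOCK_TTL_MS = Number(process.env.BLOCKED_HOST_TTL_MS ?? 15 * 60 * 1000); // default 15m

export function canFetch(url: string): boolean {
  const host = new URL(url).hostname;
  if (allowedHosts.size > 0 && !allowedHosts.has(host)) return false; // not on the allowlist
  const until = blockedUntil.get(host);
  return !(until && until > Date.now()); // still inside the temporary block window
}

export function markHostFailed(url: string): void {
  blockedUntil.set(new URL(url).hostname, Date.now() + BLOCK_TTL_MS);
}
```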
MIT — see LICENSE.