```
Editor files/APIs → editors/*.js → cache.js (SQLite) → server.js (REST) → React SPA
```

- **Editor adapters** (`editors/*.js`) — read chat data from local files, databases, or running processes
- **Cache layer** (`cache.js`) — normalizes everything into `~/.agentlytics/cache.db`
- **Express server** (`server.js`) — read-only REST endpoints
- **React frontend** (`ui/`) — Chart.js-powered SPA
```bash
git clone https://github.com/f/agentlytics.git
cd agentlytics && npm install

# Frontend dev server (port 5173, proxies API to backend)
cd ui && npm install && npm run dev

# Backend (port 4637) — in another terminal
npm start
```

The Vite dev server proxies `/api/*` requests to the backend via `vite.config.js`.
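As a rough illustration of that proxy setup, a Vite dev server forwards API calls with a `server.proxy` entry along these lines (a sketch, not the repo's actual `vite.config.js` — check the file itself for the authoritative config):

```js
// Illustrative proxy entry; the backend port (4637) comes from the setup above
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    proxy: {
      // Forward /api/* from the dev server (5173) to the Express backend
      '/api': 'http://localhost:4637',
    },
  },
});
```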
```bash
agentlytics            # normal start (uses cache)
agentlytics --no-cache # wipe cache and full rescan
```

- Create `editors/<name>.js` with the adapter interface:
```js
module.exports = {
  name: 'my-editor',

  // Optional: list of source IDs this adapter handles
  // sources: ['my-editor', 'my-editor-beta'],

  getChats() {
    return [{
      source: 'my-editor',         // editor identifier
      composerId: '...',           // unique chat ID
      name: '...',                 // chat title (nullable)
      createdAt: 1234567890,       // timestamp in ms (nullable)
      lastUpdatedAt: 1234567890,   // timestamp in ms (nullable)
      mode: 'agent',               // session mode (nullable)
      folder: '/path/to/project',  // working directory (nullable)
      encrypted: false,            // true if messages can't be read
      bubbleCount: 10,             // message count hint (nullable)
    }];
  },

  getMessages(chat) {
    return [{
      role: 'user',                // 'user' | 'assistant' | 'system' | 'tool'
      content: '...',              // message text
      _model: 'gpt-4',             // model name (optional)
      _inputTokens: 500,           // input token count (optional)
      _outputTokens: 200,          // output token count (optional)
      _cacheRead: 100,             // cache read tokens (optional)
      _cacheWrite: 50,             // cache write tokens (optional)
      _toolCalls: [{               // tool calls (optional)
        name: 'read_file',
        args: { path: '/foo.js' },
      }],
    }];
  },
};
```

- Register it in `editors/index.js`:
```js
const myEditor = require('./my-editor');
const editors = [...existingEditors, myEditor];
```

- Add a color and label in `ui/src/lib/constants.js`:

```js
export const EDITOR_COLORS = { ..., 'my-editor': '#hex' };
export const EDITOR_LABELS = { ..., 'my-editor': 'My Editor' };
```

**Cursor** — reads from two separate data stores:
- **Agent Store** (`~/.cursor/chats/<workspace>/<chatId>/store.db`)
  - SQLite with a `meta` table (hex-encoded JSON) and a `blobs` table (a content-addressed SHA-256 tree)
  - Meta contains `agentId`, `latestRootBlobId`, `name`, and `createdAt`
  - Messages are retrieved by walking the blob tree: tree nodes contain message refs and child refs
  - Tool calls are extracted from the OpenAI-format `tool_calls` array on assistant messages
- **Workspace Composers** (`~/Library/Application Support/Cursor/User/`)
  - `workspaceStorage/<hash>/state.vscdb` — the `composer.composerData` key holds all composer headers
  - `globalStorage/state.vscdb` — `cursorDiskKV` table with `bubbleId:<composerId>:<n>` keys
  - Each bubble is JSON with `type` (1 = user, 2 = assistant), `text`, `toolFormerData`, and `tokenCount`
  - Tool args come from `toolFormerData.rawArgs`, with a fallback to `toolFormerData.params`
Limitations: Cursor does not persist model names per message. The provider name (e.g., `"anthropic"`) is extracted from `providerOptions` when available.
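The bubble decoding above can be sketched as a small pure mapper. `bubbleToMessage` is a hypothetical helper name, and the bubble shape follows this section's description rather than a verified schema:

```javascript
// Illustrative helper: map one raw Cursor composer bubble into the adapter
// message shape. Field names (type, text, toolFormerData) follow the
// description above; everything else is an assumption for illustration.
function bubbleToMessage(bubble) {
  // type 1 = user, type 2 = assistant
  const role = bubble.type === 1 ? 'user' : bubble.type === 2 ? 'assistant' : null;
  const msg = { role, content: bubble.text ?? '' };
  const tf = bubble.toolFormerData;
  if (tf) {
    // Prefer rawArgs, falling back to params, as described above
    let args = tf.rawArgs ?? tf.params ?? {};
    if (typeof args === 'string') {
      try { args = JSON.parse(args); } catch { /* keep the raw string */ }
    }
    msg._toolCalls = [{ name: tf.name, args }];
  }
  return msg;
}
```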
**Antigravity** — connects to the running language server via ConnectRPC (the buf Connect protocol):

- Discovers the process via `ps aux` — finds `language_server_macos_arm` with a `--csrf_token` flag
- Extracts the CSRF token and PID, then finds the listening port via `lsof`
- `GetAllCascadeTrajectories` → session summaries
- `GetCascadeTrajectory` → full conversation steps

Requires the application to be running: data is served from the language server process, not from files on disk. Antigravity uses HTTPS.
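The discovery step can be illustrated with a small parser over `ps aux` output. The binary name and `--csrf_token` flag come from this section; the helper name and column handling are ours:

```javascript
// Illustrative discovery helper: scan `ps aux` output for the Antigravity
// language server and extract its PID and CSRF token.
function findLanguageServer(psOutput) {
  for (const line of psOutput.split('\n')) {
    if (!line.includes('language_server_macos_arm')) continue;
    const token = line.match(/--csrf_token[= ](\S+)/);
    if (!token) continue;
    // `ps aux` column order: USER PID %CPU %MEM ...
    const cols = line.trim().split(/\s+/);
    return { pid: Number(cols[1]), csrfToken: token[1] };
  }
  return null; // not running
}
```

The real adapter would then ask `lsof` for the listening port of that PID.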
**Claude Code** — reads from `~/.claude/projects/<encoded-path>/`:

- `sessions-index.json` — session index with titles and timestamps
- Individual `.jsonl` session files — each line is a JSON message with `type`, `role`, `content`, `model`, and `usage`
- Tool calls are extracted from `tool_use` content blocks and `tool_result` messages
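A minimal sketch of turning one session line into the adapter's message shape. The top-level fields (`role`, `content`, `model`, `usage`) come from this section; the `tool_use` block shape (`name`, `input`) is an Anthropic-format assumption:

```javascript
// Illustrative parser for one Claude Code .jsonl session line.
// Content may be a plain string or an array of typed blocks.
function parseSessionLine(line) {
  const entry = JSON.parse(line);
  let text = '';
  const toolCalls = [];
  if (typeof entry.content === 'string') {
    text = entry.content;
  } else if (Array.isArray(entry.content)) {
    for (const block of entry.content) {
      if (block.type === 'text') text += block.text;
      else if (block.type === 'tool_use') toolCalls.push({ name: block.name, args: block.input });
    }
  }
  const msg = { role: entry.role, content: text };
  if (entry.model) msg._model = entry.model;
  if (entry.usage) {
    msg._inputTokens = entry.usage.input_tokens;
    msg._outputTokens = entry.usage.output_tokens;
  }
  if (toolCalls.length) msg._toolCalls = toolCalls;
  return msg;
}
```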
**Codex** — reads from `${CODEX_HOME:-~/.codex}/sessions/**/*.jsonl`:

- `session_meta` — session metadata including `id`, `cwd`, the raw `source`, `originator`, and `cli_version`
- `turn_context` — per-turn state such as the current `model`
- `response_item` — visible transcript items for user/assistant messages, reasoning summaries, and tool calls
- `event_msg` where `payload.type === "token_count"` — token usage deltas or cumulative totals

Adapter behavior:

- Titles come from the first meaningful user prompt, skipping Codex bootstrap wrappers like `<user_instructions>` and `<environment_context>`
- Reasoning summaries render as `[thinking] ...`; encrypted reasoning is ignored
- `function_call`, `custom_tool_call`, and `web_search_call` become visible `[tool-call: ...]` transcript lines and populate `_toolCalls` analytics
- `function_call_output` and `custom_tool_call_output` become condensed `[tool-result: ...]` transcript lines
- Token usage prefers `last_token_usage`; when only `total_token_usage` exists, the adapter diffs it against the previous cumulative totals
- Models are carried forward from the latest `turn_context`; if none is available, the session still ingests but leaves `_model` unset
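The token-accounting rule above can be sketched as a pure function. `tokenDelta` and its return shape are illustrative; only the `last_token_usage` / `total_token_usage` payload fields come from this section:

```javascript
// Illustrative token accounting: prefer last_token_usage when present,
// otherwise diff total_token_usage against the previous cumulative totals.
function tokenDelta(payload, prevTotals) {
  if (payload.last_token_usage) {
    return { delta: payload.last_token_usage, totals: payload.total_token_usage ?? prevTotals };
  }
  const totals = payload.total_token_usage;
  if (!totals) return { delta: null, totals: prevTotals };
  const delta = {
    input_tokens: totals.input_tokens - (prevTotals?.input_tokens ?? 0),
    output_tokens: totals.output_tokens - (prevTotals?.output_tokens ?? 0),
  };
  return { delta, totals };
}
```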
**Copilot Chat** — reads from `~/Library/Application Support/{Code,Code - Insiders}/User/`:

- `workspaceStorage/<hash>/state.vscdb` — workspace-to-folder mapping
- Chat sessions are stored as `.jsonl` files in the Copilot Chat extension directory
- JSONL reconstruction: `kind: 0` = initial state, `kind: 1` = JSON patch at a key path
- Messages, tool calls, and token usage are extracted from the reconstructed state
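A sketch of the `kind: 0` / `kind: 1` reconstruction loop described above. The `kind` values come from this section; the `path` and `value` field names are assumptions for illustration:

```javascript
// Illustrative state reconstruction: kind 0 seeds the state, kind 1
// sets a value at a key path (a simplified JSON-patch-style update).
function reconstructState(lines) {
  let state = {};
  for (const line of lines) {
    const op = JSON.parse(line);
    if (op.kind === 0) {
      state = op.value; // full initial state
    } else if (op.kind === 1) {
      // walk to the parent of the key path, creating objects as needed
      let node = state;
      for (const key of op.path.slice(0, -1)) {
        node = node[key] ??= {};
      }
      node[op.path[op.path.length - 1]] = op.value;
    }
  }
  return state;
}
```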
**Zed** — reads from `~/Library/Application Support/Zed/threads/threads.db`:

- SQLite database with a `threads` table containing zstd-compressed JSON blobs
- Each thread is decompressed via the `zstd` CLI
- Messages are in OpenAI format, with a `tool_calls` array on assistant messages
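Both Zed and Cursor's agent store surface OpenAI-format assistant messages, so tool-call extraction can be sketched as one shared helper (illustrative name; `function.arguments` is a JSON string in the OpenAI format):

```javascript
// Illustrative extraction of _toolCalls from an OpenAI-format message.
function extractToolCalls(message) {
  if (!Array.isArray(message.tool_calls)) return [];
  return message.tool_calls.map((tc) => {
    let args = {};
    try { args = JSON.parse(tc.function.arguments); } catch { /* leave empty on bad JSON */ }
    return { name: tc.function.name, args };
  });
}
```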
**opencode** — reads from `~/.local/share/opencode/opencode.db`:

- SQLite database with `session`, `message`, and `project` tables
- Messages are queried directly via SQL, with full content, model, and token data
Location: `~/.agentlytics/cache.db`

**`chats`**

| Column | Type | Description |
|---|---|---|
| `id` | TEXT PK | Unique chat ID |
| `source` | TEXT | Editor identifier |
| `name` | TEXT | Chat title |
| `mode` | TEXT | Session mode |
| `folder` | TEXT | Project directory |
| `created_at` | INTEGER | Creation timestamp (ms) |
| `last_updated_at` | INTEGER | Last update (ms) |
| `bubble_count` | INTEGER | Message count |
| `encrypted` | INTEGER | 1 if encrypted |
**Messages**

| Column | Type | Description |
|---|---|---|
| `chat_id` | TEXT FK | → `chats.id` |
| `seq` | INTEGER | Sequence number |
| `role` | TEXT | user / assistant / system / tool |
| `content` | TEXT | Message text (truncated at 50K chars) |
| `model` | TEXT | Model name |
| `input_tokens` | INTEGER | Input tokens |
| `output_tokens` | INTEGER | Output tokens |
**Per-chat stats**

| Column | Type | Description |
|---|---|---|
| `chat_id` | TEXT PK | → `chats.id` |
| `total_messages` | INTEGER | Total count |
| `user_messages` | INTEGER | User messages |
| `assistant_messages` | INTEGER | Assistant messages |
| `tool_calls` | TEXT | JSON array of tool names |
| `models` | TEXT | JSON array of model names |
| `total_input_tokens` | INTEGER | Sum of input tokens |
| `total_output_tokens` | INTEGER | Sum of output tokens |
| `total_cache_read` | INTEGER | Cache read tokens |
| `total_cache_write` | INTEGER | Cache write tokens |
**Tool calls**

| Column | Type | Description |
|---|---|---|
| `chat_id` | TEXT FK | → `chats.id` |
| `tool_name` | TEXT | Function name |
| `args_json` | TEXT | Full arguments as JSON |
| `source` | TEXT | Editor identifier |
| `folder` | TEXT | Project directory |
| `timestamp` | INTEGER | Timestamp (ms) |