This repository defines multiple AI agents for the LTX Analytics team. Each agent automates a different workflow, and all agents share a common knowledge base about LTX's data, products, and metric standards.
When the user makes a request, identify which agent should handle it, read its SKILL.md, and follow the instructions.
| If the user asks to... | Activate | Skill file |
|---|---|---|
| Create a dashboard, build a dashboard, set up analytics for a feature | Dashboard Builder | agents/dashboard-builder/SKILL.md |
| Analyze Gong calls, product feedback from sales, feature requests, why deals don't convert, generate product intelligence report | Gong Product Intelligence | agents/gong-product-intelligence/SKILL.md |
| Create a Linear issue, track work in Linear, file a bug/task | Linear Issue Manager | agents/linear/SKILL.md |
| Monitor feature usage, DAU/MAU/WAU, generation volume, alert on usage drops | Usage Monitor | agents/monitoring/usage/SKILL.md |
| Monitor GPU costs, alert on infrastructure cost spikes, track cost per user/feature | BE Cost Monitor | agents/monitoring/be-cost/SKILL.md |
| Monitor revenue trends, alert on revenue drops, track MRR/ARR/subscriptions/churn | Revenue Monitor | agents/monitoring/revenue/SKILL.md |
| Monitor enterprise accounts, alert on churn risk, track account health/quota | Enterprise Monitor | agents/monitoring/enterprise/SKILL.md |
| Monitor API latency, alert on errors/timeouts, track throughput/performance | API Runtime Monitor | agents/monitoring/api-runtime/SKILL.md |
If the request doesn't clearly match an agent, ask the user which they need.
| If the user asks to... | Skill | Skill file |
|---|---|---|
| Create a PR, open a pull request, ship it, submit for review | Create PR | .claude/skills/create-pr/SKILL.md |
| Create a new skill, automate a repeatable task, document a team pattern | Create Skill | .claude/skills/create-skill/SKILL.md |
| Build a data spec, document events for a feature, pull all events for X | Build Data Spec | .claude/skills/build-data-spec/SKILL.md |
| Product daily report, yesterday's product numbers, usage by segment | Product Daily Report | agents/reports/ltx-product-daily-report/SKILL.md |
| Marketing daily report, lead performance, GTM metrics, marketing spend | Marketing Daily Report | agents/reports/ltx-marketing-daily-report/SKILL.md |
| Sales report, pipeline by rep, credit limit alerts, enterprise account health | Sales Daily Report | agents/reports/ltx-sales-daily-report/SKILL.md |
Every agent reads from these files. They are the single source of truth — do not contradict them.
| File | Contains | Read before |
|---|---|---|
| `shared/product-context.md` | LTX products, user types, business model | Any work (for context) |
| `shared/bq-schema.md` | BQ tables, columns, joins, segmentation queries | Writing ANY SQL |
| `shared/event-registry.yaml` | Known events per feature, types, status | Referencing ANY event |
| `shared/metric-standards.md` | How every metric is calculated (with SQL) | Defining ANY metric |
| `shared/gpu-cost-query-templates.md` | 11 GPU cost queries (DoD, WoW, anomaly detection, breakdowns) | Analyzing GPU/infrastructure costs |
| `shared/gpu-cost-analysis-patterns.md` | Cost analysis workflows, benchmarks, investigation playbooks | Interpreting GPU cost data |
- Never invent event names. Use only events from `shared/event-registry.yaml` or confirmed by the user.
- Never improvise metric definitions. Use SQL patterns from `shared/metric-standards.md`.
- Never skip human approval for actions that create or modify external resources.
- Always read the relevant shared files before generating SQL or referencing events.
- Always use `lt_id` as the user identifier. Never `anonymous_id`.
- Always exclude LT team (`is_lt_team IS FALSE`) except in `ltxstudio_user_all_actions`, which already does this.
- Always filter on partition columns for BQ performance. For `ltxstudio_user_all_actions`, include: `WHERE action_ts >= TIMESTAMP(@start_date) AND action_ts < TIMESTAMP(DATE_ADD(@end_date, INTERVAL 1 DAY))` (enables partition pruning).
- Never use staging tables. Avoid any table with `stg` in the name — they are intermediate builds, not production data.
- Always exclude today's incomplete data. Use `DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)` as the end date for time ranges.
- Always report both press count and output count for generation metrics. A single press can produce multiple outputs.
- Always use `SAFE_DIVIDE(x, y) * 100` for percentages. Never show decimals like 0.15 instead of 15%.
- Validate column names against actual schema before running queries. Documentation may be outdated.
- Check data quality during query execution. Inspect results for NULLs, zero/low row counts, date gaps, invalid values (negative counts, retention increases, funnel violations), and schema mismatches. Report issues to users with clear warnings.
- Always use EXACT segmentation CTEs from `shared/bq-schema.md` without modification. Never simplify or skip steps:
  - Enterprise Users: Must use the two-step pattern (inner CTE + outer SELECT with " Pilot" suffix logic)
  - Heavy Users: Must include all filters (4+ weeks active, token consumption, etc.)
  - Full Segmentation: Must follow hierarchy (Enterprise → Heavy → Paying → Free)
  - Copy the entire CTE structure from `bq-schema.md` lines 441-516; do not improvise or simplify
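Taken together, the query rules above might combine into something like the following sketch for a simple daily active users count. Everything beyond the names the rules themselves state (`ltxstudio_user_all_actions`, `action_ts`, `lt_id`) is an assumption, and the segmentation CTEs must still be copied verbatim from `shared/bq-schema.md` rather than improvised.

```sql
-- Sketch only: applies the hygiene rules above to a basic DAU count.
DECLARE start_date DATE DEFAULT DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY);
DECLARE end_date DATE DEFAULT DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY);  -- exclude today's incomplete data

SELECT
  DATE(action_ts) AS action_date,
  COUNT(DISTINCT lt_id) AS dau  -- lt_id as the user identifier, never anonymous_id
FROM ltxstudio_user_all_actions  -- already excludes the LT team
WHERE action_ts >= TIMESTAMP(start_date)
  AND action_ts < TIMESTAMP(DATE_ADD(end_date, INTERVAL 1 DAY))  -- partition pruning
GROUP BY action_date
ORDER BY action_date;
```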
- Hex — Create dashboards via Threads (`create_thread`, `continue_thread`, `get_thread`)
- Figma — Read design files for feature context
- Slack — Post dashboards, reports, and alerts to stakeholders
- GitHub — Search codebase for event tracking code
- Linear — Create and manage issues for analytics work