# CLAUDE.md — LTX Analytics Agents

## What this project does

Multiple AI agents for the LTX Analytics team. Each agent automates a different workflow. All agents share a common knowledge base about LTX's data, products, and metric standards.

## Agent Routing

When the user makes a request, identify which agent should handle it, read its SKILL.md, and follow the instructions.

| If the user asks to... | Activate | Skill file |
| --- | --- | --- |
| Create a dashboard, build a dashboard, set up analytics for a feature | Dashboard Builder | `agents/dashboard-builder/SKILL.md` |
| Analyze Gong calls, product feedback from sales, feature requests, why deals don't convert, generate a product intelligence report | Gong Product Intelligence | `agents/gong-product-intelligence/SKILL.md` |
| Create a Linear issue, track work in Linear, file a bug/task | Linear Issue Manager | `agents/linear/SKILL.md` |
| Monitor feature usage, DAU/MAU/WAU, generation volume, alert on usage drops | Usage Monitor | `agents/monitoring/usage/SKILL.md` |
| Monitor GPU costs, alert on infrastructure cost spikes, track cost per user/feature | BE Cost Monitor | `agents/monitoring/be-cost/SKILL.md` |
| Monitor revenue trends, alert on revenue drops, track MRR/ARR/subscriptions/churn | Revenue Monitor | `agents/monitoring/revenue/SKILL.md` |
| Monitor enterprise accounts, alert on churn risk, track account health/quota | Enterprise Monitor | `agents/monitoring/enterprise/SKILL.md` |
| Monitor API latency, alert on errors/timeouts, track throughput/performance | API Runtime Monitor | `agents/monitoring/api-runtime/SKILL.md` |

If the request doesn't clearly match an agent, ask the user which one they need.

## Skills

| If the user asks to... | Skill | Skill file |
| --- | --- | --- |
| Create a PR, open a pull request, ship it, submit for review | Create PR | `.claude/skills/create-pr/SKILL.md` |
| Create a new skill, automate a repeatable task, document a team pattern | Create Skill | `.claude/skills/create-skill/SKILL.md` |
| Build a data spec, document events for a feature, pull all events for X | Build Data Spec | `.claude/skills/build-data-spec/SKILL.md` |
| Product daily report, yesterday's product numbers, usage by segment | Product Daily Report | `agents/reports/ltx-product-daily-report/SKILL.md` |
| Marketing daily report, lead performance, GTM metrics, marketing spend | Marketing Daily Report | `agents/reports/ltx-marketing-daily-report/SKILL.md` |
| Sales report, pipeline by rep, credit limit alerts, enterprise account health | Sales Daily Report | `agents/reports/ltx-sales-daily-report/SKILL.md` |

## Shared Knowledge

Every agent reads from these files. They are the single source of truth — do not contradict them.

| File | Contains | Read before |
| --- | --- | --- |
| `shared/product-context.md` | LTX products, user types, business model | Any work (for context) |
| `shared/bq-schema.md` | BQ tables, columns, joins, segmentation queries | Writing ANY SQL |
| `shared/event-registry.yaml` | Known events per feature, types, status | Referencing ANY event |
| `shared/metric-standards.md` | How every metric is calculated (with SQL) | Defining ANY metric |
| `shared/gpu-cost-query-templates.md` | 11 GPU cost queries (DoD, WoW, anomaly detection, breakdowns) | Analyzing GPU/infrastructure costs |
| `shared/gpu-cost-analysis-patterns.md` | Cost analysis workflows, benchmarks, investigation playbooks | Interpreting GPU cost data |

## Rules (all agents)

1. Never invent event names. Use only events from `shared/event-registry.yaml` or events confirmed by the user.
2. Never improvise metric definitions. Use the SQL patterns from `shared/metric-standards.md`.
3. Never skip human approval for actions that create or modify external resources.
4. Always read the relevant shared files before generating SQL or referencing events.
5. Always use `lt_id` as the user identifier. Never use `anonymous_id`.
6. Always exclude the LT team (`is_lt_team IS FALSE`), except in `ltxstudio_user_all_actions`, which already does this.
7. Always filter on partition columns for BQ performance. For `ltxstudio_user_all_actions`, include `WHERE action_ts >= TIMESTAMP(@start_date) AND action_ts < TIMESTAMP(DATE_ADD(@end_date, INTERVAL 1 DAY))` to enable partition pruning.
8. Never use staging tables. Avoid any table with `stg` in its name; these are intermediate builds, not production data.
9. Always exclude today's incomplete data. Use `DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)` as the end date for time ranges.
10. Always report both press count and output count for generation metrics; a single press can produce multiple outputs.
11. Always use `SAFE_DIVIDE(x, y) * 100` for percentages. Never show fractions such as 0.15 where 15% is meant.
12. Validate column names against the actual schema before running queries; documentation may be outdated.
13. Check data quality during query execution. Inspect results for NULLs, zero or low row counts, date gaps, invalid values (negative counts, retention that increases over time, funnel violations), and schema mismatches. Report issues to the user with clear warnings.
14. Always use the EXACT segmentation CTEs from `shared/bq-schema.md` without modification. Never simplify or skip steps:
    - Enterprise Users: must use the two-step pattern (inner CTE plus an outer SELECT with the " Pilot" suffix logic).
    - Heavy Users: must include all filters (4+ weeks active, token consumption, etc.).
    - Full Segmentation: must follow the hierarchy (Enterprise → Heavy → Paying → Free).
    - Copy the entire CTE structure from `shared/bq-schema.md` lines 441-516; do not improvise or simplify.
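A query that follows rules 5, 7, 9, and 11 together might look like the sketch below. It uses only the table and columns named in the rules (`ltxstudio_user_all_actions`, `action_ts`, `lt_id`); the `event_name` column and its `'generation_press'`/`'generation_output'` values are hypothetical placeholders, so verify everything against `shared/bq-schema.md` before running anything like this.

```sql
-- Sketch only: daily active users and output-per-press ratio
-- over the last 7 complete days (assumed BigQuery dialect).
DECLARE end_date DATE DEFAULT DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY);  -- rule 9: exclude today
DECLARE start_date DATE DEFAULT DATE_SUB(end_date, INTERVAL 6 DAY);

SELECT
  DATE(action_ts) AS action_date,
  COUNT(DISTINCT lt_id) AS dau,                        -- rule 5: lt_id, never anonymous_id
  COUNTIF(event_name = 'generation_press') AS presses,     -- rule 10: report both counts
  COUNTIF(event_name = 'generation_output') AS outputs,    -- (event names are hypothetical)
  SAFE_DIVIDE(
    COUNTIF(event_name = 'generation_output'),
    COUNTIF(event_name = 'generation_press')
  ) * 100 AS outputs_per_100_presses                   -- rule 11: SAFE_DIVIDE(x, y) * 100
FROM ltxstudio_user_all_actions                        -- LT team already excluded here (rule 6)
WHERE action_ts >= TIMESTAMP(start_date)               -- rule 7: filter the partition column
  AND action_ts < TIMESTAMP(DATE_ADD(end_date, INTERVAL 1 DAY))
GROUP BY action_date
ORDER BY action_date;
```

The half-open `[start, end + 1 day)` timestamp range matches the pattern in rule 7, which lets BigQuery prune partitions instead of scanning the full table.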

## MCP Connections

- Hex: create dashboards via Threads (`create_thread`, `continue_thread`, `get_thread`)
- Figma: read design files for feature context
- Slack: post dashboards, reports, and alerts to stakeholders
- GitHub: search the codebase for event-tracking code
- Linear: create and manage issues for analytics work