
Track A: Make Neva a top-tier target for AI code generation #1057

@emil14

Description


Goal

Make Neva a top-tier language target for AI coding agents (Codex, Claude Code, etc.) by reducing token waste, ambiguity, and trial-and-error during generation and review.

Problem statement

Today, AI coding agents typically operate via generic text search (grep-style) and broad file reads, which is expensive and noisy. Neva can do better: the language is explicit and static, and the CLI already has an initial neva doc capability.

We should define a deliberate "AI-friendly developer tooling" layer so agents can:

  • fetch only relevant symbols/components,
  • understand signatures and constraints quickly,
  • generate valid code with fewer retries,
  • produce easier-to-review diffs.

Existing context to leverage

Workstreams

1) AI-oriented CLI ergonomics

  • Extend neva doc beyond grep-like output toward structured retrieval:
    • stable machine-readable output mode (for agents),
    • symbol-level lookup (component/type/const),
    • concise signature-focused responses.
  • Add focused commands (or flags) for frequent agent tasks:
    • "where is this symbol defined",
    • "show public API of package",
    • "show compatible components/ports by type".
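To make the structured-retrieval idea concrete, a machine-readable neva doc mode could emit symbol-level records that an agent parses instead of grepping source. The sketch below is purely illustrative: the record schema, field names, and the example component are assumptions, not an existing interface.

```python
import json

# Hypothetical record a structured `neva doc` mode could emit for one
# component; every field name here is an illustrative assumption.
symbol_record = {
    "kind": "component",
    "name": "FilterEven",          # hypothetical component
    "package": "std/streams",      # hypothetical package path
    "signature": {
        "inports": {"data": "stream<int>"},
        "outports": {"res": "stream<int>"},
    },
    "doc": "Passes through only even numbers.",
}

def render_signature(record: dict) -> str:
    """Collapse a symbol record into a one-line, token-cheap signature."""
    sig = record["signature"]
    ins = ", ".join(f"{n}: {t}" for n, t in sig["inports"].items())
    outs = ", ".join(f"{n}: {t}" for n, t in sig["outports"].items())
    return f'{record["package"]}.{record["name"]}({ins}) -> ({outs})'

# Concise signature view for "show public API" style queries:
print(render_signature(symbol_record))
# Stable JSON form for agents that want the full record:
print(json.dumps(symbol_record, sort_keys=True))
```

The point of the one-line signature view is token economy: an agent asking "show public API of package" should get a handful of such lines, not whole files.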

2) Tool integration surface

  • Evaluate and prototype an official Neva MCP server (or equivalent tool interface) so agents can query Neva semantics directly rather than scraping raw text.
  • Define minimal protocol surface for:
    • symbol lookup,
    • API docs retrieval,
    • diagnostics explanation,
    • graph-aware code navigation.
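The protocol surface above can be sketched as a small tool-dispatch function. This is not the MCP SDK or any existing Neva server; the tool names mirror the bullets, and the in-memory index stands in for real compiler-backed lookups.

```python
import json

# Stand-in for a compiler-backed symbol index; contents are invented.
SYMBOL_INDEX = {
    "std/streams.FilterEven": {
        "file": "std/streams/filter.neva",  # hypothetical location
        "line": 12,
        "doc": "Passes through only even numbers.",
    }
}

def handle_tool_call(request: dict) -> dict:
    """Dispatch an MCP-style tool call to a handler and return a response."""
    tool, args = request["tool"], request.get("args", {})
    if tool == "symbol_lookup":
        hit = SYMBOL_INDEX.get(args["symbol"])
        if hit is None:
            return {"error": f"unknown symbol {args['symbol']!r}"}
        return {"result": hit}
    if tool == "api_docs":
        hit = SYMBOL_INDEX.get(args["symbol"], {})
        return {"result": hit.get("doc", "")}
    # diagnostics explanation and graph navigation would dispatch similarly
    return {"error": f"unsupported tool {tool!r}"}

resp = handle_tool_call({"tool": "symbol_lookup",
                         "args": {"symbol": "std/streams.FilterEven"}})
print(json.dumps(resp))
```

The design point is that the agent queries semantics ("where is this symbol") and gets a small structured answer, instead of scraping raw text and paying for every irrelevant token.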

3) Deterministic generation guardrails

  • Strengthen formatting/style automation and lint integration so generated code converges quickly.
  • Ensure diagnostics remain actionable for iterative agent loops.
  • Document "LLM coding with Neva" best practices (prompt + workflow recipes).
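The iterative loop these guardrails are meant to shorten looks roughly like the sketch below: generate, lint, feed diagnostics back, retry. The generator and linter here are fakes standing in for a model call and Neva's real lint/format tooling; the convergence behavior is contrived for illustration.

```python
def fake_generate(prompt: str, feedback: list[str]) -> str:
    """Stand-in for a model call: with no feedback it emits a rough draft;
    given diagnostics it 'applies the fix' by returning the prompt verbatim."""
    return prompt if feedback else "draft"

def fake_lint(code: str) -> list[str]:
    """Stand-in for lint/format checks; diagnostic text is invented."""
    return [] if code == "fixed" else ["E001: port mismatch"]

def agent_loop(prompt: str, max_retries: int = 3) -> tuple[str, int]:
    """Retry generation until diagnostics are clean or retries run out."""
    feedback: list[str] = []
    code = ""
    for attempt in range(1, max_retries + 1):
        code = fake_generate(prompt, feedback)
        diagnostics = fake_lint(code)
        if not diagnostics:
            return code, attempt
        feedback = diagnostics  # actionable diagnostics drive the next try
    return code, max_retries

print(agent_loop("fixed"))  # converges on the second attempt in this toy setup
```

The benchmark-relevant metric is the attempt count: the better the diagnostics and formatting automation, the fewer iterations (and tokens) a real loop burns.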

4) Documentation pipeline for agent consumption

Deliverables

  • Design doc with command/tool API proposals.
  • At least one end-to-end prototype where an agent produces/edits Neva code using the proposed tooling surface.
  • Benchmarks comparing baseline workflow vs improved tooling (tokens/time/retry count).
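A minimal shape for the proposed baseline-vs-tooling comparison might track the three metrics named above per run and report relative reductions. The numbers below are invented placeholders, not measurements.

```python
from dataclasses import dataclass

@dataclass
class RunStats:
    """One benchmarked agent workflow run."""
    tokens: int      # total tokens consumed
    seconds: float   # wall-clock time
    retries: int     # generation attempts before convergence

def improvement(baseline: RunStats, improved: RunStats) -> dict[str, float]:
    """Relative reduction per metric (positive = improvement)."""
    return {
        "tokens": 1 - improved.tokens / baseline.tokens,
        "seconds": 1 - improved.seconds / baseline.seconds,
        "retries": 1 - improved.retries / baseline.retries,
    }

baseline = RunStats(tokens=12000, seconds=90.0, retries=4)  # placeholder
improved = RunStats(tokens=4800, seconds=45.0, retries=1)   # placeholder
print(improvement(baseline, improved))
```

Tracking the three metrics separately matters because a tooling change can trade one for another (e.g. an extra lookup call costs tokens but eliminates a retry).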

Non-goals

  • Not building a model-specific integration for one provider only.
  • Not changing core language semantics to add AI-specific syntax sugar.

Acceptance criteria

  • A defined, versioned AI-tooling surface exists (CLI and/or MCP).
  • Agent-driven code tasks show measurable improvement in at least one benchmarked workflow.
  • Documentation and examples are sufficient for contributors to reproduce the workflow.
  • Follow-up implementation issues are split and prioritized.

Metadata

Assignees

No one assigned

Labels

idea (Thinking needed), large (Weeks), p1 (We can live without it but it's very important)

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
