
Make Copilot quick inputs more context-aware and LLM-driven #2958

@nighca

Description


Background

In #2041, we introduced the concept of Copilot quick input.

Quick inputs are small shortcut UI controls in the Copilot panel, similar to buttons, that help users send common or likely-next inputs more efficiently.

The idea is useful, but the current implementation is still quite simple. For example, in tutorials we currently always provide the same default quick input, such as "Next step", regardless of context.

There was also earlier discussion around more useful quick-input patterns such as:

  • "Then what?" / "Done"
  • "Please be more specific"

These examples are useful because they reflect real gaps in current Copilot interaction.

What this issue is about

Improve Copilot quick input generation so it is more context-aware and more helpful for the current interaction.

Instead of relying only on fixed or hardcoded quick inputs, we should explore a design where the model can decide:

  • whether the current interaction should present quick inputs at all
  • whether the user should first do something in the product and then come back to Copilot
  • which quick inputs should be shown for the current situation
  • how many quick inputs should be shown and what they should say
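The decisions above could be captured in a small data shape that the model produces and the product consumes. A minimal sketch in TypeScript, with all names hypothetical (this is not an existing Copilot API):

```typescript
// Hypothetical shape of a model-produced quick-input plan.
// "reply" sends a message to Copilot immediately; "productAction" signals
// that the user should first do something in the product, then report back.
type QuickInputKind = "reply" | "productAction"

interface QuickInput {
  kind: QuickInputKind
  label: string   // text shown on the button, e.g. "Done"
  message: string // message sent to Copilot when the button is clicked
}

interface QuickInputPlan {
  showQuickInputs: boolean  // the model may decide none are needed this round
  quickInputs: QuickInput[] // empty when showQuickInputs is false
}

// Example plan for a "do an edit, then report back" round:
const plan: QuickInputPlan = {
  showQuickInputs: true,
  quickInputs: [
    { kind: "productAction", label: "Done", message: "Done" },
    { kind: "reply", label: "Then what?", message: "Then what?" },
  ],
}
```

A structured plan like this keeps the model's output machine-checkable, so the product can validate it before rendering anything.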

Why this matters

The usefulness of quick inputs depends heavily on context.

In some cases, offering a shortcut response is helpful.
In other cases, the better next step is not another text input, but a user action in the editor, followed by reporting back to Copilot.

A more intelligent mechanism should improve both efficiency and interaction quality.

A common case is that Copilot may break a task into many small steps:

  • some steps can be sensed automatically by the product, so Copilot can continue on its own
  • some steps cannot be sensed reliably today, such as certain code edits, so the user needs an easy way to say "Done" or "Then what?"

Another common case is that Copilot's instruction is directionally correct but not concrete enough, and the user needs an easy way to ask for more specific guidance.

Possible direction

One possible direction is to let the LLM participate in quick input planning for each round, based on the current topic, conversation state, and product context.

That could include:

  • deciding whether quick inputs are needed
  • generating the candidate quick inputs dynamically
  • distinguishing between "reply to Copilot" actions and "go do something in the product first" actions
  • providing concrete quick inputs such as "Done", "Then what?", or "Please be more specific" when appropriate
  • keeping enough product-side control so the UI remains predictable and safe
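The last point, product-side control, could look like a guard the product applies to whatever the model proposes. A rough sketch, with the limits and names purely illustrative:

```typescript
// Hypothetical product-side guard over model-proposed quick inputs.
// The product, not the model, enforces limits so the UI stays
// predictable and safe even when the model returns odd output.
interface RawQuickInput {
  label: string
  message: string
}

const MAX_QUICK_INPUTS = 3   // illustrative cap on button count
const MAX_LABEL_LENGTH = 24  // illustrative cap on label length

function sanitizeQuickInputs(proposed: RawQuickInput[]): RawQuickInput[] {
  return proposed
    // drop malformed entries the model might produce
    .filter(qi => qi.label.trim() !== "" && qi.message.trim() !== "")
    // cap how many buttons the panel will show
    .slice(0, MAX_QUICK_INPUTS)
    // keep labels short enough for the panel layout
    .map(qi => ({
      label: qi.label.length > MAX_LABEL_LENGTH
        ? qi.label.slice(0, MAX_LABEL_LENGTH - 1) + "…"
        : qi.label,
      message: qi.message,
    }))
}

// Example: the model proposes malformed and excess inputs.
const result = sanitizeQuickInputs([
  { label: "Done", message: "Done" },
  { label: "Then what?", message: "Then what?" },
  { label: "", message: "ignored: empty label" },
  { label: "Please be more specific", message: "Please be more specific" },
  { label: "Extra one", message: "dropped by the cap" },
])
```

This split keeps the interesting decisions (whether, which, how many) with the model while hard limits stay in product code.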

Notes

This issue is about improving the design and mechanism of Copilot quick inputs, not just adding more hardcoded buttons.

Related historical discussion:
