# Use multi-turn conversation to refine and visualize queries
Our new conversational experience in Sumo Logic Copilot lets you interact with queries the way you would with a chat assistant. You ask a question, refine it with follow-ups, change units or metrics, and see the updated query and visualization without starting over. Copilot maintains your intent across turns, surfaces helpful suggestions, and makes it easy to explore related angles.
This guide explains what's new in the UI and how the multi-turn flow works, and walks through example workflows. The examples, such as refining a latency query for an SLO or counting warning events, are illustrative. You can apply the same interaction pattern to any log or metric data you care about.
## Beta release highlights
* **Contextual multi-turn**. Refine queries through natural, conversational follow-up questions without losing prior context.
* **Dashboard-aware translations (via RAG)**. Copilot leverages recent dashboard queries in your org to improve translation accuracy and better understand your intent.
* **Structured fallback & descriptive errors**. Get clear, actionable messages instead of generic errors when Copilot hits a limit or cannot translate.
* **Pro tip**. For best results, ask questions about data sources that already have dashboards built on them; Copilot performs best with those contexts.
## What's new (before and after)
* **Conversational refinement**. Before, you had to reissue full queries for each change. Now you give follow-up instructions (for example, narrow scope, change units, switch percentiles) and the system updates the existing query while preserving context.
* **Goal/intent cards**. Your current objective appears as a card (for example, `count critical warning events by namespace and threat actor`), making it easy to see what Copilot is working on and adjust it.
* **Suggested next steps**. The UI surfaces related refinement suggestions you can click to explore alternate groupings, sorts, or filters without crafting new language.
* **Integrated conversation pane**. A sidebar shows the flow of your prompts, current goal, and refinements, giving visibility into the multi-turn state.
* **Immediate editable query**. Natural language instructions are translated into a query that appears in the editor with syntax highlighting; you can edit it directly or keep iterating via conversation.
* **Live results and visuals**. Tables, charts, and other outputs update in place as you refine, with smooth transitions when changing things like percentiles or units.
* **History and branching**. You can start a new conversation, revisit past ones, or branch off to compare different perspectives.
<div style={{display:'flex',flexWrap:'wrap'}}>
<figure style={{flex:1,minWidth:300}}>
<img src={useBaseUrl('/img/copilot-multiturn/before.png')} alt="Legacy flat query editor with no preserved conversational context and static suggestions" />
<figcaption>Before: flat query editing with no preserved multi-turn context</figcaption>
</figure>
<figure style={{flex:1,minWidth:300}}>
<img src={useBaseUrl('/img/copilot-multiturn/after.png')} alt="Chat-style multi-turn pane showing goal card, suggested refinements, generated query, and live results" />
<figcaption>After: conversation pane with intent, suggestions, query, and updated results</figcaption>
</figure>
</div>
In the examples below, we'll use the following key concepts:
* **Multi-turn conversation**. A sequence of related instructions that retains context and incrementally updates the query and output.
* **Intent card**. Visual summary of what you're asking Copilot to do in this session.
* **Suggestion cards**. Recommended refinements or adjacent analyses you can apply with a click.
* **Timeslice**. Fixed interval (for example, one minute) used to bucket time-series data; see the sketch after this list.
* **Percentile**. A metric summary such as P50 (median) or P90 that describes distribution behavior.
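For instance, here's a minimal sketch of a query that buckets results into one-minute timeslices (the source category and keyword are hypothetical):

```sql
// count matching events per one-minute bucket
_sourceCategory=my/service error
| timeslice 1m
| count by _timeslice
```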
## Example workflow: Refining a latency query
:::note
This workflow is only an example of the multi-turn interaction pattern. You can apply the same steps to other metrics, logs, events, or dimensions.
:::
### Step 1: Ask your initial question
Example: `What is the fetch latency for my SLO monitor?`
Copilot translates your question into a query and displays it on the canvas. You see the generated query and the initial result.
*Screenshot placeholder: initial natural language question with generated latency query.*
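The generated query depends on your data. As a rough sketch (the source category and parse pattern below are hypothetical, not necessarily what Copilot produces), it might resemble:

```sql
// illustrative sketch: source category and parsed fields are hypothetical
_sourceCategory=prod/slo/monitors "fetch"
| parse "latency=* client=*" as latency_sec, client
| avg(latency_sec) as avg_fetch_latency
```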
### Step 2: Narrow the scope
Example follow-up: `Only show results for the logs client.`
Copilot updates the existing query to apply that filter and refreshes the results accordingly.
*Screenshot placeholder: query now filtered to the specific client.*
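Continuing the hypothetical sketch from Step 1, the follow-up typically layers a `where` clause onto the existing query rather than rewriting it:

```sql
// only the where clause is new; the rest of the query is preserved
_sourceCategory=prod/slo/monitors "fetch"
| parse "latency=* client=*" as latency_sec, client
| where client == "logs"
| avg(latency_sec) as avg_fetch_latency
```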
### Step 3: Change units
Example follow-up: `Convert latency from seconds to milliseconds.`
The query logic and display adapt so the metric is expressed in milliseconds.
*Screenshot placeholder: unit conversion reflected in query and output.*
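In the same illustrative sketch, the conversion shows up as an arithmetic expression on the parsed field:

```sql
// multiply the parsed seconds value to express latency in milliseconds
_sourceCategory=prod/slo/monitors "fetch"
| parse "latency=* client=*" as latency_sec, client
| where client == "logs"
| latency_sec * 1000 as latency_ms
| avg(latency_ms) as avg_fetch_latency_ms
```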
### Step 4: Request a percentile over time
Example follow-up: `Show me P50 latency over one-minute intervals as a graph.`
Copilot adjusts the query to compute the P50 percentile in 1-minute timeslices and renders a visual chart.
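Under the same hypothetical assumptions, the final sketch combines the timeslice and percentile concepts introduced earlier:

```sql
// bucket into one-minute timeslices and compute the median (P50) per bucket
_sourceCategory=prod/slo/monitors "fetch"
| parse "latency=* client=*" as latency_sec, client
| where client == "logs"
| latency_sec * 1000 as latency_ms
| timeslice 1m
| pct(latency_ms, 50) by _timeslice
```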
## Example workflow: Counting critical warning events
### Step 1: Set the goal
Ask: `Count the number of critical warning events by namespace and threat actor.`
Copilot creates an intent card and generates an initial query that filters for warning events and groups by the relevant dimensions.
*Screenshot placeholder: goal card with generated query and result table.*
### Step 2: Explore refinements
Use suggestions such as:
* `Get count of critical warning events by namespace and manager sorted by highest count`
* `Count warning events by namespace and the component that generated them`
Clicking a suggestion or typing a follow-up applies those refinements instantly.
*Screenshot placeholder: suggestion panel with related refinements.*
### Sample query (from screenshot)
```sql
_sourcecategory=us2-primary-eks/events
| where %"object.type"=="Warning"
| count by %"object.metadata.namespace", %"object.metadata.managedfields[0].manager"
```
### Interpretation
The result shows each namespace, the managing component, and the count of warning events. You can further ask follow-ups like `show the top 5 namespaces by warning volume` or `filter to only high-severity namespaces`.
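As a sketch of where such a follow-up could land (building on the sample query above; Copilot's exact translation may differ), asking for the top 5 namespaces might produce:

```sql
// sort namespaces by warning count, descending, and keep the five highest
_sourcecategory=us2-primary-eks/events
| where %"object.type"=="Warning"
| count by %"object.metadata.namespace"
| sort by _count
| limit 5
```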
## Best practices
* **Talk to it like a conversation**. Layer refinements instead of rewriting the whole question.
* **Be specific**. Combine filters, metrics, units, and percentiles in clear language.
* **Use suggestions**. Leverage the surfaced cards to pivot or drill down without manual query construction.
* **Reuse history**. Open prior conversations to compare or branch analyses.
## FAQ
### Telemetry and success metrics
<details>
<summary>What usage data do we collect?</summary>
We track:
* number of multi-turn sessions started
* average turns per session
* RAG hits vs. misses
* structured fallback occurrences
</details>
### Security and compliance
<details>
<summary>Is any user or org data sent outside our environment?</summary>
No. All processing happens within your region's cluster. RAG context is scoped to dashboards in your own org; there is no cross-org data leakage.
</details>
<details>
<summary>Who can access multi-turn?</summary>
Only users with the standard Copilot feature flag in Beta-enabled orgs. Admins can toggle access via the feature-flag UI.
</details>
### Performance considerations
<details>
<summary>What's the impact on query latency?</summary>
Typical end-to-end response time remains under 2s for most queries. Very large result sets or percentile calculations over broad time ranges may take up to 5s, after which a structured fallback suggests narrowing the window.
</details>
### Known issues and troubleshooting
<details>
<summary>How do I debug a failed translation?</summary>
If a translation fails, Copilot will return a structured error with an error code and suggested next steps for debugging (for example, "try narrowing your time window" or "simplify your filter expression").
* **No or delayed results**. Give Copilot a few seconds to process complex refinements.
* **Output too broad**. Add more context (specific client, namespace, metric).
* **Unit is wrong**. Explicitly ask `show in milliseconds` or `convert to seconds`.
* **Percentile isn't what you expected**. Clarify by saying `show P90` or `switch back to P50 over 1 minute`.
</details>
<details>
<summary>What are the current limitations?</summary>
* RAG support is limited to the `sourceCategory` filter (selection in the source picker) at launch. Other expressions like `_index=` or `_sourceHost=` are not yet supported.
* Dashboards must have been opened in the last 90 days to be considered in RAG context.
* Very large or highly complex queries may time out or trigger structured fallback responses.
</details>
### Future roadmap
<details>
<summary>What's coming after this Beta?</summary>
We plan to add support for RAG on additional sources (for example, `_index`, `_sourceHost`), expand dashboard lookback beyond 90 days, and introduce API/CLI access for scripted workflows.
</details>
### Opt-in / opt-out
<details>
<summary>Can I opt out of multi-turn while remaining on Copilot?</summary>
Yes. Contact your Sumo Logic support engineer to disable it if needed.
+
## Feedback
Share feedback ***with AE or directly in the product?*** Your input helps improve the experience as it evolves toward wider availability.