Commit 6b51d45 ("integrate CP's FAQ content", parent 4813502)
1 file changed: 136 additions, 73 deletions

---
id: copilot-multiturn-beta
title: Sumo Logic Copilot Multi-turn Conversations (Beta)
---

import useBaseUrl from '@docusaurus/useBaseUrl';

<p><a href="/docs/beta"><span className="beta">Beta</span></a></p>

Our new conversational experience in Sumo Logic Copilot lets you interact with queries the way you would with a chat assistant: ask a question, refine it with follow-ups, change units or metrics, and see the updated query and visualization without starting over. Copilot maintains your intent across turns, surfaces helpful suggestions, and makes it easy to explore related angles.

This guide explains what's new in the UI, describes how the multi-turn flow works, and walks through example workflows. The examples, such as refining a latency query for an SLO or counting warning events, are illustrative; you can apply the same interaction pattern to any log or metric data you care about.

## Beta release highlights

* **Contextual multi-turn**. Refine queries through natural, conversational follow-up questions without losing prior context.
* **Dashboard-aware translations (via RAG)**. Copilot leverages recent dashboard queries in your org to improve translation accuracy and better understand your intent.
* **Structured fallback & descriptive errors**. Get clear, actionable messages instead of generic errors when Copilot hits a limit or cannot translate.
* **Pro tip**. For best results, ask questions about data sources that already have dashboards built on them; Copilot performs best with those contexts.

## What's new (before and after)

* **Conversational refinement**. Before, you had to reissue full queries for each change. Now you give follow-up instructions (for example, narrow the scope, change units, or switch percentiles) and the system updates the existing query while preserving context.
* **Goal/intent cards**. Your current objective appears as a card (for example, `count critical warning events by namespace and threat actor`), making it easy to see what Copilot is working on and adjust it.
* **Suggested next steps**. The UI surfaces related refinement suggestions you can click to explore alternate groupings, sorts, or filters without crafting new language.
* **Integrated conversation pane**. A sidebar shows the flow of your prompts, current goal, and refinements, giving visibility into the multi-turn state.
* **Immediate editable query**. Natural language instructions are translated into a query that appears in the editor with syntax highlighting; you can edit it directly or keep iterating via conversation.
* **Live results and visuals**. Tables, charts, and other outputs update in place as you refine, with smooth transitions when changing things like percentiles or units.
* **History and branching**. You can start a new conversation, revisit past ones, or branch off to compare different perspectives.

<div style={{display: 'flex', gap: '1rem', flexWrap: 'wrap'}}>
<figure style={{flex: 1, minWidth: 300}}>
<img src={useBaseUrl('/img/copilot-multiturn/before.png')} alt="Legacy flat query editor with no preserved conversational context and static suggestions" />
<figcaption>Before: flat query editing with no preserved multi-turn context</figcaption>
</figure>
<figure style={{flex: 1, minWidth: 300}}>
<img src={useBaseUrl('/img/copilot-multiturn/after.png')} alt="Chat-style multi-turn pane showing goal card, suggested refinements, generated query, and live results" />
<figcaption>After: conversation pane with intent, suggestions, query, and updated results</figcaption>
</figure>
</div>

In the examples below, we'll use the following key concepts:

* **Multi-turn conversation**. A sequence of related instructions that retains context and incrementally updates the query and output.
* **Intent card**. A visual summary of what you're asking Copilot to do in this session.
* **Suggestion cards**. Recommended refinements or adjacent analyses you can apply with a click.
* **Timeslice**. A fixed interval (for example, one minute) used to bucket time-series data.
* **Percentile**. A metric summary such as P50 (median) or P90 that describes distribution behavior.

## Example workflow: Refining a latency query

:::note
This workflow is only an example of the multi-turn interaction pattern. You can apply the same steps to other metrics, logs, events, or dimensions.
:::

### Step 1: Ask your initial question

Example: `What is the fetch latency for my SLO monitor?`

Copilot translates your question into a query and displays it on the canvas. You see the generated query and the initial result.
*Screenshot placeholder: initial natural language question with generated latency query.*

### Step 2: Narrow the scope

Example follow-up: `Only show results for the logs client.`

Copilot updates the existing query to apply that filter and refreshes the results accordingly.
*Screenshot placeholder: query now filtered to the specific client.*
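A scope-narrowing follow-up like this typically compiles down to an added filter on the existing query rather than a full rewrite. A hypothetical before/after sketch, reusing the placeholder fields from the sample patterns later in this guide:

```sql
// before the follow-up (placeholder fields, not real data)
_sourceCategory="your-slo-category" sloName="your-slo-name"
| measure avg(fetch_latency_seconds) as latency_seconds

// after "Only show results for the logs client"
_sourceCategory="your-slo-category" sloName="your-slo-name" client="logs"
| measure avg(fetch_latency_seconds) as latency_seconds
```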

### Step 3: Change units

Example follow-up: `Convert latency from seconds to milliseconds.`

The query logic and display adapt so the metric is expressed in milliseconds.
*Screenshot placeholder: unit conversion reflected in query and output.*

### Step 4: Request a percentile over time

Example follow-up: `Show me P50 latency over one-minute intervals as a graph.`

Copilot adjusts the query to compute the P50 percentile in one-minute timeslices and renders a visual chart.
*Screenshot placeholder: graph showing P50 latency trend.*

### Step 5: Switch percentile

Example follow-up: `Now show P90 instead.`

Copilot updates both the query and chart to reflect the new percentile while preserving prior filters and unit conversions.
*Screenshot placeholder: updated chart with P90 latency.*

### Sample query patterns (replace with your actual fields)

```sql
_sourceCategory="your-slo-category" sloName="your-slo-name" client="logs"
| measure avg(fetch_latency_seconds) as latency_seconds
| eval latency_ms = latency_seconds * 1000
```

#### P50 latency over one-minute intervals

```sql
_sourceCategory="your-slo-category" sloName="your-slo-name" client="logs"
| measure avg(fetch_latency_seconds) as latency_seconds
| eval latency_ms = latency_seconds * 1000
| timeslice 1m
| percentile(latency_ms, 50) by _timeslice
| sort _timeslice
```

#### Switch to P90

```sql
... same as above ...
| percentile(latency_ms, 90) by _timeslice
```

## Example workflow: Count critical warning events

### Step 1: Set the goal

Ask: `Count the number of critical warning events by namespace and threat actor.`

Copilot creates an intent card and generates an initial query that filters for warning events and groups by the relevant dimensions.
*Screenshot placeholder: goal card with generated query and result table.*

### Step 2: Explore refinements

Use suggestions such as:

* `Get count of critical warning events by namespace and manager sorted by highest count`
* `Count warning events by namespace and the component that generated them`

Clicking a suggestion or typing a follow-up applies those refinements instantly.
*Screenshot placeholder: suggestion panel with related refinements.*

### Sample query (from screenshot)

```sql
_sourcecategory=us2-primary-eks/events
| where %"object.type" == "Warning"
| count by %"object.metadata.namespace", %"object.metadata.managedfields[0].manager"
```

### Interpretation

The result shows each namespace, the managing component, and the count of warning events. You can ask further follow-ups like `show the top 5 namespaces by warning volume` or `filter to only high-severity namespaces`.
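
A follow-up like `show the top 5 namespaces by warning volume` would typically extend the sample query above. A hedged sketch (the `sort`/`limit` refinement is illustrative, not Copilot's guaranteed output):

```sql
_sourcecategory=us2-primary-eks/events
| where %"object.type" == "Warning"
| count by %"object.metadata.namespace"
| sort by _count
| limit 5
```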

## Best practices

* **Talk to it like a conversation**. Layer refinements instead of rewriting the whole question.
* **Be specific**. Combine filters, metrics, units, and percentiles in clear language.
* **Use suggestions**. Leverage the surfaced cards to pivot or drill down without manual query construction.
* **Reuse history**. Open prior conversations to compare or branch analyses.

## FAQ

### Telemetry and success metrics

<details>
<summary>What usage data do we collect?</summary>

We track:

* number of multi-turn sessions started
* average turns per session
* RAG hits vs. misses
* structured fallback occurrences

</details>

### Security and compliance

<details>
<summary>Is any user or org data sent outside our environment?</summary>

No. All processing happens within your region's cluster. RAG context is scoped to dashboards in your own org; there is no cross-org data leakage.
</details>

<details>
<summary>Who can access multi-turn?</summary>

Only users with the standard Copilot feature flag in enabled Beta orgs. Admins can toggle access via the feature-flag UI.
</details>

### Performance considerations

<details>
<summary>What's the impact on query latency?</summary>

Typical end-to-end response time remains under 2s for most queries. Very large result sets or percentile calculations over broad time ranges may take up to 5s, after which a structured fallback suggests narrowing the window.
</details>

### Known issues and troubleshooting

<details>
<summary>How do I debug a failed translation?</summary>

If a translation fails, Copilot returns a structured error with an error code and suggested next steps for debugging (for example, "try narrowing your time window" or "simplify your filter expression").

* **No or delayed results**. Give Copilot a few seconds to process complex refinements.
* **Output too broad**. Add more context (a specific client, namespace, or metric).
* **Unit is wrong**. Explicitly ask `show in milliseconds` or `convert to seconds`.
* **Percentile isn't what you expected**. Clarify by saying `show P90` or `switch back to P50 over 1 minute`.
</details>

<details>
<summary>What are the current limitations?</summary>

* RAG support is limited to the `sourceCategory` filter (selection in the source picker) at launch. Other expressions like `_index=` or `_sourceHost=` are not yet supported.
* Dashboards must have been opened in the last 90 days to be considered in RAG context.
* Very large or highly complex queries may time out or trigger structured fallback responses.
</details>
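
To illustrate the first limitation: scoping of the first form below is eligible for dashboard-aware (RAG) context, while the second form still runs but without that enrichment. Both lines use hypothetical values:

```sql
// eligible for dashboard-aware (RAG) context: source category scoping
_sourceCategory="your-service/prod" error
| count by _sourceHost

// not yet eligible at launch: _index= or _sourceHost= expressions
_index=your_index error
| count by _sourceHost
```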

### Future roadmap

<details>
<summary>What's coming after this Beta?</summary>

We plan to add support for RAG on additional sources (for example, `_index`, `_sourceHost`), expand the dashboard lookback beyond 90 days, and introduce API/CLI access for scripted workflows.
</details>

### Opt-in / opt-out

<details>
<summary>Can I opt out of multi-turn while remaining on Copilot?</summary>

Yes. Contact your Sumo Logic support engineer to disable it if needed.
</details>

## Feedback

Share feedback ***with AE or directly in the product?*** Your input helps improve the experience as it evolves toward wider availability.
