
Commit 5e16044

webui: add Storybook example for raw LLM output and scope reasoning format toggle per story
- Added a Storybook example that showcases the chat message component in raw LLM output mode with the provided trace sample
- Updated every ChatMessage story to toggle the disableReasoningFormat setting so the raw-output rendering remains scoped to its own example
1 parent 1e6beb6 commit 5e16044

File tree

1 file changed (+41, −0 lines)


tools/server/webui/src/stories/ChatMessage.stories.svelte

Lines changed: 41 additions & 0 deletions
@@ -49,6 +49,19 @@
 		"Let's consider the user's question step by step:\\n\\n1. Identify the core problem\\n2. Evaluate relevant information\\n3. Formulate a clear answer\\n\\nFollowing this process ensures the final response stays focused and accurate.",
 	children: []
 };
+
+const rawOutputMessage: DatabaseMessage = {
+	id: '6',
+	convId: 'conv-1',
+	type: 'message',
+	timestamp: Date.now() - 1000 * 60,
+	role: 'assistant',
+	content:
+		'<|channel|>analysis<|message|>User greeted me. Initiating overcomplicated analysis: Is this a trap? No, just a normal hello. Respond calmly, act like a helpful assistant, and do not start explaining quantum physics again. Confidence 0.73. Engaging socially acceptable greeting protocol...<|end|>Hello there! How can I help you today?',
+	parent: '1',
+	thinking: '',
+	children: []
+};
 
 let processingMessage = $state({
 	id: '4',
@@ -80,6 +93,10 @@
 	args={{
 		message: userMessage
 	}}
+	play={async () => {
+		const { updateConfig } = await import('$lib/stores/settings.svelte');
+		updateConfig('disableReasoningFormat', false);
+	}}
 />
 
 <Story
@@ -88,6 +105,10 @@
 		class: 'max-w-[56rem] w-[calc(100vw-2rem)]',
 		message: assistantMessage
 	}}
+	play={async () => {
+		const { updateConfig } = await import('$lib/stores/settings.svelte');
+		updateConfig('disableReasoningFormat', false);
+	}}
 />
 
 <Story
@@ -96,6 +117,22 @@
 		class: 'max-w-[56rem] w-[calc(100vw-2rem)]',
 		message: assistantWithReasoning
 	}}
+	play={async () => {
+		const { updateConfig } = await import('$lib/stores/settings.svelte');
+		updateConfig('disableReasoningFormat', false);
+	}}
+/>
+
+<Story
+	name="RawLlmOutput"
+	args={{
+		class: 'max-w-[56rem] w-[calc(100vw-2rem)]',
+		message: rawOutputMessage
+	}}
+	play={async () => {
+		const { updateConfig } = await import('$lib/stores/settings.svelte');
+		updateConfig('disableReasoningFormat', true);
+	}}
 />
 
 <Story
@@ -105,6 +142,8 @@
 	}}
 	asChild
 	play={async () => {
+		const { updateConfig } = await import('$lib/stores/settings.svelte');
+		updateConfig('disableReasoningFormat', false);
 		// Phase 1: Stream reasoning content in chunks
 		let reasoningText =
 			'I need to think about this carefully. Let me break down the problem:\n\n1. The user is asking for help with something complex\n2. I should provide a thorough and helpful response\n3. I need to consider multiple approaches\n4. The best solution would be to explain step by step\n\nThis approach will ensure clarity and understanding.';
@@ -156,6 +195,8 @@ asChild
 		message: processingMessage
 	}}
 	play={async () => {
+		const { updateConfig } = await import('$lib/stores/settings.svelte');
+		updateConfig('disableReasoningFormat', false);
 		// Import the chat store to simulate loading state
 		const { chatStore } = await import('$lib/stores/chat.svelte');
 
