The client’s VoiceSystemController uses the Web Speech API to continuously listen for speech. In the onresult handler above, any finalized recognition result is captured as finalTranscript and immediately forwarded to the command-processing system via queueCommand. This converts spoken input into text and injects it into the pipeline without local filtering, delegating interpretation to the command processor.
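The flow above can be sketched as a pure helper plus (commented) wiring. `RecognitionChunk` is a simplified stand-in for the Web Speech API's `SpeechRecognitionResultList`, and `queueCommand`/`flatten` follow the description rather than confirmed code:

```ts
// Simplified shape of one recognition result; the real Web Speech API
// exposes SpeechRecognitionResultList, which this sketch flattens.
interface RecognitionChunk {
  isFinal: boolean
  transcript: string
}

// Pure helper: concatenate only the finalized chunks of a result batch.
function collectFinalTranscript(chunks: RecognitionChunk[]): string {
  return chunks
    .filter((c) => c.isFinal)
    .map((c) => c.transcript.trim())
    .join(' ')
}

// Hypothetical wiring inside VoiceSystemController (illustrative only):
// recognition.onresult = (event) => {
//   const finalTranscript = collectFinalTranscript(flatten(event.results))
//   if (finalTranscript) commandProcessor.queueCommand(finalTranscript)
// }
```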
The CommandProcessor manages voice-command routing and conversation context on the client. It checks whether the transcript contains the wake phrase (“hey monday”) or an ongoing conversation is active. Only then is the user’s command treated as actionable. On activation, it may start a new conversation session, timestamp the interaction, and dispatch the raw transcript to the backend (sendToBackend). Inputs outside an active session without the trigger phrase are ignored.
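The activation gate described above can be reduced to a small predicate; the function name `isActionable` and the exact normalization are assumptions for illustration:

```ts
const WAKE_PHRASE = 'hey monday'

// A command is actionable if the wake phrase appears anywhere in the
// transcript, or if a conversation session is already active.
function isActionable(transcript: string, conversationActive: boolean): boolean {
  return conversationActive || transcript.toLowerCase().includes(WAKE_PHRASE)
}
```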
# Description
The server receives voice_command events and parses them to infer intent (e.g., greeting, basic Q&A, reasoning, deep research). For each type, it invokes the Perplexity service with the corresponding mode and the user’s query. The resulting answer—including content, citations, and, where applicable, a reasoning chain or research sources—is emitted back to the client as a monday_response with a type aligned to the mode.
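A sketch of that routing, with keyword heuristics standing in for the server's (unshown) intent inference; the `voice_command` and `monday_response` event names follow the text, everything else is assumed:

```ts
type Intent = 'greeting' | 'basic_qa' | 'reasoning' | 'deep_research'

// Keyword-based intent inference; the exact heuristics on the server are
// not shown, so these patterns are illustrative assumptions.
function inferIntent(text: string): Intent {
  const t = text.toLowerCase()
  if (/\b(hi|hello|hey)\b/.test(t)) return 'greeting'
  if (/\b(research|investigate|deep dive)\b/.test(t)) return 'deep_research'
  if (/\b(why|explain|reason|how does)\b/.test(t)) return 'reasoning'
  return 'basic_qa'
}

// Hypothetical socket wiring, following the event names in the text:
// socket.on('voice_command', async ({ transcript }) => {
//   const intent = inferIntent(transcript)
//   const answer = await perplexityService.query(intent, transcript)
//   socket.emit('monday_response', { type: intent, ...answer })
// })
```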
# AI Query Processing (Perplexity Service Integration)
```ts
const result = await this.makeRequest('/chat/completions', requestData)
return {
  id: result.id || 'reasoning_query',
  // ...
  responseTime: 0
}
}
```
# Description
PerplexityService prepares a mode-specific request and calls the external API. It returns a structured result containing the main answer (content), any citations, and—when in reasoning mode—a parsed list of reasoning steps. Using the Sonar API, it also includes metadata such as token usage and the model identifier.
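The mode-specific preparation might look like the following sketch; the system prompts and the model identifier are placeholders, not values confirmed by the source:

```ts
type Mode = 'basic' | 'reasoning' | 'deep_research'

// Build the request body for a given mode. System prompts and the model
// name here are illustrative placeholders.
function buildRequest(mode: Mode, query: string) {
  const systemPrompts: Record<Mode, string> = {
    basic: 'Answer concisely.',
    reasoning: 'Answer with numbered steps: Step 1:, Step 2:, ...',
    deep_research: 'Answer with citations and research sources.',
  }
  return {
    model: 'sonar', // placeholder model identifier
    messages: [
      { role: 'system', content: systemPrompts[mode] },
      { role: 'user', content: query },
    ],
  }
}
```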
In reasoning mode, answers are expected to include an ordered thought process. This utility scans the text for step indicators (e.g., “Step 1:” or “1.”), producing a structured array of steps with content and an initial confidence score. This enables the client to render reasoning as a clear, enumerated sequence.
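A minimal version of such a scanner, assuming a fixed initial confidence score and line-oriented input; the field names are illustrative:

```ts
interface ReasoningStep {
  step: number
  content: string
  confidence: number
}

// Scan answer text line by line for "Step N:" or "N." markers and
// produce an ordered array of steps with an initial confidence score.
function parseReasoningSteps(text: string): ReasoningStep[] {
  const steps: ReasoningStep[] = []
  for (const line of text.split('\n')) {
    const m = line.match(/^\s*(?:Step\s+(\d+)\s*:|(\d+)\.)\s*(.+)/i)
    if (m) {
      steps.push({
        step: Number(m[1] ?? m[2]),
        content: m[3].trim(),
        confidence: 0.8, // assumed initial score
      })
    }
  }
  return steps
}
```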
To bridge AI output into a 3D presentation, the backend constructs spatial panel objects. A main content panel is centered; optional citations and reasoning panels are positioned to the sides. Each panel has an ID, type, position/rotation, title, content, and opacity. These definitions are sent with the response so the client can render floating informational boards in VR.
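A sketch of that panel construction; the coordinate values, rotations, and titles are illustrative assumptions:

```ts
interface SpatialPanel {
  id: string
  type: 'main' | 'citations' | 'reasoning'
  position: [number, number, number]
  rotation: [number, number, number]
  title: string
  content: string
  opacity: number
}

// Main panel centered ahead of the user; optional citations and
// reasoning panels angled in from the sides.
function buildPanels(content: string, citations?: string, reasoning?: string): SpatialPanel[] {
  const panels: SpatialPanel[] = [{
    id: 'main', type: 'main',
    position: [0, 1.5, -2], rotation: [0, 0, 0],
    title: 'Answer', content, opacity: 1,
  }]
  if (citations) panels.push({
    id: 'citations', type: 'citations',
    position: [-1.5, 1.5, -2], rotation: [0, 0.3, 0],
    title: 'Sources', content: citations, opacity: 0.9,
  })
  if (reasoning) panels.push({
    id: 'reasoning', type: 'reasoning',
    position: [1.5, 1.5, -2], rotation: [0, -0.3, 0],
    title: 'Reasoning', content: reasoning, opacity: 0.9,
  })
  return panels
}
```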
# Spatial Orchestration & Layout (Frontend VR)
```ts
useFrame(() => {
  // Continuously rotate the entire group of panels slowly
  if (groupRef.current) {
    groupRef.current.rotation.y += 0.001
  }

  // Dynamic layout based on mode
  panels.forEach((panel, index) => {
    if (spatialLayout === 'focus' && panel.id !== activePanel) {
      // In focus mode, push non-active panels far outward
      const distance = 5
      const angle = (index / panels.length) * Math.PI * 2
      panel.position[0] = Math.cos(angle) * distance
      panel.position[2] = Math.sin(angle) * distance
    } else if (spatialLayout === 'research') {
      // In research mode, distribute panels in a layered circle (knowledge constellation)
      // ...
    }
  })
})
```
# Description
SpatialOrchestrator renders panels in VR and animates placement per the current layout. Default mode arranges panels in a semi-circle ahead of the user. Focus mode pushes non-active panels outward to minimize distraction. Research mode distributes panels into layered circular constellations to accommodate more nodes. The layout logic runs every frame for smooth transitions.
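The layered-circle placement used in research mode can be sketched as a pure position function; the ring size, radii, and heights here are assumptions, not values from the source:

```ts
// Compute layered-circle ("constellation") positions: panels fill a ring
// of `perRing` slots, then overflow onto a wider, higher ring.
function constellationPosition(
  index: number,
  total: number,
  perRing = 6
): [number, number, number] {
  const ring = Math.floor(index / perRing)
  const slot = index % perRing
  const count = Math.min(perRing, total - ring * perRing) // panels on this ring
  const radius = 2 + ring * 1.5   // assumed base radius and ring spacing
  const angle = (slot / count) * Math.PI * 2
  return [Math.cos(angle) * radius, 1.5 + ring * 0.8, Math.sin(angle) * radius]
}
```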