The CommandProcessor manages voice-command routing and conversation context on the client. It checks whether the transcript contains the wake phrase (“hey monday”) or whether a conversation is already active; only then is the user’s command treated as actionable. On activation, it may start a new conversation session, timestamp the interaction, and dispatch the raw transcript to the backend (sendToBackend). Input received outside an active session without the trigger phrase is ignored.
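A minimal sketch of that routing logic follows. The class name and `sendToBackend` come from the description above; the session-state fields and their handling are illustrative assumptions, not the project’s exact implementation.

```ts
// Hypothetical sketch of client-side wake-phrase routing.
class CommandProcessor {
  private conversationActive = false; // assumed session flag
  private lastInteraction = 0;        // assumed interaction timestamp

  constructor(private sendToBackend: (transcript: string) => void) {}

  processTranscript(transcript: string): void {
    const text = transcript.toLowerCase().trim();
    const hasWakePhrase = text.includes('hey monday');

    // Ignore input unless the wake phrase is present or a conversation is already active.
    if (!hasWakePhrase && !this.conversationActive) return;

    // On activation, open a session and timestamp the interaction.
    this.conversationActive = true;
    this.lastInteraction = Date.now();

    // Forward the raw transcript; intent parsing happens on the server.
    this.sendToBackend(transcript);
  }
}
```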
**Description**: The server receives voice_command events and parses them to infer intent (e.g., greeting, basic Q&A, reasoning, deep research). For each type, it invokes the Perplexity service with the corresponding mode and the user’s query. The resulting answer—including content, citations, and, where applicable, a reasoning chain or research sources—is emitted back to the client as a monday_response with a type aligned to the mode.
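A rough sketch of that server-side routing is shown below. The `voice_command` and `monday_response` event names are taken from the description; the payload shape, the `inferMode` heuristic, and the `perplexity.query(mode, query)` signature are hypothetical stand-ins.

```ts
import { Server, Socket } from 'socket.io';

type Mode = 'greeting' | 'basic' | 'reasoning' | 'research';

// Assumed heuristic for mapping a transcript to a query mode.
function inferMode(command: string): Mode {
  const text = command.toLowerCase();
  if (/^(hi|hello|hey monday)\b/.test(text)) return 'greeting';
  if (text.includes('research')) return 'research';
  if (text.includes('why') || text.includes('explain')) return 'reasoning';
  return 'basic';
}

function registerVoiceHandlers(
  io: Server,
  perplexity: { query(mode: Mode, q: string): Promise<any> }
) {
  io.on('connection', (socket: Socket) => {
    socket.on('voice_command', async ({ command }: { command: string }) => {
      const mode = inferMode(command);
      const answer = await perplexity.query(mode, command);

      // Echo the mode back so the client can pick the matching presentation.
      socket.emit('monday_response', {
        type: mode,
        content: answer.content,
        citations: answer.citations ?? [],
        reasoning: answer.reasoningSteps ?? undefined,
      });
    });
  });
}
```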
## AI Query Processing (Perplexity Service Integration)

```ts
const result = await this.makeRequest('/chat/completions', requestData)
```
**Description**: PerplexityService prepares a mode-specific request and calls the external API. It returns a structured result containing the main answer (content), any citations, and, in reasoning mode, a parsed list of reasoning steps. Because it targets the Sonar API, the result also includes metadata such as token usage and the model identifier.
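One way this could be structured, as a sketch only: the `/chat/completions` path and `makeRequest` call appear in the snippet above, while the model identifiers, payload fields, and result shape below are assumptions.

```ts
interface QueryResult {
  content: string;
  citations: string[];
  reasoningSteps?: { step: number; content: string; confidence: number }[];
  usage?: { prompt_tokens: number; completion_tokens: number };
  model?: string;
}

// Step parser sketched in the reasoning section below.
declare function parseReasoningSteps(
  text: string
): { step: number; content: string; confidence: number }[];

class PerplexityServiceSketch {
  constructor(
    private makeRequest: (path: string, body: unknown) => Promise<any>
  ) {}

  async processQuery(query: string, mode: string): Promise<QueryResult> {
    const requestData = {
      model: mode === 'reasoning' ? 'sonar-reasoning' : 'sonar', // illustrative model ids
      messages: [{ role: 'user', content: query }],
    };

    const result = await this.makeRequest('/chat/completions', requestData);
    const content = result.choices?.[0]?.message?.content ?? '';

    return {
      content,
      citations: result.citations ?? [],
      reasoningSteps: mode === 'reasoning' ? parseReasoningSteps(content) : undefined,
      usage: result.usage, // token usage reported by the API
      model: result.model, // model identifier echoed back
    };
  }
}
```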
**Description**: In reasoning mode, answers are expected to include an ordered thought process. This utility scans the text for step indicators (e.g., “Step 1:” or “1.”), producing a structured array of steps with content and an initial confidence score. This enables the client to render reasoning as a clear, enumerated sequence.
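A sketch of what such a parser might look like; the exact regular expression and the default confidence value are assumptions, not the project’s actual implementation.

```ts
interface ReasoningStep {
  step: number;
  content: string;
  confidence: number;
}

function parseReasoningSteps(text: string): ReasoningStep[] {
  const steps: ReasoningStep[] = [];
  // Match "Step 1:" style markers or bare "1." list numbering at the start
  // of a line, capturing the rest of that line as the step content.
  const pattern = /(?:^|\n)\s*(?:Step\s+(\d+)\s*[:.-]|(\d+)\.)\s*([^\n]+)/gi;

  let match: RegExpExecArray | null;
  while ((match = pattern.exec(text)) !== null) {
    steps.push({
      step: Number(match[1] ?? match[2]),
      content: match[3].trim(),
      confidence: 0.8, // assumed initial score, refined later
    });
  }
  return steps;
}
```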
## VR Spatial Response Visualization
```ts
function createSpatialPanels(response: any, mode: string, query: string): any[] {
```
**Description**: To bridge AI output into a 3D presentation, the backend constructs spatial panel objects. A main content panel is centered; optional citations and reasoning panels are positioned to the sides. Each panel has an ID, type, position/rotation, title, content, and opacity. These definitions are sent with the response so the client can render floating informational boards in VR.
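A hypothetical body for this function, illustrating the layout described above; the concrete positions, rotations, and opacity values are placeholder assumptions.

```ts
function createSpatialPanels(response: any, mode: string, query: string): any[] {
  const panels: any[] = [];

  // Main answer panel, centered in front of the user.
  panels.push({
    id: 'main',
    type: 'content',
    position: { x: 0, y: 1.6, z: -2 },
    rotation: { x: 0, y: 0, z: 0 },
    title: query,
    content: response.content,
    opacity: 0.95,
  });

  // Citations float to one side when present.
  if (response.citations?.length) {
    panels.push({
      id: 'citations',
      type: 'citations',
      position: { x: 1.2, y: 1.6, z: -2 },
      rotation: { x: 0, y: -20, z: 0 },
      title: 'Sources',
      content: response.citations.join('\n'),
      opacity: 0.85,
    });
  }

  // Reasoning chain appears on the opposite side in reasoning mode.
  if (mode === 'reasoning' && response.reasoningSteps?.length) {
    panels.push({
      id: 'reasoning',
      type: 'reasoning',
      position: { x: -1.2, y: 1.6, z: -2 },
      rotation: { x: 0, y: 20, z: 0 },
      title: 'Reasoning',
      content: response.reasoningSteps
        .map((s: any) => `${s.step}. ${s.content}`)
        .join('\n'),
      opacity: 0.85,
    });
  }

  return panels;
}
```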