From hal_voice_assistant.log at 03:16:48:
- User asked: "Play movie about barbarian"
- System selected: Millennium.mp4 (WRONG)
- Should have selected: Red Sonja.mp4 (CORRECT)
Pseudo-function interception was broken (FIXED)
- Code only checked for "select_movie"
- Agent called "media-server.select_movie" (with server prefix)
- Interception failed → tried to call as real MCP tool → "Unknown tool"
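The check described above can be sketched as a small predicate (a sketch only; the inline condition in hal_voice_assistant.py is the authoritative version):

```python
def is_select_movie_call(function_name: str) -> bool:
    """Intercept both the bare tool name and any server-prefixed
    variant such as "media-server.select_movie"."""
    return (function_name == "select_movie"
            or function_name.endswith(".select_movie"))
```

The `endswith` form tolerates any server prefix, not just `media-server`, while still rejecting unrelated names like `select_movies`.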
Selection prompt was too complex (FIXED)
- Original had numbered lists, multiple IMPORTANT instructions
- Too much structure confused the small model
- User tested exaone3.5:2.4b with a simple prompt → worked correctly!
File: hal_voice_assistant.py:387
Before:
    if function_name == "select_movie":
After:
    if function_name == "select_movie" or function_name.endswith(".select_movie"):

File: hal_voice_assistant.py:410-415
Before:
The user requested: "Play movie about barbarian."
Available movies:
1. Airplane!.mp4
2. Millennium.mp4
3. Nobody.mp4
4. Red Sonja.mp4
5. Superman.mp4
Based ONLY on the movie titles above, which movie best matches the user's request?
IMPORTANT: Choose from the list above. Do NOT make up movies. Consider what each title suggests the movie is about.
Respond with ONLY the exact filename from the list (e.g., "Red Sonja.mp4"), nothing else.
After:
Of the movies "Airplane!.mp4", "Millennium.mp4", "Nobody.mp4", "Red Sonja.mp4", "Superman.mp4", which one best matches: Play movie about barbarian.
Respond with only the filename from the list above, nothing else.
Why: User tested exaone3.5:2.4b and found simpler prompts work better. The model correctly identified Red Sonja when given: "of the movies "superman","airplane","red sonja" which one is about a barbarian"
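A sketch of how the simplified prompt could be assembled from the movie list (`build_selection_prompt` is a hypothetical helper; the actual code lives inline at hal_voice_assistant.py:410-415):

```python
def build_selection_prompt(movies: list[str], request: str) -> str:
    # Quote each filename and join into one flat sentence -- no numbered
    # lists or IMPORTANT blocks, which confused the small model.
    quoted = ", ".join(f'"{m}"' for m in movies)
    return (f"Of the movies {quoted}, which one best matches: {request}.\n"
            "Respond with only the filename from the list above, nothing else.")
```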
File: hal_voice_assistant.py:421-435
Now logs:
- Full selection prompt sent to LLM
- Model name used for selection
- Complete LLM response (not just extracted filename)
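A minimal sketch of that logging, using Python's standard `logging` module (`log_selection` and the logger name are assumptions, not the actual code):

```python
import logging

logger = logging.getLogger("hal_voice_assistant")

def log_selection(prompt: str, model: str, response: str) -> None:
    # Record everything needed to replay a bad selection offline.
    logger.info("Selection prompt sent to LLM:\n%s", prompt)
    logger.info("Selection model: %s", model)
    logger.info("Full LLM response: %r", response)
```

Logging the complete response, not just the extracted filename, is what makes a wrong selection diagnosable after the fact.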
File: hal_voice_assistant.py:322-324
Before: Used "Superman.mp4" as the example (could match a real movie)
After: Uses "nonexistantmovie.mp4" (canary value)
If "nonexistantmovie.mp4" is ever selected, we know the agent is using examples from the prompt instead of real data.
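A sketch of how the canary could be enforced at selection time (`check_selection` is a hypothetical helper, not in the current code):

```python
CANARY = "nonexistantmovie.mp4"  # example filename used in the agent prompt

def check_selection(selected: str, available: list[str]) -> str:
    # If the canary comes back, the agent copied the prompt's example
    # instead of using real data from list_movies.
    if selected == CANARY:
        raise ValueError("Agent echoed the prompt example, not real data")
    if selected not in available:
        raise ValueError(f"Selected movie not in list: {selected!r}")
    return selected
```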
Ready to test! The new prompt format matches the user's successful test case.
Expected behavior:
- User: "play movie about barbarian"
- Intent matcher detects "about" → triggers agent mode
- Agent calls list_movies → gets movie list
- Agent calls select_movie (or media-server.select_movie)
- Code intercepts (now works with server prefix!)
- Simplified prompt sent to LLM
- Should select: Red Sonja.mp4
- Agent calls play_movie with correct filename
- Logs show full selection prompt and reasoning
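The expected flow can be dry-run with a stubbed LLM before testing against the real model (all names below are hypothetical stand-ins for the actual agent code):

```python
def run_selection_flow(user_request: str, movies: list[str], llm) -> str:
    # Agent calls select_movie (possibly server-prefixed); we intercept.
    function_name = "media-server.select_movie"
    intercepted = (function_name == "select_movie"
                   or function_name.endswith(".select_movie"))
    if not intercepted:
        raise RuntimeError("Unknown tool")  # the old failure mode
    # Simplified prompt, matching the user's successful test case.
    quoted = ", ".join(f'"{m}"' for m in movies)
    prompt = (f"Of the movies {quoted}, which one best matches: {user_request}.\n"
              "Respond with only the filename from the list above, nothing else.")
    selected = llm(prompt).strip()
    if selected not in movies:
        raise ValueError(f"LLM answered outside the list: {selected!r}")
    return selected  # would be handed to play_movie

movies = ["Airplane!.mp4", "Millennium.mp4", "Nobody.mp4",
          "Red Sonja.mp4", "Superman.mp4"]
# Stub LLM standing in for exaone3.5:2.4b.
result = run_selection_flow("play movie about barbarian", movies,
                            lambda prompt: "Red Sonja.mp4\n")
```

Swapping the lambda for a real model call turns this into the actual test.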