Conversation
- Add StartRoutineDiscoveryJobCreationParams Pydantic model for tool schema
- Add data_models/guide_agent/ with conversation state and message types
- Add data_models/websockets/ with base WS types and guide-specific commands/responses
- Update GuideAgent with callback pattern, tool confirmation flow, state management
- Business logic stubs marked with NotImplementedError for subsequent PR

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Move all WebSocket types (base, browser, guide) into one consolidated websockets.py file. Also move test_websockets.py from the servers repo. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace the Pydantic model + constants with a simple function stub that a colleague will implement. The guide agent now uses register_tool_from_function and calls the function directly. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add tool_utils.py with extract_description_from_docstring and generate_parameters_schema for converting Python functions to LLM tool definitions using pydantic TypeAdapter
- Add register_tool_from_function method to LLMClient that extracts name, description, and parameters schema from a function
- Add unit tests for tool_utils

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
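The helpers named in that commit can be sketched roughly like this; the helper names mirror the commit message, but the body and the example function are illustrative assumptions, not the repo's actual code:

```python
import inspect

from pydantic import TypeAdapter


def extract_description_from_docstring(fn) -> str:
    """Return the first line of a function's docstring, or an empty string."""
    doc = inspect.getdoc(fn) or ""
    return doc.splitlines()[0] if doc else ""


def generate_parameters_schema(fn) -> dict:
    """Build a JSON-schema 'parameters' object from the function's type hints."""
    properties: dict = {}
    required: list[str] = []
    for name, param in inspect.signature(fn).parameters.items():
        # TypeAdapter turns an annotation like `str` into {"type": "string"}.
        properties[name] = TypeAdapter(param.annotation).json_schema()
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": properties, "required": required}


def start_routine_discovery_job(url: str, max_steps: int = 10) -> str:
    """Start a routine discovery job for the given URL."""
    raise NotImplementedError


schema = generate_parameters_schema(start_routine_discovery_job)
```

An LLM client's register_tool_from_function can then combine the function's name, the extracted description, and this schema into a single tool definition.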
- Merge GuideWebSocketClientCommandType into WebSocketClientCommandType
- Merge ParsedGuideWebSocketClientCommand into ParsedWebSocketClientCommand
- Remove Guide- prefix from response types (WebSocketMessageResponse, etc.)
- Consolidate response type enums (MESSAGE, STATE, TOOL_INVOCATION_RESULT)
- Add tests for all previously untested models and commands
- Increase test coverage from ~50% to 100% of websockets module

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…LLM API

- Add Chat and ChatThread models extending ResourceBase with bidirectional linking
- Rename duplicate ChatMessage to EmittedChatMessage for callback messages
- Add LLMToolCall and LLMChatResponse models for tool calling support
- Implement GuideAgent with conversation logic, persistence callbacks, and self-aware system prompt for web automation routine creation
- Update all LLM client methods to accept a messages array instead of a single prompt (get_text_sync/async, get_structured_response_sync/async, chat_sync)
- Add run_guide_agent.py terminal chat script

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replaces the stub script with a full terminal interface featuring ANSI colors, an ASCII banner, a tool invocation confirmation flow, and conversation commands. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
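The confirmation step in a terminal interface like this can be as small as a yes/no prompt; the function name and prompt wording below are invented for illustration:

```python
def confirm_tool_invocation(tool_name: str, arguments: dict, ask=input) -> bool:
    """Ask the user to approve a pending tool call before it runs."""
    # `ask` is injectable so the prompt can be exercised without a real terminal.
    answer = ask(f"Run tool '{tool_name}' with {arguments}? [y/N] ")
    return answer.strip().lower() in ("y", "yes")
```

Defaulting to "no" on an empty answer keeps an accidental Enter from triggering a tool call.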
- Update welcome message to describe CDP capture analysis workflow
- Add links to Vectorly docs and console
- Change banner color to purple
- Fix OpenAI client to use max_completion_tokens for GPT-5 models

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
GPT-5 models only support temperature=1 (default), so we omit the parameter entirely to avoid API errors. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
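One way to implement that omission is to branch when assembling the request kwargs; this is a sketch, and the builder name and the 1024 token limit are illustrative, not the repo's actual code:

```python
def build_completion_kwargs(model: str, messages: list[dict], temperature: float = 0.7) -> dict:
    """Assemble chat-completion kwargs, respecting GPT-5 parameter restrictions."""
    kwargs: dict = {"model": model, "messages": messages}
    if model.startswith("gpt-5"):
        # GPT-5 models reject non-default temperatures, so the parameter is
        # omitted entirely; they also take max_completion_tokens, not max_tokens.
        kwargs["max_completion_tokens"] = 1024
    else:
        kwargs["temperature"] = temperature
        kwargs["max_tokens"] = 1024
    return kwargs
```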
- Add chat_stream_sync method to abstract, OpenAI, and Anthropic clients
- Add stream_chunk_callable parameter to GuideAgent
- Update terminal CLI to print chunks as they arrive for typewriter effect

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
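The shape of such a streaming method might look like the following; this is a simplified stand-in that consumes any iterable of text deltas rather than a real provider stream:

```python
from typing import Callable, Iterable, Optional


def chat_stream_sync(
    deltas: Iterable[str],
    stream_chunk_callable: Optional[Callable[[str], None]] = None,
) -> str:
    """Forward each text delta to the callback, then return the full reply."""
    parts: list[str] = []
    for delta in deltas:
        parts.append(delta)
        if stream_chunk_callable is not None:
            stream_chunk_callable(delta)
    return "".join(parts)


# Typewriter effect in a terminal CLI:
# chat_stream_sync(stream, lambda chunk: print(chunk, end="", flush=True))
```

The caller still receives the accumulated content, so non-streaming code paths keep working unchanged.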
- Add STREAM_CHUNK and STREAM_END to WebSocketStreamResponseType
- Add WebSocketStreamChunkResponse for text deltas during streaming
- Add WebSocketStreamEndResponse with full accumulated content
- Update WebSocketServerResponse union to include new types

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update tests to use thread_id instead of guide_chat_id to match the WebSocketStateResponse model change. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…into llms/ subpackage

- Remove Chat and ChatThread (ResourceBase-dependent) from this repo
- Add ChatLite and ChatThreadLite as lightweight replacements
- Move data models to web_hacker/data_models/llms/ subpackage:
  - vendors.py: LLM vendor enums and model types
  - interaction.py: chat/conversation types for agent communication
- Update all imports across the codebase to use the new submodule paths
- Delete chat.py (models moved to servers repo)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
dimavrem22 reviewed on Jan 16, 2026 (comment on lines 17 to 20):
```python
"""OpenAI models."""
GPT_5_2 = "gpt-5.2"
GPT_5_MINI = "gpt-5-mini"
GPT_5_NANO = "gpt-5-nano"
```
Author (contributor) reply, later edited: Updated in commit
Include message IDs in emitted chat responses so WebSocket clients can track and reference individual messages. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Update callback signatures from Callable[[T], None] to Callable[[T], T] so the persistence layer can assign IDs and return them to GuideAgent. This allows servers to use ResourceBase-generated IDs while keeping web_hacker's models decoupled from ResourceBase. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
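A toy version of that round-trip, with types and field names invented for illustration:

```python
import uuid
from dataclasses import dataclass, replace
from typing import Callable, Optional


@dataclass(frozen=True)
class ChatLite:
    content: str
    id: Optional[str] = None


# Persistence callback with the Callable[[T], T] shape: the server side
# assigns an ID during persistence and hands the updated object back.
def persist_chat(chat: ChatLite) -> ChatLite:
    return replace(chat, id=uuid.uuid4().hex)


class GuideAgent:
    def __init__(self, persist_callback: Callable[[ChatLite], ChatLite]):
        self._persist = persist_callback

    def record(self, content: str) -> ChatLite:
        # Keep the returned copy so later messages can reference the
        # server-assigned ID.
        return self._persist(ChatLite(content=content))
```

A Callable[[T], None] signature would force the agent to mint IDs itself; returning T keeps ID generation on the persistence side without coupling these models to ResourceBase.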
dimavrem22 reviewed on Jan 17, 2026 (comment on lines +15 to +22):
```python
class ChatRole(StrEnum):
    """
    Role in a chat message.
    """
    USER = "user"
    ASSISTANT = "assistant"  # AI
    SYSTEM = "system"
    TOOL = "tool"
```
dimavrem22 approved these changes on Jan 17, 2026.
Guide agent scaffolding
Adds a conversational AI agent that helps users define and create web automation routines through natural language interaction.
What's new

- Guide agent (web_hacker/agents/guide_agent/)
- Unified LLM client (web_hacker/llms/) with register_tool_from_function() and automatic schema generation from type hints and docstrings
- Chat data models (web_hacker/data_models/chat.py):
  - Chat and ChatThread models for conversation persistence
  - PendingToolInvocation for tracking tool calls awaiting user confirmation
  - LLMChatResponse and streaming response types
- CLI and scripts:
  - Terminal chat CLI (web_hacker/scripts/run_guide_agent.py) for local testing
  - scripts/run_guide_agent.py as an alternate entry point

Other changes

- ResourceBase for consistent ID and timestamp handling across models
- UnknownToolError exception
- Replace datetime.utcnow() usage with a timezone-aware alternative

Next steps