-
Notifications
You must be signed in to change notification settings - Fork 403
Added Commandline Arguments for Chat Creation #392
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Changes from all commits
eade768
f64f2b4
162597f
38bdf92
0a970d1
1bb0801
8030df3
0657dae
308fc7d
e9fe625
030864c
7fae779
6e641a3
11ab98e
File filter
Filter by extension
Conversations
Jump to
Diff view
Diff view
There are no files selected for viewing
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,49 @@ | ||
| import detectPort from 'detect-port'; | ||
| import chalk from 'chalk'; | ||
|
|
||
| const port = 1212; | ||
|
|
||
| const delay = (ms: number) => | ||
| new Promise<void>((resolve) => { | ||
| setTimeout(() => resolve(), ms); | ||
| }); | ||
|
|
||
| async function waitForRenderer(attempt = 0): Promise<void> { | ||
| if (attempt === 0) { | ||
| console.log(chalk.blueBright('Waiting for renderer to be available...')); | ||
| } | ||
|
|
||
| const maxAttempts = 60; // 60 seconds max wait | ||
| if (attempt >= maxAttempts) { | ||
| console.log(chalk.red('✗ Renderer did not start within 60 seconds.')); | ||
| console.log( | ||
| chalk.yellow( | ||
| 'Please start the renderer manually with: npm run start:renderer', | ||
| ), | ||
| ); | ||
| process.exit(1); | ||
| return; | ||
| } | ||
|
|
||
| try { | ||
| const availablePort = await detectPort(port); | ||
| if (availablePort !== port) { | ||
| console.log(chalk.greenBright('✓ Renderer is ready!')); | ||
| return; | ||
| } | ||
| } catch (err) { | ||
| // Ignore errors and retry | ||
| } | ||
|
|
||
| if ((attempt + 1) % 5 === 0) { | ||
| console.log(chalk.yellow(`Still waiting... (${attempt + 1}s)`)); | ||
| } | ||
|
|
||
| await delay(1000); | ||
| await waitForRenderer(attempt + 1); | ||
| } | ||
|
|
||
| waitForRenderer().catch((err) => { | ||
| console.error(chalk.red('Error waiting for renderer:'), err); | ||
| process.exit(1); | ||
| }); | ||
| Original file line number | Diff line number | Diff line change | ||||
|---|---|---|---|---|---|---|
| @@ -0,0 +1,127 @@ | ||||||
| # CLI Startup Arguments | ||||||
|
|
||||||
| 5ire supports command-line arguments to automatically create chats with pre-configured settings when launching the application. | ||||||
|
|
||||||
| ## Usage | ||||||
|
|
||||||
| ### Individual Flags | ||||||
|
|
||||||
| You can use individual flags to configure a new chat: | ||||||
|
|
||||||
| ```bash | ||||||
| 5ire --new-chat --provider openai --model gpt-4 --system "You are a helpful assistant" --summary "My Chat" --prompt "Hello!" --temperature 0.7 | ||||||
| ``` | ||||||
|
|
||||||
| #### Available Flags | ||||||
|
|
||||||
| - `--new-chat` - Flag to indicate creating a new chat (required when using individual flags) | ||||||
| - `--provider <provider>` - AI provider (e.g., openai, anthropic, google) | ||||||
| - `--model <model>` - Model name (e.g., gpt-4, claude-3-opus) | ||||||
| - `--system <message>` - System message for the chat | ||||||
| - `--summary <text>` - Summary/title for the chat | ||||||
| - `--prompt <text>` - Initial prompt/message to send | ||||||
| - `--temperature <number>` - Temperature setting (0.0 - 2.0) | ||||||
|
|
||||||
| ### JSON Format | ||||||
|
|
||||||
| You can also provide all settings as a JSON object: | ||||||
|
|
||||||
| ```bash | ||||||
| 5ire --chat '{"provider":"openai","model":"gpt-4","system":"You are a helpful assistant","summary":"My Chat","prompt":"Hello!","temperature":0.7}' | ||||||
| ``` | ||||||
|
|
||||||
| ### Provider Derivation | ||||||
|
|
||||||
| If you specify the model in the format `Provider:model`, the provider will be automatically derived and the model will be normalized: | ||||||
|
|
||||||
| ```bash | ||||||
| 5ire --new-chat --model anthropic:claude-3-opus | ||||||
| ``` | ||||||
|
|
||||||
| This is equivalent to: | ||||||
|
|
||||||
| ```bash | ||||||
| 5ire --new-chat --provider anthropic --model claude-3-opus | ||||||
| ``` | ||||||
|
|
||||||
| **Note:** If you explicitly provide both a provider and a model with the `Provider:model` format, the explicit provider takes precedence, but the model will still be normalized to remove the provider prefix: | ||||||
|
|
||||||
| ```bash | ||||||
| 5ire --new-chat --provider openai --model anthropic:claude-3-opus | ||||||
| # Results in: provider=openai, model=claude-3-opus | ||||||
| ``` | ||||||
|
|
||||||
| ## Examples | ||||||
|
|
||||||
| ### Basic Chat Creation | ||||||
|
|
||||||
| Create a new chat with OpenAI GPT-4: | ||||||
|
|
||||||
| ```bash | ||||||
| 5ire --new-chat --provider openai --model gpt-4 | ||||||
| ``` | ||||||
|
|
||||||
| ### Chat with System Message | ||||||
|
|
||||||
| Create a chat with a custom system message: | ||||||
|
|
||||||
| ```bash | ||||||
| 5ire --new-chat --provider anthropic --model claude-3-opus --system "You are a coding assistant specialized in TypeScript" | ||||||
| ``` | ||||||
|
|
||||||
| ### Chat with Initial Prompt | ||||||
|
|
||||||
| Create a chat and send an initial message: | ||||||
|
|
||||||
| ```bash | ||||||
| 5ire --new-chat --provider openai --model gpt-4 --prompt "Explain quantum computing in simple terms" | ||||||
| ``` | ||||||
|
|
||||||
| ### Complete Configuration | ||||||
|
|
||||||
| Create a fully configured chat: | ||||||
|
|
||||||
| ```bash | ||||||
| 5ire --new-chat \ | ||||||
| --provider openai \ | ||||||
| --model gpt-4 \ | ||||||
| --system "You are a creative writing assistant" \ | ||||||
| --summary "Story Writing Session" \ | ||||||
| --prompt "Write a short story about a time traveler" \ | ||||||
| --temperature 0.9 | ||||||
| ``` | ||||||
|
|
||||||
| ### Using JSON Format | ||||||
|
|
||||||
| ```bash | ||||||
| 5ire --chat '{ | ||||||
| "provider": "anthropic", | ||||||
| "model": "claude-3-opus", | ||||||
| "system": "You are a helpful assistant", | ||||||
| "summary": "Quick Chat", | ||||||
| "temperature": 0.7 | ||||||
| }' | ||||||
| ``` | ||||||
|
|
||||||
| ## Behavior | ||||||
|
|
||||||
| - When launched with startup arguments, 5ire will: | ||||||
| 1. Create a new chat with the specified configuration | ||||||
| 2. Navigate to the newly created chat | ||||||
| 3. If a `--prompt` is provided, it will be set as the initial input (but not automatically sent) | ||||||
|
||||||
| 3. If a `--prompt` is provided, it will be set as the initial input (but not automatically sent) | |
| 3. If a `--prompt` is provided, it will be set as the initial input and automatically sent |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Documentation contradicts implementation behavior.
The documentation states the prompt "will be set as the initial input (but not automatically sent)". However, reviewing src/renderer/components/StartupHandler.tsx (lines 103-110), the implementation explicitly auto-submits the prompt via eventBus.emit('startup-submit', args.prompt).
Update the documentation to reflect the actual behavior:
-- If a `--prompt` is provided, it will be set as the initial input (but not automatically sent)
+ 3. If a `--prompt` is provided, it will be automatically submittedCommittable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In docs/CLI_ARGUMENTS.md around lines 111 to 112, the documentation incorrectly
says a provided --prompt is set as initial input but not automatically sent; in
reality StartupHandler.tsx (lines 103–110) emits eventBus.emit('startup-submit',
args.prompt) which auto-submits the prompt. Update the documentation text to
state that when --prompt is provided it will be automatically submitted on
startup (mentioning the startup-submit event for clarity) and remove the "(but
not automatically sent)" clause so the docs match the implementation.
| Original file line number | Diff line number | Diff line change |
|---|---|---|
| @@ -0,0 +1,186 @@ | ||
| # Terminal Startup Arguments - Implementation Summary | ||
|
|
||
| ## Overview | ||
|
|
||
| This implementation adds support for terminal startup arguments that allow users to automatically create chats with pre-configured settings when launching the 5ire application. | ||
|
|
||
| ## Architecture | ||
|
|
||
| ### 1. CLI Argument Parser (`src/main/cli-args.ts`) | ||
|
|
||
| A dedicated module that parses command-line arguments and extracts chat configuration: | ||
|
|
||
| - **Supported Formats:** | ||
| - Individual flags: `--new-chat --provider openai --model gpt-4 --system "..." --summary "..." --prompt "..." --temperature 0.7` | ||
| - JSON format: `--chat '{"provider":"openai","model":"gpt-4",...}'` | ||
|
|
||
| - **Key Features:** | ||
| - Provider derivation from model format (`Provider:model`) | ||
| - Model normalization (always removes provider prefix) | ||
| - Explicit provider takes precedence | ||
| - Robust error handling for invalid JSON | ||
|
|
||
| ### 2. Main Process Integration (`src/main/main.ts`) | ||
|
|
||
| Enhanced the main process to handle startup arguments: | ||
|
|
||
| - **Cold Start:** Parses `process.argv` when app launches | ||
| - **Second Instance:** Parses command line from second instance activation | ||
| - **Pending State:** Stores pending startup args until renderer is ready | ||
| - **IPC Communication:** Sends startup payload via `startup-new-chat` event | ||
|
|
||
| **Key Changes:** | ||
| ```typescript | ||
| // Added variable to track pending startup args | ||
| let pendingStartupArgs: StartupChatArgs | null = null; | ||
|
|
||
| // Parse args on cold start | ||
| handleStartupArgs(process.argv); | ||
|
|
||
| // Parse args on second instance | ||
| app.on('second-instance', (event, commandLine) => { | ||
| handleStartupArgs(commandLine); | ||
| // ... handle deep links | ||
| }); | ||
|
|
||
| // Send pending args when renderer is ready | ||
| ipcMain.on('install-tool-listener-ready', () => { | ||
| if (pendingStartupArgs !== null) { | ||
| mainWindow?.webContents.send('startup-new-chat', pendingStartupArgs); | ||
| pendingStartupArgs = null; | ||
| } | ||
| }); | ||
|
Comment on lines
+47
to
+52
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. IPC channel name mismatch. The documentation references -// Send pending args when renderer is ready
-ipcMain.on('install-tool-listener-ready', () => {
+// Send pending args when startup handler is ready
+ipcMain.on('startup-handler-ready', () => {
if (pendingStartupArgs !== null) {
🤖 Prompt for AI Agents |
||
| ``` | ||
|
|
||
| ### 3. Preload API (`src/main/preload.ts`) | ||
|
|
||
| Exposed secure API for renderer process via contextBridge: | ||
|
|
||
| ```typescript | ||
| startup: { | ||
| onNewChat(callback: (args: StartupChatArgs) => void) { | ||
| // Returns unsubscribe function | ||
| return () => { ... }; | ||
| } | ||
| } | ||
| ``` | ||
|
|
||
| **Security Constraints:** | ||
| - Uses contextBridge for secure IPC communication | ||
| - No direct access to Node.js APIs from renderer | ||
| - Type-safe API with TypeScript interfaces | ||
|
|
||
| ### 4. Renderer Handler (`src/renderer/components/StartupHandler.tsx`) | ||
|
|
||
| React component that handles startup events: | ||
|
|
||
| - **Placement:** Inside Router in FluentApp component | ||
| - **Lifecycle:** Sets up listener on mount, cleans up on unmount | ||
| - **Race Condition Protection:** Uses ref to prevent concurrent chat creation | ||
| - **Chat Creation:** Calls `useChatStore().createChat()` with parsed args | ||
| - **Navigation:** Automatically navigates to newly created chat | ||
|
|
||
| **Key Features:** | ||
| ```typescript | ||
| - Prevents race conditions with isProcessingRef | ||
| - Proper error handling and logging | ||
| - Automatic navigation to created chat | ||
| - Clean event listener cleanup | ||
| ``` | ||
|
|
||
| ## Data Flow | ||
|
|
||
| ``` | ||
| CLI Args → parseStartupArgs() → handleStartupArgs() → IPC Event | ||
| ↓ | ||
| Preload API | ||
| ↓ | ||
| StartupHandler | ||
| ↓ | ||
| createChat() + navigate() | ||
| ``` | ||
|
|
||
| ### Cold Start Flow: | ||
| 1. User launches app with CLI args | ||
| 2. Main process parses args from `process.argv` | ||
| 3. Args stored in `pendingStartupArgs` | ||
| 4. Renderer loads and sends 'install-tool-listener-ready' | ||
| 5. Main sends 'startup-new-chat' event with args | ||
| 6. StartupHandler receives event, creates chat, navigates | ||
|
|
||
| ### Second Instance Flow: | ||
| 1. User launches app again with CLI args (app already running) | ||
| 2. Second instance detected, window focused | ||
| 3. Main process parses args from `commandLine` | ||
| 4. If renderer ready, immediately sends 'startup-new-chat' event | ||
| 5. StartupHandler receives event, creates chat, navigates | ||
|
|
||
| ## Testing | ||
|
|
||
| Comprehensive test suite in `test/main/cli-args.spec.ts`: | ||
|
|
||
| - ✅ Null handling for no args | ||
| - ✅ Individual flag parsing | ||
| - ✅ Partial flag parsing | ||
| - ✅ JSON format parsing | ||
| - ✅ Provider derivation from model | ||
| - ✅ Model normalization with explicit provider | ||
| - ✅ Invalid JSON handling | ||
| - ✅ Missing value handling | ||
| - ✅ Temperature number parsing | ||
| - ✅ Invalid temperature handling | ||
| - ✅ Complex JSON with all properties | ||
| - ✅ Provider derivation in JSON format | ||
|
|
||
| ## Documentation | ||
|
|
||
| Complete user documentation in `docs/CLI_ARGUMENTS.md`: | ||
|
|
||
| - Usage examples for all scenarios | ||
| - Detailed explanation of provider derivation | ||
| - Behavior notes and edge cases | ||
| - Platform-specific considerations | ||
|
|
||
| ## Edge Cases Handled | ||
|
|
||
| 1. **Empty Args:** Returns null, no chat created | ||
| 2. **Invalid JSON:** Logged and ignored, returns null | ||
| 3. **Missing Values:** Ignores flag if no value provided | ||
| 4. **Invalid Temperature:** Ignores if not a number | ||
| 5. **Race Conditions:** Protected with ref guard in handler | ||
| 6. **Deep Link Conflicts:** Searches all args, not just last one | ||
| 7. **Provider Prefix:** Always normalized in model string | ||
| 8. **Concurrent Events:** Processing flag prevents duplicate chat creation | ||
|
|
||
| ## Future Enhancements | ||
|
|
||
| Potential improvements for future consideration: | ||
|
|
||
| 1. Support for additional chat settings (maxTokens, maxCtxMessages) | ||
| 2. Validate provider and model against available providers | ||
| 3. Auto-send message if prompt is provided | ||
| 4. Support for chat folder assignment | ||
| 5. Batch chat creation from config file | ||
| 6. Shell auto-completion for flags | ||
|
|
||
| ## Breaking Changes | ||
|
|
||
| None. This is a new feature with no impact on existing functionality. | ||
|
|
||
| ## Security Considerations | ||
|
|
||
| - ✅ All IPC communication through contextBridge | ||
| - ✅ No direct Node.js access from renderer | ||
| - ✅ Input validation in parser (JSON.parse in try-catch) | ||
| - ✅ Type-safe interfaces throughout | ||
| - ✅ No eval or code execution from user input | ||
| - ✅ Proper logging instead of console methods | ||
|
|
||
| ## Performance Impact | ||
|
|
||
| Minimal: | ||
|
|
||
| - Argument parsing is O(n) where n = number of args (typically < 20) | ||
| - Event listeners cleaned up properly | ||
| - No memory leaks from event subscriptions | ||
| - Race condition protection prevents duplicate work | ||
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The
returnstatement on line 32 is unnecessary afterprocess.exit(1). The process.exit() call immediately terminates the process, so the return will never execute. Remove this redundant return statement.