49 changes: 49 additions & 0 deletions .erb/scripts/wait-for-renderer.ts
@@ -0,0 +1,49 @@
import detectPort from 'detect-port';
import chalk from 'chalk';

const port = 1212;

const delay = (ms: number) =>
  new Promise<void>((resolve) => {
    setTimeout(() => resolve(), ms);
  });

async function waitForRenderer(attempt = 0): Promise<void> {
  if (attempt === 0) {
    console.log(chalk.blueBright('Waiting for renderer to be available...'));
  }

  const maxAttempts = 60; // 60 seconds max wait
  if (attempt >= maxAttempts) {
    console.log(chalk.red('✗ Renderer did not start within 60 seconds.'));
    console.log(
      chalk.yellow(
        'Please start the renderer manually with: npm run start:renderer',
      ),
    );
    process.exit(1);
    return;
  }

  try {
    const availablePort = await detectPort(port);
    if (availablePort !== port) {
      console.log(chalk.greenBright('✓ Renderer is ready!'));
      return;
**Copilot AI** (Nov 27, 2025):

> The `return` statement after `process.exit(1)` is unnecessary: `process.exit()` terminates the process immediately, so the `return` will never execute. Remove the redundant `return` statement.
    }
  } catch (err) {
    // Ignore errors and retry
  }

  if ((attempt + 1) % 5 === 0) {
    console.log(chalk.yellow(`Still waiting... (${attempt + 1}s)`));
  }

  await delay(1000);
  await waitForRenderer(attempt + 1);
}

waitForRenderer().catch((err) => {
  console.error(chalk.red('Error waiting for renderer:'), err);
  process.exit(1);
});
127 changes: 127 additions & 0 deletions docs/CLI_ARGUMENTS.md
@@ -0,0 +1,127 @@
# CLI Startup Arguments

5ire supports command-line arguments to automatically create chats with pre-configured settings when launching the application.

## Usage

### Individual Flags

You can use individual flags to configure a new chat:

```bash
5ire --new-chat --provider openai --model gpt-4 --system "You are a helpful assistant" --summary "My Chat" --prompt "Hello!" --temperature 0.7
```

#### Available Flags

- `--new-chat` - Flag to indicate creating a new chat (required when using individual flags)
- `--provider <provider>` - AI provider (e.g., openai, anthropic, google)
- `--model <model>` - Model name (e.g., gpt-4, claude-3-opus)
- `--system <message>` - System message for the chat
- `--summary <text>` - Summary/title for the chat
- `--prompt <text>` - Initial prompt/message to send
- `--temperature <number>` - Temperature setting (0.0 - 2.0)

### JSON Format

You can also provide all settings as a JSON object:

```bash
5ire --chat '{"provider":"openai","model":"gpt-4","system":"You are a helpful assistant","summary":"My Chat","prompt":"Hello!","temperature":0.7}'
```

### Provider Derivation

If you specify the model in the format `Provider:model`, the provider will be automatically derived and the model will be normalized:

```bash
5ire --new-chat --model anthropic:claude-3-opus
```

This is equivalent to:

```bash
5ire --new-chat --provider anthropic --model claude-3-opus
```

**Note:** If you explicitly provide both a provider and a model with the `Provider:model` format, the explicit provider takes precedence, but the model will still be normalized to remove the provider prefix:

```bash
5ire --new-chat --provider openai --model anthropic:claude-3-opus
# Results in: provider=openai, model=claude-3-opus
```
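The precedence and normalization rules above can be sketched as a small helper. This is a hypothetical illustration (the names `splitModel` and `resolveProviderAndModel` are not from the actual source), but it follows the documented rules: an explicit provider wins, and the model is always stripped of its prefix.

```typescript
interface ChatArgs {
  provider?: string;
  model?: string;
}

// Split "Provider:model" into its parts; a model with no prefix
// yields no derived provider.
function splitModel(model: string): { provider?: string; model: string } {
  const idx = model.indexOf(':');
  if (idx === -1) return { model };
  return { provider: model.slice(0, idx), model: model.slice(idx + 1) };
}

// Apply the documented precedence: an explicit provider takes priority
// over one derived from the model, but the model is always normalized.
function resolveProviderAndModel(args: ChatArgs): ChatArgs {
  if (!args.model) return args;
  const { provider: derived, model } = splitModel(args.model);
  return { provider: args.provider ?? derived, model };
}
```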

## Examples

### Basic Chat Creation

Create a new chat with OpenAI GPT-4:

```bash
5ire --new-chat --provider openai --model gpt-4
```

### Chat with System Message

Create a chat with a custom system message:

```bash
5ire --new-chat --provider anthropic --model claude-3-opus --system "You are a coding assistant specialized in TypeScript"
```

### Chat with Initial Prompt

Create a chat and send an initial message:

```bash
5ire --new-chat --provider openai --model gpt-4 --prompt "Explain quantum computing in simple terms"
```

### Complete Configuration

Create a fully configured chat:

```bash
5ire --new-chat \
--provider openai \
--model gpt-4 \
--system "You are a creative writing assistant" \
--summary "Story Writing Session" \
--prompt "Write a short story about a time traveler" \
--temperature 0.9
```

### Using JSON Format

```bash
5ire --chat '{
"provider": "anthropic",
"model": "claude-3-opus",
"system": "You are a helpful assistant",
"summary": "Quick Chat",
"temperature": 0.7
}'
```

## Behavior

- When launched with startup arguments, 5ire will:
1. Create a new chat with the specified configuration
2. Navigate to the newly created chat
3. If a `--prompt` is provided, it will be set as the initial input and automatically submitted

- On second instance activation (when 5ire is already running):
- The existing window will be focused
- A new chat will be created with the startup arguments
- The user will be navigated to the new chat

## Notes

- The `--new-chat` flag is required when using individual flags (not needed with `--chat`)
- The `--chat` JSON format takes precedence over individual flags if both are provided
- Temperature values outside the valid range (typically 0.0-2.0) may be adjusted by the provider
- Invalid JSON in the `--chat` argument will be logged and ignored
- When a model contains the `Provider:model` format:
- If no explicit provider is set, the provider will be extracted from the model
- The model will always be normalized to remove the provider prefix
- An explicit provider takes precedence over a provider in the model format
186 changes: 186 additions & 0 deletions docs/IMPLEMENTATION_SUMMARY.md
@@ -0,0 +1,186 @@
# Terminal Startup Arguments - Implementation Summary

## Overview

This implementation adds support for terminal startup arguments that allow users to automatically create chats with pre-configured settings when launching the 5ire application.

## Architecture

### 1. CLI Argument Parser (`src/main/cli-args.ts`)

A dedicated module that parses command-line arguments and extracts chat configuration:

- **Supported Formats:**
- Individual flags: `--new-chat --provider openai --model gpt-4 --system "..." --summary "..." --prompt "..." --temperature 0.7`
- JSON format: `--chat '{"provider":"openai","model":"gpt-4",...}'`

- **Key Features:**
- Provider derivation from model format (`Provider:model`)
- Model normalization (always removes provider prefix)
- Explicit provider takes precedence
- Robust error handling for invalid JSON
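The parsing behavior described above can be sketched as follows. This is a simplified stand-in, not the actual contents of `src/main/cli-args.ts`: it assumes the documented flag names and the documented rules (JSON form takes precedence, `--new-chat` required for individual flags, invalid JSON and non-numeric temperatures ignored).

```typescript
interface StartupChatArgs {
  provider?: string;
  model?: string;
  system?: string;
  summary?: string;
  prompt?: string;
  temperature?: number;
}

function parseStartupArgs(argv: string[]): StartupChatArgs | null {
  // The --chat JSON form takes precedence over individual flags.
  const jsonIdx = argv.indexOf('--chat');
  if (jsonIdx !== -1 && argv[jsonIdx + 1]) {
    try {
      return JSON.parse(argv[jsonIdx + 1]) as StartupChatArgs;
    } catch {
      return null; // invalid JSON is ignored (the real parser also logs it)
    }
  }

  // Individual flags require --new-chat.
  if (!argv.includes('--new-chat')) return null;

  // Read the value following a flag, if any.
  const take = (flag: string): string | undefined => {
    const i = argv.indexOf(flag);
    return i !== -1 ? argv[i + 1] : undefined;
  };

  const args: StartupChatArgs = {};
  args.provider = take('--provider');
  args.model = take('--model');
  args.system = take('--system');
  args.summary = take('--summary');
  args.prompt = take('--prompt');
  const t = Number(take('--temperature'));
  if (!Number.isNaN(t)) args.temperature = t; // non-numeric values are ignored
  return args;
}
```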

### 2. Main Process Integration (`src/main/main.ts`)

Enhanced the main process to handle startup arguments:

- **Cold Start:** Parses `process.argv` when app launches
- **Second Instance:** Parses command line from second instance activation
- **Pending State:** Stores pending startup args until renderer is ready
- **IPC Communication:** Sends startup payload via `startup-new-chat` event

**Key Changes:**
```typescript
// Added variable to track pending startup args
let pendingStartupArgs: StartupChatArgs | null = null;

// Parse args on cold start
handleStartupArgs(process.argv);

// Parse args on second instance
app.on('second-instance', (event, commandLine) => {
  handleStartupArgs(commandLine);
  // ... handle deep links
});

// Send pending args when the startup handler is ready
ipcMain.on('startup-handler-ready', () => {
  if (pendingStartupArgs !== null) {
    mainWindow?.webContents.send('startup-new-chat', pendingStartupArgs);
    pendingStartupArgs = null;
  }
});
```

### 3. Preload API (`src/main/preload.ts`)

Exposed secure API for renderer process via contextBridge:

```typescript
startup: {
  onNewChat(callback: (args: StartupChatArgs) => void) {
    // Returns an unsubscribe function
    return () => { ... };
  }
}
```

**Security Constraints:**
- Uses contextBridge for secure IPC communication
- No direct access to Node.js APIs from renderer
- Type-safe API with TypeScript interfaces
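The subscribe/unsubscribe contract of the exposed API can be illustrated with a small in-memory stand-in. The function `makeStartupApi` below is hypothetical (the real API is wired through `contextBridge` and `ipcRenderer`), but it shows the shape the renderer relies on: `onNewChat` registers a callback and returns a function that removes it.

```typescript
interface StartupChatArgs {
  provider?: string;
  model?: string;
  prompt?: string;
}

type Unsubscribe = () => void;

interface StartupApi {
  onNewChat(cb: (args: StartupChatArgs) => void): Unsubscribe;
}

// In-memory stand-in for the contextBridge-exposed API; `emit`
// plays the role of the main process sending 'startup-new-chat'.
function makeStartupApi(): {
  api: StartupApi;
  emit: (args: StartupChatArgs) => void;
} {
  const listeners = new Set<(args: StartupChatArgs) => void>();
  return {
    api: {
      onNewChat(cb) {
        listeners.add(cb);
        // The returned function unsubscribes this callback.
        return () => {
          listeners.delete(cb);
        };
      },
    },
    emit: (args) => listeners.forEach((cb) => cb(args)),
  };
}
```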

### 4. Renderer Handler (`src/renderer/components/StartupHandler.tsx`)

React component that handles startup events:

- **Placement:** Inside Router in FluentApp component
- **Lifecycle:** Sets up listener on mount, cleans up on unmount
- **Race Condition Protection:** Uses ref to prevent concurrent chat creation
- **Chat Creation:** Calls `useChatStore().createChat()` with parsed args
- **Navigation:** Automatically navigates to newly created chat

**Key Features:**
- Prevents race conditions with `isProcessingRef`
- Proper error handling and logging
- Automatic navigation to created chat
- Clean event listener cleanup
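The race-condition guard can be sketched without React as a plain flag (standing in for the component's ref). The handler name, `createChat` signature, and route path below are assumptions for illustration, not the actual StartupHandler code.

```typescript
// Guard flag: true while a startup chat is being created.
let isProcessing = false;

async function handleStartupNewChat(
  createChat: () => Promise<string>,
  navigate: (path: string) => void,
): Promise<void> {
  // A second event arriving mid-creation is dropped, preventing
  // duplicate chats from concurrent 'startup-new-chat' events.
  if (isProcessing) return;
  isProcessing = true;
  try {
    const chatId = await createChat();
    navigate(`/chats/${chatId}`); // assumed route format
  } finally {
    isProcessing = false;
  }
}
```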

## Data Flow

```
CLI Args → parseStartupArgs() → handleStartupArgs()
        → IPC event ('startup-new-chat')
        → Preload API
        → StartupHandler
        → createChat() + navigate()
```

### Cold Start Flow:
1. User launches app with CLI args
2. Main process parses args from `process.argv`
3. Args stored in `pendingStartupArgs`
4. Renderer loads and sends 'startup-handler-ready'
5. Main sends 'startup-new-chat' event with args
6. StartupHandler receives event, creates chat, navigates

### Second Instance Flow:
1. User launches app again with CLI args (app already running)
2. Second instance detected, window focused
3. Main process parses args from `commandLine`
4. If renderer ready, immediately sends 'startup-new-chat' event
5. StartupHandler receives event, creates chat, navigates

## Testing

Comprehensive test suite in `test/main/cli-args.spec.ts`:

- ✅ Null handling for no args
- ✅ Individual flag parsing
- ✅ Partial flag parsing
- ✅ JSON format parsing
- ✅ Provider derivation from model
- ✅ Model normalization with explicit provider
- ✅ Invalid JSON handling
- ✅ Missing value handling
- ✅ Temperature number parsing
- ✅ Invalid temperature handling
- ✅ Complex JSON with all properties
- ✅ Provider derivation in JSON format

## Documentation

Complete user documentation in `docs/CLI_ARGUMENTS.md`:

- Usage examples for all scenarios
- Detailed explanation of provider derivation
- Behavior notes and edge cases
- Platform-specific considerations

## Edge Cases Handled

1. **Empty Args:** Returns null, no chat created
2. **Invalid JSON:** Logged and ignored, returns null
3. **Missing Values:** Ignores flag if no value provided
4. **Invalid Temperature:** Ignores if not a number
5. **Race Conditions:** Protected with ref guard in handler
6. **Deep Link Conflicts:** Searches all args, not just last one
7. **Provider Prefix:** Always normalized in model string
8. **Concurrent Events:** Processing flag prevents duplicate chat creation

## Future Enhancements

Potential improvements for future consideration:

1. Support for additional chat settings (maxTokens, maxCtxMessages)
2. Validate provider and model against available providers
3. Support for chat folder assignment
4. Batch chat creation from config file
5. Shell auto-completion for flags

## Breaking Changes

None. This is a new feature with no impact on existing functionality.

## Security Considerations

- ✅ All IPC communication through contextBridge
- ✅ No direct Node.js access from renderer
- ✅ Input validation in parser (JSON.parse in try-catch)
- ✅ Type-safe interfaces throughout
- ✅ No eval or code execution from user input
- ✅ Proper logging instead of console methods

## Performance Impact

Minimal:

- Argument parsing is O(n) where n = number of args (typically < 20)
- Event listeners cleaned up properly
- No memory leaks from event subscriptions
- Race condition protection prevents duplicate work