4 changes: 2 additions & 2 deletions src/integrations/workspace/WorkspaceTracker.ts
@@ -106,7 +106,7 @@ class WorkspaceTracker {
this.prevWorkSpacePath = this.cwd
this.initializeFilePaths()
}
- }, 300) // Debounce for 300ms
+ }, 500) // Increased debounce to 500ms to reduce CPU load

Contributor Author commented:
The debounce values here (500ms) differ from other components (1000ms in file-watcher and McpHub). Could we consider making these configurable through VS Code settings? This would allow users to tune the delays based on their system performance - slower systems might benefit from longer delays, while faster systems could use shorter ones.
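
As a rough illustration of what that could look like (a minimal sketch; the `roo-cline.workspaceDebounceMs` setting name and the `getDebounceDelayMs` helper are hypothetical, not part of this PR):

```typescript
import * as vscode from "vscode"

// Hypothetical setting name, used only for illustration.
function getDebounceDelayMs(): number {
	const configured = vscode.workspace.getConfiguration("roo-cline").get<number>("workspaceDebounceMs")
	// Fall back to the PR's current default of 500ms when unset or invalid.
	return typeof configured === "number" && configured > 0 ? configured : 500
}

// The tracker would then pass getDebounceDelayMs() to setTimeout instead of a literal 500.
```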

}

private workspaceDidUpdate() {
@@ -125,7 +125,7 @@ class WorkspaceTracker {
openedTabs: this.getOpenedTabsInfo(),
})
this.updateTimer = null
- }, 300) // Debounce for 300ms
+ }, 500) // Increased debounce to 500ms to reduce CPU load
}

private normalizeFilePath(filePath: string): string {
12 changes: 6 additions & 6 deletions src/integrations/workspace/__tests__/WorkspaceTracker.spec.ts
@@ -225,8 +225,8 @@ describe("WorkspaceTracker", () => {
// Simulate tab change event
await registeredTabChangeCallback!()

- // Run the debounce timer for workspaceDidReset
- vitest.advanceTimersByTime(300)
+ // Run the debounce timer for workspaceDidReset (now 500ms)
+ vitest.advanceTimersByTime(500)

// Should clear file paths and reset workspace
expect(mockProvider.postMessageToWebview).toHaveBeenCalledWith({
@@ -314,8 +314,8 @@ describe("WorkspaceTracker", () => {
// Call again before timer completes
await registeredTabChangeCallback!()

- // Advance timer
- vitest.advanceTimersByTime(300)
+ // Advance timer (now 500ms)
+ vitest.advanceTimersByTime(500)

// Should only have one call to postMessageToWebview
expect(mockProvider.postMessageToWebview).toHaveBeenCalledWith({
@@ -338,8 +338,8 @@ describe("WorkspaceTracker", () => {
// Dispose before timer completes
workspaceTracker.dispose()

- // Advance timer
- vitest.advanceTimersByTime(300)
+ // Advance timer (now 500ms)
+ vitest.advanceTimersByTime(500)

// Should have called dispose on all disposables
expect(mockDispose).toHaveBeenCalled()
2 changes: 1 addition & 1 deletion src/services/code-index/processors/file-watcher.ts
@@ -37,7 +37,7 @@ export class FileWatcher implements IFileWatcher {
private ignoreController: RooIgnoreController
private accumulatedEvents: Map<string, { uri: vscode.Uri; type: "create" | "change" | "delete" }> = new Map()
private batchProcessDebounceTimer?: NodeJS.Timeout
- private readonly BATCH_DEBOUNCE_DELAY_MS = 500
+ private readonly BATCH_DEBOUNCE_DELAY_MS = 1000 // Increased from 500ms to reduce CPU load

Contributor Author commented:
Consider extracting these timing constants (500ms, 1000ms) to named constants at the top of the file. This would make them easier to find and modify, and the names could document their purpose (e.g., BATCH_DEBOUNCE_DELAY_MS, FILE_PROCESSING_TIMEOUT_MS).
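
One possible shape for that, sketched under the assumption of a new shared module (the file path and the second constant's value are illustrative, not part of this PR):

```typescript
// src/services/constants/timing.ts (hypothetical location)
export const BATCH_DEBOUNCE_DELAY_MS = 1000
export const FILE_PROCESSING_TIMEOUT_MS = 500

// file-watcher.ts would then import the value rather than hard-coding it:
// import { BATCH_DEBOUNCE_DELAY_MS } from "../constants/timing"
```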

private readonly FILE_PROCESSING_CONCURRENCY_LIMIT = 10
private readonly batchSegmentThreshold: number

52 changes: 35 additions & 17 deletions src/services/glob/list-files.ts
@@ -626,52 +626,70 @@ async function execRipgrep(rgPath: string, args: string[], limit: number): Promi
const rgProcess = childProcess.spawn(rgPath, args)
let output = ""
let results: string[] = []
+ let processKilled = false

- // Set timeout to avoid hanging
+ // Set timeout to avoid hanging - increased timeout for slower systems
const timeoutId = setTimeout(() => {
- rgProcess.kill()
- console.warn("ripgrep timed out, returning partial results")
- resolve(results.slice(0, limit))
- }, 10_000)
+ if (!processKilled) {
+ processKilled = true
+ rgProcess.kill("SIGTERM")
+ console.warn("ripgrep timed out, returning partial results")
+ resolve(results.slice(0, limit))
+ }
+ }, 15_000) // Increased timeout to 15 seconds

// Process stdout data as it comes in
rgProcess.stdout.on("data", (data) => {
+ // Early exit if we've already killed the process
+ if (processKilled) return
+
output += data.toString()
processRipgrepOutput()

// Kill the process if we've reached the limit
- if (results.length >= limit) {
- rgProcess.kill()
- clearTimeout(timeoutId) // Clear the timeout when we kill the process due to reaching the limit
+ if (results.length >= limit && !processKilled) {

Contributor Author commented:
Is this race condition handling intentional? The processKilled flag is checked and set in multiple places (lines 633, 635, 644, 650-653) without synchronization. While JavaScript is single-threaded, the async callbacks could potentially lead to edge cases where the process is killed multiple times or results are processed after kill. Consider using a more robust state machine or ensuring atomic flag operations.
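
For what it's worth, a minimal sketch of the single-settlement pattern this hints at — output parsing is elided and this is not the PR's implementation, just an illustration of funnelling every terminal path through one guard:

```typescript
import * as childProcess from "child_process"

function execRipgrepSketch(rgPath: string, args: string[], limit: number): Promise<string[]> {
	return new Promise((resolve, reject) => {
		const rgProcess = childProcess.spawn(rgPath, args)
		const results: string[] = []
		let settled = false

		// Every exit path goes through settle(), so the process is killed and the
		// promise settled at most once, no matter which callback fires first.
		const settle = (error?: Error) => {
			if (settled) return
			settled = true
			clearTimeout(timeoutId)
			rgProcess.kill("SIGTERM")
			if (error) {
				reject(error)
			} else {
				resolve(results.slice(0, limit))
			}
		}

		const timeoutId = setTimeout(() => settle(), 15_000)
		rgProcess.stdout.on("data", () => {
			// ...accumulate parsed lines into results here...
			if (results.length >= limit) settle()
		})
		rgProcess.on("error", (error) => settle(error))
		rgProcess.on("close", () => settle())
	})
}
```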

+ processKilled = true
+ clearTimeout(timeoutId)
+ rgProcess.kill("SIGTERM")
+ // Resolve immediately when limit is reached
+ resolve(results.slice(0, limit))
}
})

// Process stderr but don't fail on non-zero exit codes
rgProcess.stderr.on("data", (data) => {
- console.error(`ripgrep stderr: ${data}`)
+ // Only log errors if not killed intentionally
+ if (!processKilled) {
+ console.error(`ripgrep stderr: ${data}`)
+ }
})

// Handle process completion
rgProcess.on("close", (code) => {
// Clear the timeout to avoid memory leaks
clearTimeout(timeoutId)

- // Process any remaining output
- processRipgrepOutput(true)
+ // Only process if not already resolved
+ if (!processKilled) {
+ // Process any remaining output
+ processRipgrepOutput(true)

- // Log non-zero exit codes but don't fail
- if (code !== 0 && code !== null && code !== 143 /* SIGTERM */) {
- console.warn(`ripgrep process exited with code ${code}, returning partial results`)
- }
+ // Log non-zero exit codes but don't fail
+ if (code !== 0 && code !== null && code !== 143 /* SIGTERM */) {
+ console.warn(`ripgrep process exited with code ${code}, returning partial results`)
+ }

- resolve(results.slice(0, limit))
+ resolve(results.slice(0, limit))
+ }
})

// Handle process errors
rgProcess.on("error", (error) => {
// Clear the timeout to avoid memory leaks
clearTimeout(timeoutId)
- reject(new Error(`ripgrep process error: ${error.message}`))
+ if (!processKilled) {
+ reject(new Error(`ripgrep process error: ${error.message}`))
+ }
})

// Helper function to process output buffer
18 changes: 6 additions & 12 deletions src/services/mcp/McpHub.ts
@@ -286,11 +286,11 @@ export class McpHub {
clearTimeout(existingTimer)
}

- // Set new timer
+ // Set new timer with longer debounce to reduce CPU usage
const timer = setTimeout(async () => {
this.configChangeDebounceTimers.delete(key)
await this.handleConfigFileChange(filePath, source)
- }, 500) // 500ms debounce
+ }, 1000) // Increased to 1000ms debounce to reduce CPU load

this.configChangeDebounceTimers.set(key, timer)
}
@@ -1033,15 +1033,14 @@ export class McpHub {
if (manageConnectingState) {
this.isConnecting = true
}
- this.removeAllFileWatchers()
// Filter connections by source
const currentConnections = this.connections.filter(
(conn) => conn.server.source === source || (!conn.server.source && source === "global"),
)
const currentNames = new Set(currentConnections.map((conn) => conn.server.name))
const newNames = new Set(Object.keys(newServers))

- // Delete removed servers
+ // Delete removed servers (this will also clean up their file watchers)

Contributor Author commented:
I notice the removeAllFileWatchers() call was removed here. The comment says "Delete removed servers (this will also clean up their file watchers)" and I see that deleteConnection() should handle the cleanup via removeFileWatchersForServer(). Just wanted to confirm this doesn't leave any orphaned watchers in edge cases?
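
Purely as an illustration of the ownership pattern being relied on here (simplified, not McpHub's actual code): watchers are tracked per server name, so deleting a connection disposes exactly that server's watchers and nothing should be orphaned as long as every removal path goes through deleteConnection.

```typescript
import * as vscode from "vscode"

class WatcherRegistrySketch {
	private fileWatchers: Map<string, vscode.Disposable[]> = new Map()

	removeFileWatchersForServer(serverName: string): void {
		for (const watcher of this.fileWatchers.get(serverName) ?? []) {
			watcher.dispose()
		}
		this.fileWatchers.delete(serverName)
	}

	async deleteConnection(serverName: string): Promise<void> {
		// Per-server cleanup here is what makes a blanket removeAllFileWatchers() redundant.
		this.removeFileWatchersForServer(serverName)
		// ...close the server's transport/client...
	}
}
```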

for (const name of currentNames) {
if (!newNames.has(name)) {
await this.deleteConnection(name, source)
@@ -1065,22 +1064,17 @@
if (!currentConnection) {
// New server
try {
- // Only setup file watcher for enabled servers
- if (!validatedConfig.disabled) {
- this.setupFileWatcher(name, validatedConfig, source)
- }
+ // connectToServer will handle file watcher setup for enabled servers
await this.connectToServer(name, validatedConfig, source)
} catch (error) {
this.showErrorMessage(`Failed to connect to new MCP server ${name}`, error)
}
} else if (!deepEqual(JSON.parse(currentConnection.server.config), config)) {
// Existing server with changed config
try {
- // Only setup file watcher for enabled servers
- if (!validatedConfig.disabled) {
- this.setupFileWatcher(name, validatedConfig, source)
- }
+ // Delete connection first (this cleans up old file watchers)
await this.deleteConnection(name, source)
+ // connectToServer will handle file watcher setup for enabled servers
await this.connectToServer(name, validatedConfig, source)
} catch (error) {
this.showErrorMessage(`Failed to reconnect MCP server ${name}`, error)