44 changes: 44 additions & 0 deletions apps/sim/app/api/mcp/workflow-servers/validate/route.ts
@@ -0,0 +1,44 @@
import { createLogger } from '@sim/logger'
import { NextResponse } from 'next/server'
import { getSession } from '@/lib/auth'
import { loadWorkflowFromNormalizedTables } from '@/lib/workflows/persistence/utils'
import { hasValidStartBlockInState } from '@/lib/workflows/triggers/trigger-utils'

const logger = createLogger('ValidateMcpWorkflowsAPI')

/**
 * POST /api/mcp/workflow-servers/validate
 * Validates if workflows have valid start blocks for MCP usage
 */
export async function POST(request: Request) {
  try {
    const session = await getSession()
    if (!session?.user?.id) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
    }

    const body = await request.json()
    const { workflowIds } = body

    if (!Array.isArray(workflowIds) || workflowIds.length === 0) {
      return NextResponse.json({ error: 'workflowIds must be a non-empty array' }, { status: 400 })
Comment on lines +20 to +24
Contributor

The request body parsing lacks type validation for workflowIds. If a malicious or buggy client sends workflowIds as a non-array value (e.g., {"workflowIds": "string"} or {"workflowIds": null}), the Array.isArray() check on line 23 will catch it, but there's no validation for:

1. Maximum array size: A client could send thousands of workflow IDs, causing performance issues or timeouts
2. Array element types: The code doesn't validate that each element in the array is a string

Recommendation: Add validation for array size and element types:

```typescript
if (!Array.isArray(workflowIds) || workflowIds.length === 0) {
  return NextResponse.json({ error: 'workflowIds must be a non-empty array' }, { status: 400 })
}

if (workflowIds.length > 100) {
  return NextResponse.json({ error: 'Maximum 100 workflow IDs allowed' }, { status: 400 })
}

if (!workflowIds.every(id => typeof id === 'string')) {
  return NextResponse.json({ error: 'All workflow IDs must be strings' }, { status: 400 })
}
```

    }

    const results: Record<string, boolean> = {}

    for (const workflowId of workflowIds) {
      try {
        const state = await loadWorkflowFromNormalizedTables(workflowId)
        results[workflowId] = hasValidStartBlockInState(state)
      } catch (error) {
        logger.warn(`Failed to validate workflow ${workflowId}:`, error)
        results[workflowId] = false
      }
Comment on lines +29 to +36
Contributor

Security vulnerability: The validation loop doesn't verify that the workflows belong to the authenticated user's workspace. A malicious user could validate workflows from other workspaces by sending arbitrary workflow IDs.

The vulnerability:

- loadWorkflowFromNormalizedTables(workflowId) (line 31) loads workflow data without any workspace authorization check
- An attacker could send workflow IDs from other users' workspaces
- While this only returns boolean validation results (not sensitive data), it still:
  1. Allows enumeration of workflow IDs across workspaces
  2. Leaks information about workflow structure (whether they have start blocks)
  3. Could enable timing attacks to infer workflow complexity

Fix: Add workspace validation before checking each workflow:

```typescript
for (const workflowId of workflowIds) {
  try {
    // Verify workflow belongs to user's workspace
    const [workflowRecord] = await db
      .select({ id: workflow.id, workspaceId: workflow.workspaceId })
      .from(workflow)
      .where(eq(workflow.id, workflowId))
      .limit(1)

    if (!workflowRecord || workflowRecord.workspaceId !== session.user.workspaceId) {
      logger.warn(`Unauthorized workflow access attempt: ${workflowId}`)
      results[workflowId] = false
      continue
    }

    const state = await loadWorkflowFromNormalizedTables(workflowId)
    results[workflowId] = hasValidStartBlockInState(state)
  } catch (error) {
    logger.warn(`Failed to validate workflow ${workflowId}:`, error)
    results[workflowId] = false
  }
}
```

Note: You'll need to verify the session object structure to get the correct workspace ID (might be session.user.activeWorkspaceId or require a separate query).

    }

    return NextResponse.json({ data: results })
  } catch (error) {
    logger.error('Failed to validate workflows for MCP:', error)
    return NextResponse.json({ error: 'Failed to validate workflows' }, { status: 500 })
  }
}
apps/sim/app/workspace/[workspaceId]/w/[workflowId]/components/panel/components/deploy/components/deploy-modal/components/mcp/mcp.tsx
@@ -147,6 +147,7 @@ export function McpDeploy({
})
const [parameterDescriptions, setParameterDescriptions] = useState<Record<string, string>>({})
const [pendingServerChanges, setPendingServerChanges] = useState<Set<string>>(new Set())
const [saveError, setSaveError] = useState<string | null>(null)

const parameterSchema = useMemo(
() => generateParameterSchema(inputFormat, parameterDescriptions),
@@ -173,7 +174,7 @@
[]
)

const selectedServerIds = useMemo(() => {
const actualServerIds = useMemo(() => {
const ids: string[] = []
for (const server of servers) {
const toolInfo = serverToolsMap[server.id]
@@ -184,6 +185,20 @@
return ids
}, [servers, serverToolsMap])

const [pendingSelectedServerIds, setPendingSelectedServerIds] = useState<string[] | null>(null)

const selectedServerIds = pendingSelectedServerIds ?? actualServerIds

useEffect(() => {
if (pendingSelectedServerIds !== null) {
const pendingSet = new Set(pendingSelectedServerIds)
const actualSet = new Set(actualServerIds)
if (pendingSet.size === actualSet.size && [...pendingSet].every((id) => actualSet.has(id))) {
setPendingSelectedServerIds(null)
}
}
}, [actualServerIds, pendingSelectedServerIds])
Comment on lines 193 to 202
Contributor

The effect that automatically clears pendingSelectedServerIds when it matches actualServerIds can create a race condition during save operations. If actualServerIds updates from the query invalidation (line 397) before all operations complete, this effect could clear pendingSelectedServerIds prematurely, causing the UI to show the wrong state.

Consider moving this logic inside handleSave after all operations complete successfully, or add a flag to prevent this effect from running during active save operations.
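For illustration, a minimal sketch of the flag option, assuming a new isSavingRef ref (the name and placement are assumptions, not part of this PR) that handleSave sets for the duration of the save:

```typescript
// Hypothetical guard ref; handleSave would set it to true before the
// add/remove/update loops and reset it in a finally block afterwards.
const isSavingRef = useRef(false)

useEffect(() => {
  // Skip the auto-clear while a save is in flight so an intermediate
  // refetch of actualServerIds cannot wipe the pending selection early.
  if (isSavingRef.current) return
  if (pendingSelectedServerIds !== null) {
    const pendingSet = new Set(pendingSelectedServerIds)
    const actualSet = new Set(actualServerIds)
    if (pendingSet.size === actualSet.size && [...pendingSet].every((id) => actualSet.has(id))) {
      setPendingSelectedServerIds(null)
    }
  }
}, [actualServerIds, pendingSelectedServerIds])
```

In handleSave, set isSavingRef.current = true at the top and reset it to false in a finally block once all operations have settled.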

const hasLoadedInitialData = useRef(false)

useEffect(() => {
@@ -241,7 +256,17 @@ export function McpDeploy({
}, [toolName, toolDescription, parameterDescriptions, savedValues])

const hasDeployedTools = selectedServerIds.length > 0

const hasServerSelectionChanges = useMemo(() => {
if (pendingSelectedServerIds === null) return false
const pendingSet = new Set(pendingSelectedServerIds)
const actualSet = new Set(actualServerIds)
if (pendingSet.size !== actualSet.size) return true
return ![...pendingSet].every((id) => actualSet.has(id))
}, [pendingSelectedServerIds, actualServerIds])

const hasChanges = useMemo(() => {
if (hasServerSelectionChanges && selectedServerIds.length > 0) return true
if (!savedValues || !hasDeployedTools) return false
if (toolName !== savedValues.toolName) return true
if (toolDescription !== savedValues.toolDescription) return true
Expand All @@ -251,7 +276,15 @@ export function McpDeploy({
return true
}
return false
}, [toolName, toolDescription, parameterDescriptions, hasDeployedTools, savedValues])
}, [
toolName,
toolDescription,
parameterDescriptions,
hasDeployedTools,
savedValues,
hasServerSelectionChanges,
selectedServerIds.length,
])

useEffect(() => {
onCanSaveChange?.(hasChanges && hasDeployedTools && !!toolName.trim())
@@ -262,45 +295,120 @@
}, [servers.length, onHasServersChange])

/**
* Save tool configuration to all deployed servers
* Save tool configuration to all selected servers.
* This handles both adding to new servers and updating existing tools.
*/
const handleSave = useCallback(async () => {
if (!toolName.trim()) return
if (selectedServerIds.length === 0) return

const toolsToUpdate: Array<{ serverId: string; toolId: string }> = []
for (const server of servers) {
const toolInfo = serverToolsMap[server.id]
if (toolInfo?.tool) {
toolsToUpdate.push({ serverId: server.id, toolId: toolInfo.tool.id })
}
}
onSubmittingChange?.(true)
setSaveError(null)

if (toolsToUpdate.length === 0) return
const actualSet = new Set(actualServerIds)
const toAdd = selectedServerIds.filter((id) => !actualSet.has(id))
const toRemove = actualServerIds.filter((id) => !selectedServerIds.includes(id))
const toUpdate = selectedServerIds.filter((id) => actualSet.has(id))

onSubmittingChange?.(true)
try {
for (const { serverId, toolId } of toolsToUpdate) {
await updateToolMutation.mutateAsync({
const errors: string[] = []

for (const serverId of toAdd) {
setPendingServerChanges((prev) => new Set(prev).add(serverId))
try {
await addToolMutation.mutateAsync({
workspaceId,
serverId,
toolId,
workflowId,
toolName: toolName.trim(),
toolDescription: toolDescription.trim() || undefined,
parameterSchema,
})
onAddedToServer?.()
logger.info(`Added workflow ${workflowId} as tool to server ${serverId}`)
} catch (error) {
const serverName = servers.find((s) => s.id === serverId)?.name || serverId
errors.push(`Failed to add to "${serverName}"`)
logger.error(`Failed to add tool to server ${serverId}:`, error)
} finally {
setPendingServerChanges((prev) => {
const next = new Set(prev)
next.delete(serverId)
return next
})
}
}

for (const serverId of toRemove) {
const toolInfo = serverToolsMap[serverId]
if (toolInfo?.tool) {
setPendingServerChanges((prev) => new Set(prev).add(serverId))
try {
await deleteToolMutation.mutateAsync({
workspaceId,
serverId,
toolId: toolInfo.tool.id,
})
setServerToolsMap((prev) => {
const next = { ...prev }
delete next[serverId]
return next
})
Comment on lines +354 to +358
Contributor

Manually updating serverToolsMap after a successful delete can cause state inconsistencies. The ServerToolsQuery components (lines 516-524) are continuously querying for tools and updating this same map via handleServerToolData. This creates a race condition where:

  1. Delete succeeds, local state updated to remove the tool
  2. Query refetch happens (line 397)
  3. Query results come back and ServerToolsQuery updates the map again
  4. If the backend hasn't fully processed the delete yet, the tool might reappear

The manual state update is unnecessary since refetchServers() (line 397) will trigger the queries to update naturally. Remove this manual update and rely on the query invalidation.
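For reference, a sketch of the removal loop from this diff with only the manual map update dropped, relying on the later refetchServers() call to repopulate the map (everything else is unchanged from the PR):

```typescript
for (const serverId of toRemove) {
  const toolInfo = serverToolsMap[serverId]
  if (toolInfo?.tool) {
    setPendingServerChanges((prev) => new Set(prev).add(serverId))
    try {
      await deleteToolMutation.mutateAsync({
        workspaceId,
        serverId,
        toolId: toolInfo.tool.id,
      })
      // No setServerToolsMap(...) here: refetchServers() invalidates the
      // queries and ServerToolsQuery repopulates the map from the backend.
    } catch (error) {
      const serverName = servers.find((s) => s.id === serverId)?.name || serverId
      errors.push(`Failed to remove from "${serverName}"`)
      logger.error(`Failed to remove tool from server ${serverId}:`, error)
    } finally {
      setPendingServerChanges((prev) => {
        const next = new Set(prev)
        next.delete(serverId)
        return next
      })
    }
  }
}
```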

} catch (error) {
const serverName = servers.find((s) => s.id === serverId)?.name || serverId
errors.push(`Failed to remove from "${serverName}"`)
logger.error(`Failed to remove tool from server ${serverId}:`, error)
} finally {
setPendingServerChanges((prev) => {
const next = new Set(prev)
next.delete(serverId)
return next
})
}
}
// Update saved values after successful save (triggers re-render → hasChanges becomes false)
}

for (const serverId of toUpdate) {
const toolInfo = serverToolsMap[serverId]
if (toolInfo?.tool) {
setPendingServerChanges((prev) => new Set(prev).add(serverId))
try {
await updateToolMutation.mutateAsync({
workspaceId,
serverId,
toolId: toolInfo.tool.id,
toolName: toolName.trim(),
toolDescription: toolDescription.trim() || undefined,
parameterSchema,
})
} catch (error) {
const serverName = servers.find((s) => s.id === serverId)?.name || serverId
errors.push(`Failed to update "${serverName}"`)
logger.error(`Failed to update tool on server ${serverId}:`, error)
} finally {
setPendingServerChanges((prev) => {
const next = new Set(prev)
next.delete(serverId)
return next
})
}
}
}

refetchServers()
Comment on line 397
Contributor

refetchServers() is called after all operations complete, but before checking if there were errors. This can cause the actualServerIds to update prematurely, triggering the auto-clear effect (lines 192-200) even when there are errors. This creates inconsistent UI state where pending changes are cleared but errors are shown.

Move refetchServers() inside the if (errors.length > 0) else branch at line 401, so it only refetches when all operations succeed.
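Concretely, a sketch of the tail of handleSave with only the position of refetchServers() changed (based on the code in this diff):

```typescript
if (errors.length > 0) {
  setSaveError(errors.join('. '))
} else {
  // Refetch only when every add/remove/update succeeded, so the
  // auto-clear effect never races against a partially applied save.
  refetchServers()
  setPendingSelectedServerIds(null)
  setSavedValues({
    toolName,
    toolDescription,
    parameterDescriptions: { ...parameterDescriptions },
  })
  onCanSaveChange?.(false)
}

onSubmittingChange?.(false)
```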

if (errors.length > 0) {
setSaveError(errors.join('. '))
} else {
setPendingSelectedServerIds(null)
setSavedValues({
toolName,
toolDescription,
parameterDescriptions: { ...parameterDescriptions },
})
onCanSaveChange?.(false)
onSubmittingChange?.(false)
} catch (error) {
logger.error('Failed to save tool configuration:', error)
onSubmittingChange?.(false)
}

onSubmittingChange?.(false)
}, [
toolName,
toolDescription,
@@ -309,9 +417,16 @@
servers,
serverToolsMap,
workspaceId,
workflowId,
selectedServerIds,
actualServerIds,
addToolMutation,
deleteToolMutation,
updateToolMutation,
refetchServers,
onSubmittingChange,
onCanSaveChange,
onAddedToServer,
])

const serverOptions: ComboboxOption[] = useMemo(() => {
@@ -321,83 +436,13 @@
}))
}, [servers])

const handleServerSelectionChange = useCallback(
async (newSelectedIds: string[]) => {
if (!toolName.trim()) return

const currentIds = new Set(selectedServerIds)
const newIds = new Set(newSelectedIds)

const toAdd = newSelectedIds.filter((id) => !currentIds.has(id))
const toRemove = selectedServerIds.filter((id) => !newIds.has(id))

for (const serverId of toAdd) {
setPendingServerChanges((prev) => new Set(prev).add(serverId))
try {
await addToolMutation.mutateAsync({
workspaceId,
serverId,
workflowId,
toolName: toolName.trim(),
toolDescription: toolDescription.trim() || undefined,
parameterSchema,
})
refetchServers()
onAddedToServer?.()
logger.info(`Added workflow ${workflowId} as tool to server ${serverId}`)
} catch (error) {
logger.error('Failed to add tool:', error)
} finally {
setPendingServerChanges((prev) => {
const next = new Set(prev)
next.delete(serverId)
return next
})
}
}

for (const serverId of toRemove) {
const toolInfo = serverToolsMap[serverId]
if (toolInfo?.tool) {
setPendingServerChanges((prev) => new Set(prev).add(serverId))
try {
await deleteToolMutation.mutateAsync({
workspaceId,
serverId,
toolId: toolInfo.tool.id,
})
setServerToolsMap((prev) => {
const next = { ...prev }
delete next[serverId]
return next
})
refetchServers()
} catch (error) {
logger.error('Failed to remove tool:', error)
} finally {
setPendingServerChanges((prev) => {
const next = new Set(prev)
next.delete(serverId)
return next
})
}
}
}
},
[
selectedServerIds,
serverToolsMap,
toolName,
toolDescription,
workspaceId,
workflowId,
parameterSchema,
addToolMutation,
deleteToolMutation,
refetchServers,
onAddedToServer,
]
)
/**
* Handle server selection change - only updates local state.
* Actual add/remove operations happen when user clicks Save.
*/
const handleServerSelectionChange = useCallback((newSelectedIds: string[]) => {
setPendingSelectedServerIds(newSelectedIds)
}, [])

const selectedServersLabel = useMemo(() => {
const count = selectedServerIds.length
@@ -563,11 +608,7 @@
)}
</div>

{addToolMutation.isError && (
<p className='mt-[6.5px] text-[12px] text-[var(--text-error)]'>
{addToolMutation.error?.message || 'Failed to add tool'}
</p>
)}
{saveError && <p className='mt-[6.5px] text-[12px] text-[var(--text-error)]'>{saveError}</p>}
</form>
)
}