Conversation

rohan-chidurala
Contributor

@rohan-chidurala rohan-chidurala commented Sep 10, 2025

Description

Calls a clone website tool, which returns:

{
markdownFile:
htmlfile:
a screenshot of the whole page
a design doc (describes what section on the page should look like)
assets from the website
}

I found pretty good results using this method; it was able to clone the Netflix landing page:
[Screenshot: cloned Netflix landing page]

It was also able to clone my personal portfolio pretty well.

I just wanted to put this out there as an option for cloning a site. I think it needs a bit more work in terms of the general implementation, but other than that I think it works really well.

#2669

Type of Change

  • [ ] Bug fix
  • [x] New feature
  • [ ] Documentation update
  • [ ] Release
  • [ ] Refactor
  • [ ] Other (please describe):

Testing

Screenshots (if applicable)

Additional Notes


Important

This PR adds a new feature for cloning websites, including a new tool, API handling, and schema definitions for processing and returning cloned website data.

  • Behavior:
    • Adds CLONE_WEBSITE_TOOL_NAME and CLONE_WEBSITE_TOOL_PARAMETERS to tool-call-simple.tsx, web.ts, and tools.ts.
    • Implements handleCloneWebsiteTool() in web.ts to process website cloning requests.
    • Adds cloneWebsite mutation in code.ts to handle API requests for cloning websites.
  • Models:
    • Introduces CloneWebsiteResultSchema and DesignSchema in index.ts to define the structure of cloning results and design documents.
  • Prompts:
    • Adds CLONE_WEBSITE_DESIGN_PROMPT in clone.ts for generating design documents from cloned websites.
    • Updates CREATE_NEW_PAGE_SYSTEM_PROMPT in create.ts to include conditions for using the clone website tool.
  • Misc:
    • Updates toolset.ts to include cloneWebsiteTool in BUILD_TOOL_SET.
    • Adds cloneWebsiteTool definition in web.ts with input schema and description.

This description was created by Ellipsis for ab716ae. You can customize this summary. It will automatically update as commits are pushed.

Summary by CodeRabbit

  • New Features
    • Introduced a Clone Website tool that clones a provided URL and returns HTML, Markdown, a reference screenshot, a design document, and assets.
    • Added a cloneWebsite API endpoint and integrated the tool into the build toolset and tool handling.
    • Automatically downloads discovered assets into the project (per branch) when available.
    • Chat UI shows a globe icon and hostname-based labels (e.g., “Cloning example.com”).
    • Updated AI prompts to trigger cloning when users request it with a valid URL.


vercel bot commented Sep 10, 2025

@rohan-chidurala is attempting to deploy a commit to the Onlook Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Sep 10, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds a new clone_website tool across AI prompts, toolsets, models, client handlers, UI labeling, and a server API mutation. Implements scraping via Firecrawl, design document generation via model inference, result typing/validation, asset downloading into a branch-specific directory, and integrates dispatch/handling in the client tool pipeline.

Changes

Cohort / File(s) Summary
UI: Tool label and icon
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/tool-call-simple.tsx
Imports CLONE_WEBSITE tool name/parameters, adds globe icon mapping, and extends label logic to derive a "Cloning <hostname>" label from input.url with safe fallbacks.
Client handler: Clone website
apps/web/client/src/components/tools/handlers/web.ts
Exports handleCloneWebsiteTool: calls api.code.cloneWebsite.mutate({ url }), validates response, downloads assets into public/cloned-assets/, annotates fileLocation, and returns CloneWebsiteResult; includes error handling.
Client tool registry/dispatch
apps/web/client/src/components/tools/tools.ts
Registers CLONE_WEBSITE_TOOL_NAME with CLONE_WEBSITE_TOOL_PARAMETERS and wires to handleCloneWebsiteTool in TOOL_HANDLERS.
Server API: Code router
apps/web/client/src/server/api/routers/code.ts
Adds protectedProcedure mutation cloneWebsite(input: { url }): Promise<CloneWebsiteResult>; scrapes HTML/Markdown/screenshot, extracts/dedupes assets, generates designDocument, and returns a structured result with errors handled.
AI prompts
packages/ai/src/prompt/clone.ts
Adds CLONE_WEBSITE_DESIGN_PROMPT guiding pixel-accurate design document generation from HTML/Markdown/screenshot.
AI system prompt update
packages/ai/src/prompt/create.ts
Extends CREATE_NEW_PAGE_SYSTEM_PROMPT with cloning_instructions: when user provides a concrete URL and intent to clone, call clone_website tool.
AI tools: Definition
packages/ai/src/tools/tools/web.ts
Adds CLONE_WEBSITE_TOOL_NAME, CLONE_WEBSITE_TOOL_PARAMETERS (url, branchId), and cloneWebsiteTool with description and input schema.
AI toolset registry
packages/ai/src/tools/toolset.ts
Adds [CLONE_WEBSITE_TOOL_NAME]: cloneWebsiteTool to BUILD_TOOL_SET.
Models: Result schemas
packages/models/src/tools/index.ts
Adds DesignSchema and CloneWebsiteResultSchema; exports CloneWebsiteResult type for API/tooling validation and typing.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    actor U as User
    participant UI as Client UI (Chat/Tools)
    participant TD as Tool Dispatcher
    participant TH as handleCloneWebsiteTool
    participant API as code.cloneWebsite (Server)
    participant SCR as Firecrawl
    participant LLM as Model (Design Generation)
    participant FS as Sandbox FS (public/cloned-assets)

    U->>UI: Request to clone website (URL provided)
    UI->>TD: Tool call: clone_website { url, branchId }
    TD->>TH: Invoke handler(args, editorEngine)
    TH->>API: mutate({ url })
    API->>SCR: Scrape HTML/Markdown/Screenshot
    SCR-->>API: { html, markdown, screenshot, assets }
    API->>LLM: Generate designDocument (prompt + inputs)
    LLM-->>API: designDocument
    API-->>TH: CloneWebsiteResult { result | error }

    alt Success
      TH->>FS: Ensure target dir (branch)
      loop For each asset
        TH->>FS: Download asset (curl) -> fileLocation
      end
      TH-->>TD: Result with fileLocation for assets
      TD-->>UI: Tool result delivered
      UI-->>U: Display cloning outcome
    else Error
      TH-->>TD: Error in result or thrown
      TD-->>UI: Surface error
      UI-->>U: Show error message
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • Kitenite

Poem

I hop through code with whiskers bright,
A globe in paw, I clone the site—
Pixels lined, a perfect twin,
Assets tucked neatly in.
From scrape to style, swift I steer,
Thump! The clone is crystal-clear. 🐰🌐

Pre-merge checks (1 passed, 2 warnings)

❌ Failed checks (2 warnings)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
  • Description Check ⚠️ Warning: The pull request description includes a clear Description section and correctly marks the change as a new feature, but it does not populate the Related Issues section with GitHub keywords, leaves the Testing section empty, does not move the screenshot into the designated Screenshots section, and omits Additional Notes. Resolution: link the issue (for example, "closes #2669"), describe the tests performed or steps to verify the new tool under Testing, add the screenshot with a caption under Screenshots, and include any further context or caveats under Additional Notes.
✅ Passed checks (1 passed)
  • Title Check ✅ Passed: The title clearly summarizes the main change as a website cloning feature, follows the conventional commit style, and directly reflects the primary addition without extraneous details.

@rohan-chidurala rohan-chidurala changed the title from "feat: cloning website initial" to "feat: cloning website" on Sep 10, 2025
const params13 = toolInvocation.input as z.infer<typeof CLONE_WEBSITE_TOOL_PARAMETERS>;
if (params13?.url) {
try {
const url = new URL(params13.url);

In the clone_website case, the URL parsing logic is duplicated. Consider refactoring it into a utility function for consistency and maintainability.
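A small helper along these lines (the name and fallback string are illustrative, not from the PR) would let both call sites share the parsing:

```typescript
// Hypothetical utility that centralizes the duplicated URL-label logic.
function cloningLabel(rawUrl: string | undefined): string {
    if (!rawUrl) return 'Cloning website';
    try {
        const { hostname } = new URL(rawUrl);
        return hostname ? `Cloning ${hostname}` : 'Cloning website';
    } catch {
        // Invalid URLs fall back to the generic label instead of throwing.
        return 'Cloning website';
    }
}
```

Putting this in a shared module also removes the need for per-case `params` locals in the switch.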

const filename = safeBase + '.png';
const dest = `${baseDir}${filename}`;
try {
const download = await sandbox.session.runCommand(`curl -L --silent --fail --show-error "${asset.url}" -o "${dest}"`);

The use of string interpolation in the curl command may pose a shell injection risk. Ensure that asset.url and dest are properly sanitized or escaped.
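If the command must still go through a shell, one minimal mitigation is POSIX single-quote escaping, so untrusted values cannot trigger variable expansion or command substitution (a sketch; spawning without a shell or fetching server-side is still preferable):

```typescript
// Wrap a value in single quotes and escape embedded single quotes, so the
// shell treats the whole value as literal text (no $VAR, no $(...)).
function shellQuote(value: string): string {
    return `'${value.replace(/'/g, `'\\''`)}'`;
}
```

For example, `curl -L ${shellQuote(asset.url)} -o ${shellQuote(dest)}` keeps a URL containing `$(...)` inert.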

}
await sandbox.session.runCommand(`mkdir -p ${baseDir}`);

for (const asset of assets) {

Consider parallelizing the asset downloads (e.g. using Promise.all) to improve performance when there are many assets.
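A sketch of that refactor, using Promise.allSettled so one failed download does not reject the whole batch (`downloadAsset` is a stand-in for the existing sandbox curl call):

```typescript
// Download assets concurrently instead of awaiting each one in turn.
// For very large asset lists a bounded pool would be safer than full fan-out.
async function downloadAll<T extends { url: string }>(
    assets: T[],
    downloadAsset: (asset: T) => Promise<void>,
): Promise<void> {
    // allSettled keeps one failed download from rejecting the whole batch.
    const results = await Promise.allSettled(assets.map((asset) => downloadAsset(asset)));
    results.forEach((result, i) => {
        if (result.status === 'rejected') {
            console.warn(`Failed to download asset ${assets[i]?.url}:`, result.reason);
        }
    });
}
```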

const app = new FirecrawlApp({ apiKey: env.FIRECRAWL_API_KEY });

// Scrape the website with screenshot to get visual content
const result = await app.scrapeUrl(input.url, {

The two separate scrapeUrl calls run sequentially. Consider running them concurrently with Promise.all to reduce overall latency.
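A sketch of the concurrent version; `scrape` stands in for app.scrapeUrl, and the format options shown are assumptions rather than the PR's exact arguments:

```typescript
type Scrape = (url: string, opts: { formats: string[] }) => Promise<unknown>;

// Start both scrapes before awaiting either, so the two requests overlap
// instead of running back to back.
async function scrapeConcurrently(scrape: Scrape, url: string) {
    const [visual, content] = await Promise.all([
        scrape(url, { formats: ['screenshot'] }),
        scrape(url, { formats: ['html', 'markdown'] }),
    ]);
    return { visual, content };
}
```

When the two scrapes take similar time, this roughly halves the wall-clock latency of the mutation.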

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

🧹 Nitpick comments (6)
packages/ai/src/prompt/clone.ts (1)

1-11: Consider adding length constraints and error handling guidance to the prompt.

The prompt is well-structured for generating detailed design documentation. However, consider adding:

  1. Maximum section count or description length guidance to prevent excessive token usage
  2. Fallback behavior when screenshots are unclear or sections overlap
 export const CLONE_WEBSITE_DESIGN_PROMPT = `You are an expert web designer and UX specialist.
 
 Given the HTML, Markdown, and a screenshot of a web page, analyze the entire page from top to bottom, starting at the very top of the screenshot and continuing all the way to the bottom. Do not miss any section—your goal is to create a complete and exhaustive design document that is as accurate as possible, down to every single pixel.
 
 Break down the page into a dynamic list of sections, ordered from top to bottom as they appear visually. For each section, provide:
 - "type": the section type (e.g., "navBar", "hero", "footer", "sidebar", etc.)
-- "description": a highly accurate, detailed explanation of the section's content, purpose, and visual appearance. Be specific about layout, spacing, alignment, colors, typography, and any unique style details. Ensure your description is as precise as possible and reflects the exact look and feel of the section, with pixel-level accuracy.
+- "description": a highly accurate, detailed explanation (max 500 chars) of the section's content, purpose, and visual appearance. Be specific about layout, spacing, alignment, colors, typography, and any unique style details. Ensure your description is as precise as possible and reflects the exact look and feel of the section, with pixel-level accuracy.
 - "styles": a concise summary of the key CSS styles or visual properties that define this section (e.g., background color, font size, padding, margin, border, flex/grid usage, etc.). Focus on what makes the section pixel perfect.
 
+If a section is unclear in the screenshot or HTML, provide your best interpretation based on available data.
 
 Return your analysis as a JSON object with a "sections" array. Do not include any other text or commentary. Only return the JSON object.`;
packages/ai/src/prompt/create.ts (1)

8-15: Consider clarifying the URL requirement message.

The instructions are clear, but the error message when URL is missing could be more user-friendly.

 <cloning_instructions>
 - Conditions for using the clone_website tool: 
   - The user request is specifically to clone a website
   - The user query explicitly mentions a relevant keyword such as "clone"
-  - The user query MUST explicitly mentions a concrete website URL. Even if the user request is to clone a website, if the user query does not explicitly mention a concrete website URL, you must ask the user to provide a concrete website URL.
+  - The user query MUST explicitly mention a concrete website URL. Even if the user request is to clone a website, if the user query does not explicitly mention a concrete website URL, you must ask the user: "Please provide the URL of the website you'd like to clone."
 - If the above conditions are met, immediately call the clone_website tool with that website_url
 - IMPORTANT: The clone_website tool must be about creating a pixel perfect clone of the website that is related to the original user request.
 </cloning_instructions>
apps/web/client/src/components/tools/handlers/web.ts (4)

53-60: Prefer concise destructuring for args

Minor readability tweak; avoids separate locals.

-    // Store args in function scope for error handling
-    const requestUrl = args.url;
-    const branchId = args.branchId;
+    // Store args in function scope for error handling
+    const { url: requestUrl, branchId } = args;

76-88: Return a partial-success error when sandbox is missing

Today this returns error: null, which can mask the fact that no assets were downloaded.

             return {
                 result: {
                     markdown: markdown,
                     html: html,
                     designScreenshot: designScreenshot,
                     designDocument: designDocument,
                     assets: assets,
                 },
-                error: null,
+                error: `Sandbox not found for branch ${branchId}. Assets not downloaded locally.`,
             };

Confirm the UI expects this behavior; if not, consider surfacing a non-blocking toast as well.


61-70: Consistency of error handling across tool handlers

Other handlers either return { error } or throw. Decide on a single convention for client handlers and align handleCloneWebsiteTool accordingly.


73-76: Move downloads server-side or through a vetted fetch layer

Running curl in a sandbox still exposes SSRF surfaces and makes policy enforcement harder (CIDR blocks, DNS rebinding, content length caps). Prefer downloading on the server with IP allow/block lists, max size limits, and content-type checks; then persist into the branch-scoped dir.

Also applies to: 91-101
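A minimal sketch of such a server-side pre-download guard (hypothetical helper, not in the PR; the blocked ranges are illustrative, and a hardened version would also resolve DNS and re-check the resolved address against rebinding, plus cap response size and content type on the actual fetch):

```typescript
import { isIP } from 'node:net';

// Reject URLs that should never be fetched on behalf of a user: non-HTTP
// schemes, localhost, and common private/loopback/link-local IPv4 ranges.
function isFetchAllowed(rawUrl: string): boolean {
    let url: URL;
    try {
        url = new URL(rawUrl);
    } catch {
        return false;
    }
    if (url.protocol !== 'http:' && url.protocol !== 'https:') return false;
    const host = url.hostname;
    if (host === 'localhost' || host === '[::1]' || host.endsWith('.internal')) return false;
    if (isIP(host)) {
        // Block loopback, link-local, and RFC 1918 private ranges (illustrative list).
        if (/^(0\.|127\.|10\.|192\.168\.|169\.254\.)/.test(host)) return false;
        if (/^172\.(1[6-9]|2\d|3[01])\./.test(host)) return false;
    }
    return true;
}
```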

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4e87122 and ab716ae.

📒 Files selected for processing (9)
  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/tool-call-simple.tsx (3 hunks)
  • apps/web/client/src/components/tools/handlers/web.ts (2 hunks)
  • apps/web/client/src/components/tools/tools.ts (3 hunks)
  • apps/web/client/src/server/api/routers/code.ts (2 hunks)
  • packages/ai/src/prompt/clone.ts (1 hunks)
  • packages/ai/src/prompt/create.ts (1 hunks)
  • packages/ai/src/tools/tools/web.ts (2 hunks)
  • packages/ai/src/tools/toolset.ts (2 hunks)
  • packages/models/src/tools/index.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.tsx

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.tsx: In React providers managing MobX stores, create the observable store with useState(() => new Store()) to ensure a stable instance
Keep a ref (e.g., storeRef.current) to the MobX store to avoid stale closures in effects
Use setTimeout(() => store.clear(), 0) for delayed cleanup of MobX stores to avoid race conditions
Separate project changes from branch updates by using proper effect dependency arrays
Do not use useMemo to hold MobX observable instances; React may drop memoized values causing data loss
Do not clean up MobX stores synchronously during navigation; perform delayed cleanup instead
Do not include the MobX store instance in effect dependency arrays when it causes infinite loops

Files:

  • apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/tool-call-simple.tsx
🧬 Code graph analysis (5)
apps/web/client/src/components/tools/handlers/web.ts (2)
packages/ai/src/tools/tools/web.ts (1)
  • CLONE_WEBSITE_TOOL_PARAMETERS (50-53)
packages/models/src/tools/index.ts (1)
  • CloneWebsiteResult (72-72)
apps/web/client/src/server/api/routers/code.ts (5)
apps/web/client/src/server/api/trpc.ts (1)
  • protectedProcedure (130-149)
packages/models/src/tools/index.ts (2)
  • CloneWebsiteResult (72-72)
  • DesignSchema (14-32)
apps/web/client/src/env.ts (1)
  • env (4-149)
packages/ai/src/chat/providers.ts (1)
  • initModel (14-51)
packages/ai/src/prompt/clone.ts (1)
  • CLONE_WEBSITE_DESIGN_PROMPT (1-11)
packages/ai/src/tools/toolset.ts (1)
packages/ai/src/tools/tools/web.ts (2)
  • CLONE_WEBSITE_TOOL_NAME (49-49)
  • cloneWebsiteTool (54-58)
apps/web/client/src/components/tools/tools.ts (2)
packages/ai/src/tools/tools/web.ts (2)
  • CLONE_WEBSITE_TOOL_NAME (49-49)
  • CLONE_WEBSITE_TOOL_PARAMETERS (50-53)
apps/web/client/src/components/tools/handlers/web.ts (1)
  • handleCloneWebsiteTool (53-125)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/tool-call-simple.tsx (1)
packages/ai/src/tools/tools/web.ts (2)
  • CLONE_WEBSITE_TOOL_NAME (49-49)
  • CLONE_WEBSITE_TOOL_PARAMETERS (50-53)
🪛 Biome (2.1.2)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/tool-call-simple.tsx

[error] 199-199: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)

🔇 Additional comments (8)
packages/ai/src/tools/toolset.ts (1)

39-40: LGTM! Tool registration is correct.

The clone website tool is properly imported and registered in the BUILD_TOOL_SET, following the existing pattern for other tools.

Also applies to: 66-66

packages/ai/src/tools/tools/web.ts (1)

49-58: LGTM! Tool definition follows existing patterns.

The clone website tool is well-defined with appropriate validation using Zod schemas and includes the required branchId parameter.

packages/ai/src/prompt/create.ts (1)

4-6: Good addition of clone workflow trigger.

The integration properly detects when to use the clone_website tool.

packages/models/src/tools/index.ts (2)

14-32: LGTM! Well-structured schema definitions.

The DesignSchema is properly structured with detailed field descriptions that align well with the clone website prompt requirements.


34-72: LGTM! Comprehensive result schema with proper error handling.

The CloneWebsiteResultSchema correctly handles both success and error cases with nullable fields and includes all necessary data for the clone operation.

apps/web/client/src/components/tools/tools.ts (1)

180-185: LGTM! Tool handler properly integrated.

The clone website tool handler is correctly registered following the existing pattern.

apps/web/client/src/components/tools/handlers/web.ts (2)

4-6: LGTM on added tool parameter imports

These imports align with the new tool and keep types local to the module.


7-7: LGTM on result type imports

The explicit CloneWebsiteResult and WebSearchResult imports make return shapes clear.

Comment on lines +198 to +209
case CLONE_WEBSITE_TOOL_NAME:
const params13 = toolInvocation.input as z.infer<typeof CLONE_WEBSITE_TOOL_PARAMETERS>;
if (params13?.url) {
try {
const url = new URL(params13.url);
return 'Cloning ' + (url.hostname || 'website');
} catch (error) {
return 'Cloning website';
}
} else {
return 'Cloning website';
}

⚠️ Potential issue

Fix the switch case declaration issue.

The linter correctly identifies that params13 can be accessed by other switch cases. Wrap the declaration in a block.

 case CLONE_WEBSITE_TOOL_NAME:
+    {
     const params13 = toolInvocation.input as z.infer<typeof CLONE_WEBSITE_TOOL_PARAMETERS>;
     if (params13?.url) {
         try {
             const url = new URL(params13.url);
             return 'Cloning ' + (url.hostname || 'website');
         } catch (error) {
             return 'Cloning website';
         }
     } else {
         return 'Cloning website';
     }
+    }

🤖 Prompt for AI Agents
In
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/tool-call-simple.tsx
around lines 198 to 209, the const params13 declared inside the
CLONE_WEBSITE_TOOL_NAME case is leaking into other switch cases; wrap the case
body in its own block so the declaration is scoped locally (i.e., add an opening
{ immediately after the case label and a closing } before the case's
return/break), keep the existing try/catch/returns unchanged, and ensure no
extra fall-through occurs.

Comment on lines +74 to +76
const baseDir = `public/cloned-assets/`;
const sandbox = editorEngine.branches.getSandboxById(branchId);
if (!sandbox) {

🛠️ Refactor suggestion

Make asset directory branch-scoped and quote paths

Avoid cross-branch collisions and ensure safe path handling.

-        // Download assets into public/cloned-assets/
-        const baseDir = `public/cloned-assets/`;
+        // Download assets into public/cloned-assets/<branchId>/
+        const baseDir = `public/cloned-assets/${branchId}/`;
@@
-        await sandbox.session.runCommand(`mkdir -p ${baseDir}`);
+        await sandbox.session.runCommand(`mkdir -p "${baseDir}"`);

Also applies to: 89-89

🤖 Prompt for AI Agents
In apps/web/client/src/components/tools/handlers/web.ts around lines 74-76 (and
also at line 89), the asset directory is not branch-scoped and paths are
constructed unsafely; update baseDir to be branch-scoped (e.g. include a
sanitized/validated branchId segment such as public/cloned-assets/<branchId>/)
and replace raw string concatenation with safe path construction (use path.join
or equivalent) to prevent path traversal and collisions; ensure any path
segments are properly quoted/escaped when used in shell or file APIs and
validate/sanitize branchId to allow only safe characters.

Comment on lines +91 to +105
for (const asset of assets) {

const rawBase = asset.title;
const safeBase = rawBase.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9._-]/g, '-');
const filename = safeBase + '.png';
const dest = `${baseDir}${filename}`;
try {
const download = await sandbox.session.runCommand(`curl -L --silent --fail --show-error "${asset.url}" -o "${dest}"`);
if (download.success) {
// Attach saved location (relative public path) for UI usage
(asset as any).fileLocation = dest;
} else {
console.log('download failed', download.error);
console.warn(`Failed to download asset ${asset.url}:`, download.error);
}

🛠️ Refactor suggestion

⚠️ Potential issue

Critical: command injection & SSRF risk in curl; also fix forced .png extension and return a web path

  • Interpolating an untrusted URL inside double quotes allows command substitution (e.g., $(...)) → RCE.
  • No scheme validation → SSRF vector (internal addresses, file://, etc.).
  • All assets saved as .png → broken asset types.
  • fileLocation should be a public web path (strip leading public/).
-        for (const asset of assets) {
-            
-            const rawBase = asset.title;
-            const safeBase = rawBase.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9._-]/g, '-');
-            const filename = safeBase + '.png';
-            const dest = `${baseDir}${filename}`;
-            try {
-                const download = await sandbox.session.runCommand(`curl -L --silent --fail --show-error "${asset.url}" -o "${dest}"`);
-                if (download.success) {
-                    // Attach saved location (relative public path) for UI usage
-                    (asset as any).fileLocation = dest;
-                } else {
-                    console.log('download failed', download.error);
-                    console.warn(`Failed to download asset ${asset.url}:`, download.error);
-                }
-            } catch (error) {
-                console.warn(`Failed to download asset ${asset.url}:`, error);
-            }
-        }
+        for (const asset of assets) {
+            // Validate and normalize URL
+            let parsed: URL;
+            try {
+                parsed = new URL(asset.url);
+            } catch {
+                console.warn('Skipping asset with invalid URL:', asset?.url);
+                continue;
+            }
+            if (!/^https?:$/.test(parsed.protocol)) {
+                console.warn('Skipping non-http(s) asset:', asset.url);
+                continue;
+            }
+
+            // Safe filename (bounded length)
+            const rawBase = (asset.title || parsed.pathname.split('/').pop() || 'asset').slice(0, 64);
+            const safeBase = rawBase.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9._-]/g, '-');
+            const last = parsed.pathname.split('/').pop() || '';
+            const idx = last.lastIndexOf('.');
+            const extFromPath = idx > -1 ? last.slice(idx + 1) : '';
+            const ext = /^[a-zA-Z0-9]+$/.test(extFromPath) ? extFromPath : 'bin';
+            const filename = `${safeBase}.${ext}`;
+            const dest = `${baseDir}${filename}`;
+
+            // Shell-safe quoting: single-quote and escape existing single quotes
+            const quotedUrl = `'${asset.url.replace(/'/g, `'\\''`)}'`;
+            const quotedDest = `'${dest.replace(/'/g, `'\\''`)}'`;
+            const cmd = `curl -L --silent --fail --show-error --connect-timeout 5 --max-time 30 --proto '=http,https' --proto-redir '=http,https' ${quotedUrl} -o ${quotedDest}`;
+
+            try {
+                const download = await sandbox.session.runCommand(cmd);
+                if (download.success) {
+                    // Attach served location (strip "public/")
+                    const webPath = dest.startsWith('public/') ? `/${dest.slice('public/'.length)}` : dest;
+                    (asset as any).fileLocation = webPath;
+                } else {
+                    console.warn(`Failed to download asset ${asset.url}:`, download.error);
+                }
+            } catch (error) {
+                console.warn(`Failed to download asset ${asset.url}:`, error);
+            }
+        }

Follow-ups:

  • Consider deduping filenames (e.g., suffix collisions or hash by URL).
  • If assets can include CSS/JS/fonts, optionally restrict extensions to an allowlist and skip others.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/web/client/src/components/tools/handlers/web.ts around lines 91-105, the
code currently interpolates untrusted asset.url into a shell command (risking
command injection and SSRF), forces a .png extension, and stores a file system
path instead of a public web path; to fix, stop invoking curl via shell
interpolation and instead fetch assets using a safe HTTP client or spawn a
subprocess with arguments (no shell), validate and allow only safe schemes
(http/https) and disallow internal/loopback IPs/hostnames to mitigate SSRF,
determine the correct extension from the response Content-Type or the URL path
(and apply an allowlist if needed), ensure filenames are sanitized and
deduplicated (e.g., append a short hash of the URL), write to the filesystem
under the public dir, then set fileLocation to the web-accessible path by
stripping the leading public/ prefix.

Comment on lines +11 to +12
import { CLONE_WEBSITE_DESIGN_PROMPT } from '@onlook/ai/src/prompt/clone';
import { DesignSchema } from '@onlook/models/tools';

💡 Verification agent

🧩 Analysis chain

Fix the import path for CLONE_WEBSITE_DESIGN_PROMPT.

The import uses a direct path instead of the package export. This could break if the internal structure changes.

-import { CLONE_WEBSITE_DESIGN_PROMPT } from '@onlook/ai/src/prompt/clone';
-import { DesignSchema } from '@onlook/models/tools';
+import { CLONE_WEBSITE_DESIGN_PROMPT } from '@onlook/ai';
+import { DesignSchema } from '@onlook/models';

🏁 Script executed:

#!/bin/bash
# Check if these exports are available from the package roots
rg -n "export.*CLONE_WEBSITE_DESIGN_PROMPT" packages/ai/src/index.ts packages/ai/index.ts
rg -n "export.*DesignSchema" packages/models/src/index.ts packages/models/index.ts

Length of output: 125


🏁 Script executed:

#!/usr/bin/env bash
set -e

echo "Root directories:"
ls -1 .

echo -e "\nApps directory (apps/*):"
ls -1 apps

echo -e "\nPackages directory (packages/*):"
ls -1 packages || echo "No packages directory at root"

echo -e "\nSearching for @onlook/ai in package.json:"
rg -n '"name"\s*:\s*"@onlook/ai"' -g "**/package.json"

echo -e "\nSearching for @onlook/models in package.json:"
rg -n '"name"\s*:\s*"@onlook/models"' -g "**/package.json"

AI_PKG_DIR=$(dirname "$(rg -l '"name"\s*:\s*"@onlook/ai"' -g "**/package.json" | head -n1)")
MODELS_PKG_DIR=$(dirname "$(rg -l '"name"\s*:\s*"@onlook/models"' -g "**/package.json" | head -n1)")

echo -e "\nAI package directory: $AI_PKG_DIR"
echo "Models package directory: $MODELS_PKG_DIR"

echo -e "\nListing index files in AI package:"
find "$AI_PKG_DIR" -type f \( -name "index.ts" -o -path "*/src/index.ts" \)

echo -e "\nListing index files in Models package:"
find "$MODELS_PKG_DIR" -type f \( -name "index.ts" -o -path "*/src/index.ts" \)

echo -e "\nSearching for export of CLONE_WEBSITE_DESIGN_PROMPT in AI index file(s):"
rg -n "export.*CLONE_WEBSITE_DESIGN_PROMPT" $(find "$AI_PKG_DIR" -type f \( -name "index.ts" -o -path "*/src/index.ts" \) )

echo -e "\nSearching for export of DesignSchema in Models index file(s):"
rg -n "export.*DesignSchema" $(find "$MODELS_PKG_DIR" -type f \( -name "index.ts" -o -path "*/src/index.ts" \) )

Length of output: 440


🏁 Script executed:

#!/usr/bin/env bash
set -e

echo "=== packages/ai/package.json ==="
cat packages/ai/package.json

echo -e "\n=== packages/models/package.json ==="
cat packages/models/package.json

echo -e "\n=== files under packages/ai/src/prompt ==="
if [ -d packages/ai/src/prompt ]; then
  find packages/ai/src/prompt -type f
else
  echo "No packages/ai/src/prompt directory"
fi

echo -e "\n=== definition of CLONE_WEBSITE_DESIGN_PROMPT in ai package ==="
rg -n "CLONE_WEBSITE_DESIGN_PROMPT" -g "packages/ai/**" || true

echo -e "\n=== packages/ai/src/index.ts (first 200 lines) ==="
if [ -f packages/ai/src/index.ts ]; then
  sed -n '1,200p' packages/ai/src/index.ts
else
  echo "packages/ai/src/index.ts not found"
fi

echo -e "\n=== packages/ai/index.ts (first 200 lines) ==="
if [ -f packages/ai/index.ts ]; then
  sed -n '1,200p' packages/ai/index.ts
else
  echo "packages/ai/index.ts not found"
fi

echo -e "\n=== files under packages/models/src/tools ==="
if [ -d packages/models/src/tools ]; then
  find packages/models/src/tools -type f
else
  echo "No packages/models/src/tools directory"
fi

echo -e "\n=== definition of DesignSchema in models package ==="
rg -n "DesignSchema" -g "packages/models/**" || true

echo -e "\n=== packages/models/src/index.ts (first 200 lines) ==="
if [ -f packages/models/src/index.ts ]; then
  sed -n '1,200p' packages/models/src/index.ts
else
  echo "packages/models/src/index.ts not found"
fi

echo -e "\n=== packages/models/index.ts (first 200 lines) ==="
if [ -f packages/models/index.ts ]; then
  sed -n '1,200p' packages/models/index.ts
else
  echo "packages/models/index.ts not found"
fi

Length of output: 4454


Use package root exports instead of deep imports

-import { CLONE_WEBSITE_DESIGN_PROMPT } from '@onlook/ai/src/prompt/clone';
-import { DesignSchema } from '@onlook/models/tools';
+import { CLONE_WEBSITE_DESIGN_PROMPT } from '@onlook/ai';
+import { DesignSchema } from '@onlook/models';

Benefits: guards against internal restructuring and aligns with package export contracts.

🤖 Prompt for AI Agents
In apps/web/client/src/server/api/routers/code.ts around lines 11-12 the file
uses deep imports ('@onlook/ai/src/prompt/clone' and '@onlook/models/tools');
change these to use the package root exports (e.g., import
CLONE_WEBSITE_DESIGN_PROMPT from '@onlook/ai' and DesignSchema from
'@onlook/models') so consumers rely on the package export surface instead of
internal paths, and update any named vs default import shape to match the
package root exports.

Comment on lines +170 to +181
// Scrape the website with screenshot to get visual content
const result = await app.scrapeUrl(input.url, {
formats: ['html', 'screenshot@fullPage', 'markdown'],
onlyMainContent: false,
waitFor: 2000,
});
const imageAssetsResult = await app.scrapeUrl(input.url, {
formats: ['markdown'],
onlyMainContent: false,
includeTags: ['img'],
waitFor: 2000,
});

🛠️ Refactor suggestion

Add timeout configuration for Firecrawl operations.

The scraping operations use a fixed 2-second wait time which may not be sufficient for all websites. Consider making this configurable or adaptive.

 // Scrape the website with screenshot to get visual content
+const waitTime = Math.min(10000, 2000 + (input.url.length * 10)); // Adaptive wait based on URL complexity, max 10s
 const result = await app.scrapeUrl(input.url, {
     formats: ['html', 'screenshot@fullPage', 'markdown'],
     onlyMainContent: false,
-    waitFor: 2000,
+    waitFor: waitTime,
+    timeout: 30000, // Add overall timeout
 });
 const imageAssetsResult = await app.scrapeUrl(input.url, {
     formats: ['markdown'],
     onlyMainContent: false,
     includeTags: ['img'],
-    waitFor: 2000,
+    waitFor: waitTime,
+    timeout: 30000,
 });
🤖 Prompt for AI Agents
In apps/web/client/src/server/api/routers/code.ts around lines 170 to 181, the two scrapeUrl calls hardcode waitFor: 2000, which is brittle. Make the Firecrawl timeout configurable: accept an optional timeout (e.g., timeoutMs) from the request input or read a service-level config/env value, validate and clamp it to a safe min/max, and pass that value into both scrapeUrl calls. If no timeout is provided, fall back to a sensible default (e.g., 2000), and consider an adaptive strategy (e.g., retry with a larger timeout, or use a networkIdle option if available) so slow pages are handled without indefinite waits.
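The validate-and-clamp step might be factored out like this (hypothetical names; the bounds are illustrative, not Firecrawl defaults):

```typescript
// Hypothetical clamp for a caller-supplied wait/timeout value.
const DEFAULT_WAIT_MS = 2000;
const MIN_WAIT_MS = 500;
const MAX_WAIT_MS = 10_000;

function resolveWaitMs(requested?: number): number {
    if (requested === undefined || !Number.isFinite(requested)) {
        return DEFAULT_WAIT_MS; // missing or invalid input: sensible default
    }
    // Clamp into [MIN_WAIT_MS, MAX_WAIT_MS] and drop fractional milliseconds.
    return Math.min(MAX_WAIT_MS, Math.max(MIN_WAIT_MS, Math.trunc(requested)));
}
```

Both scrapeUrl calls would then receive `waitFor: resolveWaitMs(input.timeoutMs)` alongside an overall request timeout.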

Comment on lines +187 to +240
let imageAssets: {
url: string;
title: string;
}[] = [];
if ('success' in imageAssetsResult && imageAssetsResult.success && imageAssetsResult.markdown) {
const md = imageAssetsResult.markdown;
const mdImgRegex = /!\[([^\]]*)\]\(([^)]+)\)/g;
let match: RegExpExecArray | null;
while ((match = mdImgRegex.exec(md)) !== null) {
const alt = (match[1] || '').trim();
const urlCandidate = match[2];
const title = alt ? alt.replace(/\s+/g, '-') : '';
if (!urlCandidate) continue;
try {
const absoluteUrl = new URL(urlCandidate, input.url).toString();
imageAssets.push({ url: absoluteUrl, title });
} catch {
imageAssets.push({ url: urlCandidate, title });
}
}
} else if (result.html) {
// Fallback: parse from HTML if markdown not available
const imgTagRegex = /<img[^>]*>/gi;
const srcRegex = /src=["']([^"']+)["']/i;
const altRegex = /alt=["']([^"']*)["']/i;
let tagMatch: RegExpExecArray | null;
while ((tagMatch = imgTagRegex.exec(result.html)) !== null) {
const tag = tagMatch[0];
const srcMatch = srcRegex.exec(tag);
if (!srcMatch) continue;
const srcCandidate = srcMatch[1] ?? '';
const altMatch = altRegex.exec(tag);
const alt = (altMatch?.[1] ?? '').trim();
const title = alt ? alt.replace(/\s+/g, '-') : '';
try {
const absoluteUrl = new URL(srcCandidate, input.url).toString();
imageAssets.push({ url: absoluteUrl, title });
} catch {
imageAssets.push({ url: srcCandidate, title });
}
}
}


// Dedupe by URL

const byUrl = new Map<string, { url: string; title: string }>();
for (const asset of imageAssets) {
if (!byUrl.has(asset.url)) {
byUrl.set(asset.url, asset);
}
}
imageAssets = Array.from(byUrl.values());


🛠️ Refactor suggestion

Improve asset extraction robustness and add validation.

The current implementation has several areas for improvement:

  1. No validation of image URLs (could be data URIs, malformed, etc.)
  2. Missing error handling for invalid URLs
  3. No size or type filtering for assets
 let imageAssets: {
     url: string;
     title: string;
 }[] = [];
+
+// Helper to validate and normalize image URLs
+const isValidImageUrl = (url: string): boolean => {
+    if (!url || url.startsWith('data:')) return false;
+    try {
+        const parsed = new URL(url);
+        return ['http:', 'https:'].includes(parsed.protocol);
+    } catch {
+        return false;
+    }
+};
+
 if ('success' in imageAssetsResult && imageAssetsResult.success && imageAssetsResult.markdown) {
     const md = imageAssetsResult.markdown;
     const mdImgRegex = /!\[([^\]]*)\]\(([^)]+)\)/g;
     let match: RegExpExecArray | null;
     while ((match = mdImgRegex.exec(md)) !== null) {
         const alt = (match[1] || '').trim();
         const urlCandidate = match[2];
         const title = alt ? alt.replace(/\s+/g, '-') : '';
         if (!urlCandidate) continue;
         try {
             const absoluteUrl = new URL(urlCandidate, input.url).toString();
-            imageAssets.push({ url: absoluteUrl, title });
+            if (isValidImageUrl(absoluteUrl)) {
+                imageAssets.push({ url: absoluteUrl, title });
+            }
         } catch {
-            imageAssets.push({ url: urlCandidate, title });
+            // Skip invalid URLs
         }
     }
 }
🤖 Prompt for AI Agents
In apps/web/client/src/server/api/routers/code.ts around lines 187 to 240, the image extraction block should validate and filter candidates before adding them: accept only http(s) absolute URLs (reject data:, mailto:, javascript:, etc.); skip malformed URLs by catching URL construction errors (already present) and continuing; perform a HEAD request (or fetch with method: 'HEAD') with a short timeout to check that Content-Type starts with image/ and that Content-Length (or the streamed size) is below a configured limit (e.g., 5MB), skipping on any network/timeout error; and whitelist common image extensions as a cheap pre-check before fetching. Keep the existing dedupe logic but apply it after filtering, and wrap all network calls in try/catch so invalid assets are logged or silently skipped rather than thrown to the caller.
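The header pre-check can be kept testable as a pure function over the HEAD response headers (a hypothetical helper; the 5MB cap mirrors the limit suggested above):

```typescript
const MAX_IMAGE_BYTES = 5 * 1024 * 1024; // illustrative 5MB cap

// Hypothetical pre-flight check on HEAD response headers: accept only
// image/* content below the size cap. An absent Content-Length passes here;
// the streamed download would enforce the cap instead.
function looksLikeFetchableImage(
    contentType: string | null,
    contentLength: string | null,
): boolean {
    if (!contentType?.toLowerCase().startsWith('image/')) return false;
    if (contentLength !== null) {
        const size = Number(contentLength);
        if (!Number.isFinite(size) || size > MAX_IMAGE_BYTES) return false;
    }
    return true;
}
```

The caller would invoke this with `response.headers.get('content-type')` and `response.headers.get('content-length')` from the HEAD request, inside a try/catch with a short timeout.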

Comment on lines +242 to +264
const { model, headers } = await initModel({
provider: LLMProvider.ANTHROPIC,
model: ANTHROPIC_MODELS.SONNET_4,
});

const { object } = await generateObject({
model,
headers,
schema: DesignSchema,
messages: [
{
role: 'system',
content: CLONE_WEBSITE_DESIGN_PROMPT,
},
{
role: 'user',
content: `HTML: ${result.html}
Markdown: ${result.markdown}
Screenshot: ${result.screenshot}`,
},
],
maxOutputTokens: 10000,
});

🛠️ Refactor suggestion

Add error handling for AI model initialization and generation.

The design document generation lacks proper error handling and could fail silently.

+let designDocument: z.infer<typeof DesignSchema> | null = null;
+
+try {
     const { model, headers } = await initModel({
         provider: LLMProvider.ANTHROPIC,
         model: ANTHROPIC_MODELS.SONNET_4,
     });
 
     const { object } = await generateObject({
         model,
         headers,
         schema: DesignSchema,
         messages: [
             {
                 role: 'system',
                 content: CLONE_WEBSITE_DESIGN_PROMPT,
             },
             {
                 role: 'user',
                 content: `HTML: ${result.html}
                 Markdown: ${result.markdown}
                 Screenshot: ${result.screenshot}`,
             },
         ],
         maxOutputTokens: 10000,
     });
 
-    const designDocument: z.infer<typeof DesignSchema> = object;
+    designDocument = object;
+} catch (error) {
+    console.error('Failed to generate design document:', error);
+    // Continue without design document rather than failing entire operation
+}
🤖 Prompt for AI Agents
In apps/web/client/src/server/api/routers/code.ts around lines 242 to 264, the calls to initModel and generateObject lack error handling and may fail silently. Wrap them in a try/catch (or separate try/catches) so failures are caught, log the error with contextual information (which call failed and the relevant inputs), and propagate a clear failure to the caller (e.g., throw a TRPC/HTTP error with a 500 status and a concise message, or return an error payload) instead of letting the function continue silently.

Comment on lines +268 to +277
return {
result: {
markdown: result.markdown || '',
html: result.html || '',
designScreenshot: result.screenshot || '',
designDocument: designDocument,
assets: imageAssets,
},
error: null,
};

⚠️ Potential issue

Add null checks for optional Firecrawl results.

The code assumes all fields from Firecrawl will be present, but they might be undefined.

 return {
     result: {
-        markdown: result.markdown || '',
-        html: result.html || '',
-        designScreenshot: result.screenshot || '',
+        markdown: result.markdown ?? '',
+        html: result.html ?? '',
+        designScreenshot: result.screenshot ?? '',
         designDocument: designDocument,
         assets: imageAssets,
     },
     error: null,
 };
🤖 Prompt for AI Agents
In apps/web/client/src/server/api/routers/code.ts around lines 268-277, the code assumes all fields returned from Firecrawl exist; add null/undefined checks and safe defaults when constructing the response so missing fields won't throw. Coalesce result.markdown, result.html, and result.screenshot to empty strings if undefined; set designDocument to null (or an empty object) if missing; and default imageAssets to an empty array if undefined. Update the typing if necessary so the response always carries those safe defaults.
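Pulling those safe defaults into one normalizer makes the contract explicit (a hypothetical sketch; the field names follow the diff above):

```typescript
interface CloneAsset { url: string; title: string }

interface CloneResult {
    markdown: string;
    html: string;
    designScreenshot: string;
    designDocument: unknown | null;
    assets: CloneAsset[];
}

// Hypothetical normalizer: every field gets a safe default, so callers never
// see undefined regardless of which fields Firecrawl returned.
function normalizeCloneResult(
    raw: { markdown?: string; html?: string; screenshot?: string },
    designDocument: unknown,
    assets?: CloneAsset[],
): CloneResult {
    return {
        markdown: raw.markdown ?? '',
        html: raw.html ?? '',
        designScreenshot: raw.screenshot ?? '',
        designDocument: designDocument ?? null,
        assets: assets ?? [],
    };
}
```

The route would then end with `return { result: normalizeCloneResult(result, designDocument, imageAssets), error: null };`.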
