feat: cloning website #2820
Conversation
@rohan-chidurala is attempting to deploy a commit to the Onlook Team on Vercel. A member of the Team first needs to authorize it.
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds a new clone_website tool across AI prompts, toolsets, models, client handlers, UI labeling, and a server API mutation. Implements scraping via Firecrawl, design document generation via model inference, result typing/validation, asset downloading into a branch-specific directory, and integrates dispatch/handling in the client tool pipeline.
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
actor U as User
participant UI as Client UI (Chat/Tools)
participant TD as Tool Dispatcher
participant TH as handleCloneWebsiteTool
participant API as code.cloneWebsite (Server)
participant SCR as Firecrawl
participant LLM as Model (Design Generation)
participant FS as Sandbox FS (public/cloned-assets)
U->>UI: Request to clone website (URL provided)
UI->>TD: Tool call: clone_website { url, branchId }
TD->>TH: Invoke handler(args, editorEngine)
TH->>API: mutate({ url })
API->>SCR: Scrape HTML/Markdown/Screenshot
SCR-->>API: { html, markdown, screenshot, assets }
API->>LLM: Generate designDocument (prompt + inputs)
LLM-->>API: designDocument
API-->>TH: CloneWebsiteResult { result | error }
alt Success
TH->>FS: Ensure target dir (branch)
loop For each asset
TH->>FS: Download asset (curl) -> fileLocation
end
TH-->>TD: Result with fileLocation for assets
TD-->>UI: Tool result delivered
UI-->>U: Display cloning outcome
else Error
TH-->>TD: Error in result or thrown
TD-->>UI: Surface error
UI-->>U: Show error message
end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
```ts
const params13 = toolInvocation.input as z.infer<typeof CLONE_WEBSITE_TOOL_PARAMETERS>;
if (params13?.url) {
    try {
        const url = new URL(params13.url);
```
In the clone_website case, the URL parsing logic is duplicated. Consider refactoring it into a utility function for consistency and maintainability.
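A minimal sketch of such a utility (the name `hostnameFromUrl` is hypothetical, not from the PR):

```ts
// Hypothetical helper, not part of the PR: returns a display hostname,
// falling back to a generic label when the URL cannot be parsed.
const hostnameFromUrl = (raw: string | undefined, fallback = 'website'): string => {
    if (!raw) return fallback;
    try {
        return new URL(raw).hostname || fallback;
    } catch {
        return fallback;
    }
};

// Usage in the tool-label switch:
// return `Cloning ${hostnameFromUrl(params13?.url)}`;
```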
```ts
const filename = safeBase + '.png';
const dest = `${baseDir}${filename}`;
try {
    const download = await sandbox.session.runCommand(`curl -L --silent --fail --show-error "${asset.url}" -o "${dest}"`);
```
The use of string interpolation in the curl command may pose a shell injection risk. Ensure that asset.url and dest are properly sanitized or escaped.
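One way to do this, shown as a sketch using the POSIX single-quote idiom (the helper name is illustrative; the suggested diff further below uses the same technique inline):

```ts
// Sketch: wrap a value in single quotes and escape embedded single quotes,
// so the shell treats the entire value as a literal string.
const shellQuote = (value: string): string => `'${value.replace(/'/g, `'\\''`)}'`;

// Example:
// const cmd = `curl -L --silent --fail --show-error ${shellQuote(asset.url)} -o ${shellQuote(dest)}`;
```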
```ts
}
await sandbox.session.runCommand(`mkdir -p ${baseDir}`);

for (const asset of assets) {
```
Consider parallelizing the asset downloads (e.g. using Promise.all) to improve performance when there are many assets.
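A minimal sketch of the parallel form, reusing the loop body quoted above (the shell-quoting and naming concerns raised in the adjacent comments still apply here):

```ts
// Sketch: run all asset downloads concurrently; Promise.allSettled keeps one
// failed download from rejecting the whole batch.
await Promise.allSettled(
    assets.map(async (asset) => {
        const safeBase = asset.title.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9._-]/g, '-');
        const dest = `${baseDir}${safeBase}.png`;
        const download = await sandbox.session.runCommand(
            `curl -L --silent --fail --show-error "${asset.url}" -o "${dest}"`,
        );
        if (download.success) {
            (asset as any).fileLocation = dest;
        } else {
            console.warn(`Failed to download asset ${asset.url}:`, download.error);
        }
    }),
);
```

If the asset list can be large, a small concurrency cap (e.g., batches of 5-10) is gentler on the sandbox than firing everything at once.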
```ts
const app = new FirecrawlApp({ apiKey: env.FIRECRAWL_API_KEY });

// Scrape the website with screenshot to get visual content
const result = await app.scrapeUrl(input.url, {
```
The two separate scrapeUrl calls run sequentially. Consider running them concurrently with Promise.all to reduce overall latency.
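A minimal sketch of the concurrent form, keeping the PR's scrape options unchanged:

```ts
// Sketch: issue both Firecrawl scrapes at once and await them together.
const [result, imageAssetsResult] = await Promise.all([
    app.scrapeUrl(input.url, {
        formats: ['html', 'screenshot@fullPage', 'markdown'],
        onlyMainContent: false,
        waitFor: 2000,
    }),
    app.scrapeUrl(input.url, {
        formats: ['markdown'],
        onlyMainContent: false,
        includeTags: ['img'],
        waitFor: 2000,
    }),
]);
```

Note that Promise.all rejects if either scrape rejects; if partial results are acceptable, Promise.allSettled is the safer choice.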
Actionable comments posted: 8
🧹 Nitpick comments (6)
packages/ai/src/prompt/clone.ts (1)

Lines 1-11: Consider adding length constraints and error handling guidance to the prompt.

The prompt is well-structured for generating detailed design documentation. However, consider adding:
- Maximum section count or description length guidance to prevent excessive token usage
- Fallback behavior when screenshots are unclear or sections overlap
```diff
 export const CLONE_WEBSITE_DESIGN_PROMPT = `You are an expert web designer and UX specialist. Given the HTML, Markdown, and a screenshot of a web page, analyze the entire page from top to bottom, starting at the very top of the screenshot and continuing all the way to the bottom. Do not miss any section—your goal is to create a complete and exhaustive design document that is as accurate as possible, down to every single pixel. Break down the page into a dynamic list of sections, ordered from top to bottom as they appear visually. For each section, provide:
 - "type": the section type (e.g., "navBar", "hero", "footer", "sidebar", etc.)
-- "description": a highly accurate, detailed explanation of the section's content, purpose, and visual appearance. Be specific about layout, spacing, alignment, colors, typography, and any unique style details. Ensure your description is as precise as possible and reflects the exact look and feel of the section, with pixel-level accuracy.
+- "description": a highly accurate, detailed explanation (max 500 chars) of the section's content, purpose, and visual appearance. Be specific about layout, spacing, alignment, colors, typography, and any unique style details. Ensure your description is as precise as possible and reflects the exact look and feel of the section, with pixel-level accuracy.
 - "styles": a concise summary of the key CSS styles or visual properties that define this section (e.g., background color, font size, padding, margin, border, flex/grid usage, etc.). Focus on what makes the section pixel perfect.
+If a section is unclear in the screenshot or HTML, provide your best interpretation based on available data.
 Return your analysis as a JSON object with a "sections" array. Do not include any other text or commentary. Only return the JSON object.`;
```

packages/ai/src/prompt/create.ts (1)
Lines 8-15: Consider clarifying the URL requirement message.

The instructions are clear, but the error message when URL is missing could be more user-friendly.
```diff
 <cloning_instructions>
 - Conditions for using the clone_website tool:
   - The user request is specifically to clone a website
   - The user query explicitly mentions a relevant keyword such as "clone"
-  - The user query MUST explicitly mentions a concrete website URL. Even if the user request is to clone a website, if the user query does not explicitly mention a concrete website URL, you must ask the user to provide a concrete website URL.
+  - The user query MUST explicitly mention a concrete website URL. Even if the user request is to clone a website, if the user query does not explicitly mention a concrete website URL, you must ask the user: "Please provide the URL of the website you'd like to clone."
 - If the above conditions are met, immediately call the clone_website tool with that website_url
 - IMPORTANT: The clone_website tool must be about creating a pixel perfect clone of the website that is related to the original user request.
 </cloning_instructions>
```

apps/web/client/src/components/tools/handlers/web.ts (4)
Lines 53-60: Prefer concise destructuring for args.

Minor readability tweak; avoids separate locals.

```diff
-// Store args in function scope for error handling
-const requestUrl = args.url;
-const branchId = args.branchId;
+// Store args in function scope for error handling
+const { url: requestUrl, branchId } = args;
```
Lines 76-88: Return a partial-success error when sandbox is missing.

Today this returns `error: null`, which can mask the fact that no assets were downloaded.

```diff
 return {
     result: {
         markdown: markdown,
         html: html,
         designScreenshot: designScreenshot,
         designDocument: designDocument,
         assets: assets,
     },
-    error: null,
+    error: `Sandbox not found for branch ${branchId}. Assets not downloaded locally.`,
 };
```

Confirm the UI expects this behavior; if not, consider surfacing a non-blocking toast as well.
Lines 61-70: Consistency of error handling across tool handlers.

Other handlers either return `{ error }` or throw. Decide on a single convention for client handlers and align `handleCloneWebsiteTool` accordingly.
Lines 73-76: Move downloads server-side or through a vetted fetch layer.

Running `curl` in a sandbox still exposes SSRF surfaces and makes policy enforcement harder (CIDR blocks, DNS rebinding, content length caps). Prefer downloading on the server with IP allow/block lists, max size limits, and content-type checks, then persist into the branch-scoped dir; see the sketch below.

Also applies to: 91-101
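A minimal sketch of such a server-side guard, assuming Node's built-in `dns` and `net` modules (the helper name and address checks are illustrative, not part of the PR):

```ts
import { lookup } from 'node:dns/promises';
import net from 'node:net';

// Illustrative helper: reject non-http(s) schemes and URLs that resolve to
// private, loopback, or link-local addresses before any server-side fetch.
async function assertSafeRemoteUrl(raw: string): Promise<URL> {
    const url = new URL(raw);
    if (url.protocol !== 'http:' && url.protocol !== 'https:') {
        throw new Error(`Blocked scheme: ${url.protocol}`);
    }
    const { address } = await lookup(url.hostname);
    const isPrivateV4 = net.isIPv4(address)
        && /^(10\.|127\.|169\.254\.|192\.168\.|172\.(1[6-9]|2\d|3[01])\.)/.test(address);
    const isPrivateV6 = net.isIPv6(address)
        && (address === '::1' || /^(fe80|fc|fd)/i.test(address));
    if (isPrivateV4 || isPrivateV6) {
        throw new Error(`Blocked private address for ${url.hostname}`);
    }
    return url;
}
```

Note that a resolve-then-fetch check still leaves a DNS-rebinding window; pinning the resolved IP for the actual request closes it.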
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/tool-call-simple.tsx (3 hunks)
- apps/web/client/src/components/tools/handlers/web.ts (2 hunks)
- apps/web/client/src/components/tools/tools.ts (3 hunks)
- apps/web/client/src/server/api/routers/code.ts (2 hunks)
- packages/ai/src/prompt/clone.ts (1 hunks)
- packages/ai/src/prompt/create.ts (1 hunks)
- packages/ai/src/tools/tools/web.ts (2 hunks)
- packages/ai/src/tools/toolset.ts (2 hunks)
- packages/models/src/tools/index.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.tsx
📄 CodeRabbit inference engine (CLAUDE.md)
`**/*.tsx`:
- In React providers managing MobX stores, create the observable store with useState(() => new Store()) to ensure a stable instance
- Keep a ref (e.g., storeRef.current) to the MobX store to avoid stale closures in effects
- Use setTimeout(() => store.clear(), 0) for delayed cleanup of MobX stores to avoid race conditions
- Separate project changes from branch updates by using proper effect dependency arrays
- Do not use useMemo to hold MobX observable instances; React may drop memoized values causing data loss
- Do not clean up MobX stores synchronously during navigation; perform delayed cleanup instead
- Do not include the MobX store instance in effect dependency arrays when it causes infinite loops
Files:
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/tool-call-simple.tsx
🧬 Code graph analysis (5)
apps/web/client/src/components/tools/handlers/web.ts (2)
- packages/ai/src/tools/tools/web.ts (1): CLONE_WEBSITE_TOOL_PARAMETERS (50-53)
- packages/models/src/tools/index.ts (1): CloneWebsiteResult (72-72)

apps/web/client/src/server/api/routers/code.ts (5)
- apps/web/client/src/server/api/trpc.ts (1): protectedProcedure (130-149)
- packages/models/src/tools/index.ts (2): CloneWebsiteResult (72-72), DesignSchema (14-32)
- apps/web/client/src/env.ts (1): env (4-149)
- packages/ai/src/chat/providers.ts (1): initModel (14-51)
- packages/ai/src/prompt/clone.ts (1): CLONE_WEBSITE_DESIGN_PROMPT (1-11)

packages/ai/src/tools/toolset.ts (1)
- packages/ai/src/tools/tools/web.ts (2): CLONE_WEBSITE_TOOL_NAME (49-49), cloneWebsiteTool (54-58)

apps/web/client/src/components/tools/tools.ts (2)
- packages/ai/src/tools/tools/web.ts (2): CLONE_WEBSITE_TOOL_NAME (49-49), CLONE_WEBSITE_TOOL_PARAMETERS (50-53)
- apps/web/client/src/components/tools/handlers/web.ts (1): handleCloneWebsiteTool (53-125)

apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/tool-call-simple.tsx (1)
- packages/ai/src/tools/tools/web.ts (2): CLONE_WEBSITE_TOOL_NAME (49-49), CLONE_WEBSITE_TOOL_PARAMETERS (50-53)
🪛 Biome (2.1.2)
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/tool-call-simple.tsx
[error] 199-199: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
🔇 Additional comments (8)
packages/ai/src/tools/toolset.ts (1)

Lines 39-40: LGTM! Tool registration is correct.

The clone website tool is properly imported and registered in the BUILD_TOOL_SET, following the existing pattern for other tools.

Also applies to: 66-66

packages/ai/src/tools/tools/web.ts (1)

Lines 49-58: LGTM! Tool definition follows existing patterns.

The clone website tool is well-defined with appropriate validation using Zod schemas and includes the required branchId parameter.

packages/ai/src/prompt/create.ts (1)

Lines 4-6: Good addition of clone workflow trigger.

The integration properly detects when to use the clone_website tool.

packages/models/src/tools/index.ts (2)

Lines 14-32: LGTM! Well-structured schema definitions.

The DesignSchema is properly structured with detailed field descriptions that align well with the clone website prompt requirements.

Lines 34-72: LGTM! Comprehensive result schema with proper error handling.

The CloneWebsiteResultSchema correctly handles both success and error cases with nullable fields and includes all necessary data for the clone operation.

apps/web/client/src/components/tools/tools.ts (1)

Lines 180-185: LGTM! Tool handler properly integrated.

The clone website tool handler is correctly registered following the existing pattern.

apps/web/client/src/components/tools/handlers/web.ts (2)

Lines 4-6: LGTM on added tool parameter imports.

These imports align with the new tool and keep types local to the module.

Line 7: LGTM on result type imports.

The explicit `CloneWebsiteResult` and `WebSearchResult` imports make return shapes clear.
```tsx
case CLONE_WEBSITE_TOOL_NAME:
    const params13 = toolInvocation.input as z.infer<typeof CLONE_WEBSITE_TOOL_PARAMETERS>;
    if (params13?.url) {
        try {
            const url = new URL(params13.url);
            return 'Cloning ' + (url.hostname || 'website');
        } catch (error) {
            return 'Cloning website';
        }
    } else {
        return 'Cloning website';
    }
```
Fix the switch case declaration issue.

The linter correctly identifies that `params13` can be accessed by other switch cases. Wrap the declaration in a block.
```diff
 case CLONE_WEBSITE_TOOL_NAME:
+    {
     const params13 = toolInvocation.input as z.infer<typeof CLONE_WEBSITE_TOOL_PARAMETERS>;
     if (params13?.url) {
         try {
             const url = new URL(params13.url);
             return 'Cloning ' + (url.hostname || 'website');
         } catch (error) {
             return 'Cloning website';
         }
     } else {
         return 'Cloning website';
     }
+    }
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```tsx
case CLONE_WEBSITE_TOOL_NAME: {
    const params13 = toolInvocation.input as z.infer<typeof CLONE_WEBSITE_TOOL_PARAMETERS>;
    if (params13?.url) {
        try {
            const url = new URL(params13.url);
            return 'Cloning ' + (url.hostname || 'website');
        } catch (error) {
            return 'Cloning website';
        }
    } else {
        return 'Cloning website';
    }
}
```
🧰 Tools
🪛 Biome (2.1.2)
[error] 199-199: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
🤖 Prompt for AI Agents
In
apps/web/client/src/app/project/[id]/_components/right-panel/chat-tab/chat-messages/message-content/tool-call-simple.tsx
around lines 198 to 209, the const params13 declared inside the
CLONE_WEBSITE_TOOL_NAME case is leaking into other switch cases; wrap the case
body in its own block so the declaration is scoped locally (i.e., add an opening
{ immediately after the case label and a closing } before the case's
return/break), keep the existing try/catch/returns unchanged, and ensure no
extra fall-through occurs.
```ts
const baseDir = `public/cloned-assets/`;
const sandbox = editorEngine.branches.getSandboxById(branchId);
if (!sandbox) {
```
🛠️ Refactor suggestion
Make asset directory branch-scoped and quote paths
Avoid cross-branch collisions and ensure safe path handling.
```diff
-// Download assets into public/cloned-assets/
-const baseDir = `public/cloned-assets/`;
+// Download assets into public/cloned-assets/<branchId>/
+const baseDir = `public/cloned-assets/${branchId}/`;
@@
-await sandbox.session.runCommand(`mkdir -p ${baseDir}`);
+await sandbox.session.runCommand(`mkdir -p "${baseDir}"`);
```
Also applies to: 89-89
🤖 Prompt for AI Agents
In apps/web/client/src/components/tools/handlers/web.ts around lines 74-76 (and
also at line 89), the asset directory is not branch-scoped and paths are
constructed unsafely; update baseDir to be branch-scoped (e.g. include a
sanitized/validated branchId segment such as public/cloned-assets/<branchId>/)
and replace raw string concatenation with safe path construction (use path.join
or equivalent) to prevent path traversal and collisions; ensure any path
segments are properly quoted/escaped when used in shell or file APIs and
validate/sanitize branchId to allow only safe characters.
```ts
for (const asset of assets) {
    const rawBase = asset.title;
    const safeBase = rawBase.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9._-]/g, '-');
    const filename = safeBase + '.png';
    const dest = `${baseDir}${filename}`;
    try {
        const download = await sandbox.session.runCommand(`curl -L --silent --fail --show-error "${asset.url}" -o "${dest}"`);
        if (download.success) {
            // Attach saved location (relative public path) for UI usage
            (asset as any).fileLocation = dest;
        } else {
            console.log('download failed', download.error);
            console.warn(`Failed to download asset ${asset.url}:`, download.error);
        }
```
🛠️ Refactor suggestion
Critical: command injection & SSRF risk in curl; also fix forced .png extension and return a web path
- Interpolating an untrusted URL inside double quotes allows command substitution (e.g., $(...)) → RCE.
- No scheme validation → SSRF vector (internal addresses, file://, etc.).
- All assets saved as .png → broken asset types.
- `fileLocation` should be a public web path (strip leading `public/`).
```diff
- for (const asset of assets) {
-
- const rawBase = asset.title;
- const safeBase = rawBase.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9._-]/g, '-');
- const filename = safeBase + '.png';
- const dest = `${baseDir}${filename}`;
- try {
- const download = await sandbox.session.runCommand(`curl -L --silent --fail --show-error "${asset.url}" -o "${dest}"`);
- if (download.success) {
- // Attach saved location (relative public path) for UI usage
- (asset as any).fileLocation = dest;
- } else {
- console.log('download failed', download.error);
- console.warn(`Failed to download asset ${asset.url}:`, download.error);
- }
- } catch (error) {
- console.warn(`Failed to download asset ${asset.url}:`, error);
- }
- }
+ for (const asset of assets) {
+ // Validate and normalize URL
+ let parsed: URL;
+ try {
+ parsed = new URL(asset.url);
+ } catch {
+ console.warn('Skipping asset with invalid URL:', asset?.url);
+ continue;
+ }
+ if (!/^https?:$/.test(parsed.protocol)) {
+ console.warn('Skipping non-http(s) asset:', asset.url);
+ continue;
+ }
+
+ // Safe filename (bounded length)
+ const rawBase = (asset.title || parsed.pathname.split('/').pop() || 'asset').slice(0, 64);
+ const safeBase = rawBase.replace(/\s+/g, '-').replace(/[^a-zA-Z0-9._-]/g, '-');
+ const last = parsed.pathname.split('/').pop() || '';
+ const idx = last.lastIndexOf('.');
+ const extFromPath = idx > -1 ? last.slice(idx + 1) : '';
+ const ext = /^[a-zA-Z0-9]+$/.test(extFromPath) ? extFromPath : 'bin';
+ const filename = `${safeBase}.${ext}`;
+ const dest = `${baseDir}${filename}`;
+
+ // Shell-safe quoting: single-quote and escape existing single quotes
+ const quotedUrl = `'${asset.url.replace(/'/g, `'\\''`)}'`;
+ const quotedDest = `'${dest.replace(/'/g, `'\\''`)}'`;
+ const cmd = `curl -L --silent --fail --show-error --connect-timeout 5 --max-time 30 --proto '=http,https' --proto-redir '=http,https' ${quotedUrl} -o ${quotedDest}`;
+
+ try {
+ const download = await sandbox.session.runCommand(cmd);
+ if (download.success) {
+ // Attach served location (strip "public/")
+ const webPath = dest.startsWith('public/') ? `/${dest.slice('public/'.length)}` : dest;
+ (asset as any).fileLocation = webPath;
+ } else {
+ console.warn(`Failed to download asset ${asset.url}:`, download.error);
+ }
+ } catch (error) {
+ console.warn(`Failed to download asset ${asset.url}:`, error);
+ }
+ }
```
Follow-ups:
- Consider deduping filenames (e.g., suffix collisions or hash by URL).
- If `assets` can include CSS/JS/fonts, optionally restrict extensions to an allowlist and skip others.
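A minimal sketch of the hash-based dedupe from the first follow-up (the helper name is illustrative):

```ts
import { createHash } from 'node:crypto';

// Illustrative helper: suffix a short hash of the source URL onto the
// sanitized base name so two different URLs never collide on filename.
const hashedFilename = (safeBase: string, url: string, ext: string): string => {
    const hash = createHash('sha256').update(url).digest('hex').slice(0, 8);
    return `${safeBase}-${hash}.${ext}`;
};

// hashedFilename('logo', 'https://example.com/a/logo.png', 'png')
// -> e.g. 'logo-1a2b3c4d.png'
```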
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In apps/web/client/src/components/tools/handlers/web.ts around lines 91-105, the
code currently interpolates untrusted asset.url into a shell command (risking
command injection and SSRF), forces a .png extension, and stores a file system
path instead of a public web path; to fix, stop invoking curl via shell
interpolation and instead fetch assets using a safe HTTP client or spawn a
subprocess with arguments (no shell), validate and allow only safe schemes
(http/https) and disallow internal/loopback IPs/hostnames to mitigate SSRF,
determine the correct extension from the response Content-Type or the URL path
(and apply an allowlist if needed), ensure filenames are sanitized and
deduplicated (e.g., append a short hash of the URL), write to the filesystem
under the public dir, then set fileLocation to the web-accessible path by
stripping the leading public/ prefix.
```ts
import { CLONE_WEBSITE_DESIGN_PROMPT } from '@onlook/ai/src/prompt/clone';
import { DesignSchema } from '@onlook/models/tools';
```
💡 Verification agent
🧩 Analysis chain
Fix the import path for CLONE_WEBSITE_DESIGN_PROMPT.
The import uses a direct path instead of the package export. This could break if the internal structure changes.
```diff
-import { CLONE_WEBSITE_DESIGN_PROMPT } from '@onlook/ai/src/prompt/clone';
-import { DesignSchema } from '@onlook/models/tools';
+import { CLONE_WEBSITE_DESIGN_PROMPT } from '@onlook/ai';
+import { DesignSchema } from '@onlook/models';
```
🏁 Script executed:
```bash
#!/bin/bash
# Check if these exports are available from the package roots
rg -n "export.*CLONE_WEBSITE_DESIGN_PROMPT" packages/ai/src/index.ts packages/ai/index.ts
rg -n "export.*DesignSchema" packages/models/src/index.ts packages/models/index.ts
```
Length of output: 125
🏁 Script executed:
```bash
#!/usr/bin/env bash
set -e
echo "Root directories:"
ls -1 .
echo -e "\nApps directory (apps/*):"
ls -1 apps
echo -e "\nPackages directory (packages/*):"
ls -1 packages || echo "No packages directory at root"
echo -e "\nSearching for @onlook/ai in package.json:"
rg -n '"name"\s*:\s*"@onlook/ai"' -g "*/package.json"
echo -e "\nSearching for @onlook/models in package.json:"
rg -n '"name"\s*:\s*"@onlook/models"' -g "*/package.json"
AI_PKG_DIR=$(dirname "$(rg -l '"name"\s*:\s*"@onlook/ai"' -g "*/package.json" | head -n1)")
MODELS_PKG_DIR=$(dirname "$(rg -l '"name"\s*:\s*"@onlook/models"' -g "*/package.json" | head -n1)")
echo -e "\nAI package directory: $AI_PKG_DIR"
echo "Models package directory: $MODELS_PKG_DIR"
echo -e "\nListing index files in AI package:"
find "$AI_PKG_DIR" -type f \( -name "index.ts" -o -name "src/index.ts" \)
echo -e "\nListing index files in Models package:"
find "$MODELS_PKG_DIR" -type f \( -name "index.ts" -o -name "src/index.ts" \)
echo -e "\nSearching for export of CLONE_WEBSITE_DESIGN_PROMPT in AI index file(s):"
rg -n "export.*CLONE_WEBSITE_DESIGN_PROMPT" $(find "$AI_PKG_DIR" -type f \( -name "index.ts" -o -name "src/index.ts" \) )
echo -e "\nSearching for export of DesignSchema in Models index file(s):"
rg -n "export.*DesignSchema" $(find "$MODELS_PKG_DIR" -type f \( -name "index.ts" -o -name "src/index.ts" \) )
```
Length of output: 440
🏁 Script executed:
```bash
#!/usr/bin/env bash
set -e
echo "=== packages/ai/package.json ==="
cat packages/ai/package.json
echo -e "\n=== packages/models/package.json ==="
cat packages/models/package.json
echo -e "\n=== files under packages/ai/src/prompt ==="
if [ -d packages/ai/src/prompt ]; then
  find packages/ai/src/prompt -type f
else
  echo "No packages/ai/src/prompt directory"
fi
echo -e "\n=== definition of CLONE_WEBSITE_DESIGN_PROMPT in ai package ==="
rg -n "CLONE_WEBSITE_DESIGN_PROMPT" -g "packages/ai/**" || true
echo -e "\n=== packages/ai/src/index.ts (first 200 lines) ==="
if [ -f packages/ai/src/index.ts ]; then
  sed -n '1,200p' packages/ai/src/index.ts
else
  echo "packages/ai/src/index.ts not found"
fi
echo -e "\n=== packages/ai/index.ts (first 200 lines) ==="
if [ -f packages/ai/index.ts ]; then
  sed -n '1,200p' packages/ai/index.ts
else
  echo "packages/ai/index.ts not found"
fi
echo -e "\n=== files under packages/models/src/tools ==="
if [ -d packages/models/src/tools ]; then
  find packages/models/src/tools -type f
else
  echo "No packages/models/src/tools directory"
fi
echo -e "\n=== definition of DesignSchema in models package ==="
rg -n "DesignSchema" -g "packages/models/**" || true
echo -e "\n=== packages/models/src/index.ts (first 200 lines) ==="
if [ -f packages/models/src/index.ts ]; then
  sed -n '1,200p' packages/models/src/index.ts
else
  echo "packages/models/src/index.ts not found"
fi
echo -e "\n=== packages/models/index.ts (first 200 lines) ==="
if [ -f packages/models/index.ts ]; then
  sed -n '1,200p' packages/models/index.ts
else
  echo "packages/models/index.ts not found"
fi
```
Length of output: 4454
Use package root exports instead of deep imports
```diff
-import { CLONE_WEBSITE_DESIGN_PROMPT } from '@onlook/ai/src/prompt/clone';
-import { DesignSchema } from '@onlook/models/tools';
+import { CLONE_WEBSITE_DESIGN_PROMPT } from '@onlook/ai';
+import { DesignSchema } from '@onlook/models';
```
Benefits: guards against internal restructuring and aligns with package export contracts.
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
import { CLONE_WEBSITE_DESIGN_PROMPT } from '@onlook/ai';
import { DesignSchema } from '@onlook/models';
```
🤖 Prompt for AI Agents
In apps/web/client/src/server/api/routers/code.ts around lines 11-12 the file
uses deep imports ('@onlook/ai/src/prompt/clone' and '@onlook/models/tools');
change these to use the package root exports (e.g., import
CLONE_WEBSITE_DESIGN_PROMPT from '@onlook/ai' and DesignSchema from
'@onlook/models') so consumers rely on the package export surface instead of
internal paths, and update any named vs default import shape to match the
package root exports.
```ts
// Scrape the website with screenshot to get visual content
const result = await app.scrapeUrl(input.url, {
    formats: ['html', 'screenshot@fullPage', 'markdown'],
    onlyMainContent: false,
    waitFor: 2000,
});
const imageAssetsResult = await app.scrapeUrl(input.url, {
    formats: ['markdown'],
    onlyMainContent: false,
    includeTags: ['img'],
    waitFor: 2000,
});
```
🛠️ Refactor suggestion
Add timeout configuration for Firecrawl operations.
The scraping operations use a fixed 2-second wait time which may not be sufficient for all websites. Consider making this configurable or adaptive.
```diff
 // Scrape the website with screenshot to get visual content
+const waitTime = Math.min(10000, 2000 + (input.url.length * 10)); // Adaptive wait based on URL complexity, max 10s
 const result = await app.scrapeUrl(input.url, {
     formats: ['html', 'screenshot@fullPage', 'markdown'],
     onlyMainContent: false,
-    waitFor: 2000,
+    waitFor: waitTime,
+    timeout: 30000, // Add overall timeout
 });
 const imageAssetsResult = await app.scrapeUrl(input.url, {
     formats: ['markdown'],
     onlyMainContent: false,
     includeTags: ['img'],
-    waitFor: 2000,
+    waitFor: waitTime,
+    timeout: 30000,
 });
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
// Scrape the website with screenshot to get visual content
const waitTime = Math.min(10000, 2000 + (input.url.length * 10)); // Adaptive wait based on URL complexity, max 10s
const result = await app.scrapeUrl(input.url, {
    formats: ['html', 'screenshot@fullPage', 'markdown'],
    onlyMainContent: false,
    waitFor: waitTime,
    timeout: 30000, // Add overall timeout
});
const imageAssetsResult = await app.scrapeUrl(input.url, {
    formats: ['markdown'],
    onlyMainContent: false,
    includeTags: ['img'],
    waitFor: waitTime,
    timeout: 30000,
});
```
🤖 Prompt for AI Agents
In apps/web/client/src/server/api/routers/code.ts around lines 170 to 181, the
two scrapeUrl calls hardcode waitFor: 2000 which is brittle; make the Firecrawl
timeout configurable and robust by accepting an optional timeout (e.g.,
timeoutMs) from the request input or reading a service-level config/env value,
validate and clamp it to a safe min/max, and pass that value into both scrapeUrl
calls; if no timeout provided fall back to a sensible default (e.g., 2000), and
consider supporting an adaptive strategy (e.g., retry with a larger timeout or
use a networkIdle option if available) so slow pages are handled without
indefinite waits.
```ts
let imageAssets: {
    url: string;
    title: string;
}[] = [];
if ('success' in imageAssetsResult && imageAssetsResult.success && imageAssetsResult.markdown) {
    const md = imageAssetsResult.markdown;
    const mdImgRegex = /!\[([^\]]*)\]\(([^)]+)\)/g;
    let match: RegExpExecArray | null;
    while ((match = mdImgRegex.exec(md)) !== null) {
        const alt = (match[1] || '').trim();
        const urlCandidate = match[2];
        const title = alt ? alt.replace(/\s+/g, '-') : '';
        if (!urlCandidate) continue;
        try {
            const absoluteUrl = new URL(urlCandidate, input.url).toString();
            imageAssets.push({ url: absoluteUrl, title });
        } catch {
            imageAssets.push({ url: urlCandidate, title });
        }
    }
} else if (result.html) {
    // Fallback: parse from HTML if markdown not available
    const imgTagRegex = /<img[^>]*>/gi;
    const srcRegex = /src=["']([^"']+)["']/i;
    const altRegex = /alt=["']([^"']*)["']/i;
    let tagMatch: RegExpExecArray | null;
    while ((tagMatch = imgTagRegex.exec(result.html)) !== null) {
        const tag = tagMatch[0];
        const srcMatch = srcRegex.exec(tag);
        if (!srcMatch) continue;
        const srcCandidate = srcMatch[1] ?? '';
        const altMatch = altRegex.exec(tag);
        const alt = (altMatch?.[1] ?? '').trim();
        const title = alt ? alt.replace(/\s+/g, '-') : '';
        try {
            const absoluteUrl = new URL(srcCandidate, input.url).toString();
            imageAssets.push({ url: absoluteUrl, title });
        } catch {
            imageAssets.push({ url: srcCandidate, title });
        }
    }
}

// Dedupe by URL
const byUrl = new Map<string, { url: string; title: string }>();
for (const asset of imageAssets) {
    if (!byUrl.has(asset.url)) {
        byUrl.set(asset.url, asset);
    }
}
imageAssets = Array.from(byUrl.values());
```
🛠️ Refactor suggestion
Improve asset extraction robustness and add validation.
The current implementation has several areas for improvement:
- No validation of image URLs (could be data URIs, malformed, etc.)
- Missing error handling for invalid URLs
- No size or type filtering for assets
```diff
 let imageAssets: {
     url: string;
     title: string;
 }[] = [];
+
+// Helper to validate and normalize image URLs
+const isValidImageUrl = (url: string): boolean => {
+    if (!url || url.startsWith('data:')) return false;
+    try {
+        const parsed = new URL(url);
+        return ['http:', 'https:'].includes(parsed.protocol);
+    } catch {
+        return false;
+    }
+};
+
 if ('success' in imageAssetsResult && imageAssetsResult.success && imageAssetsResult.markdown) {
     const md = imageAssetsResult.markdown;
     const mdImgRegex = /!\[([^\]]*)\]\(([^)]+)\)/g;
     let match: RegExpExecArray | null;
     while ((match = mdImgRegex.exec(md)) !== null) {
         const alt = (match[1] || '').trim();
         const urlCandidate = match[2];
         const title = alt ? alt.replace(/\s+/g, '-') : '';
         if (!urlCandidate) continue;
         try {
             const absoluteUrl = new URL(urlCandidate, input.url).toString();
-            imageAssets.push({ url: absoluteUrl, title });
+            if (isValidImageUrl(absoluteUrl)) {
+                imageAssets.push({ url: absoluteUrl, title });
+            }
         } catch {
-            imageAssets.push({ url: urlCandidate, title });
+            // Skip invalid URLs
         }
     }
 }
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
let imageAssets: {
    url: string;
    title: string;
}[] = [];

// Helper to validate and normalize image URLs
const isValidImageUrl = (url: string): boolean => {
    if (!url || url.startsWith('data:')) return false;
    try {
        const parsed = new URL(url);
        return ['http:', 'https:'].includes(parsed.protocol);
    } catch {
        return false;
    }
};

if ('success' in imageAssetsResult && imageAssetsResult.success && imageAssetsResult.markdown) {
    const md = imageAssetsResult.markdown;
    const mdImgRegex = /!\[([^\]]*)\]\(([^)]+)\)/g;
    let match: RegExpExecArray | null;
    while ((match = mdImgRegex.exec(md)) !== null) {
        const alt = (match[1] || '').trim();
        const urlCandidate = match[2];
        const title = alt ? alt.replace(/\s+/g, '-') : '';
        if (!urlCandidate) continue;
        try {
            const absoluteUrl = new URL(urlCandidate, input.url).toString();
            if (isValidImageUrl(absoluteUrl)) {
                imageAssets.push({ url: absoluteUrl, title });
            }
        } catch {
            // Skip invalid URLs
        }
    }
} else if (result.html) {
    // Fallback: parse from HTML if markdown not available
    const imgTagRegex = /<img[^>]*>/gi;
    const srcRegex = /src=["']([^"']+)["']/i;
    const altRegex = /alt=["']([^"']*)["']/i;
    let tagMatch: RegExpExecArray | null;
    while ((tagMatch = imgTagRegex.exec(result.html)) !== null) {
        const tag = tagMatch[0];
        const srcMatch = srcRegex.exec(tag);
        if (!srcMatch) continue;
        const srcCandidate = srcMatch[1] ?? '';
        const altMatch = altRegex.exec(tag);
        const alt = (altMatch?.[1] ?? '').trim();
        const title = alt ? alt.replace(/\s+/g, '-') : '';
        try {
            const absoluteUrl = new URL(srcCandidate, input.url).toString();
            imageAssets.push({ url: absoluteUrl, title });
        } catch {
            imageAssets.push({ url: srcCandidate, title });
        }
    }
}

// Dedupe by URL
const byUrl = new Map<string, { url: string; title: string }>();
for (const asset of imageAssets) {
    if (!byUrl.has(asset.url)) {
        byUrl.set(asset.url, asset);
    }
}
imageAssets = Array.from(byUrl.values());
```
🤖 Prompt for AI Agents
In apps/web/client/src/server/api/routers/code.ts around lines 187 to 240, the
image extraction block should validate and filter candidates before adding them:
only accept http(s) absolute URLs (reject data:, mailto:, javascript: etc.),
skip obviously malformed URLs by catching URL construction errors (already
present) and continue; perform a HEAD (or fetch with method: 'HEAD') to check
Content-Type starts with image/ and Content-Length (or streamed size) is below a
configured limit (e.g., 5MB) with a short timeout, and skip on any
network/timeout/error; also whitelist common image extensions as a cheap
pre-check before fetching; keep existing dedupe logic but apply it after
filtering; ensure all network calls are try/catch and do not throw to the
caller, logging or silently skipping invalid assets.
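A hedged sketch of the HEAD pre-check described in this prompt (the helper name, timeout, and 5MB cap are illustrative):

```ts
// Illustrative pre-check: confirm an asset looks like a reasonably sized image
// before downloading it. Assumes the server answers HEAD requests honestly;
// on any error or timeout the asset is treated as invalid and skipped.
async function looksLikeSmallImage(url: string, maxBytes = 5 * 1024 * 1024): Promise<boolean> {
    try {
        const res = await fetch(url, { method: 'HEAD', signal: AbortSignal.timeout(5000) });
        if (!res.ok) return false;
        const type = res.headers.get('content-type') ?? '';
        const length = Number(res.headers.get('content-length') ?? '0');
        return type.startsWith('image/') && length > 0 && length <= maxBytes;
    } catch {
        return false;
    }
}
```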
```ts
const { model, headers } = await initModel({
    provider: LLMProvider.ANTHROPIC,
    model: ANTHROPIC_MODELS.SONNET_4,
});

const { object } = await generateObject({
    model,
    headers,
    schema: DesignSchema,
    messages: [
        {
            role: 'system',
            content: CLONE_WEBSITE_DESIGN_PROMPT,
        },
        {
            role: 'user',
            content: `HTML: ${result.html}
            Markdown: ${result.markdown}
            Screenshot: ${result.screenshot}`,
        },
    ],
    maxOutputTokens: 10000,
});
```
🛠️ Refactor suggestion
Add error handling for AI model initialization and generation.
The design document generation lacks proper error handling and could fail silently.
```diff
+let designDocument: z.infer<typeof DesignSchema> | null = null;
+
+try {
 const { model, headers } = await initModel({
     provider: LLMProvider.ANTHROPIC,
     model: ANTHROPIC_MODELS.SONNET_4,
 });
 const { object } = await generateObject({
     model,
     headers,
     schema: DesignSchema,
     messages: [
         {
             role: 'system',
             content: CLONE_WEBSITE_DESIGN_PROMPT,
         },
         {
             role: 'user',
             content: `HTML: ${result.html}
             Markdown: ${result.markdown}
             Screenshot: ${result.screenshot}`,
         },
     ],
     maxOutputTokens: 10000,
 });
-const designDocument: z.infer<typeof DesignSchema> = object;
+designDocument = object;
+} catch (error) {
+    console.error('Failed to generate design document:', error);
+    // Continue without design document rather than failing entire operation
+}
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
let designDocument: z.infer<typeof DesignSchema> | null = null;

try {
    const { model, headers } = await initModel({
        provider: LLMProvider.ANTHROPIC,
        model: ANTHROPIC_MODELS.SONNET_4,
    });
    const { object } = await generateObject({
        model,
        headers,
        schema: DesignSchema,
        messages: [
            {
                role: 'system',
                content: CLONE_WEBSITE_DESIGN_PROMPT,
            },
            {
                role: 'user',
                content: `HTML: ${result.html}
                Markdown: ${result.markdown}
                Screenshot: ${result.screenshot}`,
            },
        ],
        maxOutputTokens: 10000,
    });
    designDocument = object;
} catch (error) {
    console.error('Failed to generate design document:', error);
    // Continue without design document rather than failing entire operation
}
```
🤖 Prompt for AI Agents
In apps/web/client/src/server/api/routers/code.ts around lines 242 to 264, the
calls to initModel and generateObject lack error handling and may fail silently;
wrap the initModel and generateObject calls in a try/catch (or separate
try/catches) so failures are caught, log the error with contextual information
(which call failed and relevant inputs), and propagate a clear failure to the
caller (e.g., throw a TRPC/HTTP error with a 500 and concise message or return
an error payload) instead of letting the function continue silently.
```ts
return {
    result: {
        markdown: result.markdown || '',
        html: result.html || '',
        designScreenshot: result.screenshot || '',
        designDocument: designDocument,
        assets: imageAssets,
    },
    error: null,
};
```
Add null checks for optional Firecrawl results.
The code assumes all fields from Firecrawl will be present, but they might be undefined.
```diff
 return {
     result: {
-        markdown: result.markdown || '',
-        html: result.html || '',
-        designScreenshot: result.screenshot || '',
+        markdown: result.markdown ?? '',
+        html: result.html ?? '',
+        designScreenshot: result.screenshot ?? '',
         designDocument: designDocument,
         assets: imageAssets,
     },
     error: null,
 };
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
return {
    result: {
        markdown: result.markdown ?? '',
        html: result.html ?? '',
        designScreenshot: result.screenshot ?? '',
        designDocument: designDocument,
        assets: imageAssets,
    },
    error: null,
};
```
🤖 Prompt for AI Agents
In apps/web/client/src/server/api/routers/code.ts around lines 268-277, the code
assumes all fields returned from Firecrawl exist; add null/undefined checks and
safe defaults when constructing the response so missing fields won't throw.
Coalesce result.markdown and result.html to empty strings if undefined, coalesce
result.screenshot to empty string, ensure designDocument is set to null or an
empty object if missing, and ensure imageAssets is an empty array if undefined;
update any typing if necessary so the response always has those safe defaults.
Description
Calls a clone website tool, which returns:

```
{
    markdownFile:
    htmlfile:
    a screenshot of the whole page
    a design doc (describes what each section on the page should look like)
    assets from the website
}
```
I found pretty good results using this method; it was able to clone the Netflix landing page:

It was also able to clone my personal portfolio pretty well.

I just wanted to put this out there as an option for cloning a site. I think it needs a bit more work in terms of general implementation, but other than that I think it works really well.
#2669
Type of Change
Testing
Screenshots (if applicable)
Additional Notes
Important
This PR adds a new feature for cloning websites, including a new tool, API handling, and schema definitions for processing and returning cloned website data.
- Adds `CLONE_WEBSITE_TOOL_NAME` and `CLONE_WEBSITE_TOOL_PARAMETERS` to `tool-call-simple.tsx`, `web.ts`, and `tools.ts`.
- Adds `handleCloneWebsiteTool()` in `web.ts` to process website cloning requests.
- Adds `cloneWebsite` mutation in `code.ts` to handle API requests for cloning websites.
- Adds `CloneWebsiteResultSchema` and `DesignSchema` in `index.ts` to define the structure of cloning results and design documents.
- Adds `CLONE_WEBSITE_DESIGN_PROMPT` in `clone.ts` for generating design documents from cloned websites.
- Updates `CREATE_NEW_PAGE_SYSTEM_PROMPT` in `create.ts` to include conditions for using the clone website tool.
- Updates `toolset.ts` to include `cloneWebsiteTool` in `BUILD_TOOL_SET`.
- Adds `cloneWebsiteTool` definition in `web.ts` with input schema and description.

This description was created by for ab716ae. You can customize this summary. It will automatically update as commits are pushed.