Additional Suggestions:
- The default text generation model is not updated to match the new default, and the provider detection logic doesn't handle new model formats like `meta/llama-4-scout`, `google/*`, or `moonshotai/*`, causing incorrect provider assignment.
📝 Patch Details
```diff
diff --git a/lib/workflow-codegen-sdk.ts b/lib/workflow-codegen-sdk.ts
index ce83d8f..7557b73 100644
--- a/lib/workflow-codegen-sdk.ts
+++ b/lib/workflow-codegen-sdk.ts
@@ -493,13 +493,26 @@ export function generateWorkflowSDKCode(
 function buildAITextParams(config: Record<string, unknown>): string[] {
   imports.add("import { generateText } from 'ai';");
-  const modelId = (config.aiModel as string) || "gpt-4o-mini";
-  const provider =
-    modelId.startsWith("gpt-") || modelId.startsWith("o1-")
-      ? "openai"
-      : "anthropic";
+  const modelId = (config.aiModel as string) || "meta/llama-4-scout";
+
+  // Determine the full model string with provider.
+  // If the model already contains a "/", it already has a provider prefix, so use as-is.
+  let modelString: string;
+  if (modelId.includes("/")) {
+    modelString = modelId;
+  } else {
+    // Infer provider from model name for models without provider prefix
+    const provider =
+      modelId.startsWith("gpt-") || modelId.startsWith("o1-")
+        ? "openai"
+        : modelId.startsWith("claude-")
+          ? "anthropic"
+          : "openai"; // default to openai
+    modelString = `${provider}/${modelId}`;
+  }
+
   return [
-    `model: "${provider}/${modelId}"`,
+    `model: "${modelString}"`,
     `prompt: \`${convertTemplateToJS((config.aiPrompt as string) || "")}\``,
     "apiKey: process.env.OPENAI_API_KEY!",
   ];
```
Analysis
Broken provider detection in code generation for text steps with multi-provider models
What fails: `buildAITextParams()` in `lib/workflow-codegen-sdk.ts` generates invalid model strings for models that already include a provider prefix, producing strings like `"anthropic/meta/llama-4-scout"` instead of `"meta/llama-4-scout"`.
How to reproduce:
- Create a workflow with a "Generate Text" action
- Use the default model (`meta/llama-4-scout`) or select any model with a provider prefix:
  - `meta/llama-*` (e.g., `meta/llama-4-scout`, `meta/llama-3.3-70b`)
  - `google/gemini-*` (e.g., `google/gemini-2.5-flash`)
  - `moonshotai/kimi-*` (e.g., `moonshotai/kimi-k2-0905`)
  - `openai/gpt-oss-*` (e.g., `openai/gpt-oss-120b`)
- Generate the workflow code (export or view generated code)
- The generated code will contain invalid model strings like `"anthropic/meta/llama-4-scout"`
Result: When the generated code runs with the AI SDK, it attempts to call the Anthropic provider with model name "meta/llama-4-scout", which is not a valid Anthropic model. This causes a runtime error since the AI SDK/AI Gateway cannot recognize the invalid provider/model combination.
Expected: Model strings should be generated in the format `"creator/model-name"` as expected by the AI SDK:
- Models with a provider prefix should pass through unchanged: `meta/llama-4-scout` → `meta/llama-4-scout`
- Models without a prefix should get the correct provider prepended based on naming: `gpt-4o` → `openai/gpt-4o`, `claude-*` → `anthropic/claude-*`
Root cause: The provider detection logic (lines 506-512) only checks for `gpt-*` and `o1-` prefixes to detect OpenAI models, defaulting all others to Anthropic. This fails for models that already have a provider prefix like `meta/llama-4-scout`, which don't start with `gpt-` and get incorrectly prefixed with `anthropic/`.
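The root cause can be reduced to a small standalone sketch (the `resolveModelOld` / `resolveModel` names are illustrative, not from the patch): the old logic unconditionally prepends a provider, while the fixed logic treats a `/` as evidence that the identifier is already `provider/model`.

```typescript
// Illustrative reduction of the bug and the fix; helper names are hypothetical.

// Old behavior: always prepends a provider, breaking pre-prefixed models.
function resolveModelOld(modelId: string): string {
  const provider =
    modelId.startsWith("gpt-") || modelId.startsWith("o1-")
      ? "openai"
      : "anthropic";
  return `${provider}/${modelId}`;
}

// Fixed behavior: a "/" means the provider prefix is already present.
function resolveModel(modelId: string): string {
  if (modelId.includes("/")) return modelId;
  const provider =
    modelId.startsWith("gpt-") || modelId.startsWith("o1-")
      ? "openai"
      : "anthropic";
  return `${provider}/${modelId}`;
}

console.log(resolveModelOld("meta/llama-4-scout")); // "anthropic/meta/llama-4-scout" (broken)
console.log(resolveModel("meta/llama-4-scout")); // "meta/llama-4-scout"
console.log(resolveModel("gpt-4o")); // "openai/gpt-4o"
```

The `includes("/")` check is what keeps already-qualified identifiers (and any future `provider/model` additions) from being double-prefixed without having to enumerate every provider.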
Affected model count: 7 broken model string patterns identified:
- Meta Llama models: `meta/llama-*`
- Google Gemini models: `google/gemini-*`
- Moonshot Kimi models: `moonshotai/kimi-*`
- OpenAI OSS models: `openai/gpt-oss-*`
- Any already-prefixed Anthropic models: `anthropic/claude-*`
- Explicitly provided full "provider/model" identifiers: `openai/gpt-4o`
📝 Patch Details
```diff
diff --git a/lib/workflow-codegen-sdk.ts b/lib/workflow-codegen-sdk.ts
index ce83d8f..19ed536 100644
--- a/lib/workflow-codegen-sdk.ts
+++ b/lib/workflow-codegen-sdk.ts
@@ -247,11 +247,13 @@ function _generateGenerateTextStepBody(
 ): string {
   imports.add("import { generateText, generateObject } from 'ai';");
   imports.add("import { z } from 'zod';");
-  const modelId = (config.aiModel as string) || "gpt-5";
-  const provider =
-    modelId.startsWith("gpt-") || modelId.startsWith("o1")
-      ? "openai"
-      : "anthropic";
+  const modelId = (config.aiModel as string) || "meta/llama-4-scout";
+  // If model already has a provider prefix (contains "/"), use it as-is
+  const model = modelId.includes("/")
+    ? modelId
+    : modelId.startsWith("gpt-") || modelId.startsWith("o1")
+      ? `openai/${modelId}`
+      : `anthropic/${modelId}`;
   const aiPrompt = (config.aiPrompt as string) || "";
   const convertedPrompt = convertTemplateToJS(aiPrompt);
@@ -279,7 +281,7 @@ function _generateGenerateTextStepBody(
   const zodSchema = z.object(schemaShape);
   const { object } = await generateObject({
-    model: '${provider}/${modelId}',
+    model: '${model}',
     prompt: finalPrompt,
     schema: zodSchema,
   });
@@ -291,7 +293,7 @@ function _generateGenerateTextStepBody(
 }
   const { text } = await generateText({
-    model: '${provider}/${modelId}',
+    model: '${model}',
     prompt: finalPrompt,
   });
@@ -493,13 +495,15 @@ export function generateWorkflowSDKCode(
 function buildAITextParams(config: Record<string, unknown>): string[] {
   imports.add("import { generateText } from 'ai';");
-  const modelId = (config.aiModel as string) || "gpt-4o-mini";
-  const provider =
-    modelId.startsWith("gpt-") || modelId.startsWith("o1-")
-      ? "openai"
-      : "anthropic";
+  const modelId = (config.aiModel as string) || "meta/llama-4-scout";
+  // If model already has a provider prefix (contains "/"), use it as-is
+  const model = modelId.includes("/")
+    ? modelId
+    : modelId.startsWith("gpt-") || modelId.startsWith("o1")
+      ? `openai/${modelId}`
+      : `anthropic/${modelId}`;
   return [
-    `model: "${provider}/${modelId}"`,
+    `model: "${model}"`,
     `prompt: \`${convertTemplateToJS((config.aiPrompt as string) || "")}\``,
     "apiKey: process.env.OPENAI_API_KEY!",
   ];
```
Analysis
Default model inconsistency and provider detection bug in buildAITextParams
What fails: The `buildAITextParams` function (`lib/workflow-codegen-sdk.ts` line 496) and related code use an incorrect default and cannot handle pre-prefixed model identifiers, causing generated code to contain malformed model references.
How to reproduce:
- When no `aiModel` is specified in a workflow configuration, `buildAITextParams` defaults to `"gpt-4o-mini"` instead of the codebase standard `"meta/llama-4-scout"` used everywhere else
- When a user selects `"meta/llama-4-scout"` from the model dropdown, the provider detection logic doesn't recognize it has a provider prefix, incorrectly generating `model: "anthropic/meta/llama-4-scout"`
- Similarly, models like `"anthropic/claude-opus-4.5"` become `model: "anthropic/anthropic/claude-opus-4.5"` (double-prefixed)
Result: Generated code uses non-existent or incorrect model identifiers, causing runtime failures when the workflow SDK generates code.
Expected:
- Default should match the codebase standard: `"meta/llama-4-scout"` (used in `components/workflow/config/action-config.tsx` line 438 and `lib/steps/generate-text.ts` line 90)
- Pre-prefixed models like `"meta/llama-4-scout"`, `"anthropic/claude-opus-4.5"`, and `"google/gemini-2.5-pro"` should be used as-is
- Models without prefixes like `"gpt-5"` and `"o1-preview"` should be prefixed with their provider
Fix applied:
- Changed default from `"gpt-4o-mini"` to `"meta/llama-4-scout"` in both `buildAITextParams` (line 496) and `_generateGenerateTextStepBody` (line 244)
- Updated provider detection logic to check whether a model already contains `/` (a provider prefix) and, if so, use it as-is
- For models without a prefix, correctly detect the provider: `gpt-` and `o1` prefixes → `openai`, otherwise → `anthropic`
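The applied fix can be exercised end-to-end with a minimal sketch (the `modelFor` helper mirrors the patched ternary; the name itself is hypothetical), covering the default, pass-through, and prefix-inference cases:

```typescript
// Illustrative reduction of the patched default-plus-detection logic.
function modelFor(config: Record<string, unknown>): string {
  const modelId = (config.aiModel as string) || "meta/llama-4-scout";
  // A "/" means the identifier is already "provider/model"; use it as-is.
  return modelId.includes("/")
    ? modelId
    : modelId.startsWith("gpt-") || modelId.startsWith("o1")
      ? `openai/${modelId}`
      : `anthropic/${modelId}`;
}

console.log(modelFor({})); // "meta/llama-4-scout" (new default)
console.log(modelFor({ aiModel: "anthropic/claude-opus-4.5" })); // unchanged, no double prefix
console.log(modelFor({ aiModel: "o1-preview" })); // "openai/o1-preview"
```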
📝 Patch Details
```diff
diff --git a/lib/workflow-codegen.ts b/lib/workflow-codegen.ts
index 6c7640d..3cc9a38 100644
--- a/lib/workflow-codegen.ts
+++ b/lib/workflow-codegen.ts
@@ -408,7 +408,7 @@ export function generateWorkflowCode(
   const config = node.data.config || {};
   const aiPrompt = (config.aiPrompt as string) || "Generate a summary";
-  const aiModel = (config.aiModel as string) || "gpt-5";
+  const aiModel = (config.aiModel as string) || "meta/llama-4-scout";
   const aiFormat = (config.aiFormat as string) || "text";
   const aiSchema = config.aiSchema as string | undefined;
```
Analysis
Inconsistent default AI model in workflow code generation
What fails: The `_generateGenerateTextStepBody()` function in `lib/workflow-codegen.ts` line 411 uses `"gpt-5"` as the default model when generating code, but the UI (`components/workflow/config/action-config.tsx` line 438), display layer (`components/workflow/nodes/action-node.tsx` line 292), and step execution function (`lib/steps/generate-text.ts` line 90) all default to `"meta/llama-4-scout"`.
How to reproduce:
- Create a new workflow with a "Generate Text" action node
- Do not explicitly select a model in the node configuration
- Generate the workflow code using the code generation function
- The generated code will specify `model: "gpt-5"` instead of `model: "meta/llama-4-scout"`
- When the workflow runs, it uses `"gpt-5"` despite the UI suggesting `"meta/llama-4-scout"` as the default
Result: Generated code uses `model: "gpt-5"` when the `aiModel` config is undefined, causing inconsistency between the UI default and the actual code generation default.
Expected: Generated code should use `model: "meta/llama-4-scout"` to match the default shown in the UI and used elsewhere in the codebase.
Fix: Changed line 411 in `lib/workflow-codegen.ts` from `"gpt-5"` to `"meta/llama-4-scout"` to align with the defaults in `action-config.tsx` (line 438), `action-node.tsx` (line 292), and `generate-text.ts` (line 90).
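One detail worth noting about the `|| "meta/llama-4-scout"` fallback pattern used throughout these patches (an observation, not part of the changes): `||` falls back on any falsy value, including the empty string, whereas `??` falls back only on `null`/`undefined`. For config read from a UI dropdown, that is usually the desired behavior, since an empty selection should still resolve to the default.

```typescript
// || treats "" as falsy and substitutes the default; ?? preserves "".
const emptySelection: string = "";

const withOr = emptySelection || "meta/llama-4-scout";
const withNullish = emptySelection ?? "meta/llama-4-scout";

console.log(withOr); // "meta/llama-4-scout"
console.log(withNullish); // "" (empty string preserved)
```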
* more models
* fixes
* Fix: The default text generation model is still `"gpt-5"` instead of the new default `"meta/llama-4-scout"`, creating inconsistency with the UI and other code generation paths.
* Fix: The default text generation model is not updated to match the new default, and the provider detection logic doesn't handle new model formats like `meta/llama-4-scout`, `google/*`, or `moonshotai/*`, causing incorrect provider assignment.
* Fix: The documentation example for the Generate Text action still shows the old default model `"gpt-5"` instead of the new default `"meta/llama-4-scout"`, potentially confusing users about which model will be used.
* fixes

Co-authored-by: Vercel <vercel[bot]@users.noreply.github.com>
Co-authored-by: vercel[bot] <35613825+vercel[bot]@users.noreply.github.com>