Merged
2 changes: 1 addition & 1 deletion components/anthropic/actions/chat/chat.mjs
@@ -3,7 +3,7 @@ import constants from "../common/constants.mjs";

export default {
name: "Chat",
version: "0.1.0",
version: "0.2.0",
key: "anthropic-chat",
description: "The Chat API. [See the documentation](https://docs.anthropic.com/claude/reference/messages_post)",
type: "action",
4 changes: 4 additions & 0 deletions components/anthropic/actions/common/constants.mjs
@@ -1,5 +1,9 @@
export default {
MESSAGE_MODELS: [
{
label: "Claude Opus 4.1",
value: "claude-opus-4-1-20250805",
},
{
label: "Claude Opus 4",
value: "claude-opus-4-20250514",
2 changes: 1 addition & 1 deletion components/anthropic/package.json
@@ -1,6 +1,6 @@
{
"name": "@pipedream/anthropic",
"version": "0.1.0",
"version": "0.2.0",
"description": "Pipedream Anthropic (Claude) Components",
"main": "anthropic.app.mjs",
"keywords": [
@@ -1,11 +1,12 @@
import openai from "../../openai.app.mjs";
import common from "../common/common.mjs";
import constants from "../../common/constants.mjs";
import { WEB_SEARCH_CHAT_MODELS } from "../../common/models.mjs";

export default {
...common,
name: "Chat using Web Search",
version: "0.0.6",
version: "0.0.7",
key: "openai-chat-using-web-search",
description: "Chat using the web search tool. [See the documentation](https://platform.openai.com/docs/guides/tools-web-search)",
type: "action",
@@ -15,11 +16,8 @@ export default {
type: "string",
label: "Model",
description: "Model used to generate the response",
default: "gpt-4o",
options: [
"gpt-4o",
"gpt-4o-mini",
],
default: WEB_SEARCH_CHAT_MODELS[0],
options: WEB_SEARCH_CHAT_MODELS,
},
Comment on lines +19 to 21
💡 Verification agent

🧩 Analysis chain

Optional: verify supported models for the web search tool

If OpenAI adjusts tool support per model, this list may drift. Consider verifying tool compatibility during CI.


🌐 Web query:

Which OpenAI models currently support the "web_search_preview" tool for the Responses API?

💡 Result:

The "web_search_preview" tool for the OpenAI Responses API is currently supported by the following models:

  • GPT-4o and GPT-4o-mini are both fully supported and officially documented as compatible with the "web_search_preview" tool when using the Responses API[3][5][7].
  • GPT-4.1 and GPT-4.1-mini are also supported, but with some limitations (such as context window size and excluding specific variants—see below)[1][9].

Key limitations and additional notes:

  • GPT-4.1-nano does not support the web_search_preview tool[1].
  • The tool is not available for o3-mini and, in some contexts, other smaller or non-primary variants[10].
  • The web_search functionality in the API provides responses with inline citations from web sources and can be paired with other tools for agentic applications[3][7].
  • Chat Completions API uses dedicated search models (gpt-4o-search-preview, gpt-4o-mini-search-preview), which have different support parameters—refer to individual model documentation for details[1][3][10].

Summary Table:

| Model name | Supports `web_search_preview` (Responses API) | Notes |
|---|---|---|
| gpt-4o | Yes | Full support, recommended |
| gpt-4o-mini | Yes | Full support |
| gpt-4.1 | Yes | Context window capped at 128,000 tokens |
| gpt-4.1-mini | Yes | Same context window cap |
| gpt-4.1-nano | No | Not supported |

For the most robust and up-to-date web search capability via the Responses API, use gpt-4o or gpt-4o-mini[3][5][7].


Ensure only supported models are listed for web_search_preview and automate compatibility checks

Based on the latest OpenAI docs, only the following models support the web_search_preview tool in the Responses API (excluding gpt-4.1-nano and other non-primary variants):

  • gpt-4o
  • gpt-4o-mini
  • gpt-4.1
  • gpt-4.1-mini

Please:

  • Update WEB_SEARCH_CHAT_MODELS in
    components/openai/actions/chat-using-web-search/chat-using-web-search.mjs
    to exactly these four supported model names.
  • Add a CI step (e.g., a simple script against https://api.openai.com/v1/tools) that verifies web_search_preview support hasn’t drifted whenever this list is changed.
🤖 Prompt for AI Agents
In components/openai/actions/chat-using-web-search/chat-using-web-search.mjs
around lines 19 to 21, update the WEB_SEARCH_CHAT_MODELS array to include only
the four supported models: gpt-4o, gpt-4o-mini, gpt-4.1, and gpt-4.1-mini,
removing any others like gpt-4.1-nano. Additionally, create a new CI script that
queries the OpenAI tools API endpoint to verify that these models still support
the web_search_preview tool, ensuring the list remains accurate over time.
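A drift guard along these lines can be sketched without relying on any particular OpenAI endpoint (the `/v1/tools` URL named above is the reviewer's suggestion and may not exist as written). The sketch below simply pins the exported list to a reviewed snapshot, so any change fails CI until the snapshot is deliberately updated; `REVIEWED_SNAPSHOT` and `checkModelListDrift` are illustrative names, not part of the component.

```javascript
// Minimal CI guard (hypothetical): fail the build when WEB_SEARCH_CHAT_MODELS
// diverges from the last snapshot that was checked against OpenAI's docs.
// In the real repo the list would be imported from common/models.mjs.
const WEB_SEARCH_CHAT_MODELS = ["gpt-4o", "gpt-4o-mini"];
const REVIEWED_SNAPSHOT = ["gpt-4o", "gpt-4o-mini"];

function checkModelListDrift(current, snapshot) {
  // Models the snapshot expects but the exported list no longer carries.
  const missing = snapshot.filter((m) => !current.includes(m));
  // Models added to the exported list without updating the snapshot.
  const added = current.filter((m) => !snapshot.includes(m));
  return { ok: missing.length === 0 && added.length === 0, missing, added };
}

const result = checkModelListDrift(WEB_SEARCH_CHAT_MODELS, REVIEWED_SNAPSHOT);
if (!result.ok) {
  throw new Error(`Model list drifted: +${result.added} -${result.missing}`);
}
```

This keeps the check fully offline; the snapshot update itself is the point where a human re-verifies tool support against the docs.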

input: {
type: "string",
5 changes: 3 additions & 2 deletions components/openai/actions/create-image/create-image.mjs
@@ -1,10 +1,11 @@
import openai from "../../openai.app.mjs";
import constants from "../../common/constants.mjs";
import fs from "fs";
import { IMAGE_MODELS } from "../../common/models.mjs";

export default {
name: "Create Image (Dall-E)",
version: "0.1.23",
version: "0.1.24",
key: "openai-create-image",
description: "Creates an image given a prompt returning a URL to the image. [See the documentation](https://platform.openai.com/docs/api-reference/images)",
type: "action",
@@ -14,7 +15,7 @@ export default {
label: "Model",
description: "Choose the DALL·E models to generate image(s) with.",
type: "string",
options: constants.IMAGE_MODELS,
options: IMAGE_MODELS,
reloadProps: true,
},
prompt: {
@@ -1,11 +1,11 @@
import openai from "../../openai.app.mjs";
import constants from "../../common/constants.mjs";
import { MODERATION_MODELS } from "../../common/models.mjs";

export default {
key: "openai-create-moderation",
name: "Create Moderation",
description: "Classifies if text is potentially harmful. [See the documentation](https://platform.openai.com/docs/api-reference/moderations/create)",
version: "0.0.9",
version: "0.0.10",
type: "action",
props: {
openai,
@@ -18,7 +18,7 @@ export default {
type: "string",
label: "Model",
description: "The model to use",
options: constants.MODERATION_MODELS,
options: MODERATION_MODELS,
},
},
async run({ $ }) {
4 changes: 2 additions & 2 deletions components/openai/actions/create-thread/create-thread.mjs
@@ -6,7 +6,7 @@ export default {
key: "openai-create-thread",
name: "Create Thread (Assistants)",
description: "Creates a thread with optional messages and metadata, and optionally runs the thread using the specified assistant. [See the documentation](https://platform.openai.com/docs/api-reference/threads/createThread)",
version: "0.0.15",
version: "0.0.16",
type: "action",
props: {
openai,
@@ -88,7 +88,7 @@ export default {
}));
},
async getAssistantModelPropOptions() {
const models = (await this.openai.models({})).filter(({ id }) => (id.includes("gpt-3.5-turbo") || id.includes("gpt-4-turbo")) && (id !== "gpt-3.5-turbo-0301"));
const models = await this.openai.getAssistantsModels({});
return models.map(({ id }) => id);
},
},
@@ -1,12 +1,13 @@
import { getFileStreamAndMetadata } from "@pipedream/platform";
import openai from "../../openai.app.mjs";
import FormData from "form-data";
import { TRANSCRIPTION_MODELS } from "../../common/models.mjs";

export default {
key: "openai-create-transcription",
name: "Create Transcription",
description: "Transcribes audio into the input language. [See the documentation](https://platform.openai.com/docs/api-reference/audio/createTranscription)",
version: "0.3.1",
version: "0.3.2",
type: "action",
props: {
openai,
@@ -20,11 +21,7 @@ export default {
type: "string",
label: "Model",
description: "ID of the model to use",
options: [
"gpt-4o-transcribe",
"gpt-4o-mini-transcribe",
"whisper-1",
],
options: TRANSCRIPTION_MODELS,
},
include: {
type: "string[]",
50 changes: 0 additions & 50 deletions components/openai/common/constants.mjs
@@ -1,49 +1,3 @@
const FINE_TUNING_MODEL_OPTIONS = [
{
label: "gpt-3.5-turbo-1106 (recommended)",
value: "gpt-3.5-turbo-1106",
},
{
label: "gpt-3.5-turbo-0613",
value: "gpt-3.5-turbo-0613",
},
{
label: "babbage-002",
value: "babbage-002",
},
{
label: "davinci-002",
value: "davinci-002",
},
{
label: "gpt-4-0613 (experimental — eligible users will be presented with an option to request access in the fine-tuning UI)",
value: "gpt-4-0613",
},
{
label: "gpt-4.1-mini-2025-04-14",
value: "gpt-4.1-mini-2025-04-14",
},
{
label: "gpt-4.1-2025-04-14",
value: "gpt-4.1-2025-04-14",
},
];

const TTS_MODELS = [
"tts-1",
"tts-1-hd",
];

const IMAGE_MODELS = [
"dall-e-2",
"dall-e-3",
];

const MODERATION_MODELS = [
"text-moderation-stable",
"text-moderation-latest",
];

const AUDIO_RESPONSE_FORMATS = [
"mp3",
"opus",
@@ -175,10 +129,6 @@ const BATCH_ENDPOINTS = [
];

export default {
FINE_TUNING_MODEL_OPTIONS,
TTS_MODELS,
IMAGE_MODELS,
MODERATION_MODELS,
AUDIO_RESPONSE_FORMATS,
CHAT_RESPONSE_FORMAT,
IMAGE_RESPONSE_FORMATS,
74 changes: 74 additions & 0 deletions components/openai/common/models.mjs
@@ -0,0 +1,74 @@
export const FINE_TUNING_MODEL_OPTIONS = [
{
label: "gpt-3.5-turbo-1106 (recommended)",
value: "gpt-3.5-turbo-1106",
},
{
label: "gpt-3.5-turbo-0613",
value: "gpt-3.5-turbo-0613",
},
{
label: "babbage-002",
value: "babbage-002",
},
{
label: "davinci-002",
value: "davinci-002",
},
{
label: "gpt-4-0613 (experimental — eligible users will be presented with an option to request access in the fine-tuning UI)",
value: "gpt-4-0613",
},
{
label: "gpt-4.1-mini-2025-04-14",
value: "gpt-4.1-mini-2025-04-14",
},
{
label: "gpt-4.1-2025-04-14",
value: "gpt-4.1-2025-04-14",
},
];

export const TTS_MODELS = [
"tts-1",
"tts-1-hd",
];

export const IMAGE_MODELS = [
"dall-e-2",
"dall-e-3",
];

export const MODERATION_MODELS = [
"text-moderation-stable",
"text-moderation-latest",
];

// Models that are eligible for Assistants API usage when selecting an Assistant model
export const ASSISTANTS_MODEL_INCLUDE_SUBSTRINGS = [
"gpt-3.5-turbo",
"gpt-4-turbo",
"gpt-4o",
"gpt-4.1",
"gpt-5",
"gpt-5-mini",
"gpt-5-nano",
];
Comment on lines +48 to +56
🛠️ Refactor suggestion

Prevent accidental inclusion of non-Assistants-safe models

Filtering by substrings can pull in models like transcribe/realtime variants (e.g., "gpt-4o-transcribe") into the Assistants model options. Add explicit entries to the exclusion list to be safe.

 export const ASSISTANTS_MODEL_EXCLUDED = [
   "gpt-3.5-turbo-0301",
+  // Not valid for Assistants model selection
+  "gpt-4o-transcribe",
+  "gpt-4o-mini-transcribe",
+  // Common non-chat families you may want to exclude proactively if surfaced elsewhere
+  "gpt-4o-realtime-preview", // example; adjust to actual IDs if present
 ];

If filtering logic exists in openai.app.mjs, consider enforcing a predicate that excludes IDs ending with "-transcribe" and "-realtime" instead of maintaining a manual list. Want me to propose that change?

Also applies to: 59-61

🤖 Prompt for AI Agents
In components/openai/common/models.mjs around lines 48 to 56 and 59 to 61, the
current substring filtering for Assistants models may unintentionally include
models with suffixes like "-transcribe" or "-realtime". To fix this, update the
filtering logic to explicitly exclude model IDs ending with these suffixes by
adding a predicate that filters out any model strings ending with "-transcribe"
or "-realtime" instead of relying solely on substring inclusion. This will
prevent accidental inclusion of non-Assistants-safe models.
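The suffix-based predicate described above can be sketched as follows. This is an illustration of the review's suggestion, not the component's actual filtering code; the include/exclude constants are copied from this diff, while the excluded id patterns are assumptions that would need checking against real model IDs.

```javascript
// Sketch: combine the substring include list with id-pattern exclusions,
// instead of maintaining a manual entry for every non-Assistants variant.
const INCLUDE_SUBSTRINGS = ["gpt-3.5-turbo", "gpt-4-turbo", "gpt-4o", "gpt-4.1", "gpt-5"];
const EXCLUDED_IDS = ["gpt-3.5-turbo-0301"];
const EXCLUDED_PATTERNS = ["-transcribe", "-realtime"]; // assumed non-Assistants families

function isAssistantsModel(id) {
  return INCLUDE_SUBSTRINGS.some((s) => id.includes(s))
    && !EXCLUDED_IDS.includes(id)
    // `includes` (not `endsWith`) also catches ids like "gpt-4o-realtime-preview"
    && !EXCLUDED_PATTERNS.some((p) => id.includes(p));
}
```

With a predicate like this, new transcribe/realtime variants are excluded automatically as OpenAI ships them, at the cost of relying on a naming convention rather than an explicit list.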


// Exact model IDs to exclude from Assistants model options
export const ASSISTANTS_MODEL_EXCLUDED = [
"gpt-3.5-turbo-0301",
];

// Supported models in the "Chat using Web Search" action
export const WEB_SEARCH_CHAT_MODELS = [
"gpt-4o",
"gpt-4o-mini",
];

// Supported models in the "Create Transcription" action
export const TRANSCRIPTION_MODELS = [
"gpt-4o-transcribe",
"gpt-4o-mini-transcribe",
"whisper-1",
];
20 changes: 14 additions & 6 deletions components/openai/openai.app.mjs
@@ -1,5 +1,10 @@
import { axios } from "@pipedream/platform";
import constants from "./common/constants.mjs";
import {
ASSISTANTS_MODEL_EXCLUDED,
ASSISTANTS_MODEL_INCLUDE_SUBSTRINGS,
FINE_TUNING_MODEL_OPTIONS, TTS_MODELS,
} from "./common/models.mjs";

export default {
type: "app",
@@ -21,7 +26,7 @@
async options() {
return (await this.getChatCompletionModels({})).map((model) => model.id);
},
default: "gpt-4o-mini",
default: "gpt-5-mini",
},
embeddingsModelId: {
label: "Model",
@@ -268,13 +273,13 @@ export default {
type: "string",
label: "Model",
description: "One of the available [TTS models](https://platform.openai.com/docs/models/tts). `tts-1` is optimized for speed, while `tts-1-hd` is optimized for quality.",
options: constants.TTS_MODELS,
options: TTS_MODELS,
},
fineTuningModel: {
type: "string",
label: "Fine Tuning Model",
description: "The name of the model to fine-tune. [See the supported models](https://platform.openai.com/docs/guides/fine-tuning/what-models-can-be-fine-tuned).",
options: constants.FINE_TUNING_MODEL_OPTIONS,
options: FINE_TUNING_MODEL_OPTIONS,
},
input: {
type: "string",
@@ -346,13 +351,13 @@ export default {
$,
path: "/models",
});
return models.sort((a, b) => a?.id.localeCompare(b?.id));
return models.sort((a, b) => b?.created - a?.created);
},
Comment on lines +354 to 355
🛠️ Refactor suggestion

Handle undefined creation timestamps in sort comparator

Some model objects may not expose created. A NaN comparator can lead to unstable ordering. Add a fallback and a stable tie-breaker:

-      return models.sort((a, b) => b?.created - a?.created);
+      return models.sort((a, b) => {
+        const bc = typeof b?.created === "number" ? b.created : -Infinity;
+        const ac = typeof a?.created === "number" ? a.created : -Infinity;
+        if (bc !== ac) return bc - ac; // newer first
+        return String(a?.id).localeCompare(String(b?.id)); // stable tie-break
+      });
🤖 Prompt for AI Agents
In components/openai/openai.app.mjs around lines 354 to 355, the sort comparator
does not handle cases where the created timestamp is undefined, which can cause
NaN results and unstable sorting. Update the comparator to provide a fallback
value (e.g., 0) for undefined created timestamps and add a stable tie-breaker
such as comparing model IDs or names to ensure consistent ordering.
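As a standalone illustration of that fix, the hardened comparator can be run against sample data. The model objects and timestamps below are made up for the demo; real objects would come from the `GET /v1/models` response.

```javascript
// Comparator sketch: newest first, missing `created` values sort last,
// id as a deterministic tie-breaker.
function byNewestFirst(a, b) {
  const bc = typeof b?.created === "number" ? b.created : -Infinity;
  const ac = typeof a?.created === "number" ? a.created : -Infinity;
  if (bc !== ac) return bc - ac; // newer first
  return String(a?.id).localeCompare(String(b?.id)); // stable tie-break
}

// Hypothetical sample data, including one object with no `created` field.
const models = [
  { id: "gpt-4o", created: 1715367049 },
  { id: "legacy-model" }, // no created timestamp: sorts last
  { id: "gpt-4.1", created: 1744316542 },
];
const sorted = [...models].sort(byNewestFirst);
```

Using `-Infinity` as the fallback keeps the subtraction numeric (no `NaN`), so `Array.prototype.sort` sees a consistent comparator even for partially populated objects.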

async getChatCompletionModels({ $ }) {
const models = await this.models({
$,
});
return models.filter((model) => model.id.match(/4o|o[1-9]|4\.1/gi));
return models.filter((model) => model.id.match(/4o|o[1-9]|4\.1|gpt-5/gi));
},
async getCompletionModels({ $ }) {
const models = await this.models({
@@ -380,7 +385,10 @@
const models = await this.models({
$,
});
return models.filter(({ id }) => (id.includes("gpt-3.5-turbo") || id.includes("gpt-4-turbo") || id.includes("gpt-4o") || id.includes("gpt-4.1")) && (id !== "gpt-3.5-turbo-0301"));
return models.filter(({ id }) => (
ASSISTANTS_MODEL_INCLUDE_SUBSTRINGS.some((substring) => id.includes(substring))
&& !ASSISTANTS_MODEL_EXCLUDED.includes(id)
));
},
async _makeCompletion({
path, ...args