302 ai new components #18829 (Closed)
sergio-eliot-rodriguez wants to merge 3 commits into PipedreamHQ:master from sergio-eliot-rodriguez:302_ai-momps
New rules file (79 lines) for generating the 302.AI components:

---
alwaysApply: true
---

Write the actions for the 302.AI app.
General guidelines:
- Avoid manual truthiness checks for optional parameters; the @pipedream/platform utilities (@pipedream/axios) automatically exclude undefined values.
- Wrap every API call with the _makeRequest method. All action methods must call _makeRequest; no action method should call axios directly.
- Don't use the "number" datatype in props; use "string" instead.
- Define parameters that the API documentation marks as boolean as type "string" with `default: "0", options: ["0", "1"],`, setting the default to "0" or "1" as appropriate per the API documentation. Then, in the component code, parse the value into a boolean variable, as in `include_performers: parseInt(this.includePerformers) === 1,`.
- In the package.json file you'll essentially only set the dependencies property; set the version prop to "0.1.0". You can refer to the following example, created for the Leonardo AI components:
```
{
  "name": "@pipedream/leonardo_ai",
  "version": "0.1.0",
  "description": "Pipedream Leonardo AI Components",
  "main": "leonardo_ai.app.mjs",
  "keywords": [
    "pipedream",
    "leonardo_ai"
  ],
  "homepage": "https://pipedream.com/apps/leonardo_ai",
  "author": "Pipedream <[email protected]> (https://pipedream.com/)",
  "publishConfig": {
    "access": "public"
  },
  "dependencies": {
    "@pipedream/platform": "^3.1.0",
    "form-data": "^4.0.4"
  }
}
```
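The boolean-as-string guideline above can be sketched as follows; the prop name and parsing helper are illustrative assumptions, not code from this PR:

```javascript
// Hypothetical prop following the boolean-as-string convention: the API's
// boolean parameter is exposed as a "string" prop with "0"/"1" options.
const includePerformers = {
  type: "string",
  label: "Include Performers",
  description: "Set to 1 to include performers in the response",
  default: "0",
  options: ["0", "1"],
};

// In the component's run() method, parse the string back to a boolean
// before building the API params:
function toBooleanParam(value) {
  return parseInt(value) === 1;
}

console.log(toBooleanParam(includePerformers.default)); // false
console.log(toBooleanParam("1")); // true
```

Because @pipedream/axios drops undefined values, the parsed boolean can be passed directly without an extra truthiness check.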
- As a reference for an example API call to the 302.AI API, including authorization, you can use the one featured at Pipedream. Remember: wrap API calls inside _makeRequest, and action methods need to call _makeRequest.
```
import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    _302_ai: {
      type: "app",
      app: "_302_ai",
    },
  },
  async run({ steps, $ }) {
    return await axios($, {
      url: `https://api.302.ai/v1/models`,
      headers: {
        Authorization: `Bearer ${this._302_ai.$auth.api_key}`,
      },
    })
  },
})
```
- The 302.AI API is OpenAI-compatible: the 302.AI API URLs https://api.302.ai and https://api.302.ai/v1 are analogous to OpenAI's https://api.openai.com and https://api.openai.com/v1, respectively.
- For each prop that is used more than once, create a propDefinition and reuse it in component code, changing the description and defaults as needed.
- For the model prop, which is likely used in several components, use async options based on the List Models endpoint documented at https://doc.302.ai/147522038e0.

chat-with-302-ai
Prompt: Send a message to the 302.AI Chat API. Ideal for dynamic conversations, contextual assistance, and creative generation.
- When you write the description for this component, follow the format: <component description>. [See documentation](https://doc.302.ai/147522039e0)

chat-using-functions
Prompt: Enable your 302.AI model to invoke user-defined functions. Useful for conditional logic, workflow orchestration, and tool invocation within conversations.
- When you write the description for this component, follow the format: <component description>. [See documentation](https://doc.302.ai/211560247e0)

summarize-text
Prompt: Summarize long-form text into concise, readable output using the 302.AI Chat API. Great for reports, content digestion, and executive briefs.
- When you write the description for this component, follow the format: <component description>. [See documentation](https://doc.302.ai/147522039e0)

classify-items
Prompt: Classify input items into predefined categories using 302.AI models. Perfect for tagging, segmentation, and automated organization.
- When you write the description for this component, follow the format: <component description>. [See documentation](https://doc.302.ai/147522039e0)

create-embeddings
Prompt: Generate vector embeddings from text using the 302.AI Embeddings API. Useful for semantic search, clustering, and vector store indexing.
- When you write the description for this component, follow the format: <component description>. [See documentation](https://doc.302.ai/147522048e0)

create-completion
Prompt: Send a prompt to the legacy endpoint on 302.AI to generate text using models like . Recommended for backward-compatible flows.
- When you write the description for this component, follow the format: <component description>. [See documentation](https://doc.302.ai/147522039e0)
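To illustrate the wrap-every-call rule from the guidelines, here is a runnable sketch with the HTTP layer stubbed out. The stub only echoes the request it would send; the method names mirror the app file, but this is not the real network client, and the model name is a placeholder:

```javascript
// Stub app: every action-facing method funnels through _makeRequest,
// never calling an HTTP client directly. The stub returns a description
// of the request instead of performing a real call.
const app = {
  _baseApiUrl() {
    return "https://api.302.ai/v1";
  },
  _makeRequest({ path, method = "GET", ...args } = {}) {
    // A real implementation would delegate to @pipedream/platform's axios here.
    return { url: `${this._baseApiUrl()}${path}`, method, ...args };
  },
  createChatCompletion(args = {}) {
    return this._makeRequest({ path: "/chat/completions", method: "POST", ...args });
  },
};

// An action method calls the app wrapper, not axios:
const request = app.createChatCompletion({
  data: { model: "some-model", messages: [{ role: "user", content: "Hi" }] },
});
console.log(request.url);    // https://api.302.ai/v1/chat/completions
console.log(request.method); // POST
```

Centralizing the base URL and auth header in one wrapper keeps each action method down to a path, a verb, and a payload.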
Updated 302.AI app file (diff hunk @@ -1,11 +1,121 @@), shown here with its final contents:

```
import { axios } from "@pipedream/platform";

export default {
  type: "app",
  app: "_302_ai",
  propDefinitions: {
    modelId: {
      type: "string",
      label: "Model",
      description: "The ID of the model to use",
      async options() {
        const models = await this.listModels();
        return models.map((model) => ({
          label: model.id,
          value: model.id,
        }));
      },
    },
    chatCompletionModelId: {
      type: "string",
      label: "Model",
      description: "The ID of the model to use for chat completions",
      async options() {
        const models = await this.listModels();
        // Filter for chat models (similar to OpenAI)
        return models
          .filter((model) => model.id.match(/gpt|claude|gemini|llama|mistral|deepseek/gi))
          .map((model) => ({
            label: model.id,
            value: model.id,
          }));
      },
    },
    embeddingsModelId: {
      type: "string",
      label: "Model",
      description: "The ID of the embeddings model to use",
      async options() {
        const models = await this.listModels();
        // Filter for embedding models
        return models
          .filter((model) => model.id.match(/embedding/gi))
          .map((model) => ({
            label: model.id,
            value: model.id,
          }));
      },
    },
  },
  methods: {
    _apiKey() {
      return this.$auth.api_key;
    },
    _baseApiUrl() {
      return "https://api.302.ai/v1";
    },
    _makeRequest({
      $ = this,
      path,
      ...args
    } = {}) {
      return axios($, {
        ...args,
        url: `${this._baseApiUrl()}${path}`,
        headers: {
          ...args.headers,
          "Authorization": `Bearer ${this._apiKey()}`,
          "Content-Type": "application/json",
        },
      });
    },
    async listModels({ $ } = {}) {
      const { data: models } = await this._makeRequest({
        $,
        path: "/models",
      });
      return models || [];
    },
    async _makeCompletion({
      path, ...args
    }) {
      const data = await this._makeRequest({
        path,
        method: "POST",
        ...args,
      });

      // For completions, return the text of the first choice at the top-level
      let generated_text;
      if (path === "/completions") {
        const { choices } = data;
        generated_text = choices?.[0]?.text;
      }
      // For chat completions, return the assistant message at the top-level
      let generated_message;
      if (path === "/chat/completions") {
        const { choices } = data;
        generated_message = choices?.[0]?.message;
      }

      return {
        generated_text,
        generated_message,
        ...data,
      };
    },
    createChatCompletion(args = {}) {
      return this._makeCompletion({
        path: "/chat/completions",
        ...args,
      });
    },
    createEmbeddings(args = {}) {
      return this._makeRequest({
        path: "/embeddings",
        method: "POST",
        ...args,
      });
    },
  },
};
```
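The `_makeCompletion` helper copies the first choice to the top level of its result. Below is a self-contained check of that post-processing, with `_makeRequest` stubbed to return a canned chat response; the payload shape is an assumption mirroring OpenAI's chat-completion format, which the 302.AI API claims to follow:

```javascript
// Same post-processing logic as _makeCompletion, with the network call
// replaced by a stub that returns a canned chat-completion payload.
const app = {
  async _makeRequest() {
    return {
      choices: [
        { message: { role: "assistant", content: "Hello!" } },
      ],
    };
  },
  async _makeCompletion({ path, ...args }) {
    const data = await this._makeRequest({ path, method: "POST", ...args });
    // For completions, surface the text of the first choice at the top level
    let generated_text;
    if (path === "/completions") {
      generated_text = data.choices?.[0]?.text;
    }
    // For chat completions, surface the assistant message at the top level
    let generated_message;
    if (path === "/chat/completions") {
      generated_message = data.choices?.[0]?.message;
    }
    return { generated_text, generated_message, ...data };
  },
};

app._makeCompletion({ path: "/chat/completions" }).then((result) => {
  console.log(result.generated_message.content); // Hello!
  console.log(result.generated_text);            // undefined
});
```

Spreading `...data` after the two generated fields preserves the raw response alongside the convenience fields.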
Review comment (automated analysis):

Verify whether the -r flag is needed, and confirm the intended behavior change. This change replaces what was presumably -r (recursive/workspace install) with --frozen-lockfile (strict lockfile validation). These flags serve different purposes, and the change has significant implications:

- Monorepo concern: if this is a monorepo with workspaces, removing -r means dependencies won't be installed across all workspace projects, potentially breaking the build.
- Behavior change: --frozen-lockfile fails immediately if package.json and pnpm-lock.yaml are out of sync, which changes the workflow from "install → check → friendly reminder" to "fail fast." This makes lines 6-9 of the hook somewhat redundant, since an out-of-sync lockfile would have already failed at line 4.
- Developer experience: developers who legitimately add dependencies will now encounter a cryptic pnpm error instead of the helpful "please commit the file" message.
- PR scope: this change appears unrelated to the stated PR objectives (adding 302.AI components).

Restore the -r flag: change npx pnpm install --frozen-lockfile to npx pnpm install -r --frozen-lockfile.

This change removes the -r (recursive) flag from the pnpm install command, which is critical for a monorepo. Inside a workspace, pnpm install installs all dependencies in all the projects, but this behavior can be disabled by setting the recursive-install setting to false. By removing the explicit -r flag, the pre-push hook now relies on implicit configuration defaults rather than explicit control.

The -r flag runs a command in every project of a workspace when used with install. The proper command for a monorepo should be npx pnpm install -r --frozen-lockfile, ensuring both recursive installation across all workspace packages and strict lockfile validation. These flags serve complementary purposes, not competing ones, and both should be present.
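If the recommendation is adopted, the hook line would look like the fragment below; the file path and surrounding hook contents are assumptions, not taken from this PR:

```shell
#!/bin/sh
# Hypothetical pre-push hook fragment: -r installs dependencies in every
# workspace package; --frozen-lockfile aborts if pnpm-lock.yaml is out of
# sync with package.json instead of silently updating it.
npx pnpm install -r --frozen-lockfile
```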