2 changes: 1 addition & 1 deletion .husky/pre-push
@@ -1,7 +1,7 @@
#!/bin/sh
. "$(dirname "$0")/_/husky.sh"

npx pnpm install -r
npx pnpm install --frozen-lockfile
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify if -r flag is needed and confirm intended behavior change.

This change replaces what was presumably -r (recursive/workspace install) with --frozen-lockfile (strict lockfile validation). These flags serve different purposes and the change has significant implications:

  1. Monorepo concern: If this is a monorepo with workspaces, removing -r means dependencies won't be installed across all workspace projects, potentially breaking the build.

  2. Behavior change: --frozen-lockfile fails immediately if package.json and pnpm-lock.yaml are out of sync, which changes the workflow from "install → check → friendly reminder" to "fail fast." This makes lines 6-9 somewhat redundant since an out-of-sync lockfile would have already failed at line 4.

  3. Developer experience: Developers who legitimately add dependencies will now encounter a cryptic pnpm error instead of the helpful "please commit the file" message.

  4. PR scope: This change appears unrelated to the stated PR objectives (adding 302.AI components).

Run this script to verify if this is a monorepo requiring -r:


🏁 Script executed:

#!/bin/bash
# Check if this is a pnpm workspace/monorepo

# Look for pnpm-workspace.yaml
if [ -f "pnpm-workspace.yaml" ]; then
  echo "=== pnpm-workspace.yaml exists ==="
  cat pnpm-workspace.yaml
fi

# Check root package.json for workspace patterns
if [ -f "package.json" ]; then
  echo -e "\n=== Checking package.json for workspace indicators ==="
  jq -r 'if .workspaces then "workspaces field: " + (.workspaces | tostring) else "No workspaces field" end' package.json 2>/dev/null || echo "Could not parse package.json"
fi

# Count packages in components directory (common workspace pattern)
echo -e "\n=== Checking for multiple package.json files (workspace indicator) ==="
fd -t f '^package\.json$' | head -20

Length of output: 907


Further checks covered pnpm usage patterns in the codebase, the pnpm configuration, and the full pre-push file.

Based on my verification:

Restore the -r flag: Change npx pnpm install --frozen-lockfile to npx pnpm install -r --frozen-lockfile

This change removes the -r (recursive) flag from the pnpm install command, which is critical for a monorepo. Inside a workspace, pnpm install installs all dependencies in all the projects, but this behavior can be disabled by setting the recursive-install setting to false. By removing the explicit -r flag, the pre-push hook now relies on implicit configuration defaults rather than explicit control.

The -r flag runs a command in every project of a workspace when used with install. The proper command for a monorepo should be npx pnpm install -r --frozen-lockfile to ensure both recursive installation across all workspace packages AND strict lockfile validation. These flags serve complementary purposes—not competing ones—and should both be present.

🤖 Prompt for AI Agents
.husky/pre-push around line 4: the pnpm install command dropped the recursive
flag, which can break monorepo installs when recursive-install is disabled;
change the command back to include -r so it reads npx pnpm install -r
--frozen-lockfile to force recursive installation across workspace packages
while still enforcing the frozen lockfile.


if ! (git diff HEAD --quiet pnpm-lock.yaml); then
echo "modified pnpm-lock.yaml - please commit the file"
79 changes: 79 additions & 0 deletions components/_302_ai/302_ai_components.mdc
@@ -0,0 +1,79 @@
---
alwaysApply: true
---
Write the actions for the 302.AI app.

General guidelines:
- Avoid manual truthiness checks for optional parameters, as the axios utility exported by @pipedream/platform automatically excludes undefined values.
- Make sure to wrap API calls with the _makeRequest method. All action methods need to call _makeRequest; no action method should call axios directly.
- Don't use the "number" datatype in props; use "string" instead.
- Parameters that are boolean in the API documentation should be defined as type "string" with options, e.g. `default: "0", options: ["0", "1"],`, setting the default to "0" or "1" as appropriate per the API documentation. Then, in the component code, parse the value into a boolean, as in `include_performers: parseInt(this.includePerformers) === 1,` (see the sketch after these guidelines).
- In the package.json file you'll essentially only set the dependencies property and the version prop to "0.1.0"; you can refer to the following example, created for the Leonardo AI components:
```
{
  "name": "@pipedream/leonardo_ai",
  "version": "0.1.0",
  "description": "Pipedream Leonardo AI Components",
  "main": "leonardo_ai.app.mjs",
  "keywords": [
    "pipedream",
    "leonardo_ai"
  ],
  "homepage": "https://pipedream.com/apps/leonardo_ai",
  "author": "Pipedream <[email protected]> (https://pipedream.com/)",
  "publishConfig": {
    "access": "public"
  },
  "dependencies": {
    "@pipedream/platform": "^3.1.0",
    "form-data": "^4.0.4"
  }
}
```
--As a reference for an example API call to the 302.AI API, including authorization, you can use the one we feature at Pipedream. Remember: wrap API calls inside _makeRequest, and action methods need to call _makeRequest.
```
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    _302_ai: {
      type: "app",
      app: "_302_ai",
    }
  },
  async run({steps, $}) {
    return await axios($, {
      url: `https://api.302.ai/v1/models`,
      headers: {
        Authorization: `Bearer ${this._302_ai.$auth.api_key}`,
      },
    })
  },
})
```
--The 302.AI API is "OpenAI compliant", so the 302.AI API URLs https://api.302.ai and https://api.302.ai/v1 are analogous to OpenAI's https://api.openai.com and https://api.openai.com/v1 respectively.
--For each prop that is used more than once, create a propDefinition for it and reuse it in component code, changing descriptions and defaults as needed.
--For the model prop, which is likely used in several components, use async options based on the List Models endpoint documented at https://doc.302.ai/147522038e0
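
To make the boolean-as-string prop guideline above concrete, here is a minimal sketch. The `includePerformers` / `include_performers` names come from the guideline itself, and `_makeRequest` is the wrapper defined in `_302_ai.app.mjs` further down in this diff; the file path, action key, prop label, and the `/search` endpoint are hypothetical, for illustration only.
```
// Hypothetical sketch, e.g. components/_302_ai/actions/example-search/example-search.mjs
import app from "../../_302_ai.app.mjs";

export default {
  key: "_302_ai-example-search",
  name: "Example Search",
  description: "Hypothetical action illustrating the boolean-as-string prop pattern. [See documentation](https://doc.302.ai/147522039e0)",
  version: "0.1.0",
  type: "action",
  props: {
    app,
    // Boolean in the API docs, exposed as a string prop with "0"/"1" options
    includePerformers: {
      type: "string",
      label: "Include Performers",
      description: "Whether to include performers in the results",
      options: [
        "0",
        "1",
      ],
      default: "0",
    },
  },
  async run({ $ }) {
    // Parse the string prop back into a boolean and go through the app's
    // _makeRequest wrapper rather than calling axios directly
    return this.app._makeRequest({
      $,
      method: "POST",
      path: "/search", // hypothetical endpoint, for illustration only
      data: {
        include_performers: parseInt(this.includePerformers) === 1,
      },
    });
  },
};
```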
chat-with-302-ai
Prompt: Send a message to the 302.AI Chat API. Ideal for dynamic conversations, contextual assistance, and creative generation.
-When you come up with the description for this component, follow the format: <component description>. [See documentation](https://doc.302.ai/147522039e0)

chat-using-functions
Prompt: Enable your 302.AI model to invoke user-defined functions. Useful for conditional logic, workflow orchestration, and tool invocation within conversations.
-When you come up with the description for this component, follow the format: <component description>. [See documentation](https://doc.302.ai/211560247e0)

summarize-text
Prompt: Summarize long-form text into concise, readable output using the 302.AI Chat API. Great for reports, content digestion, and executive briefs.
-When you come up with the description for this component, follow the format: <component description>. [See documentation](https://doc.302.ai/147522039e0)

classify-items
Prompt: Classify input items into predefined categories using 302.AI models. Perfect for tagging, segmentation, and automated organization.
-When you come up with the description for this component, follow the format: <component description>. [See documentation](https://doc.302.ai/147522039e0)

create-embeddings
Prompt: Generate vector embeddings from text using the 302.AI Embeddings API. Useful for semantic search, clustering, and vector store indexing.
-When you come up with the description for this component, follow the format: <component description>. [See documentation](https://doc.302.ai/147522048e0)

create-completion
Prompt: Send a prompt to the legacy endpoint on 302.AI to generate text using models like . Recommended for backward-compatible flows.
-When you come up with the description for this component, follow the format: <component description>. [See documentation](https://doc.302.ai/147522039e0)
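
For orientation, here is a minimal sketch of what the chat-with-302-ai action might look like when built against the app file shown below in this diff. Only the `chatCompletionModelId` propDefinition and the `createChatCompletion` method come from that app file; the file path, action key, and the `userMessage` prop are assumptions, not the PR's actual implementation.
```
// Hypothetical sketch, e.g. components/_302_ai/actions/chat-with-302-ai/chat-with-302-ai.mjs
import app from "../../_302_ai.app.mjs";

export default {
  key: "_302_ai-chat-with-302-ai",
  name: "Chat",
  description: "Send a message to the 302.AI Chat API. [See documentation](https://doc.302.ai/147522039e0)",
  version: "0.1.0",
  type: "action",
  props: {
    app,
    modelId: {
      propDefinition: [
        app,
        "chatCompletionModelId",
      ],
    },
    userMessage: {
      type: "string",
      label: "User Message",
      description: "The message to send to the model",
    },
  },
  async run({ $ }) {
    // createChatCompletion wraps _makeRequest and surfaces the assistant
    // message as generated_message at the top level of the response
    const response = await this.app.createChatCompletion({
      $,
      data: {
        model: this.modelId,
        messages: [
          {
            role: "user",
            content: this.userMessage,
          },
        ],
      },
    });
    $.export("$summary", `Received a response from model ${this.modelId}`);
    return response;
  },
};
```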

120 changes: 115 additions & 5 deletions components/_302_ai/_302_ai.app.mjs
@@ -1,11 +1,121 @@
import { axios } from "@pipedream/platform";

export default {
  type: "app",
  app: "_302_ai",
  propDefinitions: {},
  propDefinitions: {
    modelId: {
      type: "string",
      label: "Model",
      description: "The ID of the model to use",
      async options() {
        const models = await this.listModels();
        return models.map((model) => ({
          label: model.id,
          value: model.id,
        }));
      },
    },
    chatCompletionModelId: {
      type: "string",
      label: "Model",
      description: "The ID of the model to use for chat completions",
      async options() {
        const models = await this.listModels();
        // Filter for chat models (similar to OpenAI)
        return models
          .filter((model) => model.id.match(/gpt|claude|gemini|llama|mistral|deepseek/gi))
          .map((model) => ({
            label: model.id,
            value: model.id,
          }));
      },
    },
    embeddingsModelId: {
      type: "string",
      label: "Model",
      description: "The ID of the embeddings model to use",
      async options() {
        const models = await this.listModels();
        // Filter for embedding models
        return models
          .filter((model) => model.id.match(/embedding/gi))
          .map((model) => ({
            label: model.id,
            value: model.id,
          }));
      },
    },
  },
  methods: {
    // this.$auth contains connected account data
    authKeys() {
      console.log(Object.keys(this.$auth));
    _apiKey() {
      return this.$auth.api_key;
    },
    _baseApiUrl() {
      return "https://api.302.ai/v1";
    },
    _makeRequest({
      $ = this,
      path,
      ...args
    } = {}) {
      return axios($, {
        ...args,
        url: `${this._baseApiUrl()}${path}`,
        headers: {
          ...args.headers,
          "Authorization": `Bearer ${this._apiKey()}`,
          "Content-Type": "application/json",
        },
      });
    },
    async listModels({ $ } = {}) {
      const { data: models } = await this._makeRequest({
        $,
        path: "/models",
      });
      return models || [];
    },
    async _makeCompletion({
      path, ...args
    }) {
      const data = await this._makeRequest({
        path,
        method: "POST",
        ...args,
      });

      // For completions, return the text of the first choice at the top-level
      let generated_text;
      if (path === "/completions") {
        const { choices } = data;
        generated_text = choices?.[0]?.text;
      }
      // For chat completions, return the assistant message at the top-level
      let generated_message;
      if (path === "/chat/completions") {
        const { choices } = data;
        generated_message = choices?.[0]?.message;
      }

      return {
        generated_text,
        generated_message,
        ...data,
      };
    },
    createChatCompletion(args = {}) {
      return this._makeCompletion({
        path: "/chat/completions",
        ...args,
      });
    },
    createEmbeddings(args = {}) {
      return this._makeRequest({
        path: "/embeddings",
        method: "POST",
        ...args,
      });
    },
  },
};
};
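
As a usage note for the methods above, here is a hedged sketch of a create-embeddings action calling createEmbeddings and relying on the platform's axios wrapper to drop undefined values, as the guidelines state, so no manual truthiness checks are needed. The embeddingsModelId propDefinition and createEmbeddings method come from the app file above; the file path, action key, and the optional encoding_format parameter (assumed from the OpenAI-compliant API shape) are assumptions for illustration only.
```
// Hypothetical sketch, e.g. components/_302_ai/actions/create-embeddings/create-embeddings.mjs
import app from "../../_302_ai.app.mjs";

export default {
  key: "_302_ai-create-embeddings",
  name: "Create Embeddings",
  description: "Generate vector embeddings from text using the 302.AI Embeddings API. [See documentation](https://doc.302.ai/147522048e0)",
  version: "0.1.0",
  type: "action",
  props: {
    app,
    modelId: {
      propDefinition: [
        app,
        "embeddingsModelId",
      ],
    },
    input: {
      type: "string",
      label: "Input",
      description: "The text to generate embeddings for",
    },
    encodingFormat: {
      type: "string",
      label: "Encoding Format",
      description: "Optional format of the returned embeddings (assumed OpenAI-style parameter)",
      optional: true,
    },
  },
  async run({ $ }) {
    const response = await this.app.createEmbeddings({
      $,
      data: {
        model: this.modelId,
        input: this.input,
        // If left empty this is undefined and is excluded from the request body
        // by the platform axios wrapper, so no manual truthiness check is needed
        encoding_format: this.encodingFormat,
      },
    });
    $.export("$summary", "Generated embeddings successfully");
    return response;
  },
};
```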