1 change: 1 addition & 0 deletions .env.example
Original file line number Diff line number Diff line change
@@ -6,3 +6,4 @@ FIREWORKS_API_KEY=
FIRECRAWL_API_KEY=
GROQ_API_KEY=
TOGETHER_API_KEY=
AIML_API_KEY=
9 changes: 9 additions & 0 deletions README.md
@@ -7,6 +7,8 @@ Example code and guides for building with [E2B SDK](https://github.com/e2b-dev/e

Read more about E2B on the [E2B website](https://e2b.dev) and the official [E2B documentation](https://e2b.dev/docs).

E2B works with any LLM that supports tool use. You can connect providers like OpenAI, Anthropic, Mistral, Groq, or custom OpenAI-compatible APIs such as the [AI/ML API](https://aimlapi.com/app/?utm_source=e2b&utm_medium=github&utm_campaign=integration).
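Because these providers expose the same `/chat/completions` request shape, switching between them is mostly a matter of pointing the request at a different base URL. A minimal sketch of building such a request body (the model id and helper name are illustrative):

```typescript
// Sketch: any OpenAI-compatible provider accepts this /chat/completions
// request shape; only the base URL and model id change per provider.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string }

function buildChatRequest(model: string, system: string, user: string) {
  const messages: ChatMessage[] = [
    { role: 'system', content: system },
    { role: 'user', content: user },
  ]
  return { model, messages }
}

const body = buildChatRequest('openai/gpt-4o', 'You are a helpful assistant.', 'Say hello.')
console.log(body.messages.length) // 2
```

The same body can then be POSTed to `<base-url>/chat/completions` with an `Authorization: Bearer <key>` header, regardless of provider.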

## Examples

**Hello World guide**
@@ -112,6 +114,13 @@ Read more about E2B on the [E2B website](https://e2b.dev) and the official [E2B
<td><a href="https://github.com/e2b-dev/e2b-cookbook/tree/main/examples/watsonx-ai-code-interpreter-python">Python</a></td>
<td><a href="https://github.com/e2b-dev/e2b-cookbook/tree/main/examples/watsonx-ai-code-interpreter-js">TypeScript</a></td>
</tr>
<tr>
<td>AI/ML API</td>
<td>300+ models</td>
<td>OpenAI-compatible API</td>
<td><a href="https://github.com/e2b-dev/e2b-cookbook/tree/main/examples/aimlapi-python">Python</a></td>
<td><a href="https://github.com/e2b-dev/e2b-cookbook/tree/main/examples/aimlapi-js">TypeScript</a></td>
</tr>
</tbody>
</table>

3 changes: 3 additions & 0 deletions examples/aimlapi-js/.env.template
@@ -0,0 +1,3 @@
# API keys for running the example
AIML_API_KEY=your_aimlapi_api_key_here
E2B_API_KEY=your_e2b_api_key_here
4 changes: 4 additions & 0 deletions examples/aimlapi-js/.gitignore
@@ -0,0 +1,4 @@
.env
node_modules
.eslintcache
.eslintrc.json
67 changes: 67 additions & 0 deletions examples/aimlapi-js/README.md
@@ -0,0 +1,67 @@
# AI Code Execution with AI/ML API and E2B

This example demonstrates how to run LLM-generated Python code using the [AI/ML API](https://aimlapi.com/app/?utm_source=e2b&utm_medium=github&utm_campaign=integration) and [E2B Code Interpreter SDK](https://e2b.dev).

The example uploads a CSV file, asks an AI/ML API model to generate Python analysis code, and executes that code inside a sandboxed environment using E2B.
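A key step in this flow is pulling the Python block out of the model's Markdown reply before execution. A minimal sketch of that extraction (the helper name is illustrative; the full example adds further fallbacks):

```typescript
// Sketch: extract the first ```python fenced block from a Markdown reply.
const FENCE = '`'.repeat(3) // literal triple backtick, built without breaking this Markdown file

function extractPython(markdown: string): string | null {
  const re = new RegExp(FENCE + 'python\\s*([\\s\\S]*?)' + FENCE, 'i')
  const match = re.exec(markdown)
  return match ? match[1].trim() : null
}

const reply = `Here you go:\n${FENCE}python\nprint(2 + 2)\n${FENCE}`
console.log(extractPython(reply)) // print(2 + 2)
```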

---

## 🔧 Setup

### 1. Install dependencies

```bash
npm install
```

### 2. Setup environment

Create `.env` file:

```bash
cp .env.template .env
```

Then set your keys:

* `AIML_API_KEY`: Get it at [https://aimlapi.com/app/keys](https://aimlapi.com/app/keys/?utm_source=e2b&utm_medium=github&utm_campaign=integration)
* `E2B_API_KEY`: Get it at [https://e2b.dev/docs/getting-started/api-key](https://e2b.dev/docs/getting-started/api-key)

### 3. Run

```bash
npm run start
```

You will see:

* The dataset uploaded to the sandbox
* The prompt sent to the model
* Code generated and executed in the cloud
* A result saved as image_1.png

![image](image_1.png)

---

## 🤖 Models

This example defaults to **`openai/gpt-5-chat-latest`** via AI/ML API.
You can switch to any OpenAI-compatible model available on AI/ML API (see the Models Directory in their docs).

**Examples:**

* `openai/gpt-5-chat-latest`
* `openai/gpt-4o`
* `google/gemini-1.5-flash`
* `deepseek/deepseek-chat`
* and many more

> Ensure your API key has access to the chosen model.
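One light-weight way to make the model configurable is an environment-variable override. A sketch, assuming a hypothetical `AIML_MODEL` variable (not part of this example):

```typescript
// Sketch: resolve the model id from the environment, falling back to the
// example's default. AIML_MODEL is a hypothetical variable name.
function resolveModel(env: Record<string, string | undefined>): string {
  return env.AIML_MODEL ?? 'openai/gpt-5-chat-latest'
}

console.log(resolveModel({})) // openai/gpt-5-chat-latest
console.log(resolveModel({ AIML_MODEL: 'openai/gpt-4o' })) // openai/gpt-4o
```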

---

## 🧠 Learn more

* [AI/ML API Documentation](https://docs.aimlapi.com/?utm_source=e2b&utm_medium=github&utm_campaign=integration)
* [E2B Code Interpreter Docs](https://e2b.dev/docs)
197 changes: 197 additions & 0 deletions examples/aimlapi-js/aimlapi.ts
@@ -0,0 +1,197 @@
import fs from 'node:fs'
import path from 'node:path'
import { Sandbox, Result } from '@e2b/code-interpreter'
import * as dotenv from 'dotenv'
import OpenAI from 'openai'

dotenv.config()

const AIML_API_KEY = process.env.AIML_API_KEY || ''
const E2B_API_KEY = process.env.E2B_API_KEY || '' // required by the E2B SDK

if (!AIML_API_KEY || !E2B_API_KEY) {
  console.error('Missing API key(s). Please set AIML_API_KEY and E2B_API_KEY in your .env file.')
  process.exit(1)
}

const openai = new OpenAI({
  apiKey: AIML_API_KEY,
  baseURL: 'https://api.aimlapi.com/v1',
})

const MODEL_ID = 'openai/gpt-5-chat-latest'

// ---------- Prompts ----------
const SYSTEM_STRAWBERRY = `
You are a helpful assistant that can execute Python code in a Jupyter notebook.
Only respond with the code to be executed and nothing else.
Respond with a Python code block in Markdown (\`\`\`python ... \`\`\`).
`

const PROMPT_STRAWBERRY = "Calculate how many r's are in the word 'strawberry'"

const SYSTEM_LINEAR = `
You're a Python data scientist. You are given tasks to complete and you run Python code to solve them.
Information about the CSV dataset:
- It's in the \`/home/user/data.csv\` file
- The CSV file uses "," as the delimiter
- It contains statistical country-level data
Rules:
- ALWAYS FORMAT YOUR RESPONSE IN MARKDOWN
- RESPOND ONLY WITH PYTHON CODE INSIDE \`\`\`python ... \`\`\` BLOCKS
- You can use matplotlib/seaborn/pandas/numpy/etc.
- Code is executed in a secure Jupyter-like environment with internet access and preinstalled packages
`

const PROMPT_LINEAR =
  'Plot a linear regression of "GDP per capita (current US$)" vs "Life expectancy at birth, total (years)" from the dataset. Drop rows with missing values.'

// ---------- Helpers ----------
function extractPythonCode(markdown: string): string | null {
  if (!markdown) return null

  // 1) ```python ... ```
  const rePython = /```python\s*([\s\S]*?)```/i
  const m1 = rePython.exec(markdown)
  if (m1 && m1[1]) return m1[1].trim()

  // 2) ``` ... ```
  const reAnyFence = /```\s*([\s\S]*?)```/
  const m2 = reAnyFence.exec(markdown)
  if (m2 && m2[1]) return m2[1].trim()

  // 3) Fallback: treat the whole string as code if it looks Python-ish
  if (/import |def |print\(|len\(/.test(markdown)) return markdown.trim()

  return null
}

async function requestCode(systemPrompt: string, userPrompt: string): Promise<string> {
  let response
  try {
    response = await openai.chat.completions.create(
      {
        model: MODEL_ID,
        messages: [
          { role: 'system', content: systemPrompt },
          { role: 'user', content: userPrompt },
        ],
      },
      {
        headers: {
          'HTTP-Referer': 'https://github.com/e2b-dev/e2b-cookbook',
          'X-Title': 'e2b-cookbook:aimlapi-js',
        },
      }
    )
  } catch (err) {
    throw new Error(`LLM request failed: ${String(err)}`)
  }

  const content = response?.choices?.[0]?.message?.content
  if (content == null) {
    throw new Error('Model returned null/empty content (possibly filtered). Try adjusting the prompt.')
  }

  const code = extractPythonCode(content)
  if (!code) {
    // Show what the model returned for easier debugging
    console.error('LLM response content:\n', content)
    throw new Error('No Python code block found in LLM response.')
  }
  return code
}

async function runCodeInSandbox(code: string): Promise<{ results: Result[]; text: string; png?: Buffer }> {
  const sandbox = await Sandbox.create()
  try {
    const exec = await sandbox.runCode(code)
    if (exec.error) throw new Error(exec.error.value)

    const results = exec.results ?? []
    const first = (results[0] ?? {}) as any

    const text = String(first?.text ?? '').trim()

    let png: Buffer | undefined
    if (first?.png) {
      try {
        png = Buffer.from(first.png, 'base64')
      } catch {
        // Ignore malformed base64 and continue without an image
      }
    }

    return { results, text, png }
  } finally {
    await sandbox.kill()
  }
}

async function uploadDatasetIfExists(sandbox: Sandbox, localPath = './data.csv', targetName = 'data.csv') {
  const p = path.resolve(localPath)
  if (!fs.existsSync(p)) return false
  const buf = fs.readFileSync(p)
  await sandbox.files.write(targetName, buf)
  return true
}

// ---------- Tests ----------
export async function testStrawberry(): Promise<string> {
  const code = await requestCode(SYSTEM_STRAWBERRY, PROMPT_STRAWBERRY)
  const { text } = await runCodeInSandbox(code)
  if (!text.includes('3')) {
    throw new Error(`Expected '3' in output, got: ${JSON.stringify(text)}`)
  }
  return text
}

export async function testLinearRegression(imageOut = 'image_1.png'): Promise<string> {
  const code = await requestCode(SYSTEM_LINEAR, PROMPT_LINEAR)

  const sandbox = await Sandbox.create()
  try {
    const uploaded = await uploadDatasetIfExists(sandbox, './data.csv', 'data.csv')
    if (!uploaded) {
      console.warn('⚠️ data.csv not found next to aimlapi.ts; running anyway, the code may fail if it expects the file.')
    }

    const exec = await sandbox.runCode(code)
    if (exec.error) throw new Error(exec.error.value)

    const first = (exec.results?.[0] ?? {}) as any
    const text = String(first?.text ?? '').trim()

    if (first?.png) {
      try {
        const png = Buffer.from(first.png, 'base64')
        fs.writeFileSync(imageOut, png)
        console.log(`✅ Image saved as ${imageOut}`)
      } catch (e) {
        console.warn('⚠️ Could not save image:', e)
      }
    } else {
      console.warn('⚠️ No image result returned.')
    }

    return text
  } finally {
    await sandbox.kill()
  }
}

// ---------- Entry ----------
async function main() {
  console.log('=== Strawberry test ===')
  const s = await testStrawberry()
  console.log(PROMPT_STRAWBERRY, '->', s)

  console.log('\n=== Linear regression test ===')
  const l = await testLinearRegression()
  console.log('Linear regression output:\n', l)
}

if (require.main === module) {
  main().catch((e) => {
    console.error('❌ Error:', e)
    process.exit(1)
  })
}