8 changes: 4 additions & 4 deletions integrations/llms/ai21.mdx
@@ -30,13 +30,13 @@ print(response.choices[0].message.content)
```

```js Javascript icon="square-js"
import Portkey from 'portkey-ai'

// 1. Install: npm install portkey-ai
// 2. Add @ai21 provider in model catalog
// 3. Use it:

const portkey = new Portkey({
apiKey: "PORTKEY_API_KEY"
})

@@ -48,7 +48,7 @@ const response = await portkey.chat.completions.create({
console.log(response.choices[0].message.content)
```

```python OpenAI Py icon="python"
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL

@@ -69,7 +69,7 @@ response = client.chat.completions.create(
print(response.choices[0].message.content)
```

```js OpenAI JS icon="square-js"
import OpenAI from "openai"
import { PORTKEY_GATEWAY_URL } from "portkey-ai"

4 changes: 2 additions & 2 deletions integrations/llms/anthropic.mdx
@@ -63,7 +63,7 @@ const response = await portkey.chat.completions.create({
console.log(response.choices[0].message.content)
```

```python OpenAI Py icon="python"
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL

@@ -85,7 +85,7 @@ response = client.chat.completions.create(
print(response.choices[0].message.content)
```

```js OpenAI JS icon="square-js"
import OpenAI from "openai"
import { PORTKEY_GATEWAY_URL } from "portkey-ai"

4 changes: 2 additions & 2 deletions integrations/llms/anyscale-llama2-mistral-zephyr.mdx
@@ -45,7 +45,7 @@ const response = await portkey.chat.completions.create({
console.log(response.choices[0].message.content)
```

```python OpenAI Py icon="python"
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL

@@ -66,7 +66,7 @@ response = client.chat.completions.create(
print(response.choices[0].message.content)
```

```js OpenAI JS icon="square-js"
import OpenAI from "openai"
import { PORTKEY_GATEWAY_URL } from "portkey-ai"

202 changes: 124 additions & 78 deletions integrations/llms/cerebras.mdx
@@ -1,107 +1,153 @@
---
title: "Cerebras"
description: "Integrate Cerebras models with Portkey's AI Gateway"
---

Portkey provides a robust and secure gateway to integrate various Large Language Models (LLMs) into your applications, including the models hosted on the [Cerebras Inference API](https://cerebras.ai/inference).

<Note>
Provider Slug: `cerebras`
</Note>

With Portkey, you get features like a fast AI gateway, observability, prompt management, and more, while securely managing your Cerebras API key through the [Model Catalog](/product/model-catalog).

## Quick Start

Get Cerebras working in 3 steps:

<CodeGroup>
```python Python icon="python"
from portkey_ai import Portkey

# 1. Install: pip install portkey-ai
# 2. Add @cerebras provider in model catalog
# 3. Use it:

portkey = Portkey(api_key="PORTKEY_API_KEY")

response = portkey.chat.completions.create(
    model="@cerebras/llama3.1-8b",
    messages=[{"role": "user", "content": "Say this is a test"}]
)

print(response.choices[0].message.content)
```

```js Javascript icon="square-js"
import Portkey from 'portkey-ai'

// 1. Install: npm install portkey-ai
// 2. Add @cerebras provider in model catalog
// 3. Use it:

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY" // defaults to process.env["PORTKEY_API_KEY"]
})

const response = await portkey.chat.completions.create({
  model: "@cerebras/llama3.1-8b",
  messages: [{ role: "user", content: "Say this is a test" }]
})

console.log(response.choices[0].message.content)
```

```python OpenAI Py icon="python"
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL

# 1. Install: pip install openai portkey-ai
# 2. Add @cerebras provider in model catalog
# 3. Use it:

client = OpenAI(
    api_key="PORTKEY_API_KEY",  # Portkey API key
    base_url=PORTKEY_GATEWAY_URL
)

response = client.chat.completions.create(
    model="@cerebras/llama3.1-8b",
    messages=[{"role": "user", "content": "Say this is a test"}]
)

print(response.choices[0].message.content)
```

```js OpenAI JS icon="square-js"
import OpenAI from "openai"
import { PORTKEY_GATEWAY_URL } from "portkey-ai"

// 1. Install: npm install openai portkey-ai
// 2. Add @cerebras provider in model catalog
// 3. Use it:

const client = new OpenAI({
  apiKey: "PORTKEY_API_KEY", // Portkey API key
  baseURL: PORTKEY_GATEWAY_URL
})

const response = await client.chat.completions.create({
  model: "@cerebras/llama3.1-8b",
  messages: [{ role: "user", content: "Say this is a test" }]
})

console.log(response.choices[0].message.content)
```

```sh cURL icon="square-terminal"
# 1. Add @cerebras provider in model catalog
# 2. Use it:

curl https://api.portkey.ai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
-d '{
"model": "@cerebras/llama3.1-8b",
"messages": [
{ "role": "user", "content": "Say this is a test" }
]
}'
```
</CodeGroup>

<Note>
**Tip:** You can also set `provider="@cerebras"` in `Portkey()` and use just `model="llama3.1-8b"` in the request.
</Note>
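The `@provider/model` slug used throughout these examples can be illustrated with a small, hypothetical helper (not part of the Portkey SDK) that splits it into its provider and model parts:

```python
def split_model_slug(slug: str) -> tuple[str, str]:
    """Split a slug like "@cerebras/llama3.1-8b" into ("cerebras", "llama3.1-8b").

    Illustrative only: the gateway does this routing internally.
    """
    if not slug.startswith("@") or "/" not in slug:
        raise ValueError(f"expected '@provider/model', got {slug!r}")
    provider, _, model = slug[1:].partition("/")
    return provider, model

print(split_model_slug("@cerebras/llama3.1-8b"))  # ('cerebras', 'llama3.1-8b')
```

This is why setting `provider="@cerebras"` on the client lets you pass just `model="llama3.1-8b"`: the provider half of the slug is already known.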

## Add Provider in Model Catalog

1. Go to [**Model Catalog → Add Provider**](https://app.portkey.ai/model-catalog/providers)
2. Select **Cerebras**
3. Choose existing credentials, or create new ones by entering your [Cerebras API key](https://cerebras.ai/inference)
4. Name your provider (e.g., `cerebras-prod`)

<Card title="Complete Setup Guide →" href="/product/model-catalog">
See all setup options, code examples, and detailed instructions
</Card>

---

## Supported Models

Cerebras hosts Llama models such as `llama3.1-8b` and `llama3.1-70b`.

<Card title="Cerebras Models" icon="list" href="https://inference-docs.cerebras.ai/introduction">
View all available models and documentation
</Card>

## Next Steps
<CardGroup cols={2}>
<Card title="Add Metadata" icon="tags" href="/product/observability/metadata">
Add metadata to your Cerebras requests
</Card>
<Card title="Gateway Configs" icon="gear" href="/product/ai-gateway/configs">
Add gateway configs to your Cerebras requests
</Card>
<Card title="Tracing" icon="chart-line" href="/product/observability/traces">
Trace your Cerebras requests
</Card>
<Card title="Fallbacks" icon="arrow-rotate-left" href="/product/ai-gateway/fallbacks">
Set up a fallback from OpenAI to Cerebras
</Card>
</CardGroup>

For complete SDK documentation:

<Card title="SDK Reference" icon="code" href="/api-reference/sdk/list">
Complete Portkey SDK documentation
</Card>
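The fallback pattern linked above is driven by a gateway config attached to the client. As a rough sketch only — verify the exact field names against Portkey's fallback and config documentation — a config that tries OpenAI first and falls back to Cerebras might look like:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "@openai", "override_params": { "model": "gpt-4o" } },
    { "provider": "@cerebras", "override_params": { "model": "llama3.1-8b" } }
  ]
}
```

The gateway walks the `targets` list in order, returning the first successful response.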