Commit 5e0019c

Merge pull request #324 from Portkey-AI/new-changes-all
Prompt security, groq speech support, workers AI image gen support
2 parents a636795 + 2ee1561 commit 5e0019c

File tree

6 files changed: +374, -2 lines changed


docs.json

Lines changed: 1 addition & 0 deletions

@@ -108,6 +108,7 @@
       "product/guardrails/lasso",
       "product/guardrails/aporia",
       "product/guardrails/pillar",
+      "product/guardrails/prompt-security",
       "product/guardrails/pangea",
       "product/guardrails/acuvity",
       "product/guardrails/mistral",

images/guardrails/prompt-secuirty.png

2.28 KB

integrations/llms/groq.mdx

Lines changed: 125 additions & 0 deletions

@@ -214,6 +214,131 @@ curl -X POST "https://api.portkey.ai/v1/chat/completions" \

### Groq Speech to Text (Whisper)

Groq's Audio API converts speech to text using Whisper models. It offers transcription in the original language and translation to English, supporting multiple file formats and languages with high accuracy.

<CodeGroup>
```python Python
audio_file = open("/path/to/file.mp3", "rb")

# Transcription
transcription = portkey.audio.transcriptions.create(
    model="whisper-large-v3",
    file=audio_file
)
print(transcription.text)

# Translation
translation = portkey.audio.translations.create(
    model="whisper-large-v3",
    file=audio_file
)
print(translation.text)
```

```javascript Node.js
import fs from "fs";

// Transcription
async function transcribe() {
  const transcription = await portkey.audio.transcriptions.create({
    file: fs.createReadStream("/path/to/file.mp3"),
    model: "whisper-large-v3",
  });
  console.log(transcription.text);
}
transcribe();

// Translation
async function translate() {
  const translation = await portkey.audio.translations.create({
    file: fs.createReadStream("/path/to/file.mp3"),
    model: "whisper-large-v3",
  });
  console.log(translation.text);
}
translate();
```

```curl REST
# Transcription
curl -X POST "https://api.portkey.ai/v1/audio/transcriptions" \
  -H "Authorization: Bearer YOUR_PORTKEY_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@/path/to/file.mp3" \
  -F "model=whisper-large-v3"

# Translation
curl -X POST "https://api.portkey.ai/v1/audio/translations" \
  -H "Authorization: Bearer YOUR_PORTKEY_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F "file=@/path/to/file.mp3" \
  -F "model=whisper-large-v3"
```
</CodeGroup>

---

### Groq Text to Speech

Groq's Text to Speech (TTS) API converts written text into natural-sounding audio using multiple distinct voices. It supports multiple languages, streaming capabilities, and various audio formats for different use cases.

<CodeGroup>
```python Python
from pathlib import Path

speech_file_path = Path(__file__).parent / "speech.mp3"
response = portkey.audio.speech.create(
    model="playai-tts",
    voice="Fritz-PlayAI",
    input="Today is a wonderful day to build something people love!"
)

with open(speech_file_path, "wb") as f:
    f.write(response.content)
```

```javascript Node.js
import path from 'path';
import fs from 'fs';

const speechFile = path.resolve("./speech.mp3");

async function main() {
  const mp3 = await portkey.audio.speech.create({
    model: "playai-tts",
    voice: "Fritz-PlayAI",
    input: "Today is a wonderful day to build something people love!",
  });
  const buffer = Buffer.from(await mp3.arrayBuffer());
  await fs.promises.writeFile(speechFile, buffer);
}

main();
```

```curl REST
curl -X POST "https://api.portkey.ai/v1/audio/speech" \
  -H "Authorization: Bearer YOUR_PORTKEY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "playai-tts",
    "voice": "Fritz-PlayAI",
    "input": "Today is a wonderful day to build something people love!"
  }' \
  --output speech.mp3
```
</CodeGroup>

---

You'll find more information in the relevant sections:
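The transcription and translation endpoints above differ only in the final path segment; everything else (Portkey API key in the Authorization header, a multipart form with `file` and `model`) is identical. A minimal stdlib-only sketch of that request shape, using a hypothetical helper that is not part of the Portkey SDK and performs no network I/O:

```python
# Hypothetical helper for illustration only: assembles the URL, headers, and
# multipart form fields used by the Portkey audio REST calls shown above.
def build_audio_request(task: str, file_path: str, model: str, api_key: str):
    if task not in ("transcriptions", "translations"):
        raise ValueError("task must be 'transcriptions' or 'translations'")
    url = f"https://api.portkey.ai/v1/audio/{task}"
    headers = {"Authorization": f"Bearer {api_key}"}   # Portkey API key
    form = {"file": f"@{file_path}", "model": model}   # multipart form fields
    return url, headers, form

url, headers, form = build_audio_request(
    "transcriptions", "/path/to/file.mp3", "whisper-large-v3", "YOUR_PORTKEY_API_KEY"
)
print(url)
```

Swapping `"transcriptions"` for `"translations"` reproduces the second cURL call without touching the headers or form fields.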

integrations/llms/workers-ai.mdx

Lines changed: 122 additions & 0 deletions

@@ -91,6 +91,128 @@ Use the Portkey instance to send requests to Workers AI. You can also override t

### 4. Invoke Image Generation with Workers AI

Use the Portkey instance to send requests to Workers AI. You can also override the virtual key directly in the API call if needed.

Portkey supports the OpenAI signature to make text-to-image requests.

<Tabs>
<Tab title="NodeJS">
```js
import Portkey from 'portkey-ai';

// Initialize the Portkey client
const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY", // Replace with your Portkey API key
  virtualKey: "VIRTUAL_KEY"  // Add your provider's virtual key
});

async function main() {
  const image = await portkey.images.generate({
    model: "image_model_name",
    prompt: "Lucy in the sky with diamonds"
  });

  console.log(image.data);
}

main();
```
</Tab>
<Tab title="Python">

```py
from portkey_ai import Portkey
from IPython.display import display, Image

# Initialize the Portkey client
portkey = Portkey(
    api_key="PORTKEY_API_KEY",  # Replace with your Portkey API key
    virtual_key="VIRTUAL_KEY"   # Add your provider's virtual key
)

image = portkey.images.generate(
    model="image_model_name",
    prompt="Lucy in the sky with diamonds"
)

# Display the image
display(Image(url=image.data[0].url))
```
</Tab>
<Tab title="OpenAI NodeJS">

```js
import OpenAI from 'openai'; // We're using the v4 SDK
import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai'

const openai = new OpenAI({
  apiKey: 'WORKERS_AI_API_KEY', // defaults to process.env["WORKERS_AI_API_KEY"]
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    provider: "workers-ai",
    apiKey: "PORTKEY_API_KEY" // defaults to process.env["PORTKEY_API_KEY"]
  })
});

async function main() {
  const image = await openai.images.generate({
    model: "image_model_name",
    prompt: "Lucy in the sky with diamonds"
  });

  console.log(image.data);
}

main();
```
</Tab>
<Tab title="OpenAI Python">

```py
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
from IPython.display import display, Image

client = OpenAI(
    api_key='WORKERS_AI_API_KEY',
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="workers-ai",
        api_key="PORTKEY_API_KEY"
    )
)

image = client.images.generate(
    model="image_model_name",
    prompt="Lucy in the sky with diamonds",
    n=1,
    size="1024x1024"
)

# Display the image
display(Image(url=image.data[0].url))
```
</Tab>
<Tab title="cURL">

```sh
curl "https://api.portkey.ai/v1/images/generations" \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -H "x-portkey-virtual-key: $WORKERS_AI_VIRTUAL_KEY" \
  -d '{
    "model": "image_model_name",
    "prompt": "Lucy in the sky with diamonds"
  }'
```
</Tab>
</Tabs>

## Managing Workers AI Prompts

You can manage all prompts to Workers AI in the [Prompt Library](/product/prompt-library). All the current models of Workers AI are supported and you can easily start testing different prompts.
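Because Portkey mirrors the OpenAI images signature, the response carries a `data` array whose entries expose a `url` field (as the Python tabs above rely on via `image.data[0].url`). A small stdlib-only sketch, assuming a dict-shaped response, of pulling out the first image URL; the function and sample values are illustrative, not part of any SDK:

```python
# Illustrative only: extract the first image URL from an OpenAI-style
# images.generate response represented as a plain dict.
def first_image_url(response: dict) -> str:
    data = response.get("data") or []
    if not data:
        raise ValueError("response contains no image data")
    return data[0]["url"]

# Example response shape (placeholder values, not real API output)
sample = {"created": 1700000000, "data": [{"url": "https://example.com/image.png"}]}
print(first_image_url(sample))
```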

product/guardrails/list-of-guardrail-checks.mdx

Lines changed: 6 additions & 2 deletions

@@ -37,17 +37,21 @@ Each Guardrail Check has a specific purpose, its own parameters, supported hook
   * Analyze and redact text to avoid model manipulation
   * Detect malicious content and undesirable data transfers
 </Card>
-<Card title="Patronus" href="/product/guardrails/list-of-guardrail-checks/patronus-ai" img="/images/guardrails/logo-3.avif">
+<Card title="Patronus" href="/product/guardrails/patronus-ai" img="/images/guardrails/logo-3.avif">
   * Hallucination detection
   * Check for conciseness, helpfulness, politeness
   * Check for gender, racial bias
   * and more!
 </Card>
-<Card title="Pillar" href="/product/guardrails/list-of-guardrail-checks/pillar" img="/images/guardrails/logo-2.avif">
+<Card title="Pillar" href="/product/guardrails/pillar" img="/images/guardrails/logo-2.avif">
   * Scan Prompts
   * Scan Responses
   For PII, toxicity, prompt injection detection, and more.
 </Card>
+<Card title="Prompt Security" href="/product/guardrails/prompt-security" img="/images/guardrails/prompt-secuirty.png">
+  * Scan Prompts
+  * Scan Responses
+</Card>
 </CardGroup>

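In Portkey, guardrail checks like the Prompt Security card added above are created in the UI and then attached to requests through a gateway config by ID. A sketch of such a config fragment; the `input_guardrails`/`output_guardrails` keys and the ID value are assumptions for illustration and should be checked against the current Portkey guardrails docs:

```json
{
  "input_guardrails": ["guardrail-id-from-portkey-ui"],
  "output_guardrails": ["guardrail-id-from-portkey-ui"]
}
```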