Commit 9a3940e

Merge pull request #325 from Portkey-AI/sdk-changes

Comprehensive SDK docs update

2 parents: e2643a1 + 4f78277

File tree: 10 files changed, +647 −598 lines

api-reference/inference-api/headers.mdx

164 additions & 41 deletions
@@ -479,61 +479,184 @@ Pass more configuration headers for `Azure OpenAI`, `Google Vertex AI`, or `AWS
## List of All Headers

The following is a comprehensive list of headers that can be used when initializing the Portkey client.

Portkey adheres to language-specific naming conventions:

* **camelCase** for **JavaScript/Node.js** parameters
* **snake_case** for **Python** parameters
* **hyphenated-keys** for **HTTP headers**
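To illustrate these conventions, here is a small, hypothetical helper (not part of any Portkey SDK) that derives the Node.js and HTTP-header spellings from a Python-style parameter name:

```python
# Hypothetical helper, for illustration only -- not part of the Portkey SDKs.
# Derives the Node.js (camelCase) and HTTP header (hyphenated, x-portkey-
# prefixed) spellings from a Python-style snake_case parameter name.
def naming_variants(snake_name: str) -> dict:
    parts = snake_name.split("_")
    camel = parts[0] + "".join(p.capitalize() for p in parts[1:])
    return {
        "python": snake_name,
        "node": camel,
        "header": "x-portkey-" + "-".join(parts),
    }

print(naming_variants("cache_force_refresh"))
# {'python': 'cache_force_refresh', 'node': 'cacheForceRefresh',
#  'header': 'x-portkey-cache-force-refresh'}
```

A few keys deviate from this mechanical mapping (for example, the Node.js trace key is spelled `traceID`), so treat the per-language tables as authoritative.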
<Tabs>
<Tab title="NodeJS">

| Parameter | Type | Key |
| :--- | :--- | :--- |
| **API Key** Your Portkey account's API Key. | string (required) | `apiKey` |
| **Virtual Key** The virtual key created from Portkey's vault for a specific provider. | string | `virtualKey` |
| **Config** The config slug or [config object](/api-reference/inference-api/config-object) to use. | string or object | `config` |
| **Provider** The AI provider to use for your calls ([supported providers](/integrations/llms#supported-ai-providers)). | string | `provider` |
| **Base URL** The URL of the gateway to use. Needed if you're [self-hosting the AI gateway](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md). | string | `baseURL` |
| **Trace ID** An ID you can pass to refer to one or more requests later on. Generated automatically for every request if not sent. | string | `traceID` |
| **Metadata** Any metadata to attach to the requests. These can be filtered later on in the analytics and log dashboards. Can contain `_prompt`, `_user`, `_organisation`, or `_environment`, which are special metadata types in Portkey. You can also send any other keys as part of this object. | object | `metadata` |
| **Cache Force Refresh** Force refresh the cache for your request by making a new call and storing that value. | boolean | `cacheForceRefresh` |
| **Cache Namespace** Partition your cache based on custom strings, ignoring metadata and other headers. | string | `cacheNamespace` |
| **Custom Host** Route to a locally or privately hosted model by configuring the API URL with a custom host. | string | `customHost` |
| **Forward Headers** Forward sensitive headers directly to your model's API without any processing from Portkey. | array of string | `forwardHeaders` |
| **Azure OpenAI Headers** Configuration headers for Azure OpenAI that you can send separately. | string | `azureResourceName`, `azureDeploymentId`, `azureApiVersion`, `azureModelName` |
| **Google Vertex AI Headers** Configuration headers for Vertex AI that you can send separately. | string | `vertexProjectId`, `vertexRegion` |
| **AWS Bedrock Headers** Configuration headers for Bedrock that you can send separately. | string | `awsAccessKeyId`, `awsSecretAccessKey`, `awsRegion`, `awsSessionToken` |

</Tab>
<Tab title="Python">

| Parameter | Type | Key |
| :--- | :--- | :--- |
| **API Key** Your Portkey account's API Key. | string (required) | `api_key` |
| **Virtual Key** The virtual key created from Portkey's vault for a specific provider. | string | `virtual_key` |
| **Config** The config slug or [config object](/api-reference/inference-api/config-object) to use. | string or object | `config` |
| **Provider** The AI provider to use for your calls ([supported providers](/integrations/llms#supported-ai-providers)). | string | `provider` |
| **Base URL** The URL of the gateway to use. Needed if you're [self-hosting the AI gateway](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md). | string | `base_url` |
| **Trace ID** An ID you can pass to refer to one or more requests later on. Generated automatically for every request if not sent. | string | `trace_id` |
| **Metadata** Any metadata to attach to the requests. These can be filtered later on in the analytics and log dashboards. Can contain `_prompt`, `_user`, `_organisation`, or `_environment`, which are special metadata types in Portkey. You can also send any other keys as part of this object. | object | `metadata` |
| **Cache Force Refresh** Force refresh the cache for your request by making a new call and storing that value. | boolean | `cache_force_refresh` |
| **Cache Namespace** Partition your cache based on custom strings, ignoring metadata and other headers. | string | `cache_namespace` |
| **Custom Host** Route to a locally or privately hosted model by configuring the API URL with a custom host. | string | `custom_host` |
| **Forward Headers** Forward sensitive headers directly to your model's API without any processing from Portkey. | array of string | `forward_headers` |
| **Azure OpenAI Headers** Configuration headers for Azure OpenAI that you can send separately. | string | `azure_resource_name`, `azure_deployment_id`, `azure_api_version`, `azure_model_name` |
| **Google Vertex AI Headers** Configuration headers for Vertex AI that you can send separately. | string | `vertex_project_id`, `vertex_region` |
| **AWS Bedrock Headers** Configuration headers for Bedrock that you can send separately. | string | `aws_access_key_id`, `aws_secret_access_key`, `aws_region`, `aws_session_token` |

</Tab>
<Tab title="REST Headers">

| Parameter | Type | Header Key |
| :--- | :--- | :--- |
| **API Key** Your Portkey account's API Key. | string (required) | `x-portkey-api-key` |
| **Virtual Key** The virtual key created from Portkey's vault for a specific provider. | string | `x-portkey-virtual-key` |
| **Config** The config slug or [config object](/api-reference/inference-api/config-object) to use. | string | `x-portkey-config` |
| **Provider** The AI provider to use for your calls ([supported providers](/integrations/llms#supported-ai-providers)). | string | `x-portkey-provider` |
| **Base URL** The URL of the gateway to use. Needed if you're [self-hosting the AI gateway](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md). | string | Change the request URL |
| **Trace ID** An ID you can pass to refer to one or more requests later on. Generated automatically for every request if not sent. | string | `x-portkey-trace-id` |
| **Metadata** Any metadata to attach to the requests. These can be filtered later on in the analytics and log dashboards. Can contain `_prompt`, `_user`, `_organisation`, or `_environment`, which are special metadata types in Portkey. You can also send any other keys as part of this object. | string | `x-portkey-metadata` |
| **Cache Force Refresh** Force refresh the cache for your request by making a new call and storing that value. | boolean | `x-portkey-cache-force-refresh` |
| **Cache Namespace** Partition your cache based on custom strings, ignoring metadata and other headers. | string | `x-portkey-cache-namespace` |
| **Custom Host** Route to a locally or privately hosted model by configuring the API URL with a custom host. | string | `x-portkey-custom-host` |
| **Forward Headers** Forward sensitive headers directly to your model's API without any processing from Portkey. | array of string | `x-portkey-forward-headers` |
| **Azure OpenAI Headers** Configuration headers for Azure OpenAI that you can send separately. | string | `x-portkey-azure-resource-name`, `x-portkey-azure-deployment-id`, `x-portkey-azure-api-version`, `x-portkey-azure-model-name` |
| **Google Vertex AI Headers** Configuration headers for Vertex AI that you can send separately. | string | `x-portkey-vertex-project-id`, `x-portkey-vertex-region` |
| **AWS Bedrock Headers** Configuration headers for Bedrock that you can send separately. | string | `x-portkey-aws-access-key-id`, `x-portkey-aws-secret-access-key`, `x-portkey-aws-region`, `x-portkey-aws-session-token` |

</Tab>
</Tabs>
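The `metadata` parameter deserves a note: alongside arbitrary custom keys, it supports Portkey's special keys (`_prompt`, `_user`, `_organisation`, `_environment`), and over REST it travels as a JSON string in the `x-portkey-metadata` header. A minimal sketch (all values are placeholders):

```python
import json

# Metadata mixes Portkey's special keys with arbitrary custom keys.
metadata = {
    "_user": "user_12345",          # special key: the end user of the request
    "_environment": "production",   # special key: deployment environment
    "team": "search",               # custom key: anything you want to filter on
}

# Over REST, the object is sent JSON-encoded as a single header value.
header_value = json.dumps(metadata)
```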

### Using Headers

You can send these headers in multiple ways:

<CodeGroup>

```sh REST API
curl https://api.portkey.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: PORTKEY_API_KEY" \
  -H "x-portkey-virtual-key: VIRTUAL_KEY" \
  -H "x-portkey-trace-id: your_trace_id" \
  -H "x-portkey-metadata: {\"_user\": \"user_12345\"}" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

```python Portkey SDK
# Python - At initialization
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="VIRTUAL_KEY",
    config="CONFIG_ID"
)

# Python - At runtime
completion = portkey.with_options(
    trace_id="your_trace_id",
    metadata={"_user": "user_12345"}
).chat.completions.create(
    messages=[{"role": "user", "content": "Hello!"}],
    model="gpt-4o"
)
```

```js Node.js SDK
// Node.js - At initialization
import Portkey from 'portkey-ai';

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    virtualKey: "VIRTUAL_KEY",
    config: "CONFIG_ID"
});

// Node.js - At runtime
const chatCompletion = await portkey.chat.completions.create(
    {
        messages: [{ role: 'user', content: 'Hello!' }],
        model: 'gpt-4o'
    },
    {
        traceId: "your_trace_id",
        metadata: {"_user": "user_12345"}
    }
);
```

```python OpenAI SDK
# Python
from openai import OpenAI
from portkey_ai import createHeaders

# At initialization
client = OpenAI(
    api_key="OPENAI_API_KEY",
    base_url="https://api.portkey.ai/v1",
    default_headers=createHeaders(
        api_key="PORTKEY_API_KEY",
        virtual_key="VIRTUAL_KEY"
    )
)

# At runtime
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_headers={
        "x-portkey-trace-id": "your_trace_id",
        "x-portkey-metadata": '{"_user": "user_12345"}'
    }
)
```

```js OpenAI Node.js SDK
// Node.js
import OpenAI from "openai";
import { createHeaders } from "portkey-ai";

// At initialization
const client = new OpenAI({
    apiKey: "OPENAI_API_KEY",
    baseURL: "https://api.portkey.ai/v1",
    defaultHeaders: createHeaders({
        apiKey: "PORTKEY_API_KEY",
        virtualKey: "VIRTUAL_KEY"
    })
});

// At runtime
const response = await client.chat.completions.create(
    {
        model: "gpt-4o",
        messages: [{ role: "user", content: "Hello!" }]
    },
    {
        headers: {
            "x-portkey-trace-id": "your_trace_id",
            "x-portkey-metadata": JSON.stringify({"_user": "user_12345"})
        }
    }
);
```
</CodeGroup>
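If you are making raw HTTP calls without either SDK, the header set from the REST table can be assembled programmatically. A minimal sketch using the documented header names; the `portkey_headers` helper itself is hypothetical, not part of any SDK:

```python
import json

# Hypothetical helper: builds the Portkey REST header dict for a raw HTTP call.
def portkey_headers(api_key, virtual_key=None, trace_id=None,
                    metadata=None, cache_force_refresh=False):
    headers = {"x-portkey-api-key": api_key}
    if virtual_key:
        headers["x-portkey-virtual-key"] = virtual_key
    if trace_id:
        headers["x-portkey-trace-id"] = trace_id
    if metadata:
        # Metadata is JSON-encoded into a single header value.
        headers["x-portkey-metadata"] = json.dumps(metadata)
    if cache_force_refresh:
        headers["x-portkey-cache-force-refresh"] = "true"
    return headers

headers = portkey_headers("PORTKEY_API_KEY",
                          virtual_key="VIRTUAL_KEY",
                          trace_id="your_trace_id",
                          metadata={"_user": "user_12345"})
```

Pass the resulting dict as the request headers of any HTTP client, alongside a `Content-Type: application/json` header and the JSON request body.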
