
Commit f400c7e

replace hf_xxxxxxxxxxxx by process.env.HF_TOKEN in examples (#1764)

1 parent f203dac
File tree

docs/inference-providers/index.md

1 file changed: 6 additions & 4 deletions
@@ -95,10 +95,11 @@ curl https://router.huggingface.co/novita/v3/openai/chat/completions \
 In Python, you can use the `requests` library to make raw requests to the API:
 
 ```python
+import os
 import requests
 
 API_URL = "https://router.huggingface.co/novita/v3/openai/chat/completions"
-headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}
+headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}
 payload = {
     "messages": [
         {
@@ -116,11 +117,12 @@ print(response.json()["choices"][0]["message"])
 For convenience, the Python library `huggingface_hub` provides an [`InferenceClient`](https://huggingface.co/docs/huggingface_hub/guides/inference) that handles inference for you. Make sure to install it with `pip install huggingface_hub`.
 
 ```python
+import os
 from huggingface_hub import InferenceClient
 
 client = InferenceClient(
     provider="novita",
-    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
+    api_key=os.environ["HF_TOKEN"],
 )
 
 completion = client.chat.completions.create(
@@ -149,7 +151,7 @@ const response = await fetch(
   {
     method: "POST",
     headers: {
-      Authorization: `Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`,
+      Authorization: `Bearer ${process.env.HF_TOKEN}`,
       "Content-Type": "application/json",
     },
     body: JSON.stringify({
@@ -173,7 +175,7 @@ For convenience, the JS library `@huggingface/inference` provides an [`Inference
 ```js
 import { InferenceClient } from "@huggingface/inference";
 
-const client = new InferenceClient("hf_xxxxxxxxxxxxxxxxxxxxxxxx");
+const client = new InferenceClient(process.env.HF_TOKEN);
 
 const chatCompletion = await client.chatCompletion({
   provider: "novita",
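
With this change, all four snippets read the access token from the environment instead of embedding a placeholder `hf_xxx…` value. Below is a minimal sketch of the same pattern in Python, assuming `HF_TOKEN` is exported in the shell; the `require_hf_token` helper is illustrative and not part of the docs:

```python
import os

def require_hf_token() -> str:
    """Illustrative helper (not from the docs): read HF_TOKEN or fail with a clear message."""
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError("Set the HF_TOKEN environment variable to a Hugging Face access token.")
    return token

# The patched examples build this header via os.environ["HF_TOKEN"];
# this variant just surfaces a friendlier error when the variable is missing.
headers = {"Authorization": f"Bearer {require_hf_token()}"}
```

Reading the token from the environment keeps real credentials out of copy-pasted snippets and version control, which is the point of the commit.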
