diff --git a/docs/inference-providers/index.md b/docs/inference-providers/index.md
index 380d2c894..4cec406b4 100644
--- a/docs/inference-providers/index.md
+++ b/docs/inference-providers/index.md
@@ -95,10 +95,11 @@ curl https://router.huggingface.co/novita/v3/openai/chat/completions \
 In Python, you can use the `requests` library to make raw requests to the API:
 
 ```python
+import os
 import requests
 
 API_URL = "https://router.huggingface.co/novita/v3/openai/chat/completions"
-headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}
+headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}
 payload = {
     "messages": [
         {
@@ -116,11 +117,12 @@ print(response.json()["choices"][0]["message"])
 For convenience, the Python library `huggingface_hub` provides an [`InferenceClient`](https://huggingface.co/docs/huggingface_hub/guides/inference) that handles inference for you. Make sure to install it with `pip install huggingface_hub`.
 
 ```python
+import os
 from huggingface_hub import InferenceClient
 
 client = InferenceClient(
     provider="novita",
-    api_key="hf_xxxxxxxxxxxxxxxxxxxxxxxx",
+    api_key=os.environ["HF_TOKEN"],
 )
 
 completion = client.chat.completions.create(
@@ -149,7 +151,7 @@ const response = await fetch(
   {
     method: "POST",
     headers: {
-      Authorization: `Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`,
+      Authorization: `Bearer ${process.env.HF_TOKEN}`,
       "Content-Type": "application/json",
     },
     body: JSON.stringify({
@@ -173,7 +175,7 @@ For convenience, the JS library `@huggingface/inference` provides an [`Inference
 ```js
 import { InferenceClient } from "@huggingface/inference";
 
-const client = new InferenceClient("hf_xxxxxxxxxxxxxxxxxxxxxxxx");
+const client = new InferenceClient(process.env.HF_TOKEN);
 
 const chatCompletion = await client.chatCompletion({
   provider: "novita",
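
Both snippets now read the token from the `HF_TOKEN` environment variable instead of hardcoding it, so it must be exported before running them (e.g. `export HF_TOKEN=hf_...`). As a minimal sketch of the same pattern with an explicit failure mode, a defensive read might look like the following; the `get_hf_token` helper is illustrative only and not part of this diff or of `huggingface_hub`:

```python
import os

def get_hf_token() -> str:
    """Read HF_TOKEN from the environment, failing loudly if it is unset.

    Illustrative helper (an assumption, not part of this change): a direct
    os.environ["HF_TOKEN"] raises a bare KeyError, and an unset variable
    read via .get() would silently produce an empty Bearer header downstream,
    so we check and raise with an actionable message instead.
    """
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set; run `export HF_TOKEN=hf_...` before making requests."
        )
    return token
```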