Commit a726dad

docs(genapi): add hugging face library

1 parent 63ec339 commit a726dad

File tree

2 files changed: +112 -0 lines changed

menu/navigation.json

Lines changed: 4 additions & 0 deletions

@@ -987,6 +987,10 @@
          {
            "label": "Use function calling",
            "slug": "use-function-calling"
+         },
+         {
+           "label": "Connect through Hugging Face library",
+           "slug": "connect-through-hugging-face-library"
          }
        ],
        "label": "How to",
Lines changed: 108 additions & 0 deletions
@@ -0,0 +1,108 @@
---
meta:
  title: How to connect using the Hugging Face library
  description: Learn how to interact with Generative APIs using the Hugging Face library
content:
  h1: How to connect using the Hugging Face library
  paragraph: Learn how to interact with Generative APIs using the Hugging Face library.
tags: generative-apis hugging-face library
dates:
  validation: 2025-04-01
  posted: 2025-04-01
---
<Macro id="requirements" />

- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/iam/concepts/#owner) status or [IAM permissions](/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- A valid [API key](/iam/how-to/create-api-keys/) for API authentication
- Node.js installed on your local machine
- Scaleway credentials or Hugging Face credentials with the appropriate access rights (two authentication methods are available, as shown below)
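
Both authentication methods read the credential from an environment variable at run time. As a quick sanity check, you can fail fast when neither variable is set; a minimal sketch (the error message is illustrative, not from the official docs):

```js
// Fail fast if no credential is available in the environment
if (!process.env.SCW_SECRET_KEY && !process.env.HF_TOKEN) {
  throw new Error("Set SCW_SECRET_KEY (Scaleway) or HF_TOKEN (Hugging Face) before running this script.");
}
```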

## Steps to connect using the Hugging Face library

1. Create a new directory on your local machine where you will store all your project files.

2. Open a terminal in your project directory and run the following command to install the Hugging Face inference library:

    ```bash
    npm install @huggingface/inference
    ```

3. Create a new file named `main.js` in your project directory and add the following code to it. The script uses ES module `import` syntax and top-level `await`, so make sure your `package.json` contains `"type": "module"` (or name the file `main.mjs`):

    ```js
    import { InferenceClient } from '@huggingface/inference';

    // The client takes your Scaleway API secret key as its access token
    const client = new InferenceClient(process.env.SCW_SECRET_KEY);

    const out = await client.chatCompletion({
      provider: "scaleway",
      // endpointUrl is not supported with third-party providers
      // endpointUrl: "https://api.scaleway.ai/b409cb09-756c-430f-a8e8-748f88ef4bad",
      // model: "meta-llama/Meta-Llama-3-8B-Instruct",
      model: "meta-llama/Llama-3.3-70B-Instruct",
      messages: [{ role: "user", content: "Tell me about Scaleway." }],
      max_tokens: 512,
      temperature: 0.1,
    });

    console.log(out.choices[0].message.content);
    ```

4. Make sure your Scaleway API secret key is available in your environment (for example, run `export SCW_SECRET_KEY="<your-secret-key>"`), then execute the script with the following command in your terminal:

    ```bash
    node main.js
    ```

    The model's response should be displayed in your terminal.
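
If the request fails (for example, because the API key is missing or invalid), the promise returned by `chatCompletion` rejects. A minimal sketch of surfacing such errors, assuming the same client setup as in the script above:

```js
try {
  const out = await client.chatCompletion({
    provider: "scaleway",
    model: "meta-llama/Llama-3.3-70B-Instruct",
    messages: [{ role: "user", content: "Tell me about Scaleway." }],
    max_tokens: 512,
  });
  console.log(out.choices[0].message.content);
} catch (err) {
  // Authentication, quota, and network problems all surface here
  console.error("Chat completion failed:", err.message);
  process.exit(1);
}
```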

### Using stream completion

To stream the model's answer token by token instead of waiting for the full response, you can modify your script as follows:

```js
import { InferenceClient } from '@huggingface/inference';

const client = new InferenceClient(process.env.SCW_SECRET_KEY);

for await (const chunk of client.chatCompletionStream({
  model: "meta-llama/Llama-3.3-70B-Instruct",
  provider: "scaleway",
  messages: [{ role: "user", content: "Tell me about Scaleway." }],
  max_tokens: 512,
})) {
  // Each chunk carries the newly generated tokens in delta.content
  console.log(chunk.choices[0].delta.content);
}
```
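
If you also need the complete response once streaming ends (for logging or storage), a minimal variation that accumulates the chunks while printing them, using the same client and parameters as above:

```js
let fullText = "";
for await (const chunk of client.chatCompletionStream({
  model: "meta-llama/Llama-3.3-70B-Instruct",
  provider: "scaleway",
  messages: [{ role: "user", content: "Tell me about Scaleway." }],
  max_tokens: 512,
})) {
  const delta = chunk.choices[0]?.delta?.content ?? "";
  fullText += delta;           // keep the full answer for later use
  process.stdout.write(delta); // print tokens as they arrive, without extra newlines
}
console.log(`\nReceived ${fullText.length} characters.`);
```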

### Using Hugging Face tokens

You can also authenticate using a Hugging Face token. Set the `HF_TOKEN` environment variable and pass it to the client instead of the Scaleway key:

```js
import { InferenceClient } from '@huggingface/inference';

const client = new InferenceClient(process.env.HF_TOKEN);

const out = await client.chatCompletion({
  provider: "scaleway",
  model: "meta-llama/Llama-3.3-70B-Instruct",
  messages: [{ role: "user", content: "Tell me about Scaleway." }],
  max_tokens: 512,
  temperature: 0.1,
});

console.log(out.choices[0].message.content);
```

In some cases, providing the token directly in the `InferenceClient` constructor might not be necessary if the `HF_TOKEN` environment variable is set correctly:

```js
import { InferenceClient } from '@huggingface/inference';

// With HF_TOKEN set in the environment, no explicit token is needed
const client = new InferenceClient();

const out = await client.chatCompletion({
  provider: "scaleway",
  model: "meta-llama/Llama-3.3-70B-Instruct",
  messages: [{ role: "user", content: "Tell me about Scaleway." }],
  max_tokens: 512,
  temperature: 0.1,
});

console.log(out.choices[0].message.content);
```
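
If you want a single script that works with either credential, one convenient pattern (a hypothetical example, not from the official docs) is to prefer `HF_TOKEN` and fall back to the Scaleway key:

```js
import { InferenceClient } from '@huggingface/inference';

// Use a Hugging Face token if present, otherwise the Scaleway API secret key
const client = new InferenceClient(process.env.HF_TOKEN ?? process.env.SCW_SECRET_KEY);
```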
