Commit d61bb86

docs(huggingface): add pipeline→wrap guide; note upcoming init_chat_model support
1 parent 5c52440 commit d61bb86

File tree

1 file changed

+53
-0
lines changed

---
title: Hugging Face (chat)
sidebar_label: Hugging Face
---

## Chat models with Hugging Face

### Option 1 (works today): pipeline → wrap with `ChatHuggingFace`

```python
from transformers import pipeline
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline

# Create a text-generation pipeline (CPU/GPU as available)
pipe = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",
    do_sample=False,      # deterministic (similar to temperature=0)
    max_new_tokens=128,   # HF uses max_new_tokens (not max_tokens)
)

# ChatHuggingFace expects a LangChain LLM object, so wrap the raw
# pipeline in HuggingFacePipeline first, then expose it as a chat model
llm = ChatHuggingFace(llm=HuggingFacePipeline(pipeline=pipe))

print(llm.invoke("Say hi in one sentence.").content)
```

:::note
- **Install**: `pip install langchain-huggingface transformers`
- For Hugging Face pipelines, prefer `max_new_tokens` (not `max_tokens`).
:::

### Option 2 (coming after fix): `init_chat_model(..., model_provider="huggingface")`

Once available, you’ll be able to initialize via `init_chat_model`:

```python
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    model="microsoft/Phi-3-mini-4k-instruct",
    model_provider="huggingface",
    task="text-generation",
    do_sample=False,
    max_new_tokens=128,
)

print(llm.invoke("Say hi in one sentence.").content)
```

This path depends on a bug fix tracked for Hugging Face chat initialization. If your version doesn’t support it yet, use Option 1 above.
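Until that fix lands, one defensive pattern is to try `init_chat_model` and fall back to the Option 1 wrapper. A sketch, assuming both `langchain` and `langchain-huggingface` are installed; the broad `except` is deliberate because the exact failure mode varies by version:

```python
MODEL_ID = "microsoft/Phi-3-mini-4k-instruct"

def make_chat_model():
    """Prefer init_chat_model; fall back to wrapping a transformers pipeline."""
    try:
        from langchain.chat_models import init_chat_model
        return init_chat_model(
            model=MODEL_ID,
            model_provider="huggingface",
            task="text-generation",
            do_sample=False,
            max_new_tokens=128,
        )
    except Exception:
        # Older/affected versions: build the pipeline wrapper manually
        from transformers import pipeline
        from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline
        pipe = pipeline(
            "text-generation",
            model=MODEL_ID,
            do_sample=False,
            max_new_tokens=128,
        )
        return ChatHuggingFace(llm=HuggingFacePipeline(pipeline=pipe))
```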
