
Commit 85497f3

Merge pull request #59 from tisan-das/inference_model_access_resolution

Changed MCP client code example to support access token

2 parents: 2e6aa9b + 5be9c99

1 file changed: units/en/unit2/gradio-client.mdx (+6 additions, −3 deletions)
````diff
@@ -54,6 +54,7 @@ Now, we can import the necessary libraries and create a simple Gradio interface
 ```python
 import gradio as gr
+import os
 
 from mcp import StdioServerParameters
 from smolagents import InferenceClientModel, CodeAgent, ToolCollection, MCPClient
@@ -69,10 +70,11 @@ tools = mcp_client.get_tools()
 ```
 
 Now that we have the tools, we can create a simple agent that uses them to answer questions. We'll just use a simple `InferenceClientModel` and the default model from `smolagents` for now.
-It is important to pass your api_key to the InferenceClientModel. You can access the token from your huggingface account. [check here.](https://huggingface.co/docs/hub/en/security-tokens).
+
+It is important to pass your api_key to the InferenceClientModel. You can access the token from your huggingface account. [check here.](https://huggingface.co/docs/hub/en/security-tokens), and set the access token with the environment variable `HF_TOKEN`.
 
 ```python
-model = InferenceClientModel(api_key = YOUR_API_TOKEN)
+model = InferenceClientModel(token=os.getenv("HF_TOKEN"))
 agent = CodeAgent(tools=[*tools], model=model)
 ```
 
@@ -106,6 +108,7 @@ Here's the complete example of the MCP Client in Gradio:
 ```python
 import gradio as gr
+import os
 
 from mcp import StdioServerParameters
 from smolagents import InferenceClientModel, CodeAgent, ToolCollection, MCPClient
@@ -117,7 +120,7 @@ try:
     )
     tools = mcp_client.get_tools()
 
-    model = InferenceClientModel()
+    model = InferenceClientModel(token=os.getenv("HUGGINGFACE_API_TOKEN"))
     agent = CodeAgent(tools=[*tools], model=model)
 
     demo = gr.ChatInterface(
````
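The change this commit makes — reading the Hugging Face access token from an environment variable instead of hard-coding a literal like `YOUR_API_TOKEN` — can be sketched in isolation. This is a minimal illustration, not code from the commit; the helper name `resolve_hf_token` is hypothetical:

```python
import os


def resolve_hf_token(env_var: str = "HF_TOKEN") -> str:
    """Return the Hugging Face access token from the environment.

    Hypothetical helper illustrating the commit's pattern: the token is
    supplied via an environment variable rather than embedded in source.
    """
    token = os.getenv(env_var)
    if not token:
        raise RuntimeError(
            f"Set {env_var} to a Hugging Face access token "
            "(see https://huggingface.co/docs/hub/en/security-tokens)"
        )
    return token
```

With `smolagents` installed, the resolved token would then be passed along as `InferenceClientModel(token=resolve_hf_token())`, keeping secrets out of the source file.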
