Now that we have the tools, we can create a simple agent that uses them to answer questions. For now, we'll use `InferenceClientModel` with the default model from `smolagents`.
It is important to pass your API key to the `InferenceClientModel`. You can create an access token from your Hugging Face account ([see here](https://huggingface.co/docs/hub/en/security-tokens)) and expose it through the environment variable `HF_TOKEN`.
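As a sketch of that setup, a small helper (hypothetical, not part of the course code) can read the token and fail fast with a clear message when it is missing:

```python
import os

def get_hf_token(env_var: str = "HF_TOKEN") -> str:
    """Return the Hugging Face access token from the environment, failing fast if unset."""
    token = os.getenv(env_var)
    if not token:
        raise RuntimeError(f"Set {env_var} to your Hugging Face access token")
    return token
```

Failing early like this gives a clearer error than the 401 you would otherwise get from the Inference API.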
```python
model = InferenceClientModel(token=os.getenv("HF_TOKEN"))
agent = CodeAgent(tools=[*tools], model=model)
```
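Note that `tools=[*tools]` simply unpacks the tool collection into a fresh list, equivalent to `list(tools)`. A minimal illustration (the tool names here are stand-ins, not real smolagents tools):

```python
# Illustrative only: [*xs] builds a new list containing the same elements.
tools = ["weather_tool", "search_tool"]  # stand-ins for the MCP tools
copied = [*tools]
assert copied == tools        # same contents
assert copied is not tools    # but a distinct list object
```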
Here's the complete example of the MCP Client in Gradio:
```python
import gradio as gr
import os
from mcp import StdioServerParameters
from smolagents import InferenceClientModel, CodeAgent, ToolCollection, MCPClient

try:
    mcp_client = MCPClient(
        # ... server parameters elided in this excerpt ...
    )
    tools = mcp_client.get_tools()

    model = InferenceClientModel(token=os.getenv("HF_TOKEN"))
    # ... agent and Gradio interface setup continue ...
```