Commit 7a66ef0: "hfoptions"
1 parent a4aa6ca
1 file changed: 73 additions, 16 deletions

units/en/unit2/lemonade-server.mdx

@@ -64,6 +64,10 @@ This section of the course assumes you have already installed `npx` and `Tiny Ag
 ## Running your Tiny Agents application with AMD NPU and iGPU
 
 To run your Tiny Agents application with AMD NPU and iGPU, simply point the MCP server we created in the [previous section](https://huggingface.co/learn/mcp-course/en/unit2/tiny-agents) to the Lemonade Server, as shown below:
+
+<hfoptions id="agent-config">
+<hfoption id="windows">
+
 ```json
 {
   "model": "Qwen3-8B-GGUF",
@@ -72,7 +76,7 @@ To run your Tiny Agents application with AMD NPU and iGPU, simply point to the M
     {
       "type": "stdio",
       "config": {
-        "command": "C:\\Program Files\\nodejs\\npx.cmd", // Or simply "npx" on Linux
+        "command": "C:\\Program Files\\nodejs\\npx.cmd",
         "args": [
           "mcp-remote",
           "http://localhost:7860/gradio_api/mcp/sse"
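Stepping outside the diff for a moment: the agent configuration being edited here is plain JSON, so it can be sanity-checked before launching Tiny Agents. The snippet below is an illustrative sketch, not part of this commit; the field names mirror the Linux variant of the config in this diff, but the validation itself is hypothetical.

```python
import json

# Sketch: validate a Tiny Agents config like the one in this diff.
# The JSON mirrors the Linux variant added by the commit.
config_text = """
{
  "model": "Qwen3-8B-GGUF",
  "endpointUrl": "http://localhost:8000/api/",
  "servers": [
    {
      "type": "stdio",
      "config": {
        "command": "npx",
        "args": ["mcp-remote", "http://localhost:7860/gradio_api/mcp/sse"]
      }
    }
  ]
}
"""

config = json.loads(config_text)

# Fields the agent runner reads from the file.
assert {"model", "endpointUrl", "servers"} <= config.keys()
for server in config["servers"]:
    assert server["type"] == "stdio"
    assert "command" in server["config"]

print("config OK:", config["model"])  # config OK: Qwen3-8B-GGUF
```

A check like this catches a stray comma or a misspelled key before the agent silently fails to connect to the MCP server.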
@@ -83,6 +87,31 @@ To run your Tiny Agents application with AMD NPU and iGPU, simply point to the M
 }
 ```
 
+</hfoption>
+<hfoption id="linux">
+
+```json
+{
+  "model": "Qwen3-8B-GGUF",
+  "endpointUrl": "http://localhost:8000/api/",
+  "servers": [
+    {
+      "type": "stdio",
+      "config": {
+        "command": "npx",
+        "args": [
+          "mcp-remote",
+          "http://localhost:7860/gradio_api/mcp/sse"
+        ]
+      }
+    }
+  ]
+}
+```
+
+</hfoption>
+</hfoptions>
+
 You can then choose from a variety of models to run on your local machine. For this example, we used the [`Qwen3-8B-GGUF`](https://huggingface.co/Qwen/Qwen3-8B-GGUF) model, which runs efficiently on AMD GPUs through Vulkan acceleration. You can find the list of supported models, and even import your own, by navigating to http://localhost:8000/#model-management.
 
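Since any supported model ID can be dropped into the `model` field, switching models is a one-line edit to the config file. As an illustrative aside (not part of this commit; the helper name and the replacement model ID are hypothetical), that edit could be scripted:

```python
import json
import tempfile
from pathlib import Path

def set_agent_model(config_path: Path, model_id: str) -> dict:
    """Rewrite the "model" field of a Tiny Agents config file and save it."""
    config = json.loads(config_path.read_text())
    config["model"] = model_id
    config_path.write_text(json.dumps(config, indent=2))
    return config

# Demo in a temporary directory so no real config is touched.
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "agent.json"
    path.write_text(json.dumps({
        "model": "Qwen3-8B-GGUF",
        "endpointUrl": "http://localhost:8000/api/",
        "servers": [],
    }, indent=2))

    updated = set_agent_model(path, "some-other-model-GGUF")
    print(updated["model"])  # some-other-model-GGUF
```

The rest of the config (endpoint and MCP servers) is untouched, so the agent keeps talking to the same Lemonade Server instance.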
 ## Creating an assistant to handle sensitive information locally
@@ -103,25 +132,53 @@ cd file-assistant
 
 Let's then create a new `agent.json` file in the `file-assistant` folder.
 
+<hfoptions id="agent-file-config">
+<hfoption id="windows">
+
 ```json
-{
-  "model": "user.jan-nano",
-  "endpointUrl": "http://localhost:8000/api/",
-  "servers": [
-    {
-      "type": "stdio",
-      "config": {
-        "command": "C:\\Program Files\\nodejs\\npx.cmd",
-        "args": [
-          "-y",
-          "@wonderwhy-er/desktop-commander"
-        ]
-      }
+{
+  "model": "user.jan-nano",
+  "endpointUrl": "http://localhost:8000/api/",
+  "servers": [
+    {
+      "type": "stdio",
+      "config": {
+        "command": "C:\\Program Files\\nodejs\\npx.cmd",
+        "args": [
+          "-y",
+          "@wonderwhy-er/desktop-commander"
+        ]
+      }
+    }
+  ]
+}
+```
+
+</hfoption>
+<hfoption id="linux">
+
+```json
+{
+  "model": "user.jan-nano",
+  "endpointUrl": "http://localhost:8000/api/",
+  "servers": [
+    {
+      "type": "stdio",
+      "config": {
+        "command": "npx",
+        "args": [
+          "-y",
+          "@wonderwhy-er/desktop-commander"
+        ]
       }
-  ]
-}
+    }
+  ]
+}
 ```
 
+</hfoption>
+</hfoptions>
+
 Finally, we have to download the `Jan Nano` model. You can do this by navigating to http://localhost:8000/#model-management, clicking on `Add a Model` and providing the following information:
 
 ```
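The only substantive difference between the Windows and Linux tabs this commit introduces is the `command` used to launch `npx` for the stdio MCP server. As an aside (a sketch, not part of the commit; the helper function is hypothetical, and the Windows path assumes the default Node.js install location used in the configs above), that choice can also be made programmatically:

```python
import sys

# Default npx launcher on Windows, matching the path in the configs above
# (assumes Node.js was installed to its default location).
WINDOWS_NPX = "C:\\Program Files\\nodejs\\npx.cmd"

def npx_command(platform: str = sys.platform) -> str:
    """Pick the npx command for a stdio MCP server config."""
    return WINDOWS_NPX if platform.startswith("win") else "npx"

print(npx_command("win32"))  # C:\Program Files\nodejs\npx.cmd
print(npx_command("linux"))  # npx
```

A launcher script could use this to generate the right `agent.json` on either platform instead of maintaining two copies by hand.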
