Commit 6a9e3bc

ollama show
1 parent aed661c commit 6a9e3bc

1 file changed: +33 -0 lines changed

docs/use-cases/AI_ML/MCP/ollama.md

Lines changed: 33 additions & 0 deletions
@@ -57,6 +57,39 @@ NAME            ID              SIZE      MODIFIED
 qwen3:latest    500a1f067a9f    5.2 GB    3 days ago
 ```
 
+We can use the following command to see more information about the model that we've downloaded:
+
+```bash
+ollama show qwen3
+```
+
+```text
+  Model
+    architecture        qwen3
+    parameters          8.2B
+    context length      40960
+    embedding length    4096
+    quantization        Q4_K_M
+
+  Capabilities
+    completion
+    tools
+
+  Parameters
+    repeat_penalty    1
+    stop              "<|im_start|>"
+    stop              "<|im_end|>"
+    temperature       0.6
+    top_k             20
+    top_p             0.95
+
+  License
+    Apache License
+    Version 2.0, January 2004
+```
+
+We can see from this output that the default qwen3 model has just over 8 billion parameters.
+
 ## Install MCPHost {#install-mcphost}
 
 At the time of writing (July 2025) there is no native functionality for using Ollama with MCP Servers.
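As a side note (not part of the diff above), the same model details that `ollama show` prints can also be fetched from a running Ollama server over its HTTP API. The sketch below is an assumption-laden illustration: it assumes Ollama is serving locally on its default port 11434 and uses the documented `/api/show` endpoint with `qwen3` as the model name.

```bash
# Sketch only: query model metadata over Ollama's HTTP API instead of the CLI.
# Assumes a local Ollama server on the default port 11434.
curl http://localhost:11434/api/show -d '{
  "model": "qwen3"
}'
```

The response is a JSON document covering much of the same information (architecture, parameter count, context length, default parameters, license), which can be more convenient when scripting against locally installed models.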
