
Commit ed4b936

update hub namespace
1 parent 2a1e411 commit ed4b936

1 file changed: +10 −10 lines changed


content/manuals/desktop/features/model-runner.md

Lines changed: 10 additions & 10 deletions
@@ -73,14 +73,14 @@ $ docker model pull <model>
 Example:
 
 ```console
-$ docker model pull ignaciolopezluna020/llama3.2:1b
+$ docker model pull ai/llama3.2:1b
 ```
 
 Output:
 
 ```text
 Downloaded: 626.05 MB
-Model ignaciolopezluna020/llama3.2:1b pulled successfully
+Model ai/llama3.2:1b pulled successfully
 ```
 
 ### List available models
### List available models
@@ -91,7 +91,7 @@ Lists all models currently pulled to your local environment.
 $ docker model list
 ```
 
-If no models have been pulled yet, you will something similar to:
+You will see something similar to:
 
 ```text
 MODEL PARAMETERS QUANTIZATION ARCHITECTURE MODEL ID CREATED SIZE
@@ -105,7 +105,7 @@ Run a model and interact with it using a submitted prompt or in chat mode.
 #### One-time prompt
 
 ```console
-$ docker model run ignaciolopezluna020/llama3.2:1b "Hi"
+$ docker model run ai/llama3.2:1b "Hi"
 ```
 
 Output:
@@ -117,7 +117,7 @@ Hi! How can I assist you today
 #### Interactive chat
 
 ```console
-docker model run ignaciolopezluna020/llama3.2:1b
+docker model run ai/llama3.2:1b
 ```
 
 Output:
@@ -152,7 +152,7 @@ If you want to try an existing GenAI application, follow these instructions.
 1. Pull the required model from Docker Hub so it's ready for use in your app.
 
 ```console
-$ docker model pull ignaciolopezluna020/llama3.2:1b
+$ docker model pull ai/llama3.2:1b
 ```
 
 2. Set up the sample app. Download and unzip the following folder:
@@ -176,7 +176,7 @@ You can now interact with your own GenAI app, powered by a local model. Try a fe
 
 ### What models are available?
 
-Currently, all models are hosted in the public Docker Hub namespace of <CHANGE>. You can pull and use any of the following:
+All the available models are hosted in the [public Docker Hub namespace of `ai`](https://hub.docker.com/u/ai).
 
 ### What API endpoints are available?
 
@@ -222,7 +222,7 @@ Examples of calling an OpenAI endpoint (`chat/completions`) from within another
 curl http://model-runner.docker.internal/engines/llama.cpp/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "ignaciolopezluna020/llama3.2:1b",
+    "model": "ai/llama3.2:1b",
     "messages": [
       {
         "role": "system",
@@ -248,7 +248,7 @@ curl --unix-socket $HOME/.docker/run/docker.sock \
   localhost/exp/vDD4.40/engines/llama.cpp/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "ignaciolopezluna020/llama3.2:1b",
+    "model": "ai/llama3.2:1b",
     "messages": [
       {
         "role": "system",
@@ -279,7 +279,7 @@ Afterwards, interact with it as previously documented using `localhost` and the
 curl http://localhost:8080/engines/llama.cpp/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "ignaciolopezluna020/llama3.2:1b",
+    "model": "ai/llama3.2:1b",
     "messages": [
       {
         "role": "system",
