content/manuals/ai/model-runner.md (1 addition & 1 deletion)
@@ -41,7 +41,7 @@ Models are pulled from Docker Hub the first time they're used and stored locally
5. Open the **Settings** view in Docker Desktop.
6. Navigate to **Features in development**.
7. From the **Beta** tab, tick the **Enable Docker Model Runner** setting.
- 8. If you are running on Windows, check tick the **Enable GPU-backed inference** setting.
+ 8. If you are running on Windows with a supported NVIDIA GPU, you should also see and be able to tick the **Enable GPU-backed inference** setting.
You can now use the `docker model` command in the CLI and view and interact with your local models in the **Models** tab in the Docker Desktop Dashboard.
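
For illustration, a minimal console session with the `docker model` CLI might look like the sketch below. The subcommands shown (`pull`, `list`, `run`) reflect the Docker Model Runner CLI; the model name `ai/smollm2` is an assumed example from Docker Hub's `ai` namespace, not something specified in this change.

```console
$ docker model pull ai/smollm2          # pull the model from Docker Hub; it is cached locally on first use
$ docker model list                     # list the models stored locally
$ docker model run ai/smollm2 "Hello!"  # send a one-off prompt to the model (ai/smollm2 is an example name)
```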