Commit f21845d

ollama no longer requires experimental config

1 parent 49b16d2

File tree

1 file changed: +0 −1 lines changed

docs/cody/clients/install-vscode.mdx

Lines changed: 0 additions & 1 deletion
```diff
@@ -358,7 +358,6 @@ To generate chat and commands with Ollama locally, follow these steps:
 - Select a chat model (model that includes instruct or chat, for example, [gemma:7b-instruct-q4_K_M](https://ollama.com/library/gemma:7b-instruct-q4_K_M)) from the [Ollama Library](https://ollama.com/library)
 - Pull the chat model locally (for example, `ollama pull gemma:7b-instruct-q4_K_M`)
 - Once the chat model is downloaded successfully, open Cody in VS Code
-- Enable the `cody.experimental.ollamaChat` configuration
 - Open a new Cody chat
 - In the new chat panel, you should see the chat model you've pulled in the dropdown list
 - Currently, you will need to restart VS Code to see the new models
```
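The removed step refers to a VS Code setting that is no longer needed. As a hypothetical illustration (the exact value is an assumption; only the setting name comes from the removed docs line), a user's `settings.json` would previously have carried an entry like this, which can now be deleted:

```jsonc
{
  // No longer required after this change: Cody's Ollama chat support
  // works without the experimental flag (assumed boolean value).
  "cody.experimental.ollamaChat": true
}
```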

0 commit comments