How to set up the Continue extension in VS Code with Ollama and a local LLM for chat and autocomplete suggestions on Windows 11

Along the way, this also covers the simplest way to use Ollama on your computer.

A tutorial on how to set up the Continue extension in VS Code with Ollama and a local LLM for chat and autocomplete.

Download and install Ollama

Once installed, you should see Ollama running in your system tray.

You can also check by opening the Ollama desktop app.

Now it's time to install some local LLM models.

Go to Ollama and find some models:
Ollama Models Download
In our case we want to download llama3.1:8b for chat and qwen2.5-coder:1.5b for autocomplete.

We do that via PowerShell; run the command below.

Get llama3.1:8b

ollama run llama3.1:8b


Once it's done, you can chat with the LLM in PowerShell (your CLI). To quit the chat, write /bye. If you accidentally close PowerShell, you can list all installed models by writing:

ollama list

To run your model again, you simply write ollama run followed by one of the model names from that list:

ollama run llama3.1:8b

Do the same for qwen2.5-coder:1.5b

ollama run qwen2.5-coder:1.5b

Now you can actually use your LLM models locally via Ollama.

You will also see all your installed LLM models in the Ollama desktop app.
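If you prefer to check from a script instead of the CLI or the desktop app, Ollama also exposes a local HTTP API (port 11434 by default). Below is a minimal sketch; the helper names are my own, and the commented-out call at the end only works while Ollama is running with the model pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local API address


def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_ollama(model: str, prompt: str) -> str:
    """Send a one-shot prompt to a locally running Ollama model."""
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Requires Ollama running and the model installed, e.g.:
# print(ask_ollama("llama3.1:8b", "Say hello in one word."))
```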

Install Continue extension in VS Code

Open Extensions in VS Code (Ctrl + Shift + X)
Search for Continue and hit Install

Configure the Continue plugin

Open the config file.
Now paste in the config below. Note: the models you assign must be installed in your Ollama, otherwise it will not work.
Check that the model names below match what you have installed; use 'ollama list' in PowerShell to see your installed models. We use a small model for autocomplete to increase speed, and one that is also specialized in autocomplete; for chat we use an 8b model, which gives more depth and can still run on many types of computers. If you want to try others, ask Gemini, ChatGPT, Claude, Copilot, or whatever up-to-date state-of-the-art online chat model for new or better suggestions.

name: Local Config
version: 0.0.1
schema: v1

models:
  - name: Local Chat
    provider: ollama
    model: llama3.1:8b
    roles:
      - chat

  - name: Local Autocomplete
    provider: ollama
    model: qwen2.5-coder:1.5b
    roles:
      - autocomplete
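If Continue complains that a model is missing, a common cause is a mismatch between the model values in the config and the tags Ollama actually has installed. Here is a small sketch for comparing the two; the helper names are my own, and it assumes the tabular output format of 'ollama list' (a header row, then one model tag per line).

```python
def parse_ollama_list(output: str) -> set:
    """Extract model tags from `ollama list` output, skipping the header row."""
    lines = output.strip().splitlines()[1:]
    return {line.split()[0] for line in lines if line.strip()}


def missing_models(required, installed_output):
    """Return the required model tags that `ollama list` did not report."""
    return set(required) - parse_ollama_list(installed_output)


# Example with output captured from `ollama list` (sizes/IDs are illustrative):
sample = """NAME                  ID            SIZE    MODIFIED
llama3.1:8b           aaaaaaaaaaaa  4.7 GB  2 days ago
qwen2.5-coder:1.5b    bbbbbbbbbbbb  986 MB  2 days ago
"""
print(missing_models(["llama3.1:8b", "qwen2.5-coder:1.5b"], sample))  # set()
```

An empty set means every model named in the config is installed; anything left over is a tag you still need to pull (or a typo in the config).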

Once you have configured it, you can start using it. Ensure the Continue extension is shown in the status bar; this is where you can enable/disable autocomplete if you want to.

Sometimes you might need to reload VS Code to make everything run properly.

Now you are ready to rock ma dudes/dudeins

Ctrl + L: open chat
Highlight code + Ctrl + L to ask questions, or Ctrl + I to edit code
Autocomplete should come by itself; if not, you can force it with Ctrl + Alt + Space
Below is a snippet from Continue once you have installed it:

"""                    _________               _____ _____
                       __  ____/______ _______ __  /____(_)_______ ____  _______
                       _  /     _  __ \__  __ \_  __/__  / __  __ \_  / / /_  _ \
                       / /___   / /_/ /_  / / // /_  _  /  _  / / // /_/ / /  __/
                       \____/   \____/ /_/ /_/ \__/  /_/   /_/ /_/ \__,_/  \___/

                                 Autocomplete, Edit, Chat, and Agent tutorial
"""


# —————————————————————————————————————————————     Autocomplete     —————————————————————————————————————————————— #
#                            Autocomplete provides inline code suggestions as you type.

# 1. Place cursor after `sorting_algorithm:` below and press [Enter]
# 2. Press [Tab] to accept the Autocomplete suggestion

# Basic assertion for sorting_algorithm:

# —————————————————————————————————————————————————     Edit      ————————————————————————————————————————————————— #
#                   Edit is a convenient way to make quick changes to specific code and files.

# 1. Highlight the code below
# 2. Press [Cmd/Ctrl + I] to Edit
# 3. Try asking Continue to "make this more readable"
def sorting_algorithm(x):
    for i in range(len(x)):
        for j in range(len(x) - 1):
            if x[j] > x[j + 1]:
                x[j], x[j + 1] = x[j + 1], x[j]
    return x

# —————————————————————————————————————————————————     Chat      ————————————————————————————————————————————————— #
#                    Chat makes it easy to ask for help from an LLM without needing to leave the IDE.

# 1. Highlight the code below
# 2. Press [Cmd/Ctrl + L] to add to Chat
# 3. Try asking Continue "what sorting algorithm is this?"
def sorting_algorithm2(x):
    for i in range(len(x)):
        for j in range(len(x) - 1):
            if x[j] > x[j + 1]:
                x[j], x[j + 1] = x[j + 1], x[j]
    return x

# —————————————————————————————————————————————————     Agent      ————————————————————————————————————————————————— #
#           Agent equips the Chat model with the tools needed to handle a wide range of coding tasks, allowing
#           the model to make decisions and save you the work of manually finding context and performing actions.

# 1. Switch from "Chat" to "Agent" mode using the dropdown in the bottom left of the input box
# 2. Use the "/init" slash command to generate a CONTINUE.md file

  # ——————————————————      Learn more at https://docs.continue.dev      ——————————————————— #
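As a spoiler for the Chat exercise above: both tutorial functions implement bubble sort. Here is a standalone copy you can run to convince yourself before asking the model:

```python
def sorting_algorithm(x):
    """Bubble sort: repeatedly swaps adjacent out-of-order elements."""
    for i in range(len(x)):
        for j in range(len(x) - 1):
            if x[j] > x[j + 1]:
                x[j], x[j + 1] = x[j + 1], x[j]
    return x


print(sorting_algorithm([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```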



HELP

If you encounter anything that is not acting as expected, any AI chat is your friend in need. You might run into Chat saying that your models aren't configured right or that your model doesn't exist. It can come down to formatting in the config.yaml file, so try pasting your config.yaml into any LLM and see if it can spot any issues; it could be as simple as indentation gone wrong :).

Hope this helps you get coding like a fully fueled rocket.

THANKS, Continue, for this.
