devkimchi/meai-for-local-llms

This repository provides sample code that uses Microsoft.Extensions.AI for locally running LLMs through Docker Model Runner, Foundry Local, Hugging Face, and Ollama.

This is a trimmed-down version of OpenChat Playground, focused on locally running LLMs. If you want support for more language models, including MaaS offerings and other vendors, try OpenChat Playground instead.
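
All of the connectors below are surfaced through the same Microsoft.Extensions.AI abstraction. As a rough sketch of the pattern (assuming the Microsoft.Extensions.AI.OpenAI adapter package, not necessarily this repository's exact wiring), an IChatClient for any OpenAI-compatible local endpoint looks like this; the endpoint and model name are illustrative placeholders.

    using System.ClientModel;
    using Microsoft.Extensions.AI;
    using OpenAI;

    // Illustrative placeholders: any OpenAI-compatible local endpoint works the same way.
    var endpoint = new Uri("http://localhost:11434/v1");

    IChatClient chatClient = new OpenAIClient(
            new ApiKeyCredential("unused"), // local servers typically ignore the key
            new OpenAIClientOptions { Endpoint = endpoint })
        .GetChatClient("gpt-oss")
        .AsIChatClient();

    var response = await chatClient.GetResponseAsync("Why is the sky blue?");
    Console.WriteLine(response.Text);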

Prerequisites

Depending on which options you use, you will need:

  - .NET SDK
  - GitHub CLI, to fork and clone the repository
  - A GitHub Personal Access Token (PAT), for the GitHub Models option
  - Docker Desktop, for the Docker Model Runner option or for running the app with .NET Aspire
  - Foundry Local, for the Foundry Local option
  - Ollama, for the Hugging Face and Ollama options without .NET Aspire

Getting Started

Get the repository ready

  1. Log in to GitHub.

    gh auth login
  2. Check login status.

    gh auth status
  3. Fork this repository to your account and clone the forked repository to your local machine.

    gh repo fork devkimchi/meai-for-local-llms --clone --default-branch-only
  4. Navigate to the cloned repository.

    cd meai-for-local-llms
  5. Get the repository root.

    # bash/zsh
    REPOSITORY_ROOT=$(git rev-parse --show-toplevel)
    # PowerShell
    $REPOSITORY_ROOT = git rev-parse --show-toplevel

Use GitHub Models

By default, this app uses GitHub Models.

With .NET Aspire
  1. Make sure you are at the repository root.

    cd $REPOSITORY_ROOT
  2. Add a GitHub Personal Access Token (PAT) for the GitHub Models connection. Make sure you replace {{YOUR_TOKEN}} with your GitHub PAT.

    # bash/zsh
    dotnet user-secrets --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
        set "Parameters:github-models-gh-apikey" "{{YOUR_TOKEN}}"
    # PowerShell
    dotnet user-secrets --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
        set "Parameters:github-models-gh-apikey" "{{YOUR_TOKEN}}"

    For more details about GitHub PATs, refer to the doc, Managing your personal access tokens.

  3. Run the app. The default language model is openai/gpt-4o-mini.

    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost

    If you want to change the language model, add the --model option with a preferred model name. You can find available models on the GitHub Models catalog page.

    # bash/zsh
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
        -- --connector-type GitHubModels --model <model-name>
    # PowerShell
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
        -- --connector-type GitHubModels --model <model-name>
  4. Once the .NET Aspire dashboard opens, navigate to https://localhost:45160 and enter prompts.

Without .NET Aspire
  1. Make sure you are at the repository root.

    cd $REPOSITORY_ROOT
  2. Add a GitHub Personal Access Token (PAT) for the GitHub Models connection. Make sure you replace {{YOUR_TOKEN}} with your GitHub PAT. A sketch of how the app might consume this token appears at the end of this section.

    # bash/zsh
    dotnet user-secrets --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
        set GitHubModels:Token "{{YOUR_TOKEN}}"
    # PowerShell
    dotnet user-secrets --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
        set GitHubModels:Token "{{YOUR_TOKEN}}"

    For more details about GitHub PATs, refer to the doc, Managing your personal access tokens.

  3. Run the app. The default language model is openai/gpt-4o-mini.

    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp

    If you want to change the language model, add the --model option with a preferred model name. You can find available models on the GitHub Models catalog page.

    # bash/zsh
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
        -- --connector-type GitHubModels --model <model-name>
    # PowerShell
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
        -- --connector-type GitHubModels --model <model-name>
  4. Open your web browser, navigate to http://localhost:5160, and enter prompts.
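
For illustration, here is a hedged sketch of how the WebApp might read the token stored above and connect to GitHub Models. "GitHubModels:Token" is the configuration key set via dotnet user-secrets in step 2; the endpoint and the rest of the wiring are assumptions, not this repository's confirmed implementation.

    using System.ClientModel;
    using Microsoft.Extensions.AI;
    using OpenAI;

    var builder = WebApplication.CreateBuilder(args);

    // "GitHubModels:Token" is the key set via `dotnet user-secrets` above.
    var token = builder.Configuration["GitHubModels:Token"]
        ?? throw new InvalidOperationException("GitHub PAT is not configured.");

    // Assumption: GitHub Models exposes an OpenAI-compatible inference endpoint,
    // with the PAT acting as the API key.
    IChatClient chatClient = new OpenAIClient(
            new ApiKeyCredential(token),
            new OpenAIClientOptions { Endpoint = new Uri("https://models.github.ai/inference") })
        .GetChatClient("openai/gpt-4o-mini")
        .AsIChatClient();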

Use Docker Model Runner

With .NET Aspire
  1. Make sure Docker Desktop is up and running.

    docker info
  2. Download the ai/gpt-oss language model to your local machine.

    docker model pull ai/gpt-oss
  3. Make sure you are at the repository root.

    cd $REPOSITORY_ROOT
  4. Run the app using the --connector-type option with the DockerModelRunner value. The default language model is ai/gpt-oss.

    # bash/zsh
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
        -- --connector-type DockerModelRunner
    # PowerShell
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
        -- --connector-type DockerModelRunner

    If you want to change the language model, add the --model option with a preferred model name.

    # bash/zsh
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
        -- --connector-type DockerModelRunner --model <model-name>
    # PowerShell
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
        -- --connector-type DockerModelRunner --model <model-name>
  5. Once the .NET Aspire dashboard opens, navigate to https://localhost:45160 and enter prompts.

Without .NET Aspire
  1. Make sure Docker Desktop is up and running.

    docker info
  2. Download the ai/gpt-oss language model to your local machine.

    docker model pull ai/gpt-oss
  3. Make sure you are at the repository root.

    cd $REPOSITORY_ROOT
  4. Run the app using the --connector-type option with the DockerModelRunner value. The default language model is ai/gpt-oss.

    # bash/zsh
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
        -- --connector-type DockerModelRunner
    # PowerShell
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
        -- --connector-type DockerModelRunner

    If you want to change the language model, add the --model option with a preferred model name.

    # bash/zsh
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
        -- --connector-type DockerModelRunner --model <model-name>
    # PowerShell
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
        -- --connector-type DockerModelRunner --model <model-name>
  5. Open your web browser, navigate to http://localhost:5160, and enter prompts.
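
Docker Model Runner serves pulled models over an OpenAI-compatible API. As a hedged sketch of connecting to it directly, assuming Docker Desktop's host-side TCP support is enabled on its default port (both the port and path below are assumptions to verify against your Docker Desktop settings):

    using System.ClientModel;
    using Microsoft.Extensions.AI;
    using OpenAI;

    // Assumption: Docker Model Runner's OpenAI-compatible endpoint with
    // host-side TCP support enabled on the default port.
    IChatClient chatClient = new OpenAIClient(
            new ApiKeyCredential("unused"), // local endpoint; no real key required
            new OpenAIClientOptions { Endpoint = new Uri("http://localhost:12434/engines/v1") })
        .GetChatClient("ai/gpt-oss")
        .AsIChatClient();

    // Stream the reply token by token.
    await foreach (var update in chatClient.GetStreamingResponseAsync("Tell me a joke."))
    {
        Console.Write(update);
    }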

Use Foundry Local

With .NET Aspire
  1. Make sure Foundry Local is NOT running.

    foundry service status

    If the Foundry Local service is up and running, run the following command:

    foundry service stop
  2. Download the gpt-oss-20b language model to your local machine.

    foundry model download gpt-oss-20b

    Downloading a language model automatically starts the Foundry Local service. If the service is running, stop it again:

    foundry service stop
  3. Make sure you are at the repository root.

    cd $REPOSITORY_ROOT
  4. Run the app using the --connector-type option with the FoundryLocal value. The default language model is gpt-oss-20b.

    # bash/zsh
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
        -- --connector-type FoundryLocal
    # PowerShell
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
        -- --connector-type FoundryLocal

    If you want to change the language model, add the --alias option with a preferred model alias.

    # bash/zsh
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
        -- --connector-type FoundryLocal --alias <model-name>
    # PowerShell
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
        -- --connector-type FoundryLocal --alias <model-name>
  5. Once the .NET Aspire dashboard opens, navigate to https://localhost:45160 and enter prompts.

Without .NET Aspire
  1. Download the gpt-oss-20b language model to your local machine.

    foundry model download gpt-oss-20b
  2. Make sure you are at the repository root.

    cd $REPOSITORY_ROOT
  3. Run the app using the --connector-type option with the FoundryLocal value. The default language model is gpt-oss-20b.

    # bash/zsh
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
        -- --connector-type FoundryLocal
    # PowerShell
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
        -- --connector-type FoundryLocal

    If you want to change the language model, add the --alias option with a preferred model alias (see the sketch at the end of this section).

    # bash/zsh
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
        -- --connector-type FoundryLocal --alias <model-name>
    # PowerShell
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
        -- --connector-type FoundryLocal --alias <model-name>
  4. Open your web browser, navigate to http://localhost:5160, and enter prompts.
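
The --alias option above reflects how Foundry Local addresses models: an alias such as gpt-oss-20b resolves to the model variant best suited to your hardware. As a hedged sketch, adapted from the Foundry Local C# SDK documentation (an assumption rather than this repository's confirmed wiring), the service can be started and queried programmatically like this:

    using System.ClientModel;
    using Microsoft.AI.Foundry.Local;
    using Microsoft.Extensions.AI;
    using OpenAI;

    var alias = "gpt-oss-20b";

    // Starts the Foundry Local service if needed and loads the aliased model.
    var manager = await FoundryLocalManager.StartModelAsync(aliasOrModelId: alias);
    var model = await manager.GetModelInfoAsync(aliasOrModelId: alias);

    // Foundry Local serves an OpenAI-compatible endpoint with a locally issued API key.
    IChatClient chatClient = new OpenAIClient(
            new ApiKeyCredential(manager.ApiKey),
            new OpenAIClientOptions { Endpoint = manager.Endpoint })
        .GetChatClient(model?.ModelId)
        .AsIChatClient();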

Use Hugging Face

Models from Hugging Face run through the Ollama server.
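
Because Ollama can pull GGUF models straight from the Hugging Face hub via the hf.co/ prefix, the same Ollama-facing client code serves both connectors. Here is a hedged sketch using OllamaSharp, whose OllamaApiClient implements IChatClient; the package choice is an assumption, not necessarily what this repository uses.

    using Microsoft.Extensions.AI;
    using OllamaSharp;

    // hf.co/{ORG_NAME}/{MODEL_NAME} tells Ollama to pull the GGUF model from Hugging Face.
    IChatClient chatClient = new OllamaApiClient(
        new Uri("http://localhost:11434"),
        "hf.co/LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF");

    var response = await chatClient.GetResponseAsync("Hello!");
    Console.WriteLine(response.Text);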

With .NET Aspire

With .NET Aspire, the app uses the ollama container image, so there's no need to run the Ollama server on your local machine.

  1. Make sure you are at the repository root.

    cd $REPOSITORY_ROOT
  2. Run the app using the --connector-type option with the HuggingFace value. The default language model is hf.co/LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF.

    # bash/zsh
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
        -- --connector-type HuggingFace
    # PowerShell
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
        -- --connector-type HuggingFace

    If you want to change the language model, add the --model option with a preferred model name. Note that the model name MUST follow the hf.co/{ORG_NAME}/{MODEL_NAME} format and MUST point to a GGUF model; the default, hf.co/LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF, follows this format.

    # bash/zsh
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
        -- --connector-type HuggingFace --model <model-name>
    # PowerShell
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
        -- --connector-type HuggingFace --model <model-name>
  3. Once the .NET Aspire dashboard opens, navigate to https://localhost:45160 and enter prompts.

Without .NET Aspire
  1. Make sure Ollama is up and running.

    ollama start
  2. In a separate terminal, download the LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF language model to your local machine.

    ollama pull hf.co/LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF
  3. Make sure you are at the repository root.

    cd $REPOSITORY_ROOT
  4. Run the app using the --connector-type option with the HuggingFace value. The default language model is hf.co/LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF.

    # bash/zsh
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
        -- --connector-type HuggingFace
    # PowerShell
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
        -- --connector-type HuggingFace

    If you want to change the language model, add the --model option with a preferred model name. The model name MUST follow the hf.co/{ORG_NAME}/{MODEL_NAME} format and MUST point to a GGUF model.

    # bash/zsh
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
        -- --connector-type HuggingFace --model <model-name>
    # PowerShell
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
        -- --connector-type HuggingFace --model <model-name>
  5. Open your web browser, navigate to http://localhost:5160, and enter prompts.

Use Ollama

With .NET Aspire

With .NET Aspire, the app uses the ollama container image, so there's no need to run the Ollama server on your local machine.
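
As a hedged sketch of what this looks like in an Aspire AppHost, assuming the CommunityToolkit.Aspire.Hosting.Ollama integration (an assumption about this repository's actual wiring), the container and model can be declared like this:

    // Hypothetical AppHost wiring; the resource names and the generated
    // Projects.MEAIForLocalLLMs_WebApp type are assumptions.
    var builder = DistributedApplication.CreateBuilder(args);

    var ollama = builder.AddOllama("ollama")
                        .WithDataVolume();      // persist pulled models across runs
    var model = ollama.AddModel("gpt-oss");     // pulled automatically on startup

    builder.AddProject<Projects.MEAIForLocalLLMs_WebApp>("webapp")
           .WithReference(model);

    builder.Build().Run();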

  1. Make sure you are at the repository root.

    cd $REPOSITORY_ROOT
  2. Run the app using the --connector-type option with the Ollama value. The default language model is gpt-oss.

    # bash/zsh
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
        -- --connector-type Ollama
    # PowerShell
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
        -- --connector-type Ollama

    If you want to change the language model, add the --model option with a preferred model name.

    # bash/zsh
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
        -- --connector-type Ollama --model <model-name>
    # PowerShell
    dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
        -- --connector-type Ollama --model <model-name>
  3. Once the .NET Aspire dashboard opens, navigate to https://localhost:45160 and enter prompts.

Without .NET Aspire
  1. Make sure Ollama is up and running.

    ollama start
  2. In a separate terminal, download the gpt-oss language model to your local machine.

    ollama pull gpt-oss
  3. Make sure you are at the repository root.

    cd $REPOSITORY_ROOT
  4. Run the app using the --connector-type option with the Ollama value. The default language model is gpt-oss.

    # bash/zsh
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
        -- --connector-type Ollama
    # PowerShell
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
        -- --connector-type Ollama

    If you want to change the language model, add the --model option with a preferred model name.

    # bash/zsh
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
        -- --connector-type Ollama --model <model-name>
    # PowerShell
    dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
        -- --connector-type Ollama --model <model-name>
  5. Open your web browser, navigate to http://localhost:5160, and enter prompts.
