# Microsoft.Extensions.AI for Local LLMs
This repository provides sample code that uses Microsoft.Extensions.AI for locally running LLMs through Docker Model Runner, Foundry Local, Hugging Face and Ollama.

This is a trimmed-down version of OpenChat Playground, focused on dealing with locally running LLMs. If you want to see more language models supported, including MaaS and other vendors, try out OpenChat Playground instead.
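Whichever connector you choose below, the common abstraction is `IChatClient` from Microsoft.Extensions.AI. As a minimal sketch (the prompt text and method shape here are illustrative, not the app's actual code), each connector simply supplies a different `IChatClient` implementation while the chat code stays vendor-neutral:

```csharp
using Microsoft.Extensions.AI;

// Minimal sketch: every connector below (GitHub Models, Docker Model Runner,
// Foundry Local, Hugging Face via Ollama, Ollama) plugs in as an IChatClient,
// so the chat logic itself does not change per vendor.
async Task ChatAsync(IChatClient chatClient)
{
    ChatResponse response = await chatClient.GetResponseAsync(
        [new ChatMessage(ChatRole.User, "Why is the sky blue?")]);

    Console.WriteLine(response.Text);
}
```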
## Prerequisites

- .NET SDK 9
- Visual Studio Code + C# Dev Kit or Visual Studio 2022 v17.14+
- GitHub CLI
- PowerShell 7.5+ 👉 Windows only
- Docker Desktop
- Foundry Local
- Ollama
## Getting Started

- Log in to GitHub.

  ```bash
  gh auth login
  ```

- Check the login status.

  ```bash
  gh auth status
  ```

- Fork this repository to your account and clone the forked repository to your local machine.

  ```bash
  gh repo fork devkimchi/meai-for-local-llms --clone --default-branch-only
  ```

- Navigate to the cloned repository.

  ```bash
  cd meai-for-local-llms
  ```

- Get the repository root.

  ```bash
  # bash/zsh
  REPOSITORY_ROOT=$(git rev-parse --show-toplevel)
  ```

  ```powershell
  # PowerShell
  $REPOSITORY_ROOT = git rev-parse --show-toplevel
  ```
## Run on GitHub Models

By default, this app uses GitHub Models.

### With .NET Aspire

- Make sure you are at the repository root.

  ```bash
  cd $REPOSITORY_ROOT
  ```

- Add a GitHub Personal Access Token (PAT) for the GitHub Models connection. Make sure you replace `{{YOUR_TOKEN}}` with your GitHub PAT.

  ```bash
  # bash/zsh
  dotnet user-secrets --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
      set "Parameters:github-models-gh-apikey" "{{YOUR_TOKEN}}"
  ```

  ```powershell
  # PowerShell
  dotnet user-secrets --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
      set "Parameters:github-models-gh-apikey" "{{YOUR_TOKEN}}"
  ```

  For more details about GitHub PAT, refer to the doc, Managing your personal access tokens.

- Run the app. The default language model is `openai/gpt-4o-mini`.

  ```bash
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost
  ```

  If you want to change the language model, add the `--model` option with a preferred model name. You can find the language models on the GitHub Models catalog page.

  ```bash
  # bash/zsh
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
      -- --connector-type GitHubModels --model <model-name>
  ```

  ```powershell
  # PowerShell
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
      -- --connector-type GitHubModels --model <model-name>
  ```

- Once the .NET Aspire dashboard opens, navigate to `https://localhost:45160` and enter prompts.
### Without .NET Aspire

- Make sure you are at the repository root.

  ```bash
  cd $REPOSITORY_ROOT
  ```

- Add a GitHub Personal Access Token (PAT) for the GitHub Models connection. Make sure you replace `{{YOUR_TOKEN}}` with your GitHub PAT.

  ```bash
  # bash/zsh
  dotnet user-secrets --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
      set GitHubModels:Token "{{YOUR_TOKEN}}"
  ```

  ```powershell
  # PowerShell
  dotnet user-secrets --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
      set GitHubModels:Token "{{YOUR_TOKEN}}"
  ```

  For more details about GitHub PAT, refer to the doc, Managing your personal access tokens.

- Run the app. The default language model is `openai/gpt-4o-mini`.

  ```bash
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp
  ```

  If you want to change the language model, add the `--model` option with a preferred model name. You can find the language models on the GitHub Models catalog page.

  ```bash
  # bash/zsh
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
      -- --connector-type GitHubModels --model <model-name>
  ```

  ```powershell
  # PowerShell
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
      -- --connector-type GitHubModels --model <model-name>
  ```

- Open your web browser, navigate to `http://localhost:5160`, and enter prompts.
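For reference, here is a minimal sketch of how a GitHub Models-backed `IChatClient` can be created with the OpenAI client plus the Microsoft.Extensions.AI.OpenAI adapter. The endpoint URL and environment variable are assumptions for illustration, not the app's actual wiring:

```csharp
using System.ClientModel;
using Microsoft.Extensions.AI;
using OpenAI;

// Sketch only: assumes the OpenAI-compatible GitHub Models endpoint below and
// the Microsoft.Extensions.AI.OpenAI adapter package.
var openAIClient = new OpenAIClient(
    new ApiKeyCredential(Environment.GetEnvironmentVariable("GITHUB_TOKEN")!),
    new OpenAIClientOptions { Endpoint = new Uri("https://models.github.ai/inference") });

IChatClient chatClient = openAIClient
    .GetChatClient("openai/gpt-4o-mini")
    .AsIChatClient(); // extension method from Microsoft.Extensions.AI.OpenAI
```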
## Run on Docker Model Runner

### With .NET Aspire

- Make sure Docker Desktop is up and running.

  ```bash
  docker info
  ```

- Download the language model, `ai/gpt-oss`, to your local machine.

  ```bash
  docker model pull ai/gpt-oss
  ```

- Make sure you are at the repository root.

  ```bash
  cd $REPOSITORY_ROOT
  ```

- Run the app using the `--connector-type` option with the `DockerModelRunner` value. The default language model is `ai/gpt-oss`.

  ```bash
  # bash/zsh
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
      -- --connector-type DockerModelRunner
  ```

  ```powershell
  # PowerShell
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
      -- --connector-type DockerModelRunner
  ```

  If you want to change the language model, add the `--model` option with a preferred model name.

  ```bash
  # bash/zsh
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
      -- --connector-type DockerModelRunner --model <model-name>
  ```

  ```powershell
  # PowerShell
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
      -- --connector-type DockerModelRunner --model <model-name>
  ```

- Once the .NET Aspire dashboard opens, navigate to `https://localhost:45160` and enter prompts.
### Without .NET Aspire

- Make sure Docker Desktop is up and running.

  ```bash
  docker info
  ```

- Download the language model, `ai/gpt-oss`, to your local machine.

  ```bash
  docker model pull ai/gpt-oss
  ```

- Make sure you are at the repository root.

  ```bash
  cd $REPOSITORY_ROOT
  ```

- Run the app using the `--connector-type` option with the `DockerModelRunner` value. The default language model is `ai/gpt-oss`.

  ```bash
  # bash/zsh
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
      -- --connector-type DockerModelRunner
  ```

  ```powershell
  # PowerShell
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
      -- --connector-type DockerModelRunner
  ```

  If you want to change the language model, add the `--model` option with a preferred model name.

  ```bash
  # bash/zsh
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
      -- --connector-type DockerModelRunner --model <model-name>
  ```

  ```powershell
  # PowerShell
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
      -- --connector-type DockerModelRunner --model <model-name>
  ```

- Open your web browser, navigate to `http://localhost:5160`, and enter prompts.
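Docker Model Runner exposes an OpenAI-compatible API, so the same OpenAI client can point at the local endpoint. The port and path below are the commonly documented host-side defaults and may differ on your machine; a hypothetical sketch:

```csharp
using System.ClientModel;
using Microsoft.Extensions.AI;
using OpenAI;

// Sketch only: assumes Docker Model Runner's host-side TCP endpoint is enabled
// on its default port (12434); local endpoints typically ignore the API key,
// but the client requires a non-empty value.
var openAIClient = new OpenAIClient(
    new ApiKeyCredential("unused"),
    new OpenAIClientOptions { Endpoint = new Uri("http://localhost:12434/engines/v1") });

IChatClient chatClient = openAIClient
    .GetChatClient("ai/gpt-oss")
    .AsIChatClient();
```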
## Run on Foundry Local

### With .NET Aspire

- Make sure Foundry Local is NOT running.

  ```bash
  foundry service status
  ```

  If the Foundry Local service is up and running, run the following command:

  ```bash
  foundry service stop
  ```

- Download the language model, `gpt-oss-20b`, to your local machine.

  ```bash
  foundry model download gpt-oss-20b
  ```

  Once you download a language model, the Foundry Local service automatically starts. If the service is up and running, stop it first.

  ```bash
  foundry service stop
  ```

- Make sure you are at the repository root.

  ```bash
  cd $REPOSITORY_ROOT
  ```

- Run the app using the `--connector-type` option with the `FoundryLocal` value. The default language model is `gpt-oss-20b`.

  ```bash
  # bash/zsh
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
      -- --connector-type FoundryLocal
  ```

  ```powershell
  # PowerShell
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
      -- --connector-type FoundryLocal
  ```

  If you want to change the language model, add the `--alias` option with a preferred model name.

  ```bash
  # bash/zsh
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
      -- --connector-type FoundryLocal --alias <model-name>
  ```

  ```powershell
  # PowerShell
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
      -- --connector-type FoundryLocal --alias <model-name>
  ```

- Once the .NET Aspire dashboard opens, navigate to `https://localhost:45160` and enter prompts.
### Without .NET Aspire

- Download the language model, `gpt-oss-20b`, to your local machine.

  ```bash
  foundry model download gpt-oss-20b
  ```

- Make sure you are at the repository root.

  ```bash
  cd $REPOSITORY_ROOT
  ```

- Run the app using the `--connector-type` option with the `FoundryLocal` value. The default language model is `gpt-oss-20b`.

  ```bash
  # bash/zsh
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
      -- --connector-type FoundryLocal
  ```

  ```powershell
  # PowerShell
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
      -- --connector-type FoundryLocal
  ```

  If you want to change the language model, add the `--alias` option with a preferred model name.

  ```bash
  # bash/zsh
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
      -- --connector-type FoundryLocal --alias <model-name>
  ```

  ```powershell
  # PowerShell
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
      -- --connector-type FoundryLocal --alias <model-name>
  ```

- Open your web browser, navigate to `http://localhost:5160`, and enter prompts.
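Foundry Local also ships a C# SDK (`Microsoft.AI.Foundry.Local`) that starts the service, loads a model by alias, and hands back a local OpenAI-compatible endpoint. The sketch below follows that SDK's documented surface, but names may drift across versions, so verify against the Foundry Local docs:

```csharp
using System.ClientModel;
using Microsoft.AI.Foundry.Local;
using Microsoft.Extensions.AI;
using OpenAI;

// Sketch only: assumes the Microsoft.AI.Foundry.Local SDK surface shown here,
// not this app's actual wiring.
var manager = await FoundryLocalManager.StartModelAsync(aliasOrModelId: "gpt-oss-20b");
var model = await manager.GetModelInfoAsync(aliasOrModelId: "gpt-oss-20b");

var openAIClient = new OpenAIClient(
    new ApiKeyCredential(manager.ApiKey),
    new OpenAIClientOptions { Endpoint = manager.Endpoint });

IChatClient chatClient = openAIClient
    .GetChatClient(model?.ModelId ?? "gpt-oss-20b")
    .AsIChatClient();
```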
## Run on Hugging Face

Models from Hugging Face run through the Ollama server.

### With .NET Aspire

With .NET Aspire, the app uses the `ollama` container image, so there's no need to run the Ollama server on your local machine.

- Make sure you are at the repository root.

  ```bash
  cd $REPOSITORY_ROOT
  ```

- Run the app using the `--connector-type` option with the `HuggingFace` value. The default language model is `hf.co/LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF`.

  ```bash
  # bash/zsh
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
      -- --connector-type HuggingFace
  ```

  ```powershell
  # PowerShell
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
      -- --connector-type HuggingFace
  ```

  If you want to change the language model, add the `--model` option with a preferred model name. The model name MUST follow the `hf.co/{ORG_NAME}/{MODEL_NAME}` format, and the model MUST be in the GGUF format.

  ```bash
  # bash/zsh
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
      -- --connector-type HuggingFace --model <model-name>
  ```

  ```powershell
  # PowerShell
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
      -- --connector-type HuggingFace --model <model-name>
  ```

- Once the .NET Aspire dashboard opens, navigate to `https://localhost:45160` and enter prompts.
### Without .NET Aspire

- Make sure Ollama is up and running.

  ```bash
  ollama start
  ```

- In a separate terminal, download the language model, `LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF`, to your local machine.

  ```bash
  ollama pull hf.co/LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF
  ```

- Make sure you are at the repository root.

  ```bash
  cd $REPOSITORY_ROOT
  ```

- Run the app using the `--connector-type` option with the `HuggingFace` value. The default language model is `hf.co/LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF`.

  ```bash
  # bash/zsh
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
      -- --connector-type HuggingFace
  ```

  ```powershell
  # PowerShell
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
      -- --connector-type HuggingFace
  ```

  If you want to change the language model, add the `--model` option with a preferred model name.

  ```bash
  # bash/zsh
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
      -- --connector-type HuggingFace --model <model-name>
  ```

  ```powershell
  # PowerShell
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
      -- --connector-type HuggingFace --model <model-name>
  ```

- Open your web browser, navigate to `http://localhost:5160`, and enter prompts.
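Because Hugging Face models are served by Ollama here, the connection looks like a regular Ollama one; only the model name changes to the `hf.co/{ORG_NAME}/{MODEL_NAME}` form. A hypothetical sketch against Ollama's OpenAI-compatible endpoint:

```csharp
using System.ClientModel;
using Microsoft.Extensions.AI;
using OpenAI;

// Sketch only: assumes Ollama's default port and its OpenAI-compatible /v1 API;
// Ollama ignores the API key, but the client requires a non-empty value.
var openAIClient = new OpenAIClient(
    new ApiKeyCredential("ollama"),
    new OpenAIClientOptions { Endpoint = new Uri("http://localhost:11434/v1") });

IChatClient chatClient = openAIClient
    .GetChatClient("hf.co/LGAI-EXAONE/EXAONE-4.0-1.2B-GGUF")
    .AsIChatClient();
```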
## Run on Ollama

### With .NET Aspire

With .NET Aspire, the app uses the `ollama` container image, so there's no need to run the Ollama server on your local machine.

- Make sure you are at the repository root.

  ```bash
  cd $REPOSITORY_ROOT
  ```

- Run the app using the `--connector-type` option with the `Ollama` value. The default language model is `gpt-oss`.

  ```bash
  # bash/zsh
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
      -- --connector-type Ollama
  ```

  ```powershell
  # PowerShell
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
      -- --connector-type Ollama
  ```

  If you want to change the language model, add the `--model` option with a preferred model name.

  ```bash
  # bash/zsh
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost \
      -- --connector-type Ollama --model <model-name>
  ```

  ```powershell
  # PowerShell
  dotnet watch run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.AppHost `
      -- --connector-type Ollama --model <model-name>
  ```

- Once the .NET Aspire dashboard opens, navigate to `https://localhost:45160` and enter prompts.
### Without .NET Aspire

- Make sure Ollama is up and running.

  ```bash
  ollama start
  ```

- In a separate terminal, download the language model, `gpt-oss`, to your local machine.

  ```bash
  ollama pull gpt-oss
  ```

- Make sure you are at the repository root.

  ```bash
  cd $REPOSITORY_ROOT
  ```

- Run the app using the `--connector-type` option with the `Ollama` value. The default language model is `gpt-oss`.

  ```bash
  # bash/zsh
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
      -- --connector-type Ollama
  ```

  ```powershell
  # PowerShell
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
      -- --connector-type Ollama
  ```

  If you want to change the language model, add the `--model` option with a preferred model name.

  ```bash
  # bash/zsh
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp \
      -- --connector-type Ollama --model <model-name>
  ```

  ```powershell
  # PowerShell
  dotnet run --project $REPOSITORY_ROOT/src/MEAIForLocalLLMs.WebApp `
      -- --connector-type Ollama --model <model-name>
  ```

- Open your web browser, navigate to `http://localhost:5160`, and enter prompts.
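Besides the OpenAI-compatible `/v1` route shown in the Hugging Face sketch above, another option for talking to Ollama from Microsoft.Extensions.AI is the community OllamaSharp package, whose client implements `IChatClient` directly. A sketch assuming OllamaSharp and Ollama's default port:

```csharp
using Microsoft.Extensions.AI;
using OllamaSharp;

// Sketch only: OllamaSharp's OllamaApiClient implements
// Microsoft.Extensions.AI.IChatClient, so it drops straight into IChatClient code.
IChatClient chatClient = new OllamaApiClient(
    new Uri("http://localhost:11434"), "gpt-oss");

ChatResponse response = await chatClient.GetResponseAsync(
    [new ChatMessage(ChatRole.User, "Hello!")]);

Console.WriteLine(response.Text);
```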