Commit f69177e

Merge pull request #7378 from jonburchel/2025-09-30-oct-freshness-updates
October freshness updates
2 parents 47b847c + 0009d74 commit f69177e

16 files changed: +355 −340 lines changed

articles/ai-foundry/concepts/encryption-keys-portal.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ description: Learn how to use customer-managed keys (CMK) for enhanced encryptio
 ms.author: jburchel
 author: jonburchel
 ms.reviewer: deeikele
-ms.date: 09/22/2025
+ms.date: 10/01/2025
 ms.service: azure-ai-services
 ms.topic: concept-article
 ms.custom:

articles/ai-foundry/foundry-local/concepts/foundry-local-architecture.md

Lines changed: 10 additions & 9 deletions
@@ -10,26 +10,27 @@ ms.author: jburchel
 ms.reviewer: samkemp
 author: jonburchel
 reviewer: samuel100
-ms.date: 7/3/2025
+ms.date: 10/01/2025
+ai-usage: ai-assisted
 ---

 # Foundry Local architecture

 [!INCLUDE [foundry-local-preview](./../includes/foundry-local-preview.md)]

-Foundry Local enables efficient, secure, and scalable AI model inference directly on your devices. This article explains the core components of Foundry Local and how they work together to deliver AI capabilities.
+Foundry Local enables efficient, secure, and scalable AI model inference directly on your device. This article explains the core components of Foundry Local and how they work together to deliver AI capabilities.

-Key benefits of Foundry Local include:
+Foundry Local offers these key benefits:

 > [!div class="checklist"]
 >
-> - **Low Latency**: Run models locally to minimize processing time and deliver faster results.
-> - **Data Privacy**: Process sensitive data locally without sending it to the cloud, helping meet data protection requirements.
+> - **Low latency**: Run models locally to minimize processing time and deliver faster results.
+> - **Data privacy**: Process sensitive data locally without sending it to the cloud, helping meet data protection requirements.
 > - **Flexibility**: Support for diverse hardware configurations lets you choose the optimal setup for your needs.
 > - **Scalability**: Deploy across various devices, from laptops to servers, to suit different use cases.
-> - **Cost-Effectiveness**: Reduce cloud computing costs, especially for high-volume applications.
-> - **Offline Operation**: Work without an internet connection in remote or disconnected environments.
-> - **Seamless Integration**: Easily incorporate into existing development workflows for smooth adoption.
+> - **Cost-effectiveness**: Reduce cloud computing costs, especially for high-volume applications.
+> - **Offline operation**: Work without an internet connection in remote or disconnected environments.
+> - **Seamless integration**: Easily incorporate into existing development workflows for smooth adoption.

 ## Key components

@@ -148,7 +149,7 @@ The AI Toolkit for Visual Studio Code provides a user-friendly interface for dev

 After completing these steps, your Foundry Local model will appear in the 'My Models' list in AI Toolkit and is ready to be used by right-clicking on your model and select 'Load in Playground'.

-## Next Steps
+## Related content

 - [Get started with Foundry Local](../get-started.md)
 - [Integrate inferencing SDKs with Foundry Local](../how-to/how-to-integrate-with-inference-sdks.md)
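The architecture described in the diff above centers on serving models locally for inference. As background, local inference services of this kind are typically driven with an OpenAI-style chat-completions payload; the sketch below only builds and prints that payload shape (the endpoint URL in the comment and the port are assumptions, not values confirmed by this commit):

```python
import json


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload (shape only; no network call)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


if __name__ == "__main__":
    payload = build_chat_request("qwen2.5-0.5b", "Why is the sky blue?")
    print(json.dumps(payload, indent=2))
    # To actually send it, POST the payload to the local service's
    # OpenAI-compatible endpoint (address and port vary by setup; assumption).
```

This is a generic illustration of the request shape, not Foundry Local's internal API surface.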

articles/ai-foundry/foundry-local/get-started.md

Lines changed: 42 additions & 41 deletions
@@ -2,7 +2,7 @@
 title: Get started with Foundry Local
 titleSuffix: Foundry Local
 description: Learn how to install, configure, and run your first AI model with Foundry Local
-ms.date: 07/03/2025
+ms.date: 10/01/2025
 ms.service: azure-ai-foundry
 ms.subservice: foundry-local
 ms.topic: quickstart
@@ -18,6 +18,7 @@ keywords:
 - cognitive
 - AI models
 - local inference
+ai-usage: ai-assisted
 # customer intent: As a developer, I want to get started with Foundry Local so that I can run AI models locally.
 ---

@@ -32,25 +33,25 @@ This guide walks you through setting up Foundry Local to run AI models on your d
 Your system must meet the following requirements to run Foundry Local:

 - **Operating System**: Windows 10 (x64), Windows 11 (x64/ARM), Windows Server 2025, macOS.
-- **Hardware**: Minimum 8GB RAM, 3GB free disk space. Recommended 16GB RAM, 15GB free disk space.
-- **Network**: Internet connection for initial model download (optional for offline use)
+- **Hardware**: Minimum 8 GB RAM and 3 GB free disk space. Recommended 16 GB RAM and 15 GB free disk space.
+- **Network**: Internet connection to download the initial model (optional for offline use).
 - **Acceleration (optional)**: NVIDIA GPU (2,000 series or newer), AMD GPU (6,000 series or newer), AMD NPU, Intel iGPU, Intel NPU (32GB or more of memory), Qualcomm Snapdragon X Elite (8GB or more of memory), Qualcomm NPU, or Apple silicon.

 > [!NOTE]
-> New NPUs are only supported on systems running Windows version 24H2 or later. If you have an Intel NPU on Windows, you need to install the [Intel NPU driver](https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html) to enable NPU acceleration with Foundry Local.
+> New NPUs are supported only on systems running Windows 24H2 or later. If you use an Intel NPU on Windows, install the [Intel NPU driver](https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html) to enable NPU acceleration in Foundry Local.

-Also, ensure you have administrative privileges to install software on your device.
+Make sure you have admin rights to install software.

 > [!TIP]
-> If you encounter service connection issues after installation (such as "Request to local service failed" errors), try running `foundry service restart` to resolve the issue.
+> If you see a service connection error after installation (for example, 'Request to local service failed'), run `foundry service restart`.

 ## Quickstart

-Get started with Foundry Local quickly with these options:
+Get started fast with Foundry Local:

 ### Option 1: Quick CLI setup

-1. **Install Foundry Local**
+1. **Install Foundry Local**.

 - **Windows**: Open a terminal and run the following command:
 ```bash
@@ -63,47 +64,47 @@ brew install foundrylocal
 ```
 Alternatively, you can download the installer from the [Foundry Local GitHub repository](https://aka.ms/foundry-local-installer).

-1. **Run your first model** Open a terminal window and run the following command to run a model:
+1. **Run your first model**. Open a terminal and run this command:

 ```bash
 foundry model run qwen2.5-0.5b
 ```

-The model downloads - which can take a few minutes, depending on your internet speed - and the model runs. Once the model is running, you can interact with it using the command line interface (CLI). For example, you can ask:
+Foundry Local downloads the model, which can take a few minutes depending on your internet speed, then runs it. After the model starts, interact with it by using the command-line interface (CLI). For example, you can ask:

 ```text
 Why is the sky blue?
 ```

-You should see a response from the model in the terminal:
-:::image type="content" source="media/get-started-output.png" alt-text="Screenshot of output from foundry local run command." lightbox="media/get-started-output.png":::
+You see a response from the model in the terminal:
+:::image type="content" source="media/get-started-output.png" alt-text="Screenshot of output from Foundry Local run command." lightbox="media/get-started-output.png":::

 ### Option 2: Download starter projects

 For practical, hands-on learning, download one of our starter projects that demonstrate real-world scenarios:

-- **[Chat Application Starter](https://github.com/microsoft/Foundry-Local/tree/main/samples/electron/foundry-chat)**: Build a local chat interface with multiple model support
-- **[Summarize Sample](https://github.com/microsoft/Foundry-Local/tree/main/samples/python/summarize)**: A command-line utility that generates summaries of text files or direct text input.
-- **[Function Calling Example](https://github.com/microsoft/Foundry-Local/tree/main/samples/python/functioncalling)**: Enabling and using function calling with Phi-4 mini.
+- [Chat Application Starter](https://github.com/microsoft/Foundry-Local/tree/main/samples/electron/foundry-chat): Build a local chat interface with multiple model support.
+- [Summarize Sample](https://github.com/microsoft/Foundry-Local/tree/main/samples/python/summarize): A command-line utility that generates summaries of text files or direct text input.
+- [Function Calling Example](https://github.com/microsoft/Foundry-Local/tree/main/samples/python/functioncalling): Enable and use function calling with Phi-4 mini.

 Each project includes:

 - Step-by-step setup instructions
 - Complete source code
 - Configuration examples
-- Best practices implementation
+- Best practices

 > [!TIP]
-> These starter projects align with scenarios covered in our [how-to guides](how-to/how-to-chat-application-with-open-web-ui.md) and provide immediate practical value.
+> These starter projects align with scenarios in the [how-to guides](how-to/how-to-chat-application-with-open-web-ui.md) and provide immediate practical value.

 > [!TIP]
-> You can replace `qwen2.5-0.5b` with any model name from the catalog (see `foundry model list` for available models). Foundry Local downloads the model variant that best matches your system's hardware and software configuration. For example, if you have an NVIDIA GPU, it downloads the CUDA version of the model. If you have a Qualcomm NPU, it downloads the NPU variant. If you have no GPU or NPU, it downloads the CPU version.
+> Replace `qwen2.5-0.5b` with any model name from the catalog (run `foundry model list` to view available models). Foundry Local downloads the variant that best matches your system's hardware and software configuration. For example, if you have an NVIDIA GPU, Foundry Local downloads the CUDA version. If you have a Qualcomm NPU, Foundry Local downloads the NPU variant. If you have no GPU or NPU, Foundry Local downloads the CPU version.
 >
-> Note that when you run `foundry model list` for the first time, you'll see a download progress bar as Foundry Local downloads the execution providers for your machine's hardware.
+> When you run `foundry model list` the first time, you see a download progress bar while Foundry Local downloads the execution providers for your hardware.

 ## Run the latest OpenAI open-source model

-To run the latest OpenAI open-source model - `GPT-OSS-20B` - use the following command:
+Run the latest OpenAI open-source model, `GPT-OSS-20B`, with this command:

 ```bash
 foundry model run gpt-oss-20b
@@ -112,8 +113,8 @@ foundry model run gpt-oss-20b
 > [!IMPORTANT]
 > Requirements for running GPT-OSS-20B:
 >
-> - Nvidia GPU with 16GB VRAM or more.
-> - Foundry Local version **0.6.87** or above. Any version below this will not support the model. You can check your Foundry Local version by running:
+> - NVIDIA GPU with 16 GB of VRAM or more.
+> - Foundry Local version **0.6.87** or later. Earlier versions don't support the model. Check your version with:
 >
 > ```bash
 > foundry --version
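The hunk above requires Foundry Local 0.6.87 or later. A minimal sketch of checking a version string against that minimum (a hypothetical helper, not part of the Foundry CLI; it assumes the version is a bare dotted string such as `0.6.87`, while real `foundry --version` output may include extra text):

```python
def parse_version(text: str) -> tuple[int, ...]:
    """Parse a dotted version string such as '0.6.87' into a comparable tuple."""
    return tuple(int(part) for part in text.strip().split("."))


def supports_gpt_oss(version: str, minimum: str = "0.6.87") -> bool:
    """Return True when the reported version meets the documented minimum."""
    return parse_version(version) >= parse_version(minimum)


if __name__ == "__main__":
    print(supports_gpt_oss("0.7.0"))   # → True
    print(supports_gpt_oss("0.6.9"))   # → False (6.9 < 6.87 component-wise)
```

Comparing tuples component-wise avoids the classic string-comparison bug where `"0.6.9" > "0.6.87"` lexicographically.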
@@ -127,49 +128,49 @@ The Foundry CLI organizes commands into these main categories:
 - **Service**: Commands for managing the Foundry Local service.
 - **Cache**: Commands for managing the local model cache (downloaded models on local disk).

-View all available commands with:
+View all commands:

 ```bash
 foundry --help
 ```

-To view available **model** commands, run:
+View **model** commands:

 ```bash
 foundry model --help
 ```

-To view available **service** commands, run:
+View **service** commands:

 ```bash
 foundry service --help
 ```

-To view available **cache** commands, run:
+View **cache** commands:

 ```bash
 foundry cache --help
 ```

 > [!TIP]
-> For a complete guide to all CLI commands and their usage, see the [Foundry Local CLI Reference](reference/reference-cli.md).
+> For details on all CLI commands, see [Foundry Local CLI reference](reference/reference-cli.md).

-## Upgrading Foundry Local
+## Upgrade Foundry Local

-To upgrade Foundry Local to the latest version, use the following commands based on your operating system:
+Run the command for your OS to upgrade Foundry Local.

-- **Windows**: Open a terminal and run:
+- Windows: In a terminal, run:
 ```bash
 winget upgrade --id Microsoft.FoundryLocal
 ```
-- **macOS**: Open a terminal and run:
+- macOS: In a terminal, run:
 ```bash
 brew upgrade foundrylocal
 ```

-## Uninstalling Foundry Local
+## Uninstall Foundry Local

-If you wish to uninstall Foundry Local, use the following commands based on your operating system:
+To uninstall Foundry Local, run the command for your operating system:

 - **Windows**: Open a terminal and run:
 ```bash
@@ -186,7 +187,7 @@ If you wish to uninstall Foundry Local, use the following commands based on your

 ### Service connection issues

-If you encounter the following error when running `foundry model list` or other commands:
+If you see this error when you run `foundry model list` or a similar command:

 ```
 >foundry model list
@@ -200,18 +201,18 @@ The requested address is not valid in its context. (127.0.0.1:0)
 Please check service status with 'foundry service status'.
 ```

-**Solution**: Run the following command to restart the service:
+Run this command to restart the service:

 ```bash
 foundry service restart
 ```

-This resolves issues where the service is running but not properly accessible due to port binding problems.
+This command fixes cases where the service runs but isn't accessible because of a port binding issue.

 ## Related content

-- [Integrate inferencing SDKs with Foundry Local](how-to/how-to-integrate-with-inference-sdks.md)
-- [Explore the Foundry Local documentation](index.yml)
-- [Learn about best practices and troubleshooting](reference/reference-best-practice.md)
-- [Explore the Foundry Local API reference](reference/reference-catalog-api.md)
-- [Learn Compile Hugging Face models](how-to/how-to-compile-hugging-face-models.md)
+- [Integrate inference SDKs with Foundry Local](how-to/how-to-integrate-with-inference-sdks.md)
+- [Foundry Local documentation](index.yml)
+- [Best practices and troubleshooting](reference/reference-best-practice.md)
+- [Foundry Local API reference](reference/reference-catalog-api.md)
+- [Compile Hugging Face models](how-to/how-to-compile-hugging-face-models.md)
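One of the tips changed in the diff above describes how Foundry Local picks a model variant from the detected hardware (NVIDIA GPU → CUDA build, Qualcomm NPU → NPU build, otherwise CPU). A purely illustrative sketch of that selection rule — not Foundry Local's actual implementation, and the hardware flags here are assumed inputs:

```python
def pick_variant(has_nvidia_gpu: bool, has_qualcomm_npu: bool) -> str:
    """Map detected accelerators to a model variant, per the documented tip.

    Illustrative only: the real catalog matching in Foundry Local also
    considers other GPUs, NPUs, and software configuration.
    """
    if has_nvidia_gpu:
        return "cuda"       # CUDA build for NVIDIA GPUs
    if has_qualcomm_npu:
        return "npu"        # NPU build for Qualcomm NPUs
    return "cpu"            # CPU build when no accelerator is found


if __name__ == "__main__":
    print(pick_variant(True, False))    # → cuda
    print(pick_variant(False, True))    # → npu
    print(pick_variant(False, False))   # → cpu
```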
