articles/ai-foundry/foundry-local/get-started.md
title: Get started with Foundry Local
titleSuffix: Foundry Local
description: Learn how to install, configure, and run your first AI model with Foundry Local
ms.date: 10/01/2025
ms.service: azure-ai-foundry
ms.subservice: foundry-local
ms.topic: quickstart
This guide walks you through setting up Foundry Local to run AI models on your device.

Your system must meet the following requirements to run Foundry Local:
- **Operating System**: Windows 10 (x64), Windows 11 (x64/ARM), Windows Server 2025, macOS.
- **Hardware**: Minimum 8 GB RAM and 3 GB free disk space. Recommended 16 GB RAM and 15 GB free disk space.
- **Network**: Internet connection to download the initial model (optional for offline use).
- **Acceleration (optional)**: NVIDIA GPU (2,000 series or newer), AMD GPU (6,000 series or newer), AMD NPU, Intel iGPU, Intel NPU (32 GB or more of memory), Qualcomm Snapdragon X Elite (8 GB or more of memory), Qualcomm NPU, or Apple silicon.
> [!NOTE]
> New NPUs are supported only on systems running Windows 24H2 or later. If you use an Intel NPU on Windows, install the [Intel NPU driver](https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html) to enable NPU acceleration in Foundry Local.
Make sure you have admin rights to install software.
> [!TIP]
> If you see a service connection error after installation (for example, "Request to local service failed"), run `foundry service restart`.
47
47
## Quickstart
48
48
49
-
Get started with Foundry Local quickly with these options:
49
+
Get started fast with Foundry Local:
50
50
51
51
### Option 1: Quick CLI setup
1. **Install Foundry Local**.

   **Windows**: Open a terminal and run the following command:

   ```bash
   winget install Microsoft.FoundryLocal
   ```

   **macOS**: Open a terminal and run the following command:

   ```bash
   brew install foundrylocal
   ```

   Alternatively, you can download the installer from the [Foundry Local GitHub repository](https://aka.ms/foundry-local-installer).
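Once the installer finishes, it can help to confirm the CLI is on your `PATH` before continuing. A small, hedged sketch in POSIX shell; `check_cmd` is an illustrative helper, not a Foundry Local command:

```bash
# Sketch: confirm a CLI is installed and on PATH before continuing.
# `check_cmd` is an illustrative helper, not part of Foundry Local.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 found"
  else
    echo "$1 not found; check your installation"
  fi
}

check_cmd foundry
```

If the CLI isn't found, reopen your terminal so the updated `PATH` takes effect, or rerun the installer.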
1. **Run your first model**. Open a terminal and run this command:

   ```bash
   foundry model run qwen2.5-0.5b
   ```

   Foundry Local downloads the model, which can take a few minutes depending on your internet speed, then runs it. After the model starts, interact with it by using the command-line interface (CLI). For example, you can ask:

   ```text
   Why is the sky blue?
   ```

   You see a response from the model in the terminal:

   :::image type="content" source="media/get-started-output.png" alt-text="Screenshot of output from the Foundry Local run command." lightbox="media/get-started-output.png":::
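Beyond the interactive CLI, Foundry Local also serves an OpenAI-compatible REST API on localhost, so you can call the model from scripts. A hedged sketch: the port below is an illustrative assumption, since the service picks one at startup; check the real endpoint with `foundry service status`.

```bash
# Sketch: call the local OpenAI-compatible endpoint instead of the chat CLI.
# The port is an assumption -- the service chooses one at startup, so check
# the actual endpoint with `foundry service status`.
BASE_URL="http://127.0.0.1:5273/v1"   # hypothetical port

# OpenAI-style chat-completions request body.
BODY='{"model": "qwen2.5-0.5b", "messages": [{"role": "user", "content": "Why is the sky blue?"}]}'

# With the model running, you would send it like this (commented out here
# because it needs the live service):
# curl -s "$BASE_URL/chat/completions" -H "Content-Type: application/json" -d "$BODY"
echo "$BODY"
```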
### Option 2: Download starter projects
For practical, hands-on learning, download one of our starter projects that demonstrate real-world scenarios:

- [Chat Application Starter](https://github.com/microsoft/Foundry-Local/tree/main/samples/electron/foundry-chat): Build a local chat interface with multiple model support.
- [Summarize Sample](https://github.com/microsoft/Foundry-Local/tree/main/samples/python/summarize): A command-line utility that generates summaries of text files or direct text input.
- [Function Calling Example](https://github.com/microsoft/Foundry-Local/tree/main/samples/python/functioncalling): Enable and use function calling with Phi-4 mini.
Each project includes:

- Step-by-step setup instructions
- Complete source code
- Configuration examples
- Best practices
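The function-calling sample works with the common OpenAI-style tool format, in which each callable function is described to the model as a JSON schema. A minimal hedged sketch; the tool name and parameters here are illustrative, not taken from the sample repository:

```bash
# Hypothetical OpenAI-style tool definition used for function calling.
# The tool name and parameters are illustrative, not from the linked sample.
TOOLS=$(cat <<'EOF'
[
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city.",
      "parameters": {
        "type": "object",
        "properties": { "city": { "type": "string" } },
        "required": ["city"]
      }
    }
  }
]
EOF
)

# The tools array is sent alongside the chat messages in the request body.
echo "$TOOLS"
```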
> [!TIP]
> These starter projects align with scenarios in the [how-to guides](how-to/how-to-chat-application-with-open-web-ui.md) and provide immediate practical value.
> [!TIP]
> Replace `qwen2.5-0.5b` with any model name from the catalog (run `foundry model list` to view available models). Foundry Local downloads the variant that best matches your system's hardware and software configuration. For example, if you have an NVIDIA GPU, Foundry Local downloads the CUDA version. If you have a Qualcomm NPU, Foundry Local downloads the NPU variant. If you have no GPU or NPU, Foundry Local downloads the CPU version.
>
> When you run `foundry model list` the first time, you see a download progress bar while Foundry Local downloads the execution providers for your hardware.
## Run the latest OpenAI open-source model

Run the latest OpenAI open-source model, `GPT-OSS-20B`, with this command:

```bash
foundry model run gpt-oss-20b
```
> [!IMPORTANT]
> Requirements for running GPT-OSS-20B:
>
> - NVIDIA GPU with 16 GB of VRAM or more.
> - Foundry Local version **0.6.87** or later. Earlier versions don't support the model. Check your version with:
>
> ```bash
> foundry --version
> ```
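If you script your setup, you can automate the minimum-version check. A hedged sketch: the hard-coded `installed` value stands in for real `foundry --version` output, and `sort -V` assumes GNU or BSD coreutils:

```bash
# Sketch: verify the installed version meets the GPT-OSS-20B minimum.
# Replace the hard-coded value with: installed="$(foundry --version)"
installed="0.7.0"   # illustrative stand-in for `foundry --version` output
minimum="0.6.87"

# `sort -V` orders version strings; if the minimum sorts first (or ties),
# the installed version satisfies the requirement.
if [ "$(printf '%s\n%s\n' "$minimum" "$installed" | sort -V | head -n1)" = "$minimum" ]; then
  echo "version OK for gpt-oss-20b"
else
  echo "upgrade required"
fi
```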
The Foundry CLI organizes commands into these main categories:

- **Service**: Commands for managing the Foundry Local service.
- **Cache**: Commands for managing the local model cache (downloaded models on local disk).
View all commands:

```bash
foundry --help
```

View **model** commands:

```bash
foundry model --help
```

View **service** commands:

```bash
foundry service --help
```

View **cache** commands:

```bash
foundry cache --help
```
> [!TIP]
> For details on all CLI commands, see the [Foundry Local CLI reference](reference/reference-cli.md).
## Upgrade Foundry Local

Run the command for your OS to upgrade Foundry Local.

- **Windows**: In a terminal, run:

  ```bash
  winget upgrade --id Microsoft.FoundryLocal
  ```

- **macOS**: In a terminal, run:

  ```bash
  brew upgrade foundrylocal
  ```
## Uninstall Foundry Local

To uninstall Foundry Local, run the command for your operating system:

- **Windows**: Open a terminal and run:

  ```bash
  winget uninstall Microsoft.FoundryLocal
  ```
### Service connection issues
If you see this error when you run `foundry model list` or a similar command:

```
>foundry model list
The requested address is not valid in its context. (127.0.0.1:0)
Please check service status with 'foundry service status'.
```
Run this command to restart the service:

```bash
foundry service restart
```

This command fixes cases where the service runs but isn't accessible because of a port binding issue.
## Related content

- [Integrate inference SDKs with Foundry Local](how-to/how-to-integrate-with-inference-sdks.md)
- [Foundry Local documentation](index.yml)
- [Best practices and troubleshooting](reference/reference-best-practice.md)
- [Foundry Local API reference](reference/reference-catalog-api.md)
- [Compile Hugging Face models](how-to/how-to-compile-hugging-face-models.md)