
Commit 078625c

Refreshes get-started.md
1 parent 057829e commit 078625c

File tree

1 file changed: +41 −41 lines


articles/ai-foundry/foundry-local/get-started.md

Lines changed: 41 additions & 41 deletions
@@ -2,7 +2,7 @@
 title: Get started with Foundry Local
 titleSuffix: Foundry Local
 description: Learn how to install, configure, and run your first AI model with Foundry Local
-ms.date: 07/03/2025
+ms.date: 10/01/2025
 ms.service: azure-ai-foundry
 ms.subservice: foundry-local
 ms.topic: quickstart
@@ -32,25 +32,25 @@ This guide walks you through setting up Foundry Local to run AI models on your d
 Your system must meet the following requirements to run Foundry Local:
 
 - **Operating System**: Windows 10 (x64), Windows 11 (x64/ARM), Windows Server 2025, macOS.
-- **Hardware**: Minimum 8GB RAM, 3GB free disk space. Recommended 16GB RAM, 15GB free disk space.
-- **Network**: Internet connection for initial model download (optional for offline use)
+- **Hardware**: Minimum 8 GB RAM and 3 GB free disk space. Recommended 16 GB RAM and 15 GB free disk space.
+- **Network**: Internet connection to download the initial model (optional for offline use).
 - **Acceleration (optional)**: NVIDIA GPU (2,000 series or newer), AMD GPU (6,000 series or newer), AMD NPU, Intel iGPU, Intel NPU (32GB or more of memory), Qualcomm Snapdragon X Elite (8GB or more of memory), Qualcomm NPU, or Apple silicon.
 
 > [!NOTE]
-> New NPUs are only supported on systems running Windows version 24H2 or later. If you have an Intel NPU on Windows, you need to install the [Intel NPU driver](https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html) to enable NPU acceleration with Foundry Local.
+> New NPUs are supported only on systems running Windows 24H2 or later. If you use an Intel NPU on Windows, install the [Intel NPU driver](https://www.intel.com/content/www/us/en/download/794734/intel-npu-driver-windows.html) to enable NPU acceleration in Foundry Local.
 
-Also, ensure you have administrative privileges to install software on your device.
+Make sure you have admin rights to install software.
 
 > [!TIP]
-> If you encounter service connection issues after installation (such as "Request to local service failed" errors), try running `foundry service restart` to resolve the issue.
+> If you see a service connection error after installation (for example, 'Request to local service failed'), run `foundry service restart`.
 
 ## Quickstart
 
-Get started with Foundry Local quickly with these options:
+Get started fast with Foundry Local:
 
 ### Option 1: Quick CLI setup
 
-1. **Install Foundry Local**
+1. **Install Foundry Local**.
 
 - **Windows**: Open a terminal and run the following command:
 ```bash
@@ -63,47 +63,47 @@ brew install foundrylocal
 ```
 Alternatively, you can download the installer from the [Foundry Local GitHub repository](https://aka.ms/foundry-local-installer).
 
-1. **Run your first model** Open a terminal window and run the following command to run a model:
+1. **Run your first model**. Open a terminal and run this command:
 
 ```bash
 foundry model run qwen2.5-0.5b
 ```
 
-The model downloads - which can take a few minutes, depending on your internet speed - and the model runs. Once the model is running, you can interact with it using the command line interface (CLI). For example, you can ask:
+Foundry Local downloads the model, which can take a few minutes depending on your internet speed, then runs it. After the model starts, interact with it by using the command-line interface (CLI). For example, you can ask:
 
 ```text
 Why is the sky blue?
 ```
 
-You should see a response from the model in the terminal:
-:::image type="content" source="media/get-started-output.png" alt-text="Screenshot of output from foundry local run command." lightbox="media/get-started-output.png":::
+You see a response from the model in the terminal:
+:::image type="content" source="media/get-started-output.png" alt-text="Screenshot of output from Foundry Local run command." lightbox="media/get-started-output.png":::
 
 ### Option 2: Download starter projects
 
 For practical, hands-on learning, download one of our starter projects that demonstrate real-world scenarios:
 
-- **[Chat Application Starter](https://github.com/microsoft/Foundry-Local/tree/main/samples/electron/foundry-chat)**: Build a local chat interface with multiple model support
-- **[Summarize Sample](https://github.com/microsoft/Foundry-Local/tree/main/samples/python/summarize)**: A command-line utility that generates summaries of text files or direct text input.
-- **[Function Calling Example](https://github.com/microsoft/Foundry-Local/tree/main/samples/python/functioncalling)**: Enabling and using function calling with Phi-4 mini.
+- [Chat Application Starter](https://github.com/microsoft/Foundry-Local/tree/main/samples/electron/foundry-chat): Build a local chat interface with multiple model support.
+- [Summarize Sample](https://github.com/microsoft/Foundry-Local/tree/main/samples/python/summarize): A command-line utility that generates summaries of text files or direct text input.
+- [Function Calling Example](https://github.com/microsoft/Foundry-Local/tree/main/samples/python/functioncalling): Enable and use function calling with Phi-4 mini.
 
 Each project includes:
 
 - Step-by-step setup instructions
 - Complete source code
 - Configuration examples
-- Best practices implementation
+- Best practices
 
 > [!TIP]
-> These starter projects align with scenarios covered in our [how-to guides](how-to/how-to-chat-application-with-open-web-ui.md) and provide immediate practical value.
+> These starter projects align with scenarios in the [how-to guides](how-to/how-to-chat-application-with-open-web-ui.md) and provide immediate practical value.
 
 > [!TIP]
-> You can replace `qwen2.5-0.5b` with any model name from the catalog (see `foundry model list` for available models). Foundry Local downloads the model variant that best matches your system's hardware and software configuration. For example, if you have an NVIDIA GPU, it downloads the CUDA version of the model. If you have a Qualcomm NPU, it downloads the NPU variant. If you have no GPU or NPU, it downloads the CPU version.
+> Replace `qwen2.5-0.5b` with any model name from the catalog (run `foundry model list` to view available models). Foundry Local downloads the variant that best matches your system's hardware and software configuration. For example, if you have an NVIDIA GPU, Foundry Local downloads the CUDA version. If you have a Qualcomm NPU, Foundry Local downloads the NPU variant. If you have no GPU or NPU, Foundry Local downloads the CPU version.
 >
-> Note that when you run `foundry model list` for the first time, you'll see a download progress bar as Foundry Local downloads the execution providers for your machine's hardware.
+> When you run `foundry model list` the first time, you see a download progress bar while Foundry Local downloads the execution providers for your hardware.
 
 ## Run the latest OpenAI open-source model
 
-To run the latest OpenAI open-source model - `GPT-OSS-20B` - use the following command:
+Run the latest OpenAI open-source model, `GPT-OSS-20B`, with this command:
 
 ```bash
 foundry model run gpt-oss-20b
@@ -112,8 +112,8 @@ foundry model run gpt-oss-20b
 > [!IMPORTANT]
 > Requirements for running GPT-OSS-20B:
 >
-> - Nvidia GPU with 16GB VRAM or more.
-> - Foundry Local version **0.6.87** or above. Any version below this will not support the model. You can check your Foundry Local version by running:
+> - NVIDIA GPU with 16 GB of VRAM or more.
+> - Foundry Local version **0.6.87** or later. Earlier versions don't support the model. Check your version with:
 >
 > ```bash
 > foundry --version
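The minimum-version requirement in this hunk can also be checked in a script. A minimal sketch, assuming `sort -V` (GNU coreutils) is available; the installed version is stubbed because the exact `foundry --version` output format isn't shown in this diff:

```shell
# Compare an installed version against the 0.6.87 minimum required for
# GPT-OSS-20B. `sort -V` orders version strings numerically; if the
# required version sorts first (or ties), the installed one is new enough.
required="0.6.87"
installed="0.7.12"   # stub; in practice, parse `foundry --version` output

lowest="$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n 1)"
if [ "$lowest" = "$required" ]; then
  echo "version OK"
else
  echo "upgrade required"
fi
```

Swap the stubbed `installed` value for the real CLI output once you've confirmed its format.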
@@ -127,49 +127,49 @@ The Foundry CLI organizes commands into these main categories:
 - **Service**: Commands for managing the Foundry Local service.
 - **Cache**: Commands for managing the local model cache (downloaded models on local disk).
 
-View all available commands with:
+View all commands:
 
 ```bash
 foundry --help
 ```
 
-To view available **model** commands, run:
+View **model** commands:
 
 ```bash
 foundry model --help
 ```
 
-To view available **service** commands, run:
+View **service** commands:
 
 ```bash
 foundry service --help
 ```
 
-To view available **cache** commands, run:
+View **cache** commands:
 
 ```bash
 foundry cache --help
 ```
 
 > [!TIP]
-> For a complete guide to all CLI commands and their usage, see the [Foundry Local CLI Reference](reference/reference-cli.md).
+> For details on all CLI commands, see [Foundry Local CLI reference](reference/reference-cli.md).
 
-## Upgrading Foundry Local
+## Upgrade Foundry Local
 
-To upgrade Foundry Local to the latest version, use the following commands based on your operating system:
+Run the command for your OS to upgrade Foundry Local.
 
-- **Windows**: Open a terminal and run:
+- Windows: In a terminal, run:
 ```bash
 winget upgrade --id Microsoft.FoundryLocal
 ```
-- **macOS**: Open a terminal and run:
+- macOS: In a terminal, run:
 ```bash
 brew upgrade foundrylocal
 ```
 
-## Uninstalling Foundry Local
+## Uninstall Foundry Local
 
-If you wish to uninstall Foundry Local, use the following commands based on your operating system:
+To uninstall Foundry Local, run the command for your operating system:
 
 - **Windows**: Open a terminal and run:
 ```bash
@@ -186,7 +186,7 @@ If you wish to uninstall Foundry Local, use the following commands based on your
 
 ### Service connection issues
 
-If you encounter the following error when running `foundry model list` or other commands:
+If you see this error when you run `foundry model list` or a similar command:
 
 ```
 >foundry model list
@@ -200,18 +200,18 @@ The requested address is not valid in its context. (127.0.0.1:0)
 Please check service status with 'foundry service status'.
 ```
 
-**Solution**: Run the following command to restart the service:
+Run this command to restart the service:
 
 ```bash
 foundry service restart
 ```
 
-This resolves issues where the service is running but not properly accessible due to port binding problems.
+This command fixes cases where the service runs but isn't accessible because of a port binding issue.
 
 ## Related content
 
-- [Integrate inferencing SDKs with Foundry Local](how-to/how-to-integrate-with-inference-sdks.md)
-- [Explore the Foundry Local documentation](index.yml)
-- [Learn about best practices and troubleshooting](reference/reference-best-practice.md)
-- [Explore the Foundry Local API reference](reference/reference-catalog-api.md)
-- [Learn Compile Hugging Face models](how-to/how-to-compile-hugging-face-models.md)
+- [Integrate inference SDKs with Foundry Local](how-to/how-to-integrate-with-inference-sdks.md)
+- [Foundry Local documentation](index.yml)
+- [Best practices and troubleshooting](reference/reference-best-practice.md)
+- [Foundry Local API reference](reference/reference-catalog-api.md)
+- [Compile Hugging Face models](how-to/how-to-compile-hugging-face-models.md)
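The upgrade and uninstall hunks in this diff pair the same two per-OS commands. As an illustration only, that dispatch can be sketched as a small shell helper; the `uname -s` patterns for Windows shells are assumptions, while the echoed commands are the ones from the diff:

```shell
# Map a `uname -s` style OS name to the matching upgrade command from the
# doc; empty output means the platform isn't covered by this guide.
upgrade_command() {
  case "$1" in
    Darwin)               echo "brew upgrade foundrylocal" ;;
    MINGW*|MSYS*|CYGWIN*) echo "winget upgrade --id Microsoft.FoundryLocal" ;;
    *)                    echo "" ;;
  esac
}

upgrade_command "Darwin"       # brew upgrade foundrylocal
upgrade_command "MINGW64_NT"   # winget upgrade --id Microsoft.FoundryLocal
```

In practice you would run `upgrade_command "$(uname -s)"` and execute the result, or simply run the documented command for your OS directly.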
