Commit 5537cc0: Merge pull request #3591 from samuel100/samuel100/foundry-local (Foundry Local)
---
title: Foundry Local architecture
titleSuffix: Foundry Local
description: Learn about the architecture and components of Foundry Local
manager: scottpolly
ms.service: azure-ai-foundry
ms.custom: build-2025
ms.topic: concept-article
ms.date: 02/12/2025
ms.author: samkemp
author: samuel100
---
# Foundry Local architecture

Foundry Local enables efficient, secure, and scalable AI model inference directly on your devices. This article explains the core components of Foundry Local and how they work together to deliver AI capabilities.

Key benefits of Foundry Local include:

> [!div class="checklist"]
>
> - **Low Latency**: Run models locally to minimize processing time and deliver faster results.
> - **Data Privacy**: Process sensitive data locally without sending it to the cloud, helping meet data protection requirements.
> - **Flexibility**: Support for diverse hardware configurations lets you choose the optimal setup for your needs.
> - **Scalability**: Deploy across various devices, from laptops to servers, to suit different use cases.
> - **Cost-Effectiveness**: Reduce cloud computing costs, especially for high-volume applications.
> - **Offline Operation**: Work without an internet connection in remote or disconnected environments.
> - **Seamless Integration**: Easily incorporate into existing development workflows for smooth adoption.

## Key components

The Foundry Local architecture consists of these main components:

:::image type="content" source="../media/architecture/foundry-local-arch.png" alt-text="Diagram of Foundry Local Architecture.":::

### Foundry Local service

The Foundry Local Service is an OpenAI-compatible REST server that provides a standard interface for working with the inference engine and managing models. Developers use this API to send requests, run models, and get results programmatically.

- **Endpoint**: `http://localhost:5272/v1`
- **Use Cases**:
  - Connect Foundry Local to your custom applications
  - Execute models through HTTP requests
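Because the service speaks the OpenAI REST dialect, any HTTP client can call it. The following is a minimal sketch using only the Python standard library; it assumes the `/chat/completions` path and response shape follow the OpenAI convention, and uses the model name from the quickstart:

```python
import json
import urllib.error
import urllib.request

# Base URL of the local Foundry service (from the endpoint documented above).
BASE_URL = "http://localhost:5272/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload for the local endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str, timeout: float = 30.0) -> str:
    """POST the payload to the local service and return the reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        body = json.load(resp)
    # Assumes the standard OpenAI response shape.
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    try:
        print(chat("phi-3-mini-4k", "What is the golden ratio?"))
    except (urllib.error.URLError, OSError):
        print("Foundry Local service is not running on port 5272.")
```

The same request works with any OpenAI-compatible SDK pointed at the local base URL.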

### ONNX runtime

The ONNX Runtime is a core component that executes AI models. It runs optimized ONNX models efficiently on local hardware like CPUs, GPUs, or NPUs.

**Features**:

- Works with multiple hardware providers (NVIDIA, AMD, Intel) and device types (NPUs, CPUs, GPUs)
- Offers a consistent interface for running models across different hardware
- Delivers best-in-class performance
- Supports quantized models for faster inference

### Model management

Foundry Local provides robust tools for managing AI models, ensuring that they're readily available for inference and easy to maintain. Model management is handled through the **Model Cache** and the **Command-Line Interface (CLI)**.

#### Model cache

The model cache stores downloaded AI models locally on your device, which ensures models are ready for inference without needing to download them repeatedly. You can manage the cache using either the Foundry CLI or REST API.

- **Purpose**: Speeds up inference by keeping models locally available
- **Key Commands**:
  - `foundry cache list`: Shows all models in your local cache
  - `foundry cache remove <model-name>`: Removes a specific model from the cache
  - `foundry cache cd <path>`: Changes the storage location for cached models
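Conceptually, the cache commands above map to simple filesystem operations over the cache directory. The sketch below is a toy illustration of that idea, not the real cache layout:

```python
import pathlib
import shutil
import tempfile

def cache_list(cache_dir: str) -> list[str]:
    """Names of models present in the cache directory (like `foundry cache list`)."""
    return sorted(p.name for p in pathlib.Path(cache_dir).iterdir() if p.is_dir())

def cache_remove(cache_dir: str, model_name: str) -> None:
    """Delete one model's files from the cache (like `foundry cache remove`)."""
    shutil.rmtree(pathlib.Path(cache_dir) / model_name)

# Demo against a throwaway directory standing in for the real cache location.
demo = tempfile.mkdtemp()
(pathlib.Path(demo) / "phi-3-mini-4k").mkdir()
print(cache_list(demo))
cache_remove(demo, "phi-3-mini-4k")
print(cache_list(demo))
shutil.rmtree(demo)
```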

#### Model lifecycle

1. **Download**: Get models from the Azure AI Foundry model catalog and save them to your local disk.
2. **Load**: Load models into the Foundry Local service memory for inference. Set a TTL (time-to-live) to control how long the model stays in memory (default: 10 minutes).
3. **Run**: Execute model inference for your requests.
4. **Unload**: Remove models from memory to free up resources when no longer needed.
5. **Delete**: Remove models from your local cache to reclaim disk space.
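The load/unload steps above can be sketched as a TTL-keyed registry. This is a simplified illustration of the concept, not Foundry Local's actual implementation:

```python
import time

DEFAULT_TTL_SECONDS = 600  # 10 minutes, matching the documented default

class ModelRegistry:
    """Toy in-memory registry illustrating TTL-based load/unload."""

    def __init__(self) -> None:
        self._loaded: dict[str, float] = {}  # model name -> expiry timestamp

    def load(self, name: str, ttl: float = DEFAULT_TTL_SECONDS) -> None:
        """Mark a model as resident in memory until its TTL elapses."""
        self._loaded[name] = time.monotonic() + ttl

    def run(self, name: str) -> bool:
        """Return True if the model is loaded and not expired."""
        expiry = self._loaded.get(name)
        if expiry is None or time.monotonic() > expiry:
            self._loaded.pop(name, None)  # evict expired entries
            return False
        return True

    def unload(self, name: str) -> None:
        """Free the model's memory immediately."""
        self._loaded.pop(name, None)

registry = ModelRegistry()
registry.load("phi-3-mini-4k", ttl=0.05)
print(registry.run("phi-3-mini-4k"))  # fresh, so usable
time.sleep(0.1)
print(registry.run("phi-3-mini-4k"))  # TTL elapsed, so evicted
```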

#### Model compilation using Olive

Before models can be used with Foundry Local, they must be compiled and optimized in the [ONNX](https://onnx.ai) format. Microsoft provides a selection of published models in the Azure AI Foundry Model Catalog that are already optimized for Foundry Local. However, you aren't limited to those models: you can compile your own by using [Olive](https://microsoft.github.io/Olive/). Olive is a powerful framework for preparing AI models for efficient inference. It converts models into the ONNX format, optimizes their graph structure, and applies techniques like quantization to improve performance on local hardware.

> [!TIP]
> To learn more about compiling models for Foundry Local, read [How to compile Hugging Face models to run on Foundry Local](../how-to/how-to-compile-hugging-face-models.md).

### Hardware abstraction layer

The hardware abstraction layer ensures that Foundry Local can run on various devices by abstracting the underlying hardware. To optimize performance based on the available hardware, Foundry Local supports:

- **multiple _execution providers_**, such as NVIDIA CUDA, AMD, Qualcomm, Intel.
- **multiple _device types_**, such as CPU, GPU, NPU.
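A hardware abstraction layer of this kind typically resolves the best available execution provider from a priority list. The order and provider names below are assumptions for illustration only, not Foundry Local's documented behavior:

```python
# Hypothetical priority order: dedicated accelerators first, CPU as the
# universal fallback. Provider names loosely follow ONNX Runtime conventions.
PROVIDER_PRIORITY = ["cuda", "qnn", "openvino", "cpu"]

def select_provider(available: set[str]) -> str:
    """Pick the highest-priority provider the device supports; CPU always works."""
    for provider in PROVIDER_PRIORITY:
        if provider in available:
            return provider
    return "cpu"

print(select_provider({"cpu", "cuda"}))  # a CUDA-capable machine uses the GPU
print(select_provider({"cpu"}))          # otherwise fall back to the CPU
```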

### Developer experiences

The Foundry Local architecture is designed to provide a seamless developer experience, enabling easy integration and interaction with AI models. Developers can choose from various interfaces to interact with the system, including:

#### Command-Line Interface (CLI)

The Foundry CLI is a powerful tool for managing models, the inference engine, and the local cache.

**Examples**:

- `foundry model list`: Lists all available models in the local cache.
- `foundry model run <model-name>`: Runs a model.
- `foundry service status`: Checks the status of the service.

> [!TIP]
> To learn more about the CLI commands, read [Foundry Local CLI Reference](../reference/reference-cli.md).

#### Inferencing SDK integration

Foundry Local supports integration with various SDKs, such as the OpenAI SDK, enabling developers to use familiar programming interfaces to interact with the local inference engine.

- **Supported SDKs**: Python, JavaScript, C#, and more.

> [!TIP]
> To learn more about integrating with inferencing SDKs, read [Integrate Foundry Local with Inferencing SDKs](../how-to/integrate-with-inference-sdks.md).

#### AI Toolkit for Visual Studio Code

The AI Toolkit for Visual Studio Code provides a user-friendly interface for developers to interact with Foundry Local. It allows users to run models, manage the local cache, and visualize results directly within the IDE.

- **Features**:
  - Model management: Download, load, and run models from within the IDE.
  - Interactive console: Send requests and view responses in real-time.
  - Visualization tools: Graphical representation of model performance and results.

## Next steps

- [Get started with Foundry Local](../get-started.md)
- [Integrate with Inference SDKs](../how-to/integrate-with-inference-sdks.md)
- [Foundry Local CLI Reference](../reference/reference-cli.md)
---
title: Get started with Foundry Local
titleSuffix: Foundry Local
description: Learn how to install, configure, and run your first AI model with Foundry Local
manager: scottpolly
keywords: Azure AI services, cognitive, AI models, local inference
ms.service: azure-ai-foundry
ms.topic: quickstart
ms.date: 02/20/2025
ms.reviewer: samkemp
ms.author: samkemp
author: samuel100
ms.custom: build-2025
#customer intent: As a developer, I want to get started with Foundry Local so that I can run AI models locally.
---

# Get started with Foundry Local

This guide walks you through setting up Foundry Local to run AI models on your device. Follow these clear steps to install the tool, discover available models, and launch your first local AI model.

## Prerequisites

Your system must meet the following requirements to run Foundry Local:

- **Operating System**: Windows 10 (x64), Windows 11 (x64/ARM), macOS, or Linux (x64/ARM)
- **Hardware**: Minimum 8GB RAM and 3GB free disk space; 16GB RAM and 15GB free disk space recommended.
- **Network**: Internet connection for initial model download (optional for offline use)
- **Acceleration (optional)**: NVIDIA GPU (2000 series or newer), AMD GPU (6000 series or newer), or Qualcomm Snapdragon X Elite, with 8GB or more of memory (RAM).

Also, ensure you have administrative privileges to install software on your device.

## Quickstart

Get started with Foundry Local quickly:

1. **Download** Foundry Local for your platform:
   - [Windows](https://aka.ms/foundry-local-windows)
   - [macOS](https://aka.ms/foundry-local-macos)
   - [Linux](https://aka.ms/foundry-local-linux)
1. **Install** the package by following the on-screen prompts.
1. **Run your first model**: Open a terminal window and run the following command. The model is downloaded and an interactive prompt appears:

   ```bash
   foundry model run phi-3-mini-4k
   ```

> [!TIP]
> You can replace `phi-3-mini-4k` with any model name from the catalog (see `foundry model list` for available models). Foundry Local downloads the model variant that best matches your system's hardware and software configuration. For example, if you have an NVIDIA GPU, it downloads the CUDA version of the model. If you have a Qualcomm NPU, it downloads the NPU variant. If you have no GPU or NPU, it downloads the CPU version.

> [!IMPORTANT]
> **For macOS/Linux users:** Run both components in separate terminals:
> - Neutron Server (`Inference.Service.Agent`) - Make it executable with `chmod +x Inference.Service.Agent`
> - Foundry Client (`foundry`) - Make it executable with `chmod +x foundry` and add it to your PATH
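The variant selection described in the tip above can be sketched as a simple decision function. The variant suffixes here are hypothetical, for illustration only:

```python
# Simplified sketch of picking a model variant from detected hardware.
# The "-cuda"/"-npu"/"-cpu" suffixes are assumptions, not Foundry Local's
# actual naming scheme.
def pick_variant(model: str, has_nvidia_gpu: bool, has_qualcomm_npu: bool) -> str:
    if has_nvidia_gpu:
        return f"{model}-cuda"  # GPU-accelerated build
    if has_qualcomm_npu:
        return f"{model}-npu"   # NPU build
    return f"{model}-cpu"       # universal CPU fallback

print(pick_variant("phi-3-mini-4k", has_nvidia_gpu=True, has_qualcomm_npu=False))
```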

## Explore Foundry Local CLI commands

The Foundry CLI organizes commands into these main categories:

- **Model**: Commands for managing and running models.
- **Service**: Commands for managing the Foundry Local service.
- **Cache**: Commands for managing the local model cache (downloaded models on local disk).

View all available commands with:

```bash
foundry --help
```

To view available **model** commands, run:

```bash
foundry model --help
```

To view available **service** commands, run:

```bash
foundry service --help
```

To view available **cache** commands, run:

```bash
foundry cache --help
```

> [!TIP]
> For a complete guide to all CLI commands and their usage, see the [Foundry Local CLI Reference](reference/reference-cli.md).

## Next steps

- [Learn how to integrate Foundry Local with your applications](how-to/integrate-with-inference-sdks.md)
- [Explore the Foundry Local documentation](index.yml)
- [Learn about best practices and troubleshooting](reference/reference-best-practice.md)
- [Explore the Foundry Local API reference](reference/reference-catalog-api.md)
- [Learn how to compile Hugging Face models](how-to/how-to-compile-hugging-face-models.md)