---
title: Set up your Environment
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Overview

In this Learning Path, you'll learn how to build and deploy a large language model (LLM) on a Windows on Arm (WoA) machine using ONNX Runtime for inference.

Specifically, you'll learn how to:

* Build ONNX Runtime and the Generate() API library.
* Download the Phi-3 model and run inference.
* Run the short-context (4K) Mini (3.3B) variant of the Phi-3 model.

{{% notice Note %}}
The short-context version accepts shorter (4K) prompts and generates shorter outputs than the long-context (128K) version. It also consumes less memory.
{{% /notice %}}

## Set up your development environment

Your first task is to prepare a development environment with the required software.

Start by installing the required tools:

- Visual Studio 2022 IDE (the latest version available is recommended).
- Python 3.10 or higher.
- CMake 3.28 or higher.

{{% notice Note %}}
These instructions were tested on a 64-bit WoA machine with at least 16GB of RAM.
{{% /notice %}}

## Install and Configure Visual Studio 2022

Now, to install and configure Visual Studio, follow these steps:

1. Download the latest [Visual Studio IDE](https://visualstudio.microsoft.com/downloads/).

2. Select the **Community** edition. This downloads an installer called `VisualStudioSetup.exe`.

3. Run `VisualStudioSetup.exe` from your **Downloads** folder.

4. Follow the prompts and accept the License Terms and Privacy Statement.

5. Once "Downloaded" and "Installed" complete select your workloads. As a minimum you should select **Desktop Development with C++**. This will install the **Microsoft Visual Studio Compiler** or **MSVC**.
5. When prompted to select workloads, select **Desktop Development with C++**. This installs the **Microsoft Visual Studio Compiler** (**MSVC**).
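
To confirm that the compiler is available, you can open a **Developer Command Prompt for VS 2022** (installed alongside the IDE) and locate `cl.exe`. This is a quick, optional check, assuming a default installation:

```bash
where cl
```

If the workload installed correctly, this prints the path to the MSVC compiler.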

## Install Python

Download and install [Python for Windows on Arm](/install-guides/py-woa).

{{% notice Note %}}
You'll need Python version 3.10 or higher. This Learning Path was tested with version 3.11.9.
{{% /notice %}}
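
To verify the installation, you can print the Python version; this assumes Python is on your PATH:

```bash
python --version
```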

## Install CMake

CMake is an open-source tool that automates the build process and generates platform-specific build configurations.

Download and install [CMake for Windows on Arm](/install-guides/cmake).

{{% notice Note %}}
The instructions were tested with version 3.30.5.
{{% /notice %}}
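
As a final check, you can print the CMake version; again, this assumes the tool is on your PATH:

```bash
cmake --version
```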

You’re now ready to build ONNX Runtime and run inference using the Phi-3 model.

---
weight: 3
layout: learningpathall
---

## Build ONNX Runtime for Windows on Arm
Now that your environment is set up, you're ready to build the ONNX Runtime inference engine.

ONNX Runtime is an open-source engine for accelerating machine learning inference, especially for models in the Open Neural Network Exchange (ONNX) format.

ONNX Runtime is optimized for high performance and low latency, and is widely used in production deployments.

{{% notice Learning Tip %}}
You can learn more about ONNX Runtime by reading the [ONNX Runtime Overview](https://onnxruntime.ai/).
{{% /notice %}}

### Clone the ONNX Runtime repository

Open a Developer Command Prompt for Visual Studio to set up the build environment, which includes paths to the compiler, linker, utilities, and header files.

Then, create your workspace and clone the repository:

```bash
cd C:\Users\%USERNAME%
mkdir repos\lp
cd repos\lp
git clone https://github.com/microsoft/onnxruntime.git
cd onnxruntime
git checkout 4eeefd7260b7fa42a71dd1a08b423d5e7c722050
```

{{% notice Note %}}
You might be able to use a later commit. These steps have been tested with the commit `4eeefd7260b7fa42a71dd1a08b423d5e7c722050`.
{{% /notice %}}
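
If you want to double-check which commit you have checked out before building, you can print it from inside the `onnxruntime` directory:

```bash
git rev-parse HEAD
```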

### Build ONNX Runtime

You can build the "Release" configuration for a build optimized for performance but without debug information.
To build the ONNX Runtime shared library, use one of the following configurations:

* **Release** configuration, for a build optimized for performance but without debug information:


```bash
.\build.bat --config Release --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync --skip_tests
```


* **RelWithDebInfo** configuration, which includes debug symbols for profiling or inspection:

```bash
.\build.bat --config RelWithDebInfo --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync --skip_tests
```


### Resulting Dynamically Linked Library
When the build is complete, you'll find the `onnxruntime.dll` dynamically linked library in one of the following locations, depending on the build configuration:

* For **Release** build:

```
dir .\build\Windows\Release\Release\onnxruntime.dll
```

* For **RelWithDebInfo** build:

```
dir .\build\Windows\RelWithDebInfo\RelWithDebInfo\onnxruntime.dll
```
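
Optionally, you can confirm that the library you built targets Arm64. This is a quick check using the `dumpbin` tool available in a Developer Command Prompt; adjust the path if you built the RelWithDebInfo configuration:

```bash
dumpbin /headers .\build\Windows\Release\Release\onnxruntime.dll | findstr machine
```

The output should report `machine (ARM64)`.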

---
weight: 4
layout: learningpathall
---

## Build the ONNX Runtime Generate() API for Windows on Arm

The Generate() API in ONNX Runtime is designed for text generation tasks using models like Phi-3. It implements the generative AI loop for ONNX models, including:
- Pre- and post-processing.
- Inference with ONNX Runtime (including logits processing).
- Search and sampling.
- KV cache management.

{{% notice Learning Tip %}}
You can learn more about this area by reading the [ONNX Runtime Generate() API documentation](https://onnxruntime.ai/docs/genai/).
{{% /notice %}}

In this section, you'll build the Generate() API from source.


### Clone the onnxruntime-genai repository
From your **Windows Developer Command Prompt for Visual Studio**, clone the repository:

```bash
cd C:\Users\%USERNAME%
cd repos\lp
git clone https://github.com/microsoft/onnxruntime-genai.git
cd onnxruntime-genai
```

{{% notice Note %}}
You might be able to use later commits. These steps have been tested with the commit that was current at the time of writing.
{{% /notice %}}

### Build for Windows on Arm
The build script uses a `--config` argument, which supports the following options:
- `Release` builds release binaries.
- `Debug` builds binaries with debug symbols.
- `RelWithDebInfo` builds release binaries with debug info.

To build the `Release` variant of the ONNX Runtime Generate() API:

```bash
pip install requests
python build.py --config Release --skip_tests
```
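
If you want to see the other options the build script accepts, you can print its help text; this assumes the script provides standard command-line help:

```bash
python build.py --help
```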

### Verify the output

When the build is complete, confirm the ONNX Runtime Generate() API Dynamically Linked Library has been created:

```bash
dir build\Windows\Release\Release\onnxruntime-genai.dll
```
---
title: Run the Phi-3 model
weight: 5

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Run the Phi-3 model on your Windows on Arm machine

In this section, you'll download the Phi-3 Mini model and run it on your WoA machine - either physical or virtual. You'll use a simple model runner that also reports performance metrics.

The Phi-3 Mini (3.3B) model is available in two versions:

- Short context (4K) - supports shorter prompts and uses less memory.
- Long context (128K) - supports longer prompts and outputs but consumes more memory.

This Learning Path uses the short context version, which is quantized to 4 bits.

The Phi-3 Mini model used here is in ONNX format.

### Set up

[Phi-3 ONNX models](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) are hosted on Hugging Face.
Hugging Face uses Git for both version control and to download the ONNX model files, which are large.

### Install Git LFS

You'll first need to install the Git Large File Storage (LFS) extension:

``` bash
winget install -e --id GitHub.GitLFS
git lfs install
```
If you don’t have winget, [download the installer manually](https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage?platform=windows).

If Git LFS is already installed, you'll see ``Git LFS initialized``.
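
You can also confirm that the extension is available by printing its version - a quick, optional check:

```bash
git lfs version
```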

### Install Hugging Face CLI

You then need to install the Hugging Face CLI:
``` bash
pip install huggingface-hub[cli]
```
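
To confirm the CLI is available on your PATH, you can display its built-in help:

```bash
huggingface-cli --help
```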

### Download the Phi-3-Mini (4K) model

``` bash
cd C:\Users\%USERNAME%
cd repos\lp
huggingface-cli download microsoft/Phi-3-mini-4k-instruct-onnx --include cpu_and_mobile/cpu-int4-rtn-block-32-acc-level-4/* --local-dir .
```
This command downloads the model into a folder named `cpu_and_mobile`.
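
To confirm the download completed, you can list the model directory; the path below mirrors the `--include` pattern used in the download command:

```bash
dir cpu_and_mobile\cpu-int4-rtn-block-32-acc-level-4
```

You should see the ONNX model file along with its configuration and tokenizer files.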

### Build the Model Runner (ONNX Runtime GenAI C Example)

In the previous step, you built the ONNX Runtime Generate() API from source. Now, copy over the resulting headers and Dynamically Linked Libraries into the appropriate folders (``lib`` and ``include``).

Building from source is good practice because the examples are usually updated to work with the latest changes:

``` bash
copy onnxruntime\build\Windows\Release\Release\onnxruntime.* onnxruntime-genai\examples\c\lib
cd build
cmake --build . --config Release
```

After a successful build, the `phi3` binary will be created in the `onnxruntime-genai` folder:

```bash
dir Release\phi3.exe
```

### Run the model

Execute the model using the following command:

``` bash
cd C:\Users\%USERNAME%
```
---
title: Run Phi-3 on Windows on Arm using ONNX Runtime

draft: true
cascade:
draft: true


minutes_to_complete: 60

who_is_this_for: This is an advanced topic for developers looking to build ONNX Runtime for Windows on Arm (WoA) and leverage the Generate() API to run Phi-3 inference with KleidiAI acceleration.

learning_objectives:
- Build ONNX Runtime and enable the Generate() API for Windows on Arm.
- Run inference with a Phi-3 model using ONNX Runtime with KleidiAI acceleration.
prerequisites:
- A Windows on Arm computer such as a Lenovo Thinkpad X13 running Windows 11, or a Windows on Arm [virtual machine](/learning-paths/cross-platform/woa_azure/).

author: Barbara Corriero

tools_software_languages:
- Python
- Git
- cmake
- ONNX Runtime
operatingsystems:
- Windows

further_reading:
- resource:
link: https://onnxruntime.ai/docs/
type: documentation
- resource:
title: ONNX Runtime Generate() API
link: https://onnxruntime.ai/docs/genai/
type: documentation
- resource: