@@ -0,0 +1,53 @@
---
title: Create a development environment
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Set up your development environment

In this learning path, you will learn how to build and deploy a simple LLM-based application on a Windows on Arm (WoA) laptop using ONNX Runtime for inference.

You will first learn how to build the ONNX Runtime and the ONNX Runtime Generate() API library, and then how to download the Phi-3 model and run the example. This tutorial uses the short-context (4K) mini (3.3B) variant of the Phi-3 model. The short-context version accepts shorter (4K token) prompts and produces shorter output text than the long-context (128K) version, and it also consumes less memory.

Your first task is to prepare a development environment with the required software:

- Visual Studio 2022 IDE (latest version recommended)
- Python 3.10+ (tested with version 3.11.9)
- CMake 3.28 or higher (tested with version 3.30.5)

The following instructions were tested on a 64-bit WoA machine with at least 16GB of RAM.

## Install Visual Studio 2022 IDE

Follow these steps to install and configure Visual Studio 2022 IDE:

1. Download and install the latest version of [Visual Studio IDE](https://visualstudio.microsoft.com/downloads/).

2. Select the **Community Version**. An installer called *VisualStudioSetup.exe* will be downloaded.

3. From your Downloads folder, double-click the installer to start the installation.

4. Follow the prompts and acknowledge **License Terms** and **Privacy Statement**.

5. Once the download and installation complete, select your workloads. At a minimum, select **Desktop development with C++**. This installs the **Microsoft Visual C++ (MSVC)** compiler.

## Install Python 3.10+ (Tested with version 3.11.9)

Download and install [Python 3.10+](https://www.python.org/downloads/).

These instructions were tested with [Python 3.11.9](https://www.python.org/downloads/release/python-3119/).

## Install CMake

CMake is an open-source tool that automates the build process for software projects, helping to generate platform-specific build configurations.

[Download and install CMake](https://cmake.org/download/)

{{% notice Note %}}
The instructions were tested with version 3.30.5
{{% /notice %}}
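
To confirm that the tools are available on your PATH, you can optionally check their versions from a command prompt. The exact version numbers you see may differ from the tested versions listed above:

```bash
python --version
cmake --version
```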

You now have the required development tools installed to follow this learning path.
@@ -0,0 +1,60 @@
---
title: Build ONNX Runtime
weight: 3

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Compile ONNX Runtime for Windows ARM64 CPU
Now that you have your environment set up correctly, you can build the ONNX Runtime inference engine.

ONNX Runtime is an open-source inference engine designed to accelerate the deployment of machine learning models, particularly those in the Open Neural Network Exchange (ONNX) format. ONNX Runtime is optimized for high performance and low latency, making it popular for production deployment of AI models. You can learn more by reading the [ONNX Runtime Overview](https://onnxruntime.ai/).
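
As a brief illustration of what ONNX Runtime inference looks like from Python (not a required step in this learning path), the sketch below loads a hypothetical `model.onnx` and runs it on the CPU; the input name and shape are placeholders you would replace with your own model's details:

```python
# Minimal sketch: load a (hypothetical) ONNX model and run one inference on the CPU.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the model's first input and feed it random data of a placeholder shape.
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder shape
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```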

### Clone ONNX Runtime Repo

Open a Developer Command Prompt for Visual Studio so that the environment is set up correctly, including the paths to the compiler, linker, utilities, and header files. Create your workspace and check out the source tree:

```bash
cd C:\Users\%USERNAME%
mkdir repos\lp
cd repos\lp
git clone --recursive https://github.com/Microsoft/onnxruntime.git
cd onnxruntime
git checkout 4eeefd7260b7fa42a71dd1a08b423d5e7c722050
```

{{% notice Note %}}
You might be able to use a later commit. These steps have been tested with the commit `4eeefd7260b7fa42a71dd1a08b423d5e7c722050`.
{{% /notice %}}

### Build for Windows CPU

You can build the "Release" configuration, which produces a build optimized for performance but without debug information.


```bash
.\build.bat --config Release --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync --skip_tests
```


As an alternative, you can build "RelWithDebInfo" for a build type that aims to provide a release-optimized build with debug information.

```bash
.\build.bat --config RelWithDebInfo --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync --skip_tests
```


### Resulting Dynamic Link Library
When the build is complete, the `onnxruntime.dll` dynamic link library can be found at:

```
dir .\build\Windows\Release\Release\onnxruntime.dll
```

or, if you built with debug information, at:

```
dir .\build\Windows\RelWithDebInfo\RelWithDebInfo\onnxruntime.dll
```
@@ -0,0 +1,52 @@
---
title: Build ONNX Runtime Generate() API
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Compile the ONNX Runtime Generate() API for Windows ARM64 CPU

The Generate() API in ONNX Runtime is designed for text generation tasks using models like Phi-3. It implements the generative AI loop for ONNX models, including:
- pre- and post-processing
- inference with ONNX Runtime
- logits processing
- search and sampling
- KV cache management

You can learn more by reading the [ONNX Runtime Generate() API page](https://onnxruntime.ai/docs/genai/).
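
To make the loop above more concrete, here is a rough Python sketch of how the Generate() API is typically driven. This is illustrative only: the `onnxruntime_genai` package and method names reflect one version of the project's Python bindings and may differ from the exact API in the commit you build; the model folder path matches the Phi-3 model downloaded later in this learning path.

```python
# Illustrative sketch of the Generate() API loop. Method names are assumptions and
# may differ between versions of the onnxruntime-genai Python bindings.
import onnxruntime_genai as og

model = og.Model("cpu_and_mobile/cpu-int4-rtn-block-32-acc-level-4")
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)                      # search and sampling options
params.input_ids = tokenizer.encode("Tell me about Arm CPUs")  # pre-processing

output_tokens = model.generate(params)        # inference, logits processing, KV cache
print(tokenizer.decode(output_tokens[0]))     # post-processing
```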

On this page, you will learn how to build the Generate() API from source (a C/C++ build).


### Clone onnxruntime-genai Repo
Within your Windows Developer Command Prompt for Visual Studio, check out the source repo:

```bash
cd C:\Users\%USERNAME%
cd repos\lp
git clone https://github.com/microsoft/onnxruntime-genai
cd onnxruntime-genai
git checkout b2e8176c99473afb726d364454dc827d2181cbb2
```

{{% notice Note %}}
You might be able to use later commits. These steps have been tested with the commit `b2e8176c99473afb726d364454dc827d2181cbb2`.
{{% /notice %}}

### Build for Windows ARM64 CPU
The build command below has a `--config` argument, which takes the following options:
- ```Release``` builds release binaries
- ```Debug``` builds binaries with debug symbols
- ```RelWithDebInfo``` builds release binaries with debug information

Below are the instructions to build ```Release```:
```bash
python build.py --config Release --skip_tests
```
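
If you want debug information alongside optimized code, you can instead select the ```RelWithDebInfo``` option listed above; the output is then placed under a `RelWithDebInfo` folder rather than `Release`:

```bash
python build.py --config RelWithDebInfo --skip_tests
```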

When the build is complete, confirm the ONNX Runtime Generate() API Dynamic Link Library has been created:

```output
dir build\Windows\Release\Release\onnxruntime-genai.dll
```
@@ -0,0 +1,89 @@
---
title: Run Phi3 model on an ARM Windows Device
weight: 5

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Run a Phi-3 model on your ARM Windows Device

In this section you will learn how to download the Phi-3-mini model and run it on your Arm Windows device (or virtual machine). To do so, you will use a simple model runner program that also reports performance metrics.

The Phi-3-mini (3.3B) model has a short (4K) context version and a long (128K) context version. The long-context version can accept much longer prompts and produces longer output text, but it consumes more memory.
In this learning path, you will use the short-context version, which is quantized to 4 bits.

The Phi-3-mini model used here is in an ONNX format.

### Setup

Phi-3 ONNX models are hosted on Hugging Face. Hugging Face uses Git for version control, and the ONNX model files can be quite large, so you first need to install the Git Large File Storage (LFS) extension:

``` bash
winget install -e --id GitHub.GitLFS
git lfs install
```
If you don't have winget, download and run the installer from the [official source](https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage?platform=windows).
If the extension is already installed, the ``git lfs install`` command above prints ``Git LFS initialized``.

You then need to install the Hugging Face CLI:

``` bash
pip install huggingface-hub[cli]
```

### Download the Phi-3-mini (4k) model for CPU and Mobile

``` bash
cd C:\Users\%USERNAME%
cd repos\lp
huggingface-cli download microsoft/Phi-3-mini-4k-instruct-onnx --include cpu_and_mobile/cpu-int4-rtn-block-32-acc-level-4/* --local-dir .
```
This command downloads the model into a folder called `cpu_and_mobile`.
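
You can optionally list the downloaded folder to confirm that the model files are present:

``` bash
dir cpu_and_mobile\cpu-int4-rtn-block-32-acc-level-4
```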

### Build model runner (ONNX Runtime GenAI C Example)
In the previous section, you built the ONNX Runtime Generate() API from source.
The headers and dynamic link libraries that were built need to be copied into the appropriate example folders (``lib`` and ``include``).
Building from source is recommended because the examples are usually updated to work with the latest changes.

``` bash
copy onnxruntime\build\Windows\Release\Release\onnxruntime.* onnxruntime-genai\examples\c\lib
cd onnxruntime-genai
copy build\Windows\Release\Release\onnxruntime-genai.* examples\c\lib
copy src\ort_genai.h examples\c\include\
copy src\ort_genai_c.h examples\c\include\
```

You can now build the model runner executable from the `onnxruntime-genai` folder using the commands below:

``` bash
cd examples/c
cmake -A arm64 -S . -B build -DPHI3=ON
cd build
cmake --build . --config Release
```

After a successful build, a binary called `phi3.exe` is created in the `onnxruntime-genai` folder. Confirm that it exists:
```output
dir examples\c\build\Release\phi3.exe
```

#### Run the model

Use the runner you just built to execute the model with the following commands:

``` bash
cd C:\Users\%USERNAME%
cd repos\lp
.\onnxruntime-genai\examples\c\build\Release\phi3.exe .\cpu_and_mobile\cpu-int4-rtn-block-32-acc-level-4\ cpu
```

This allows the runner program to load the model. It then prompts you to enter a text prompt. After you enter your prompt, the text generated by the model is displayed. On completion, performance metrics similar to those shown below are displayed:

```output
Prompt length: 64, New tokens: 931, Time to first: 1.79s, Prompt tokens per second: 35.74 tps, New tokens per second: 6.34 tps
```

You have successfully run the Phi-3 model on your Arm-powered Windows device.
@@ -0,0 +1,55 @@
---
title: Powering Phi-3 on Arm PC with ONNX Runtime on Windows

draft: true
cascade:
draft: true

minutes_to_complete: 60

who_is_this_for: A deep-dive for advanced developers looking to build ONNX Runtime on Windows ARM (WoA) and leverage the Generate() API to run Phi-3 inference with KleidiAI acceleration.

learning_objectives:
- Build ONNX Runtime and ONNX Runtime Generate() API for Windows on ARM.
- Run a Phi-3 model using ONNX Runtime on an Arm-based Windows laptop.

prerequisites:
- A Windows on Arm computer such as the Lenovo Thinkpad X13 running Windows 11 or a Windows on Arm [virtual machine](https://learn.arm.com/learning-paths/cross-platform/woa_azure/)

author: Barbara Corriero

### Tags
skilllevels: Advanced
subjects: ML
armips:
- Cortex-A
- Cortex-X
tools_software_languages:
- Visual Studio IDE - 2022+ Community Version
- C++
- Python 3.10+
- Git
- CMake-3.28 or higher
operatingsystems:
- Windows

further_reading:
- resource:
title: ONNX Runtime
link: https://onnxruntime.ai/docs/
type: documentation
- resource:
title: ONNX Runtime generate() API
link: https://onnxruntime.ai/docs/genai/
type: documentation
- resource:
title: Accelerating AI Developer Innovation Everywhere with New Arm Kleidi
link: https://newsroom.arm.com/blog/arm-kleidi
type: blog

### FIXED, DO NOT MODIFY
# ================================================================================
weight: 1 # _index.md always has weight of 1 to order correctly
layout: "learningpathall" # All files under learning paths have this same wrapper
learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
---
@@ -0,0 +1,8 @@
---
# ================================================================================
# FIXED, DO NOT MODIFY THIS FILE
# ================================================================================
weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation.
title: "Next Steps" # Always the same, html page title.
layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing.
---