
Commit be2cd83

dev env done
1 parent 389641d commit be2cd83

File tree

1 file changed: +24 -14 lines changed

  • content/learning-paths/laptops-and-desktops/win_on_arm_build_onnxruntime


content/learning-paths/laptops-and-desktops/win_on_arm_build_onnxruntime/1-dev-env-setup.md

Lines changed: 24 additions & 14 deletions
@@ -10,29 +10,39 @@ layout: learningpathall
 
 In this Learning Path, you'll learn how to build and deploy a large language model (LLM) on a Windows on Arm (WoA) laptop using ONNX Runtime for inference.
 
-You'll first learn how to build the ONNX Runtime and ONNX Runtime Generate() API library and then how to download the Phi-3 model and run the inference. You'll run the short context (4k) mini (3.3B) variant of Phi 3 model. The short context version accepts a shorter (4K) prompts and produces shorter output text compared to the long (128K) context version. The short version consumes less memory.
+You'll learn how to:
 
-Your first task is to prepare a development environment with the required software:
+* Build ONNX Runtime and the Generate() API library.
+* Download the Phi-3 model and run inference.
+* Run the short-context (4K) Mini (3.3B) variant of the Phi-3 model.
 
-- Visual Studio 2022 IDE (latest version recommended)
-- Python 3.10 or higher
-- CMake 3.28 or higher
+{{% notice Note %}}
+The short-context version accepts shorter (4K) prompts and generates shorter outputs than the long-context (128K) version. It also consumes less memory.
+{{% /notice %}}
+
+## Set up your Development Environment
+
+Your first task is to prepare a development environment. Start by installing the required tools:
+
+- Visual Studio 2022 IDE (latest version recommended).
+- Python 3.10 or higher.
+- CMake 3.28 or higher.
 
-The following instructions were tested on a WoA 64-bit Windows machine with at least 16GB of RAM.
+These instructions were tested on a 64-bit WoA machine with at least 16GB of RAM.
 
-## Install Visual Studio 2022 IDE
+## Install and Configure Visual Studio 2022
 
-Follow these steps to install and configure Visual Studio 2022 IDE:
+Follow these steps:
 
 1. Download the latest [Visual Studio IDE](https://visualstudio.microsoft.com/downloads/).
 
-2. Select the **Community** edition. An installer called *VisualStudioSetup.exe* will be downloaded.
+2. Select the **Community** edition. This downloads an installer called *VisualStudioSetup.exe*.
 
-3. Run the downloaded installer (*VisualStudioSetup.exe*) from your **Downloads** folder.
+3. Run the installer (*VisualStudioSetup.exe*) from your **Downloads** folder.
 
-4. Follow the installation prompts and accept the **License Terms** and **Privacy Statement**.
+4. Follow the prompts and accept the **License Terms** and **Privacy Statement**.
 
-5. When prompted to select your workloads, select **Desktop Development with C++**. This includes **Microsoft Visual Studio Compiler** (**MSVC**).
+5. When prompted to select workloads, select **Desktop Development with C++**. This installs the **Microsoft Visual Studio Compiler** (**MSVC**).
 
 ## Install Python
 
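The updated text pins minimum versions for Python and CMake. As a quick sanity check once both are installed (a minimal sketch, assuming the installers added `python` and `cmake` to `PATH`), you can confirm the versions from PowerShell:

```powershell
# Confirm the toolchain meets the minimums listed above
python --version   # expect Python 3.10 or higher
cmake --version    # expect cmake version 3.28 or higher
```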

@@ -44,12 +54,12 @@ You'll need Python version 3.10 or higher. This Learning Path was tested with ve
 
 ## Install CMake
 
-CMake is an open-source tool that automates the build process and helps generate platform-specific build configurations.
+CMake is an open-source tool that automates the build process and generates platform-specific build configurations.
 
 Download and install [CMake for Windows on Arm](/install-guides/cmake).
 
 {{% notice Note %}}
 The instructions were tested with version 3.30.5.
 {{% /notice %}}
 
-You’re now ready to move on to building the ONNX Runtime and running inference with Phi-3.
+You’re now ready to build ONNX Runtime and run inference using the Phi-3 model.
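The closing sentence points at the build step that follows this page. For orientation only: ONNX Runtime is typically built on Windows via the `build.bat` wrapper in the microsoft/onnxruntime repository. The sketch below is illustrative; the exact Windows on Arm flags are covered in the next part of the Learning Path, not in this commit:

```powershell
# Illustrative sketch only: fetch ONNX Runtime and run its build wrapper (drives CMake)
git clone https://github.com/microsoft/onnxruntime.git
cd onnxruntime
.\build.bat --config Release --build_shared_lib --parallel
```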
