`content/learning-paths/laptops-and-desktops/win_on_arm_build_onnxruntime/1-dev-env-setup.md` (2 additions, 2 deletions)
```diff
@@ -6,9 +6,9 @@ weight: 2
 layout: learningpathall
 ---

-## Set up your development environment
+## Overview

-In this learning path, you'll learn how to build and deploy an LLM on a Windows on Arm (WoA) laptop using ONNX Runtime for inference.
+In this Learning Path, you'll learn how to build and deploy a large language model (LLM) on a Windows on Arm (WoA) laptop using ONNX Runtime for inference.

 You'll first learn how to build the ONNX Runtime and ONNX Runtime Generate() API libraries, and then how to download the Phi-3 model and run inference. You'll run the short-context (4K) mini (3.3B) variant of the Phi-3 model. The short-context version accepts shorter (4K) prompts and produces shorter output text than the long-context (128K) version, and it consumes less memory.
```
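To make the goal of this step concrete, the following is a minimal sketch of what inference looks like once the libraries and model are in place, using the Python bindings of the Generate() API (`onnxruntime_genai`). The model directory `./phi3-mini-4k` and the prompt are hypothetical, and the binding API has shifted between releases (older versions use `params.input_ids` and `generator.compute_logits()` instead of `append_tokens()`), so treat this as an illustration rather than the Learning Path's exact code:

```python
# Minimal sketch: token-by-token Phi-3 inference through the ONNX Runtime
# Generate() API Python bindings. Assumes onnxruntime-genai is installed and
# the Phi-3 mini 4K ONNX model has been downloaded to ./phi3-mini-4k
# (a hypothetical path used here for illustration).
import onnxruntime_genai as og

model = og.Model("./phi3-mini-4k")         # loads the model config and weights
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()         # incremental detokenizer for streaming output

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)  # stay well under the 4K context limit

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("What is a Windows on Arm laptop?"))

# Decode one token per iteration and print it as soon as it is available.
while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```

The loop is the part worth noticing: the Generate() API produces one token per iteration, which is what makes streaming, chat-style output possible on the laptop.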