Commit d66c299
Edit pass
1 parent 3456114

1 file changed: +33 −21 lines

docs/intelligentapps/modelconversion.md
@@ -5,13 +5,16 @@ MetaDescription: Model Conversion Quickstart in AI Toolkit.

---

# Convert a model with AI Toolkit for VS Code (Preview)

Model conversion is an integrated development environment designed to help developers and AI engineers convert, quantize, optimize, and evaluate pre-built machine learning models on your local Windows platform. It offers a streamlined, end-to-end experience for models converted from sources like Hugging Face, optimizing them and enabling inference on local devices powered by NPUs, GPUs, and CPUs.

## Prerequisites

- VS Code must be installed. Follow these steps to [set up VS Code](https://code.visualstudio.com/docs/setup/setup-overview).
- The AI Toolkit extension must be installed. For more information, see [install AI Toolkit](./overview.md#install-and-setup).

## Create project

Creating a project in model conversion is the first step toward converting, optimizing, quantizing, and evaluating machine learning models.

1. Launch model conversion
@@ -21,7 +24,7 @@ Creating a project in model conversion is the first step toward converting, opti

2. Start a new project

Select **New Model Project**.

![Screenshot that shows the view for creating a model project, including the Primary Side Bar and the create project button.](./images/modelconversion/create_project_default.png)

3. Choose a base model
- `Hugging Face Model`: choose the base model with predefined recipes from the supported model list.
@@ -32,20 +35,22 @@ Creating a project in model conversion is the first step toward converting, opti

Enter a unique **Project Location** and a **Project Name**. A new folder with the specified project name is created in the location you selected for storing the project files.

- Select or create a folder as the model project folder.

![Screenshot that shows how to select the workspace folder. It contains a dropdown window with a selection.](./images/modelconversion/create_project_select_folder.png)

- Enter the model project name. Press `kbstyle(Enter)`.

![Screenshot that shows how to input the project name. It contains an input textbox.](./images/modelconversion/create_project_input_name.png)

> Note:
> - The first time you create a model project, it might take a while to set up the environment.
> - **ReadMe Access**: A README file is included in each project. If you close it, you can reopen it via the workspace.
> ![Screenshot that shows the model readme.](./images/modelconversion/create_project_readme.png)

### Supported models

Model Conversion currently supports a growing list of models, including top Hugging Face models in PyTorch format.

#### LLM models

| Model Name | Hugging Face Path |
|----------------------------------------|-------------------------------------------------|
| Qwen2.5 1.5B Instruct | `Qwen/Qwen2.5-1.5B-Instruct` |
@@ -54,17 +59,19 @@ Model Conversion currently supports a growing list of models, including top Hugg

| Phi-3.5 Mini Instruct | `Phi-3.5-mini-instruct` |

#### Non-LLM models

| Model Name | Hugging Face Path |
|----------------------------------------|-------------------------------------------------|
| Intel BERT Base Uncased (MRPC) | `Intel/bert-base-uncased-mrpc` |
| BERT Multilingual Cased | `google-bert/bert-base-multilingual-cased` |
| ViT Base Patch16-224 | `google/vit-base-patch16-224` |
| ResNet-50 | `resnet-50` |
| CLIP ViT-B-32 (LAION) | `laion/CLIP-ViT-B-32-laion2B-s34B-b79K` |
| CLIP ViT Base Patch16 | `clip-vit-base-patch16` |
| CLIP ViT Base Patch32 | `clip-vit-base-patch32` |

### (Optional) Add model into existing project

- If the model project is already open, select **Models** -> **Conversion**, and then select **Add Models** on the right panel. Otherwise, open the model project first and then select **Add Models** on the right panel.

![Screenshot that shows how to add a model. It contains a button to add models.](./images/modelconversion/create_project_add_models.png)
@@ -73,6 +80,7 @@ Model Conversion currently supports a growing list of models, including top Hugg

- A folder containing the new model files is created in the current project folder.

### (Optional) Create a new model project

- If the model project is already open, select **Models** -> **Conversion**. On the right panel, select **New Project**.

![Screenshot that shows how to create a new project. It contains a button to create a new project.](./images/modelconversion/create_project_add_models.png)
@@ -81,21 +89,21 @@ Model Conversion currently supports a growing list of models, including top Hugg

Select or create a folder as the model project folder.

Enter the model project name. Press `kbstyle(Enter)`.

![Screenshot that shows how to select the project folder. It contains a dropdown window with a selection.](./images/modelconversion/create_project_select_folder.png)

![Screenshot that shows how to input the project name. It contains an input textbox.](./images/modelconversion/create_project_input_name.png)

## Run workflow

Running a workflow in model conversion is the core step that transforms the pre-built ML model into an optimized and quantized ONNX model.

1. Open the model project
   - Ensure that the model project is open. If it isn't, navigate to File -> Open Folder in VS Code to open the model project.

2. Review the workflow configuration
   - Navigate to the Primary Side Bar **Models** -> **Conversion**.
   - Select the workflow template to view the conversion recipe.

   ![Screenshot that shows running a workflow. There is a workflow configuration section containing Conversion, Quantization, and Evaluation.](./images/modelconversion/Run.png)
@@ -118,15 +126,15 @@ Running a workflow in model conversion is the core step that transform the pre-b

> Note:
>
> If your workflow uses a dataset that requires license agreement approval on Hugging Face (e.g., ImageNet-1k), you'll be prompted to accept the terms on the dataset page before proceeding. This is required for legal compliance.
> 1. Select the **HuggingFace Access Token** button to get your Hugging Face access token.
>
> ![Screenshot that shows input token step 1: start to get a Hugging Face access token.](./images/modelconversion/run_token_1.png)
>
> 2. Select **Open** to open the Hugging Face website.
>
> ![Screenshot that shows input token step 2: open the Hugging Face website.](./images/modelconversion/run_token_2.png)
>
> 3. Get your token on the Hugging Face portal and paste it into the Quick Pick. Press `kbstyle(Enter)`.
>
> ![Screenshot that shows input token step 3: input the token in the dropdown textbox.](./images/modelconversion/run_token_3.png)
@@ -155,10 +163,10 @@ Running a workflow in model conversion is the core step that transform the pre-b

- If the workflow configuration meets your needs, select **Run** to begin the job.
- A default job name is generated from the workflow name and a timestamp (e.g., `bert_qdq_2025-05-06_20-45-00`) for easy tracking.
- While the job is running, you can cancel it by selecting the status indicator or the three-dot menu under **Action** in the History board and then selecting **Stop Running**.
- **Hugging Face compliance alerts**: Quantization requires calibration datasets, so you may be prompted to accept license terms before proceeding. If you miss the notification, the run pauses and waits for your input. Make sure notifications are enabled and that you accept the required licenses.

> Note:
> - **Model conversion and quantization**: You can run the workflow on any device, except for LLM models. The **Quantization** configuration is optimized for NPUs only. It's recommended to uncheck this step if the target system is not an NPU.
> - **LLM model quantization**: If you want to quantize the [LLM models](#llm-models), an NVIDIA GPU is required.
>
> If you want to quantize the model on another device with a GPU, you can set up the environment yourself; refer to [ManualConversionOnGPU](./reference/ManualConversionOnGPU.md). Note that only the "Quantization" step needs the GPU. After quantization, you can evaluate the model on an NPU or CPU.
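The job-naming convention in the example above (workflow name plus a timestamp) can be sketched as a small helper. This is a hypothetical reconstruction for illustration only; `default_job_name` is not part of AI Toolkit, and the real naming logic is internal to the tool.

```python
from datetime import datetime
from typing import Optional


def default_job_name(workflow_name: str, now: Optional[datetime] = None) -> str:
    # Hypothetical reconstruction: workflow name followed by a
    # "%Y-%m-%d_%H-%M-%S" timestamp, matching names like
    # "bert_qdq_2025-05-06_20-45-00" from the docs.
    stamp = (now or datetime.now()).strftime("%Y-%m-%d_%H-%M-%S")
    return f"{workflow_name}_{stamp}"


# Reproduces the documented example name for this timestamp.
name = default_job_name("bert_qdq", datetime(2025, 5, 6, 20, 45, 0))
```

Because each run gets a fresh timestamped name, results from repeated runs never collide, which is consistent with the per-run history folders described below.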
@@ -180,15 +188,18 @@ Running a workflow in model conversion is the core step that transform the pre-b

> If your job is canceled or fails, you can select the job name to adjust the workflow and run the job again. To avoid accidental overwrites, each execution creates a new history folder with its own configuration and results.

## View results

The History board in **Conversion** is your central dashboard for tracking, reviewing, and managing all workflow runs. Each time you run a model conversion and evaluation, a new entry is created in the History board, ensuring full traceability and reproducibility.

- Find the workflow run that you want to review. Each run is listed with a status indicator (e.g., Succeeded, Cancelled).
- Select the run name to view the conversion configurations.
- Select the **logs** under the status indicator to view logs and detailed execution results.
- Once the model is converted successfully, you can view the evaluation results under Metrics. Metrics such as accuracy, latency, and throughput are displayed alongside each run.

![Screenshot that shows history, including name, time, parameters, and so on.](./images/modelconversion/history.png)

## Use sample notebook for model inference

- Go to the History board. Select the three-dot menu under **Action**.

Select **Inference in Samples** from the dropdown.
@@ -202,15 +213,16 @@ The default runtime is: `C:\Users\{user_name}\.aitk\bin\model_lab_runtime\Python

- The sample launches in a Jupyter Notebook. You can customize the input data or parameters to test different scenarios.

> [!TIP]
> **Model compatibility:** Ensure the converted model supports the specified EPs in the inference samples.
>
> **Sample location:** Inference samples are stored alongside the run artifacts in the history folder.
## Export and share with others

Go to the History board. Select **Export** to share the model project with others. This copies only the model project, without the history folder. If you want to share models with others, select the corresponding jobs; this copies the selected history folders containing the model and its configuration.

## See also

- [How to manually setup GPU conversion](./reference/ManualConversionOnGPU.md)
- [How to manually setup environment](./reference/SetupWithoutAITK.md)
- [How to customize model template](./reference/TemplateProject.md)
