
Commit 1dc87b4

AI Toolkit docs migration (#7827)
Add AI Toolkit docset
1 parent 9034aa6


60 files changed: +859 -0 lines

docs/intelligentapps/bulkrun.md

Lines changed: 40 additions & 0 deletions
---
Order: 4
Area: intelligentapps
TOCTitle: Bulk Run
ContentId:
PageTitle: Bulk Run Prompts
DateApproved:
MetaDescription: Run a set of prompts from an imported dataset, individually or as a full batch, against the selected generative AI models and parameters.
MetaSocialImage:
---

# Run multiple prompts in bulk

The bulk run feature in AI Toolkit lets you run multiple prompts in a batch. In the playground, you can run only one prompt manually at a time. Bulk run takes a dataset as input, where each row contains, at minimum, a prompt; typically, the dataset has multiple rows. Once the dataset is imported, you can run any single prompt or run all prompts against the selected model. The responses are displayed in the same dataset view, and the results can be exported.

To start a bulk run:

1. In the AI Toolkit view, select **TOOLS** > **Bulk Run** to open the Bulk Run view.

1. Select either a sample dataset or import a local JSONL file that has a `query` field to use as the prompts (a minimal format sketch follows this list).

    ![Select dataset](./images/bulkrun/dataset.png)

1. Once the dataset is loaded, select **Run** or **Rerun** on any prompt to run it individually.

    As in the playground, you can select the AI model, add context for your prompt, and change the inference parameters.

    ![Bulk run prompts](./images/bulkrun/bulkrun_one.png)

1. Select **Run all** at the top of the Bulk Run view to run through all queries automatically. The responses are shown in the **response** column.

    There is also an option to run only the remaining queries that have not yet been run.

    ![Run all](./images/bulkrun/runall.png)

1. Select the **Export** button to export the results to a JSONL file.
1. Select **Import** to import another dataset in JSONL format for the bulk run.
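
For reference, here's a minimal sketch of a dataset that bulk run can import, assuming only the required `query` field (the prompts are invented for illustration):

```jsonl
{"query": "What is the capital of France?"}
{"query": "Summarize the plot of Hamlet in two sentences."}
{"query": "Explain recursion in one paragraph."}
```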

docs/intelligentapps/evaluation.md

Lines changed: 52 additions & 0 deletions
---
Order: 5
Area: intelligentapps
TOCTitle: Evaluation
ContentId:
PageTitle: AI Evaluation
DateApproved:
MetaDescription: Import a dataset with LLM or SLM output, or rerun the queries. Run an evaluation job with popular evaluators like F1 score, relevance, coherence, and similarity, then find, visualize, and compare the results in tables or charts.
MetaSocialImage:
---

# Model evaluation

AI engineers often need to evaluate models with different parameters or prompts against a dataset, compare the outputs to ground truth, and compute evaluator metrics from those comparisons. AI Toolkit lets you perform such evaluations with minimal effort.

![Start evaluation](./images/evaluation/evaluation.png)

## Start an evaluation job

1. In the AI Toolkit view, select **TOOLS** > **Evaluation** to open the Evaluation view.
1. Select the **Create Evaluation** button and provide the following information:

    - **Evaluation job name:** the default, or a name you specify
    - **Evaluator:** currently, you can select from the built-in evaluators.

      ![Evaluators](./images/evaluation/evaluators.png)

    - **Judging model:** for some evaluators, a model from the list that serves as the judging model.
    - **Dataset:** you can start with a sample dataset for learning purposes, or import a JSONL file with the fields `query`, `response`, and `ground truth` (a minimal sketch follows these steps).
1. Once you provide all the necessary information, a new evaluation job is created. You will be prompted to open your new evaluation job details.

    ![Open evaluation](./images/evaluation/openevaluation.png)

1. Verify your dataset and select **Run Evaluation** to start the evaluation.

    ![Run Evaluation](./images/evaluation/runevaluation.png)
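
For reference, here's a minimal sketch of one dataset row with the three fields named above (the values are invented for illustration):

```jsonl
{"query": "What is the boiling point of water at sea level?", "response": "Water boils at 100 degrees Celsius at sea level.", "ground truth": "100 degrees Celsius"}
```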

## Monitor the evaluation job

Once an evaluation job is started, you can find its status in the evaluation job view.

![Running evaluation](./images/evaluation/running.png)

Each evaluation job has a link to the dataset that was used, logs from the evaluation process, a timestamp, and a link to the details of the evaluation.

## Find the evaluation results

Open the evaluation job details; the view has columns for the selected evaluators with their numerical values. Some evaluators may also have aggregate values.

You can also select **Open In Data Wrangler** to open the data with the Data Wrangler extension.

> <a class="install-extension-btn" href="vscode:extension/ms-toolsai.datawrangler">Install Data Wrangler</a>

![Data Wrangler](./images/evaluation/datawrangler.png)

docs/intelligentapps/faq.md

Lines changed: 129 additions & 0 deletions
---
Order: 7
Area: intelligentapps
TOCTitle: FAQ
ContentId:
PageTitle: FAQ for AI Toolkit
DateApproved:
MetaDescription: Find answers to frequently asked questions (FAQ) about using AI Toolkit, and get troubleshooting recommendations.
MetaSocialImage:
---

# AI Toolkit FAQ

## Models

### How can I find my remote model endpoint and authentication header?

Here are some examples of how to find your endpoint and authentication headers for common OpenAI service providers. For other providers, check their documentation for the chat completion endpoint and authentication header.

#### Example 1: Azure OpenAI

1. Go to the `Deployments` blade in Azure OpenAI Studio and select a deployment, for example, `gpt-4o`. If you don't have a deployment yet, check out [the documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal) on how to create a deployment.

    ![Select model deployment](./images/faq/6-aoai-deployments.png)

    ![Find model endpoint](./images/faq/7-aoai-model.png)

2. As shown in the last screenshot, you can retrieve your chat completion endpoint from the `Target URI` property in the `Endpoint` section.

3. You can retrieve your API key from the `Key` property in the `Endpoint` section. After you copy the API key, **fill it in as `api-key: <YOUR_API_KEY>` for the authentication header** in AI Toolkit. See the [Azure OpenAI service documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#request-header-2) to learn more about the authentication header.
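
As a quick sanity check outside AI Toolkit, here is a minimal sketch of a chat completion request using the `api-key` header. The resource name, deployment, and API version are placeholders; substitute the real `Target URI` and key from the `Endpoint` section:

```python
import json
import urllib.request

# Placeholder endpoint and key; copy the real Target URI and Key from the Endpoint section.
endpoint = "https://YOUR_RESOURCE.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-02-01"
api_key = "<YOUR_API_KEY>"

body = json.dumps({"messages": [{"role": "user", "content": "Hello!"}]}).encode()
request = urllib.request.Request(
    endpoint,
    data=body,
    headers={"Content-Type": "application/json", "api-key": api_key},  # Azure-style header
)
with urllib.request.urlopen(request) as response:
    print(json.load(response)["choices"][0]["message"]["content"])
```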

#### Example 2: OpenAI

1. For now, the chat completion endpoint is fixed as `https://api.openai.com/v1/chat/completions`. See [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat/create) to learn more about it.

2. Go to the [OpenAI documentation](https://platform.openai.com/docs/api-reference/authentication) and click `API Keys` or `Project API Keys` to create or retrieve your API key. After you copy the API key, **fill it in as `Authorization: Bearer <YOUR_API_KEY>` for the authentication header** in AI Toolkit. See the OpenAI documentation for more information.

![Find model access key](./images/faq/8-openai-key.png)
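
The same sketch as in Example 1 works here by swapping the endpoint and the header; note that for OpenAI the model is named in the request body (the key below is a placeholder):

```python
import json
import urllib.request

endpoint = "https://api.openai.com/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <YOUR_API_KEY>",  # OpenAI-style header
}
# Unlike Azure OpenAI, the model is selected in the request body, not in the URL.
body = json.dumps({"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}).encode()
request = urllib.request.Request(endpoint, data=body, headers=headers)
with urllib.request.urlopen(request) as response:
    print(json.load(response)["choices"][0]["message"]["content"])
```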

### How do I edit the endpoint URL or authentication header?

If you enter the wrong endpoint or authentication header, you may encounter errors during inference. Click `Edit settings.json` to open the Visual Studio Code settings. You can also type the command `Open User Settings (JSON)` in the Visual Studio Code Command Palette, then go to the `windowsaistudio.remoteInfereneEndpoints` section.

![Edit](./images/faq/9-edit.png)

Here, you can edit or remove existing endpoint URLs or authentication headers. After you save the settings, the model list in the tree view or playground refreshes automatically.

![Edit endpoint in settings](./images/faq/10-edit-settings.png)

### How can I join the waitlist for OpenAI o1-mini or OpenAI o1-preview?

The OpenAI o1 series models are specifically designed to tackle reasoning and problem-solving tasks with increased focus and capability. These models spend more time processing and understanding the user's request, making them exceptionally strong in areas like science, coding, math, and similar fields. For example, o1 can be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complicated mathematical formulas needed for quantum optics, and by developers in all fields to build and execute multi-step workflows.

IMPORTANT: the o1-preview model is available for limited access. To try the model in the playground, registration is required, and access is granted based on Microsoft's eligibility criteria.

You can visit the [GitHub model marketplace](https://aka.ms/github-model-marketplace) to find OpenAI o1-mini or OpenAI o1-preview and join the waitlist.

### Can I use my own models or other models from Hugging Face?

If your own model supports the OpenAI API contract, you can host it in the cloud and add it to AI Toolkit as a custom model. You need to provide key information such as the model endpoint URL, access key, and model name.
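
As an illustration, one way to verify that a hosted model speaks the OpenAI contract before adding it is a quick call with the official `openai` Python package; the base URL, access key, and model name below are placeholders for your own deployment:

```python
from openai import OpenAI  # pip install openai

# Placeholders; use your own endpoint URL, access key, and model name.
client = OpenAI(base_url="https://your-host.example.com/v1", api_key="<YOUR_ACCESS_KEY>")
reply = client.chat.completions.create(
    model="your-model-name",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(reply.choices[0].message.content)
```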

## Fine-tuning

### There are many fine-tuning settings. Do I need to worry about all of them?

No, you can just run with the default settings and the current dataset in the project to test. If you want, you can also pick your own dataset, but you will need to tweak some settings; see [this tutorial](walkthrough-hf-dataset.md) for more info.

### AI Toolkit won't scaffold the fine-tuning project

Make sure to check for the prerequisites before installing the extension. More details at [Prerequisites](README.md#prerequisites).

### I have an NVIDIA GPU, but the prerequisites check fails

If you have an NVIDIA GPU but the prerequisites check fails with "GPU is not detected", make sure that the latest driver is installed. You can check and download the driver from the [NVIDIA site](https://www.nvidia.com/Download/index.aspx?lang=en-us).

Also, make sure that it is installed on the path. To check, run `nvidia-smi` from the command line.

### I generated the project, but Conda activate fails to find the environment

There might have been an issue setting up the environment. You can manually initialize it by running `bash /mnt/[PROJECT_PATH]/setup/first_time_setup.sh` from inside the workspace.

### When using a Hugging Face dataset, how do I get it?

Before you run the `python finetuning/invoke_olive.py` command, make sure you run `huggingface-cli login`. This ensures the dataset can be downloaded on your behalf.

## Environment

### Does the extension work on Linux or other systems?

Yes, AI Toolkit runs on Windows, Mac, and Linux.

### How can I disable Conda auto-activation in my WSL?

To disable conda auto-activation in WSL, run `conda config --set auto_activate_base false`. This disables the base environment.

### Do you support containers today?

We are currently working on container support; it will be enabled in a future release.

### Why do you need GitHub and Hugging Face credentials?

We host all the project templates on GitHub, and the base models are hosted in Azure or on Hugging Face. Accounts are required to access them from the APIs.

### I am getting an error downloading Llama2

Please ensure you request access to Llama through the [Llama 2 sign-up page](https://github.com/llama2-onnx/signup). This is needed to comply with Meta's trade compliance.

### Can't save project inside WSL instance

Because remote sessions are currently not supported when running the AI Toolkit actions, you cannot save your project while connected to WSL. To close remote connections, click "WSL" at the bottom left of the screen and choose "Close Remote Connections".

### Error: GitHub API forbidden

We host the project templates in the GitHub repository *microsoft/windows-ai-studio-templates*, and the extension calls the GitHub API to load the repo content. If you are in Microsoft, you may need to authorize the Microsoft organization to avoid this forbidden error.

See [this issue](https://github.com/microsoft/vscode-ai-toolkit/issues/70#issuecomment-2126089884) for a workaround. The detailed steps are:

- Sign out of your GitHub account in VS Code
- Reload VS Code and AI Toolkit; you will be asked to sign in to GitHub again
- [Important] In the browser's authorization page, make sure to authorize the app to access the "Microsoft" org

![Authorize Access](./images/faq/faq-github-api-forbidden.png)

### Cannot list, load, or download ONNX model

Check the 'AI Toolkit' log in the Output panel. If you see an *Agent* error or something like:

![Agent Failure](./images/faq/faq-onnx-agent.png)

Please close all VS Code instances and reopen VS Code.

(*This is caused by the underlying ONNX agent closing unexpectedly; the step above restarts the agent.*)
