---
Order: 7
Area: intelligentapps
TOCTitle: FAQ
ContentId:
PageTitle: FAQ for AI Toolkit
DateApproved:
MetaDescription: Find answers to frequently asked questions (FAQ) using AI Toolkit. Get troubleshooting recommendations.
MetaSocialImage:
---

# AI Toolkit FAQ

## Models

### How can I find my remote model endpoint and authentication header?

Here are some examples of how to find the endpoint and authentication header for common OpenAI service providers. For other providers, check their documentation for the chat completion endpoint and authentication header.

#### Example 1: Azure OpenAI

1. Go to the `Deployments` blade in Azure OpenAI Studio and select a deployment, for example, `gpt-4o`. If you don't have a deployment yet, check out [the documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal) on how to create a deployment.

    

    

2. As shown in the last screenshot, you can retrieve your chat completion endpoint from the `Target URI` property in the `Endpoint` section.

3. You can retrieve your API key from the `Key` property in the `Endpoint` section. After you copy the API key, **enter it as the authentication header in AI Toolkit in the format `api-key: <YOUR_API_KEY>`**. See the [Azure OpenAI service documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#request-header-2) to learn more about the authentication header.

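To sanity-check the values you copied, you can send a test request outside AI Toolkit. Below is a minimal sketch using Python's `requests` package; replace the placeholders with the `Target URI` and `Key` from your own deployment.

```python
import requests

# Paste the Target URI from the Endpoint section; it already includes the
# deployment name and api-version query parameter.
endpoint = "<YOUR_TARGET_URI>"
api_key = "<YOUR_API_KEY>"

response = requests.post(
    endpoint,
    # Azure OpenAI expects the key in an `api-key` header.
    headers={"api-key": api_key, "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(response.status_code, response.json())
```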

#### Example 2: OpenAI

1. For now, the chat completion endpoint is fixed at `https://api.openai.com/v1/chat/completions`. See the [OpenAI documentation](https://platform.openai.com/docs/api-reference/chat/create) to learn more about it.

2. Go to the [OpenAI documentation](https://platform.openai.com/docs/api-reference/authentication) and select `API Keys` or `Project API Keys` to create or retrieve your API key. After you copy the API key, **enter it as the authentication header in AI Toolkit in the format `Authorization: Bearer <YOUR_API_KEY>`**. See the OpenAI documentation for more information.

  

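The same kind of check works for OpenAI; only the endpoint and the header name change. A minimal sketch, with the model name as a placeholder:

```python
import requests

api_key = "<YOUR_API_KEY>"

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    # OpenAI expects a Bearer token in the Authorization header.
    headers={"Authorization": f"Bearer {api_key}"},
    json={"model": "<YOUR_MODEL>", "messages": [{"role": "user", "content": "Hello"}]},
)
print(response.status_code, response.json())
```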

### How do I edit the endpoint URL or authentication header?

If you entered the wrong endpoint or authentication header, you may encounter errors during inference. Select `Edit settings.json` to open the Visual Studio Code settings. You can also run the `Open User Settings (JSON)` command from the Visual Studio Code Command Palette, and then go to the `windowsaistudio.remoteInfereneEndpoints` section.

Here, you can edit or remove existing endpoint URLs or authentication headers. After you save the settings, the model list in the tree view or playground refreshes automatically.

### How can I join the waitlist for OpenAI o1-mini or OpenAI o1-preview?

The OpenAI o1 series models are specifically designed to tackle reasoning and problem-solving tasks with increased focus and capability. These models spend more time processing and understanding the user's request, making them exceptionally strong in areas like science, coding, math, and similar fields. For example, o1 can be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complicated mathematical formulas needed for quantum optics, and by developers in all fields to build and execute multi-step workflows.

IMPORTANT: The o1-preview model is available for limited access. To try the model in the playground, registration is required, and access is granted based on Microsoft's eligibility criteria.

Visit the [GitHub model marketplace](https://aka.ms/github-model-marketplace) to find OpenAI o1-mini or OpenAI o1-preview and join the waitlist.

### Can I use my own models or other models from Hugging Face?

If your own model supports the OpenAI API contract, you can host it in the cloud and add it to AI Toolkit as a custom model. You need to provide key information such as the model endpoint URL, access key, and model name.

## Finetune

### There are too many fine-tuning settings. Do I need to worry about all of them?

No, you can run with the default settings and the current dataset in the project to test. If you want, you can also pick your own dataset, but you will need to tweak some settings; see [this tutorial](walkthrough-hf-dataset.md) for more info.

### AI Toolkit won't scaffold the fine-tuning project

Make sure to check the prerequisites before installing the extension. For more details, see [Prerequisites](README.md#prerequisites).

### I have an NVIDIA GPU but the prerequisites check fails

If you have an NVIDIA GPU but the prerequisites check fails with "GPU is not detected", make sure that the latest driver is installed. You can check and download the driver from the [NVIDIA site](https://www.nvidia.com/Download/index.aspx?lang=en-us).
Also, make sure that it is installed in the PATH. To check, run `nvidia-smi` from the command line.

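If you want to confirm that the driver is reachable from the environment you are working in, a small check such as the one below can help. This is only a sketch: it assumes Python is available, and the PyTorch part runs only if PyTorch happens to be installed.

```python
import shutil
import subprocess

# Check that nvidia-smi is on the PATH and that the driver responds.
if shutil.which("nvidia-smi") is None:
    print("nvidia-smi not found on PATH - reinstall or repair the NVIDIA driver.")
else:
    print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)

# Optional: if PyTorch is installed, confirm that CUDA is visible to it.
try:
    import torch
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    pass
```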

### I generated the project but Conda activate fails to find the environment

There might have been an issue setting up the environment. You can manually initialize the environment by running `bash /mnt/[PROJECT_PATH]/setup/first_time_setup.sh` from inside the workspace.

### When using a Hugging Face dataset, how do I get it?

Make sure you run `huggingface-cli login` before you start the `python finetuning/invoke_olive.py` command. This ensures the dataset can be downloaded on your behalf.

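To confirm the login worked before kicking off fine-tuning, you can try loading the dataset directly. A minimal sketch, assuming the `datasets` package is installed; the dataset name is a placeholder for whichever dataset your project is configured to use.

```python
from datasets import load_dataset

# Placeholder: use the dataset name configured in your fine-tuning project.
dataset = load_dataset("<HF_DATASET_NAME>", split="train")
print(dataset)
```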

## Environment

### Does the extension work on Linux or other systems?

Yes, AI Toolkit runs on Windows, macOS, and Linux.

### How can I disable Conda auto-activation in WSL?

To disable Conda auto-activation in WSL, run `conda config --set auto_activate_base false`. This disables automatic activation of the base environment.

### Do you support containers today?

We are currently working on container support; it will be enabled in a future release.

### Why do you need GitHub and Hugging Face credentials?

We host all the project templates on GitHub, and the base models are hosted in Azure or on Hugging Face. Accessing them through the APIs requires accounts.

### I am getting an error downloading Llama2

Please ensure you request access to Llama through the [Llama 2 sign-up page](https://github.com/llama2-onnx/signup). This is needed to comply with Meta's trade compliance requirements.

### Can't save project inside a WSL instance

Because remote sessions are currently not supported when running the AI Toolkit Actions, you cannot save your project while connected to WSL. To close remote connections, select "WSL" in the bottom-left corner of the window and choose "Close Remote Connections".

### Error: GitHub API forbidden

We host the project templates in the GitHub repository *microsoft/windows-ai-studio-templates*, and the extension calls the GitHub API to load the repo content. If you are in Microsoft, you may need to authorize the Microsoft organization to avoid such a forbidden issue.

See [this issue](https://github.com/microsoft/vscode-ai-toolkit/issues/70#issuecomment-2126089884) for a workaround. The detailed steps are:
- Sign out of your GitHub account in VS Code
- Reload VS Code and AI Toolkit, and you will be asked to sign in to GitHub again
- [Important] In the browser's authorization page, make sure to authorize the app to access the "Microsoft" org
  

### Cannot list, load, or download ONNX model

Check the 'AI Toolkit' log in the Output panel. If you see an *Agent* error or something like the following:



Please close all VS Code instances and reopen VS Code.

(*This happens when the underlying ONNX agent closes unexpectedly; the step above restarts the agent.*)