
Commit 50424aa

Updated AI Toolkit for Visual Studio Code with "How to" steps.
The "AI Toolkit for Visual Studio Code" section under "Foundry Local architecture" mentions that it is possible to setup Foundry local model in AI Toolkit, but no actual steps or instructions. In this PR I've added Prerequisites and instructions for how to setup AI Toolkit against your locally running Foundry Local service.
1 parent 7eafb7a commit 50424aa

File tree

1 file changed: +13 −1 lines changed


articles/ai-foundry/foundry-local/concepts/foundry-local-architecture.md

Lines changed: 13 additions & 1 deletion
```diff
@@ -123,11 +123,23 @@ Foundry Local supports integration with various SDKs in most languages, such as
 
 The AI Toolkit for Visual Studio Code provides a user-friendly interface for developers to interact with Foundry Local. It allows users to run models, manage the local cache, and visualize results directly within the IDE.
 
-- **Features**:
+**Features**:
 - Model management: Download, load, and run models from within the IDE.
 - Interactive console: Send requests and view responses in real-time.
 - Visualization tools: Graphical representation of model performance and results.
 
+**Prerequisites:**
+- You have installed [Foundry Local](../get-started.md) and have a model service running.
+- You have installed the [AI Toolkit for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-windows-ai-studio.windows-ai-studio) extension.
+
+**Connect Foundry Local model to AI Toolkit:**
+1. **Add model in AI Toolkit**: Open AI Toolkit from the activity bar of Visual Studio Code. In the 'My Models' panel, click the 'Add model for remote interface' button and then select 'Add a custom model' from the dropdown menu.
+2. **Enter the chat-compatible endpoint URL**: Enter `http://localhost:PORT/v1/chat/completions`, where PORT is replaced with the port number of your Foundry Local service endpoint. You can see the port of your locally running service using the CLI command `foundry service status`. Foundry Local dynamically assigns a port, so it might not always be the same.
+3. **Provide model name**: Enter the exact model name you wish to use from Foundry Local, for example `phi-3.5-mini`. You can list all previously downloaded and locally cached models using the CLI command `foundry cache list`, or use `foundry model list` to see all models available for local use. You'll also be asked to enter a display name, which is only for your own local use, so to avoid confusion it's recommended to enter the same name as the exact model name.
+4. **Authentication**: If your local setup doesn't require authentication *(the default for a Foundry Local setup)*, you can leave the authentication headers field blank and press Enter.
+
+After completing these steps, your Foundry Local model will appear in the 'My Models' list in AI Toolkit and will be ready to use: right-click the model and select 'Load in Playground'.
+
 ## Next Steps
 
 - [Get started with Foundry Local](../get-started.md)
```
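Before wiring up AI Toolkit, it can help to confirm that the endpoint from step 2 actually responds. Below is a minimal sketch (not part of the committed article) using Python's `requests` package; the port `5273` and the model name `phi-3.5-mini` are placeholders, so substitute the values reported by `foundry service status` and `foundry cache list`.

```python
import requests

# Placeholder values: check `foundry service status` for the real port,
# and `foundry cache list` for the exact model name.
PORT = 5273
MODEL = "phi-3.5-mini"

# Same chat-compatible endpoint you enter in AI Toolkit (step 2).
url = f"http://localhost:{PORT}/v1/chat/completions"

payload = {
    "model": MODEL,  # must match the exact model name from step 3
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}

# The default Foundry Local setup requires no authentication (step 4),
# so no Authorization header is sent.
response = requests.post(url, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

If this prints a reply, the same URL and model name should work when entered into AI Toolkit's custom model dialog.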
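Since the `/v1/chat/completions` route follows the OpenAI chat-completions shape, the same check can also be done through an OpenAI-style client, which mirrors how other SDK integrations mentioned in the article would talk to the service. A hedged sketch with the `openai` Python package; again, the port and model name are placeholders, and the `api_key` value is arbitrary because the default local setup doesn't authenticate.

```python
from openai import OpenAI

# base_url points at the Foundry Local service; /v1 is the API root
# for the chat-completions route used above. The port is a placeholder.
client = OpenAI(
    base_url="http://localhost:5273/v1",
    api_key="not-needed",  # default Foundry Local setup skips auth (step 4)
)

completion = client.chat.completions.create(
    model="phi-3.5-mini",  # exact model name, as in step 3
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(completion.choices[0].message.content)
```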
