
Commit 4c424ed

docs: update deployment guide to clarify VS Code Dev Containers and bash command usage
1 parent 3a357e5 commit 4c424ed

1 file changed: +4 -23 lines


docs/DeploymentGuide.md

Lines changed: 4 additions & 23 deletions
@@ -86,7 +86,7 @@ You can run this solution using [GitHub Codespaces](https://docs.github.com/en/c
 </details>

 <details>
-<summary><b>Deploy in VS Code</b></summary>
+<summary><b>Deploy in VS Code Dev Containers</b></summary>

 ### VS Code Dev Containers

@@ -136,26 +136,7 @@ Consider the following settings during your deployment to modify specific settin
 <details>
 <summary><b>Configurable Deployment Settings</b></summary>

-When you start the deployment, most parameters will have **default values**, but you can update the below settings by following the steps [here](CustomizingAzdParameters.md):
-
-
-| **Setting** | **Description** | **Default value** |
-| ------------------------------------ | -------------------------------------------------------------------------------------------------- | ------------------------ |
-| **Environment Name** | A **3-20 character alphanumeric value** used to generate a unique ID to prefix the resources. | `azdtemp` |
-| **Cosmos Location** | A **less busy** region for **CosmosDB**, useful in case of availability constraints. | `eastus2` |
-| **Deployment Type** | Select from a drop-down list (`Standard`, `GlobalStandard`). | `GlobalStandard` |
-| **GPT Model** | Azure OpenAI GPT model to deploy. | `gpt-4o-mini` |
-| **GPT Model Deployment Capacity** | Configure capacity for **GPT models**. Choose based on Azure OpenAI quota. | `30` |
-| **Embedding Model** | OpenAI embedding model used for vector similarity. | `text-embedding-ada-002` |
-| **Embedding Model Capacity** | Set the capacity for **embedding models**. Choose based on usage and quota. | `80` |
-| **Image Tag** | The version of the Docker image to use (e.g., `latest_waf`, `dev`, `hotfix`). | `latest_waf` |
-| **Azure OpenAI API Version** | Set the API version for OpenAI model deployments. | `2025-04-01-preview` |
-| **AZURE_LOCATION** | Sets the Azure region for resource deployment. | `<User selects during deployment>` |
-| **Existing Log Analytics Workspace** | To reuse an existing Log Analytics Workspace ID instead of creating a new one. | *(empty)* |
-| **Existing AI Foundry Project Resource ID** | To reuse an existing AI Foundry Project Resource ID instead of creating a new one. | *(empty)* |
-
-
-
+When you start the deployment, most parameters will have **default values**, but you can update the below settings by following the steps [here](CustomizingAzdParameters.md)

 </details>
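For anyone customizing these values, a minimal sketch of overriding deployment parameters with the Azure Developer CLI before provisioning. `azd env new`, `azd env set`, and `AZURE_LOCATION` are standard azd features; `AZURE_ENV_MODEL_NAME` is a placeholder key, so use the parameter names documented in CustomizingAzdParameters.md:

```shell
# Minimal sketch: set deployment parameters with azd before provisioning.
# AZURE_LOCATION is a standard azd environment variable. AZURE_ENV_MODEL_NAME is a
# placeholder key, not confirmed by this repo; take the real names from CustomizingAzdParameters.md.
azd env new azdtemp                            # environment name used to prefix resources
azd env set AZURE_LOCATION eastus2             # Azure region for resource deployment
azd env set AZURE_ENV_MODEL_NAME gpt-4o-mini   # placeholder parameter key for the GPT model
azd up
```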

@@ -258,9 +239,9 @@ This will rebuild the source code, package it into a container, and push it to t
 ## Post Deployment Steps

 1. **Import Sample Data**
-- Run bash command printed in the terminal. The bash command will look like the following:
+- please open a **Git Bash** terminal and run the bash command printed below. The bash command will look like the following ( need to replace with newly created "**Azure Resource Group Name**" with "**<AZURE_RESOURCE_GROUP>**" ):
 ```shell
-bash ./infra/scripts/process_sample_data.sh
+bash ./infra/scripts/process_sample_data.sh <AZURE_RESOURCE_GROUP>
 ```
 if you don't have azd env then you need to pass parameters along with the command. Then the command will look like the following:
 ```shell
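As a usage illustration of the updated step, with a hypothetical resource group name `rg-contoso-dev` standing in for the one created by your deployment, the command run from a Git Bash terminal looks like this:

```shell
# Hypothetical example: substitute the resource group created by your deployment.
bash ./infra/scripts/process_sample_data.sh rg-contoso-dev
```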
