
Commit 30df27a

Merge pull request #350 from Blake-Madden/main
Fix a few typos
2 parents: 16d84ed + 6b6b101 · commit 30df27a

23 files changed: +25 −25 lines

articles/ai-foundry/how-to/costs-plan-manage.md

Lines changed: 1 addition & 1 deletion

@@ -111,7 +111,7 @@ Here's an example of how to monitor costs for a project. The costs are used as a
 1. Under the **Project** heading, select **Overview**.
 1. Select **View cost for resources** from the **Total cost** section. The [Azure portal](https://portal.azure.com) opens to the resource group for your project.

-:::image type="content" source="../media/cost-management/project-costs/project-settings-go-view-costs.png" alt-text="Screenshot of the Azure AI Foundry portal portal showing how to see project settings." lightbox="../media/cost-management/project-costs/project-settings-go-view-costs.png":::
+:::image type="content" source="../media/cost-management/project-costs/project-settings-go-view-costs.png" alt-text="Screenshot of the Azure AI Foundry portal showing how to see project settings." lightbox="../media/cost-management/project-costs/project-settings-go-view-costs.png":::

 1. Expand the **Resource** column to see the costs for each service that's underlying your [project](../concepts/ai-resources.md#organize-work-in-projects-for-customization). But this view doesn't include costs for all resources that you use in a project.

articles/ai-foundry/how-to/develop/evaluate-sdk.md

Lines changed: 2 additions & 2 deletions

@@ -233,7 +233,7 @@ import os
 from azure.identity import DefaultAzureCredential
 credential = DefaultAzureCredential()

-# Initialize Azure AI project and Azure OpenAI conncetion with your environment variables
+# Initialize Azure AI project and Azure OpenAI connection with your environment variables
 azure_ai_project = {
     "subscription_id": os.environ.get("AZURE_SUBSCRIPTION_ID"),
     "resource_group_name": os.environ.get("AZURE_RESOURCE_GROUP"),
@@ -250,7 +250,7 @@ model_config = {

 from azure.ai.evaluation import GroundednessProEvaluator, GroundednessEvaluator

-# Initialzing Groundedness and Groundedness Pro evaluators
+# Initializing Groundedness and Groundedness Pro evaluators
 groundedness_eval = GroundednessEvaluator(model_config)
 groundedness_pro_eval = GroundednessProEvaluator(azure_ai_project=azure_ai_project, credential=credential)
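
For context, the hunks above sit inside a snippet that builds the project and model configuration and then calls both evaluators. The following is a minimal sketch of that flow, not part of the commit: the environment-variable names beyond those shown, the `model_config` keys, the sample query/response/context, and the keyword-argument call pattern are assumptions based on the azure-ai-evaluation documentation.

```python
# Minimal sketch of the surrounding evaluate-sdk snippet (assumed names noted above).
import os

from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import GroundednessEvaluator, GroundednessProEvaluator

credential = DefaultAzureCredential()

# Azure AI project used by the service-based Groundedness Pro evaluator
azure_ai_project = {
    "subscription_id": os.environ.get("AZURE_SUBSCRIPTION_ID"),
    "resource_group_name": os.environ.get("AZURE_RESOURCE_GROUP"),
    "project_name": os.environ.get("AZURE_PROJECT_NAME"),  # assumed variable name
}

# Azure OpenAI connection used by the prompt-based Groundedness evaluator
model_config = {
    "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
    "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
    "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
}

groundedness_eval = GroundednessEvaluator(model_config)
groundedness_pro_eval = GroundednessProEvaluator(
    azure_ai_project=azure_ai_project, credential=credential
)

# Evaluate one response against its retrieval context (placeholder data)
query = "What is the capital of France?"
context = "France's capital city is Paris."
response = "Paris is the capital of France."

print(groundedness_eval(query=query, response=response, context=context))
print(groundedness_pro_eval(query=query, response=response, context=context))
```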

articles/ai-foundry/how-to/develop/run-scans-ai-red-teaming-agent.md

Lines changed: 1 addition & 1 deletion

@@ -224,7 +224,7 @@ More advanced users can specify the desired attack strategies instead of using d

 Each new attack strategy specified will be applied to the set of baseline adversarial queries used in addition to the baseline adversarial queries.

-This following example would generate one attack objective per each of the four risk categories specified. This will first, generate four baseline adversarial prompts which would be sent to your target. Then, each baseline query would get converted into each of the four attack strategies. This will result in a total of 20 attack-response pairs from your AI system. The last attack stratgy is an example of a composition of two attack strategies to create a more complex attack query: the `AttackStrategy.Compose()` function takes in a list of two supported attack strategies and chains them together. The example's composition would first encode the baseline adversarial query into Base64 then apply the ROT13 cipher on the Base64-encoded query. Compositions only support chaining two attack strategies together.
+This following example would generate one attack objective per each of the four risk categories specified. This will first, generate four baseline adversarial prompts which would be sent to your target. Then, each baseline query would get converted into each of the four attack strategies. This will result in a total of 20 attack-response pairs from your AI system. The last attack strategy is an example of a composition of two attack strategies to create a more complex attack query: the `AttackStrategy.Compose()` function takes in a list of two supported attack strategies and chains them together. The example's composition would first encode the baseline adversarial query into Base64 then apply the ROT13 cipher on the Base64-encoded query. Compositions only support chaining two attack strategies together.

 ```python
 red_team_agent = RedTeam(
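
The hunk ends just as the `RedTeam(` call opens, so here is a hedged sketch of how a scan like the one described might be wired up. It is not part of the commit: the `azure.ai.evaluation.red_team` import path, the risk-category and attack-strategy names other than `Base64`, `ROT13`, and `Compose`, the `num_objectives` parameter, the callback target, and the `scan()` signature are assumptions drawn from the preview red-teaming documentation and should be checked against the current reference.

```python
# Hedged sketch of a red-team scan with four strategies, one of them composed.
import asyncio
import os

from azure.identity import DefaultAzureCredential
from azure.ai.evaluation.red_team import RedTeam, AttackStrategy, RiskCategory

azure_ai_project = {
    "subscription_id": os.environ.get("AZURE_SUBSCRIPTION_ID"),
    "resource_group_name": os.environ.get("AZURE_RESOURCE_GROUP"),
    "project_name": os.environ.get("AZURE_PROJECT_NAME"),  # assumed variable name
}

# One attack objective per risk category -> four baseline adversarial prompts
red_team_agent = RedTeam(
    azure_ai_project=azure_ai_project,
    credential=DefaultAzureCredential(),
    risk_categories=[
        RiskCategory.Violence,
        RiskCategory.HateUnfairness,
        RiskCategory.Sexual,
        RiskCategory.SelfHarm,
    ],
    num_objectives=1,
)

# A trivial callback standing in for your AI system (placeholder target)
def simple_target(query: str) -> str:
    return "I can't help with that."

# 4 baselines + (4 baselines x 4 strategies) = 20 attack-response pairs,
# matching the count described in the paragraph above.
async def main():
    await red_team_agent.scan(
        target=simple_target,
        attack_strategies=[
            AttackStrategy.Base64,
            AttackStrategy.ROT13,
            AttackStrategy.Flip,
            AttackStrategy.Compose([AttackStrategy.Base64, AttackStrategy.ROT13]),
        ],
    )

asyncio.run(main())
```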

articles/ai-foundry/how-to/develop/visualize-traces.md

Lines changed: 1 addition & 1 deletion

@@ -101,7 +101,7 @@ For more information on how to send Azure AI Inference traces to Azure Monitor a

 From Azure AI Foundry project, you can also open your custom dashboard that provides you with insights specifically to help you monitor your generative AI application.

-In this Azure Workbook, you can view your Gen AI spans and jump into the Azure Monitor **End-to-end transaction details view** view to deep dive and investigate.
+In this Azure Workbook, you can view your Gen AI spans and jump into the Azure Monitor **End-to-end transaction details view** to deep dive and investigate.

 Learn more about using this workbook to monitor your application, see [Azure Workbook documentation](/azure/azure-monitor/visualize/workbooks-create-workbook).

articles/ai-foundry/model-inference/concepts/endpoints.md

Lines changed: 1 addition & 1 deletion

@@ -38,7 +38,7 @@ To learn more about how to create deployments see [Add and configure model deplo

 ## Azure AI inference endpoint

-The Azure AI inference endpoint allows customers to use a single endpoint with the same authentication and schema to generate inference for the deployed models in the resource. This endpoint follows the [Azure AI model inference API](.././reference/reference-model-inference-api.md) which all the models in Azure AI model inference support. It support the following modalidities:
+The Azure AI inference endpoint allows customers to use a single endpoint with the same authentication and schema to generate inference for the deployed models in the resource. This endpoint follows the [Azure AI model inference API](.././reference/reference-model-inference-api.md) which all the models in Azure AI model inference support. It support the following modalities:

 * Text embeddings
 * Image embeddings
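
Since the changed paragraph describes calling one endpoint with a shared schema for whichever model is deployed, here is a minimal sketch of that idea using the azure-ai-inference client. It is not part of the commit: the endpoint URL, key variable names, and deployment name are placeholders, and the `model` routing parameter is an assumption based on the model inference documentation.

```python
# Minimal sketch: one endpoint and credential, routed to a deployment by model name.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_AI_INFERENCE_ENDPOINT"],  # placeholder, e.g. the resource's /models endpoint
    credential=AzureKeyCredential(os.environ["AZURE_AI_INFERENCE_KEY"]),  # placeholder key variable
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize what a text embedding is in one sentence."),
    ],
    model="my-deployment-name",  # placeholder deployment name
)

print(response.choices[0].message.content)
```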

articles/ai-foundry/model-inference/how-to/configure-deployment-policies.md

Lines changed: 2 additions & 2 deletions

@@ -31,7 +31,7 @@ Follow these steps to create and assign an example custom policy to control mode

 2. From the left side of the Azure Policy Dashboard, select **Authoring**, **Definitions**, and then select **+ Policy definition** from the top of the page.

-:::image type="content" source="../media/configure-deployment-policies/create-new-policy.png" alt-text="An screenshot showing how to create a new policy definition in Azure Policies." lightbox="../media/configure-deployment-policies/create-new-policy.png":::
+:::image type="content" source="../media/configure-deployment-policies/create-new-policy.png" alt-text="A screenshot showing how to create a new policy definition in Azure Policies." lightbox="../media/configure-deployment-policies/create-new-policy.png":::

 3. In the **Policy Definition** form, use the following values:

@@ -157,7 +157,7 @@ To monitor compliance with the policy, follow these steps:

 1. From the left side of the Azure Policy Dashboard, select **Compliance**. Each policy assignment is listed with the compliance status. To view more details, select the policy assignment. The following example shows the compliance report for a policy that blocks deployments of type *Global standard*.

-:::image type="content" source="../media/configure-deployment-policies/policy-compliance.png" alt-text="An screenshot showing an example of a policy compliance report for a policy that blocks Global standard deployment SKUs." lightbox="../media/configure-deployment-policies/policy-compliance.png":::
+:::image type="content" source="../media/configure-deployment-policies/policy-compliance.png" alt-text="A screenshot showing an example of a policy compliance report for a policy that blocks Global standard deployment SKUs." lightbox="../media/configure-deployment-policies/policy-compliance.png":::

 ## Update the policy assignment

articles/ai-foundry/model-inference/includes/use-chat-completions/csharp.md

Lines changed: 1 addition & 1 deletion

@@ -337,7 +337,7 @@ foreach (ChatCompletionsToolCall tool in toolsCall)
 Dictionary<string, object> toolArguments = JsonSerializer.Deserialize<Dictionary<string, object>>(toolArgumentsString);

 // Here you have to call the function defined. In this particular example we use
-// reflection to find the method we definied before in an static class called
+// reflection to find the method we definied before in a static class called
 // `ChatCompletionsExamples`. Using reflection allows us to call a function
 // by string name. Notice that this is just done for demonstration purposes as a
 // simple way to get the function callable from its string name. Then we can call

articles/ai-services/computer-vision/how-to/model-customization.md

Lines changed: 1 addition & 1 deletion

@@ -261,7 +261,7 @@ To train a custom model, you need to associate it with a **Dataset** where you p

 To create a new dataset, select **add new dataset**. In the popup window, enter a name and select a dataset type for your use case. **Image classification** models apply content labels to the entire image, while **Object detection** models apply object labels to specific locations in the image. **Product recognition** models are a subcategory of object detection models that are optimized for detecting retail products.

-:::image type="content" source="../media/customization/create-dataset.png" alt-text="Screenshoot of dialog box to Create new dataset.":::
+:::image type="content" source="../media/customization/create-dataset.png" alt-text="Screenshot of dialog box to Create new dataset.":::

 Then, select the container from the Azure Blob Storage account where you stored the training images. Check the box to allow Vision Studio to read and write to the blob storage container. This is a necessary step to import labeled data. Create the dataset.

articles/ai-services/computer-vision/includes/quickstarts-sdk/csharp-sdk.md

Lines changed: 1 addition & 1 deletion

@@ -128,7 +128,7 @@ Amount Per Serving
 Trans Fat 0g
 Calories 190
 Cholesterol 0mg
-ories from Fat 110
+Calories from Fat 110
 Sodium 20mg
 nt Daily Values are based on Vitamin A 50%
 calorie diet.

articles/ai-services/content-safety/how-to/containers/text-container.md

Lines changed: 1 addition & 1 deletion

@@ -119,7 +119,7 @@ DownloadLicense=True \
 Mounts:License={CONTAINER_LICENSE_DIRECTORY}
 ```

-The `DownloadLicense=True` parameter in your `docker run` command downloads a license file to enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file is invalid to run the container. You can only use a license file with the appropriate container that you're approved for. For example, you can't use a license file for a `text-analyze` container with a `image-analyze` container.
+The `DownloadLicense=True` parameter in your `docker run` command downloads a license file to enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file is invalid to run the container. You can only use a license file with the appropriate container that you're approved for. For example, you can't use a license file for a `text-analyze` container with an `image-analyze` container.

 Once the license file is downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you use, with placeholder values. Replace these values with your own values.
