
Commit 910f91d

Merge pull request #3 from sdgilley/sdg-ai-studio-gh

Fix blocking issues

2 parents f467c74 + 9eed127

File tree

11 files changed: +12 −30 lines

articles/ai-studio/ai-services/concepts/deployment-types.md

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ ms.topic: conceptual
 ms.date: 10/24/2024
 ms.author: fasantia
 ms.reviewer: fasantia
-ms.custom: ignite-2024, github-universe-2024
+ms.custom: github-universe-2024
 ---
 
 # Deployment types in Azure AI model inference

articles/ai-studio/ai-services/concepts/endpoints.md

Lines changed: 2 additions & 2 deletions

@@ -9,7 +9,7 @@ manager: scottpolly
 ms.date: 10/24/2024
 ms.author: sgilley
 ms.reviewer: fasantia
-ms.custom: ignite-2024, github-universe-2024
+ms.custom: github-universe-2024
 ---
 
 # Use the Azure AI model inference endpoint

@@ -43,7 +43,7 @@ The Azure AI inference endpoint allows customers to use a single endpoint with t
 
 You can see the endpoint URL and credentials in the **Overview** section. The endpoint usually has the form `https://<resource-name>.services.ai.azure.com/models`:
 
-:::image type="content" source="../../media/ai-services/overview/overview-endpoint-and-key.png" alt-text="An screenshot showing how to get the URL and key associated with the resource." lightbox="../../media/ai-services/overview/overview-endpoint-and-key.png":::
+:::image type="content" source="../../media/ai-services/overview/overview-endpoint-and-key.png" alt-text="A screenshot showing how to get the URL and key associated with the resource." lightbox="../../media/ai-services/overview/overview-endpoint-and-key.png":::
 
 ### Routing
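The endpoint form and the routing behavior touched by this diff can be illustrated with a short sketch. The resource name, deployment name, and message below are placeholders, not values from the commit; the point is that all requests go to one `*.services.ai.azure.com/models` endpoint and the `model` field selects the deployment:

```python
# Hypothetical resource and deployment names, for illustration only.
RESOURCE_NAME = "my-resource"
ENDPOINT = f"https://{RESOURCE_NAME}.services.ai.azure.com/models"

def build_chat_request(deployment_name: str, user_message: str) -> dict:
    """Build a chat-completions payload; the `model` field is what the
    shared endpoint uses to route the request to a model deployment."""
    return {
        "model": deployment_name,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("mistral-large", "Hello")
```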
articles/ai-studio/ai-services/concepts/quotas-limits.md

Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@ title: Azure AI model inference quotas and limits
 titleSuffix: Azure AI services
 description: Quick reference, detailed description, and best practices on the quotas and limits for the Azure AI models service in Azure AI services.
 ms.service: azure-ai-studio
-ms.custom: ignite-2024, github-universe-2024
+ms.custom: github-universe-2024
 ms.topic: conceptual
 author: sdgilley
 manager: scottpolly

articles/ai-studio/ai-services/faq.yml

Lines changed: 1 addition & 1 deletion

@@ -91,7 +91,7 @@ sections:
       - question: |
           Does Azure AI model inference service support custom API headers? We append other custom headers to our API requests and are seeing HTTP 431 failure errors.
         answer: |
-          Our current APIs allow up to 10 custom headers, which are passed through the pipeline, and returned. We notice some customers now exceed this header count resulting in HTTP 431 errors. There's no solution for this error, other than to reduce header volume. In future API versions we'll no longer pass through custom headers. We recommend customers not depend on custom headers in future system architectures.
+          Our current APIs allow up to 10 custom headers, which are passed through the pipeline, and returned. We notice some customers now exceed this header count resulting in HTTP 431 errors. There's no solution for this error, other than to reduce header volume. We recommend customers not depend on custom headers in future system architectures.
   - name: Pricing and Billing
     questions:
       - question: |
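The 10-custom-header limit described in the FAQ answer above can be guarded against client-side before a request is sent. A minimal sketch, assuming custom headers use the common `x-` prefix (the prefix convention and helper names are illustrative, not part of the documented API):

```python
MAX_CUSTOM_HEADERS = 10  # limit stated in the FAQ answer above

def count_custom_headers(headers: dict) -> int:
    """Count headers using the 'x-' custom-header prefix (an assumption
    for illustration; adjust to however your app marks custom headers)."""
    return sum(1 for name in headers if name.lower().startswith("x-"))

def check_headers(headers: dict) -> None:
    """Raise before sending a request that would trigger an HTTP 431."""
    n = count_custom_headers(headers)
    if n > MAX_CUSTOM_HEADERS:
        raise ValueError(
            f"{n} custom headers exceeds the limit of {MAX_CUSTOM_HEADERS}"
        )

# Within the limit: passes silently.
check_headers({"Content-Type": "application/json", "x-trace-id": "abc"})
```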

articles/ai-studio/ai-services/how-to/create-model-deployments.md

Lines changed: 1 addition & 1 deletion

@@ -38,7 +38,7 @@ To use it:
 
 1. Get the Azure AI model's inference endpoint URL and keys from the **deployment page** or the **Overview** page. If you're using Microsoft Entra ID authentication, you don't need a key.
 
-   :::image type="content" source="../../media/ai-services/add-model-deployments/models-deploy-endpoint-url.png" alt-text="An screenshot showing how to get the URL and key associated with the deployment." lightbox="../../media/ai-services/add-model-deployments/models-deploy-endpoint-url.png":::
+   :::image type="content" source="../../media/ai-services/add-model-deployments/models-deploy-endpoint-url.png" alt-text="A screenshot showing how to get the URL and key associated with the deployment." lightbox="../../media/ai-services/add-model-deployments/models-deploy-endpoint-url.png":::
 
 2. Use the model inference endpoint URL and the keys from before when constructing your client. The following example uses the Azure AI Inference package:

articles/ai-studio/ai-services/how-to/quickstart-github-models.md

Lines changed: 4 additions & 4 deletions

@@ -5,7 +5,7 @@ description: Learn how to upgrade your endpoint from GitHub Models to Azure AI M
 ms.service: azure-ai-studio
 ms.topic: how-to
 ms.date: 10/01/2024
-ms.custom: ignite-2024, github-universe-2024
+ms.custom: github-universe-2024
 manager: nitinme
 author: mrbullwinkle
 ms.author: fasantia

@@ -50,7 +50,7 @@ To obtain the key and endpoint:
 
 8. Once it's deployed, your model's API Key and endpoint are shown in the Overview. Use these values in your code to use the model in your production environment.
 
-   :::image type="content" source="../../media/ai-services/add-model-deployments/models-deploy-endpoint-url.png" alt-text="An screenshot showing how to get the URL and key associated with the deployment." lightbox="../../media/ai-services/add-model-deployments/models-deploy-endpoint-url.png":::
+   :::image type="content" source="../../media/ai-services/add-model-deployments/models-deploy-endpoint-url.png" alt-text="A screenshot showing how to get the URL and key associated with the deployment." lightbox="../../media/ai-services/add-model-deployments/models-deploy-endpoint-url.png":::
 
 At this point, the model you selected is ready to consume.

@@ -61,9 +61,9 @@ At this point, the model you selected is ready to consume.
 
 Once your Azure AI Services resource is configured, you can start consuming it from your code. You need the endpoint URL and key for it, which can be found in the **Overview** section:
 
-:::image type="content" source="../../media/ai-services/overview/overview-endpoint-and-key.png" alt-text="An screenshot showing how to get the URL and key associated with the resource." lightbox="../../media/ai-services/overview/overview-endpoint-and-key.png":::
+:::image type="content" source="../../media/ai-services/overview/overview-endpoint-and-key.png" alt-text="A screenshot showing how to get the URL and key associated with the resource." lightbox="../../media/ai-services/overview/overview-endpoint-and-key.png":::
 
-You can use any of the supported SDK's to get predictions out from the endpoint. The following SDK's are officially supported:
+You can use any of the supported SDKs to get predictions out from the endpoint. The following SDKs are officially supported:
 
 * OpenAI SDK
 * Azure OpenAI SDK

articles/ai-studio/ai-services/model-inference.md

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ ms.author: fasantia
 ms.service: azure-ai-studio
 ms.topic: overview
 ms.date: 08/14/2024
-ms.custom: ignite-2024, github-universe-2024
+ms.custom: github-universe-2024
 recommendations: false
 ---

articles/ai-studio/includes/ai-services/add-model-deployments.md

Lines changed: 1 addition & 1 deletion

@@ -24,7 +24,7 @@ You can add all the models you need in the endpoint by using [Azure AI Studio fo
 
 5. For models providers that require extra terms of contract, you're asked to accept those terms. For instance, Mistral models ask you to accept other terms. Accept the terms on those cases by selecting **Subscribe and deploy**.
 
-   :::image type="content" source="../../media/ai-services/add-model-deployments/models-deploy-agree.png" alt-text="An screenshot showing how to agree the terms and conditions of a Mistral-Large model." lightbox="../../media/ai-services/add-model-deployments/models-deploy-agree.png":::
+   :::image type="content" source="../../media/ai-services/add-model-deployments/models-deploy-agree.png" alt-text="A screenshot showing how to agree the terms and conditions of a Mistral-Large model." lightbox="../../media/ai-services/add-model-deployments/models-deploy-agree.png":::
 
 6. You can configure the deployment settings at this time. By default, the deployment receives the name of the model you're deploying. The deployment name is used in the `model` parameter for request to route to this particular model deployment. This setting allows you to also configure specific names for your models when you attach specific configurations. For instance, `o1-preview-safe` for a model with a strict content safety content filter.

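The deployment-naming convention in step 6 above (for example, `o1-preview-safe` for a strictly filtered deployment of the same base model) can be pictured as a name-to-configuration mapping that the endpoint resolves from the `model` parameter. A sketch with hypothetical entries, not the service's actual data model:

```python
# Hypothetical deployment registry; names follow the convention in step 6,
# where a suffix like "-safe" marks a stricter content-filter configuration.
DEPLOYMENTS = {
    "o1-preview": {"model": "o1-preview", "content_filter": "default"},
    "o1-preview-safe": {"model": "o1-preview", "content_filter": "strict"},
}

def resolve_deployment(name: str) -> dict:
    """Return the configuration a request with `model=name` would reach."""
    try:
        return DEPLOYMENTS[name]
    except KeyError:
        raise ValueError(f"unknown deployment: {name}") from None
```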
articles/ai-studio/includes/ai-services/how-to-prerequisites.md

Lines changed: 0 additions & 18 deletions
This file was deleted.
