
Commit 098bed6

Merge pull request #1734 from sdgilley/sdg-rebrand-add-azure

add Azure or portal

2 parents: 62653dd + 0f1cb0c

File tree

7 files changed: +19 -19 lines changed


articles/ai-studio/how-to/create-projects.md

Lines changed: 2 additions & 2 deletions

@@ -34,7 +34,7 @@ For more information about the projects and hubs model, see [Azure AI Foundry hu

 Use the following tabs to select the method you plan to use to create a project:

-# [Azure AI Foundry](#tab/ai-studio)
+# [AI Foundry portal](#tab/ai-studio)

 [!INCLUDE [Create Azure AI Foundry project](../includes/create-projects.md)]

@@ -85,7 +85,7 @@ The code in this section assumes you have an existing hub. If you don't have a

 ## View project settings

-# [Azure AI Foundry](#tab/ai-studio)
+# [AI Foundry portal](#tab/ai-studio)

 On the project **Overview** page you can find information about the project.

articles/ai-studio/how-to/deploy-models-managed.md

Lines changed: 2 additions & 2 deletions

@@ -26,7 +26,7 @@ In this article, you learn how to deploy models using the Azure Machine Learning

 You can deploy managed compute models using the Azure Machine Learning SDK, but first, let's browse the model catalog and get the model ID you need for deployment.

-1. Sign in to [AI Foundry](https://ai.azure.com) and go to the **Home** page.
+1. Sign in to [Azure AI Foundry](https://ai.azure.com) and go to the **Home** page.
 1. Select **Model catalog** from the left sidebar.
 1. In the **Deployment options** filter, select **Managed compute**.

@@ -161,5 +161,5 @@ To deploy and perform inferencing with real-time endpoints, you consume Virtual

 ## Next steps

-- Learn more about what you can do in [AI Foundry](../what-is-ai-studio.md)
+- Learn more about what you can do in [Azure AI Foundry](../what-is-ai-studio.md)
 - Get answers to frequently asked questions in the [Azure AI FAQ article](../faq.yml)
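The managed-compute flow this file describes hinges on the model ID copied from the catalog. As a hedged illustration (the ID below is hypothetical, though the `azureml://registries/...` shape is the registry-model format the Azure Machine Learning SDK consumes), a small stdlib sketch that splits such an ID into its parts:

```python
import re

# Registry model IDs look like:
#   azureml://registries/<registry>/models/<name>/versions/<version>
MODEL_ID_PATTERN = re.compile(
    r"azureml://registries/(?P<registry>[^/]+)"
    r"/models/(?P<name>[^/]+)"
    r"/versions/(?P<version>[^/]+)$"
)

def parse_model_id(model_id: str) -> dict:
    """Split a registry model ID into registry, name, and version."""
    match = MODEL_ID_PATTERN.match(model_id)
    if match is None:
        raise ValueError(f"not a registry model ID: {model_id!r}")
    return match.groupdict()

# Hypothetical ID of the kind copied from the model catalog.
parts = parse_model_id(
    "azureml://registries/azureml-meta/models/Meta-Llama-3-8B/versions/1"
)
# parts == {"registry": "azureml-meta", "name": "Meta-Llama-3-8B", "version": "1"}
```

A helper like this is only a convenience for logging or validation; the SDK itself accepts the ID string as-is.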

articles/ai-studio/how-to/deploy-models-serverless-connect.md

Lines changed: 5 additions & 5 deletions

@@ -41,7 +41,7 @@ The need to consume a serverless API endpoint in a different project or hub than

 - You need to install the following software to work with Azure AI Foundry:

-# [AI Foundry](#tab/azure-ai-studio)
+# [AI Foundry portal](#tab/azure-ai-studio)

 You can use any compatible web browser to navigate [Azure AI Foundry](https://ai.azure.com).

@@ -88,7 +88,7 @@ Follow these steps to create a connection:

 1. Connect to the project or hub where the endpoint is deployed:

-# [AI Foundry](#tab/azure-ai-studio)
+# [AI Foundry portal](#tab/azure-ai-studio)

 Go to [Azure AI Foundry](https://ai.azure.com) and navigate to the project where the endpoint you want to connect to is deployed.

@@ -116,7 +116,7 @@ Follow these steps to create a connection:

 1. Get the endpoint's URL and credentials for the endpoint you want to connect to. In this example, you get the details for an endpoint name **meta-llama3-8b-qwerty**.

-# [AI Foundry](#tab/azure-ai-studio)
+# [AI Foundry portal](#tab/azure-ai-studio)

 1. From the left sidebar of your project in AI Foundry portal, go to **My assets** > **Models + endpoints** to see the list of deployments in the project.

@@ -141,7 +141,7 @@ Follow these steps to create a connection:

 1. Now, connect to the project or hub **where you want to create the connection**:

-# [AI Foundry](#tab/azure-ai-studio)
+# [AI Foundry portal](#tab/azure-ai-studio)

 Go to the project where the connection needs to be created to.

@@ -169,7 +169,7 @@ Follow these steps to create a connection:

 1. Create the connection in the project:

-# [AI Foundry](#tab/azure-ai-studio)
+# [AI Foundry portal](#tab/azure-ai-studio)

 1. From the left sidebar of your project in AI Foundry portal, select **Management center**.

articles/ai-studio/how-to/deploy-models-serverless.md

Lines changed: 7 additions & 7 deletions

@@ -35,7 +35,7 @@ This article uses a Meta Llama model deployment for illustration. However, you c

 - You need to install the following software to work with Azure AI Foundry:

-# [AI Foundry](#tab/azure-ai-studio)
+# [AI Foundry portal](#tab/azure-ai-studio)

 You can use any compatible web browser to navigate [Azure AI Foundry](https://ai.azure.com).

@@ -132,7 +132,7 @@ Serverless API endpoints can deploy both Microsoft and non-Microsoft offered mod

 1. Create the model's marketplace subscription. When you create a subscription, you accept the terms and conditions associated with the model offer.

-# [AI Foundry](#tab/azure-ai-studio)
+# [AI Foundry portal](#tab/azure-ai-studio)

 1. On the model's **Details** page, select **Deploy**. A **Deployment options** window opens up, giving you the choice between serverless API deployment and deployment using a managed compute.

@@ -259,7 +259,7 @@ Serverless API endpoints can deploy both Microsoft and non-Microsoft offered mod

 1. At any point, you can see the model offers to which your project is currently subscribed:

-# [AI Foundry](#tab/azure-ai-studio)
+# [AI Foundry portal](#tab/azure-ai-studio)

 1. Go to the [Azure portal](https://portal.azure.com).

@@ -314,7 +314,7 @@ In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.

 1. Create the serverless endpoint

-# [AI Foundry](#tab/azure-ai-studio)
+# [AI Foundry portal](#tab/azure-ai-studio)

 1. To deploy a Microsoft model that doesn't require subscribing to a model offering:
    1. Select **Deploy** and then select **Serverless API with Azure AI Content Safety (preview)** to open the deployment wizard.

@@ -466,7 +466,7 @@ In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.

 1. At any point, you can see the endpoints deployed to your project:

-# [AI Foundry](#tab/azure-ai-studio)
+# [AI Foundry portal](#tab/azure-ai-studio)

 1. Go to your project.

@@ -515,7 +515,7 @@ In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.

 1. The created endpoint uses key authentication for authorization. Use the following steps to get the keys associated with a given endpoint.

-# [AI Foundry](#tab/azure-ai-studio)
+# [AI Foundry portal](#tab/azure-ai-studio)

 You can select the deployment, and note the endpoint's _Target URI_ and _Key_. Use them to call the deployment and generate predictions.

@@ -573,7 +573,7 @@ To set the PNA flag for the Azure AI Foundry hub:

 You can delete model subscriptions and endpoints. Deleting a model subscription makes any associated endpoint become *Unhealthy* and unusable.

-# [AI Foundry](#tab/azure-ai-studio)
+# [AI Foundry portal](#tab/azure-ai-studio)

 To delete a serverless API endpoint:
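The article this diff touches says to copy the endpoint's *Target URI* and *Key* and use them to call the deployment. As a hedged sketch of that call (the endpoint URI below is hypothetical, and the exact payload schema depends on the deployed model; serverless API endpoints accept the key as a bearer token), here is a minimal request builder using only the Python standard library:

```python
import json
import urllib.request

def build_chat_request(target_uri: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completions request for a
    serverless API endpoint that uses key authentication.

    target_uri and api_key are the *Target URI* and *Key* values
    copied from the deployment details page.
    """
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,  # illustrative parameter; adjust per model
    }
    return urllib.request.Request(
        url=target_uri,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # The endpoint key goes in the Authorization header.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Hypothetical Target URI for the meta-llama3-8b-qwerty endpoint named above.
req = build_chat_request(
    "https://meta-llama3-8b-qwerty.eastus2.models.ai.azure.com/chat/completions",
    "<your-key>",
    "Hello",
)
# Sending it would be: urllib.request.urlopen(req)
```

Separating "build" from "send" keeps the credential handling inspectable; in practice you would read the key from a secret store rather than a literal.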

articles/ai-studio/how-to/healthcare-ai/deploy-cxrreportgen.md

Lines changed: 1 addition & 1 deletion

@@ -43,7 +43,7 @@ To use the CXRReportGen model, you need the following prerequisites:

 **Deployment to a self-hosted managed compute**

-CXRReportGen model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.
+CXRReportGen model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [Azure AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.

 To __deploy the model through the UI__:

articles/ai-studio/how-to/healthcare-ai/deploy-medimageinsight.md

Lines changed: 1 addition & 1 deletion

@@ -41,7 +41,7 @@ To use the MedImageInsight model, you need the following prerequisites:

 **Deployment to a self-hosted managed compute**

-MedImageInsight model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.
+MedImageInsight model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [Azure AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.

 To __deploy the model through the UI__:

articles/ai-studio/how-to/healthcare-ai/deploy-medimageparse.md

Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@ To use the MedImageParse model, you need the following prerequisites:

 **Deployment to a self-hosted managed compute**

-MedImageParse model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.
+MedImageParse model can be deployed to our self-hosted managed inference solution, which allows you to customize and control all the details about how the model is served. You can deploy the model through the catalog UI (in [Azure AI Foundry](https://aka.ms/healthcaremodelstudio) or [Azure Machine Learning studio](https://ml.azure.com/model/catalog)) or deploy programmatically.

 To __deploy the model through the UI__:
