
Commit 0a9f368

Update AI Foundry branding (Azure#38947)
* update azure ai foundry branding
* restore AI projects files
1 parent 51870e8 commit 0a9f368

5 files changed: +9 / -10 lines changed


sdk/ai/azure-ai-generative/azure/ai/generative/evaluate/_evaluate.py

Lines changed: 1 addition & 1 deletion
@@ -168,7 +168,7 @@ def evaluate(
     :paramtype data_mapping: Optional[Dict[str, str]]
     :keyword output_path: The local folder path to save evaluation artifacts to if set
     :paramtype output_path: Optional[str]
-    :keyword tracking_uri: Tracking uri to log evaluation results to AI Studio
+    :keyword tracking_uri: Tracking uri to log evaluation results to AI Foundry
     :paramtype tracking_uri: Optional[str]
     :return: A EvaluationResult object.
     :rtype: ~azure.ai.generative.evaluate.EvaluationResult

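For context on the hunk above, a minimal sketch of how these keyword arguments might be passed to `evaluate` follows. Only `data_mapping`, `output_path`, and `tracking_uri` appear in the docstring fragment; the remaining arguments are illustrative assumptions, not taken from this commit.

```python
# Minimal sketch only: "evaluation_name", "data", and "task_type" are assumed
# placeholders for illustration and are not confirmed by the diff above.
from azure.ai.generative.evaluate import evaluate

result = evaluate(
    evaluation_name="my-evaluation",        # assumed argument (illustration only)
    data="eval_data.jsonl",                 # assumed argument (illustration only)
    task_type="qa",                         # assumed argument (illustration only)
    data_mapping={"question": "question", "answer": "answer"},
    output_path="./evaluation_artifacts",   # local folder for evaluation artifacts
    tracking_uri="<your-tracking-uri>",     # logs evaluation results to AI Foundry
)
print(result)  # ~azure.ai.generative.evaluate.EvaluationResult
```
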
sdk/ai/azure-ai-inference/README.md

Lines changed: 4 additions & 5 deletions
@@ -11,8 +11,8 @@ Use the Inference client library (in preview) to:
 The Inference client library supports AI models deployed to the following services:
 
 * [GitHub Models](https://github.com/marketplace/models) - Free-tier endpoint for AI models from different providers
-* Serverless API endpoints and Managed Compute endpoints - AI models from different providers deployed from [Azure AI Studio](https://ai.azure.com). See [Overview: Deploy models, flows, and web apps with Azure AI Studio](https://learn.microsoft.com/azure/ai-studio/concepts/deployments-overview).
-* Azure OpenAI Service - OpenAI models deployed from [Azure OpenAI Studio](https://oai.azure.com/). See [What is Azure OpenAI Service?](https://learn.microsoft.com/azure/ai-services/openai/overview). Although we recommend you use the official [OpenAI client library](https://pypi.org/project/openai/) in your production code for this service, you can use the Azure AI Inference client library to easily compare the performance of OpenAI models to other models, using the same client library and Python code.
+* Serverless API endpoints and Managed Compute endpoints - AI models from different providers deployed from [Azure AI Foundry](https://ai.azure.com). See [Overview: Deploy models, flows, and web apps with Azure AI Foundry](https://learn.microsoft.com/azure/ai-studio/concepts/deployments-overview).
+* Azure OpenAI Service - OpenAI models deployed from [Azure AI Foundry](https://oai.azure.com/). See [What is Azure OpenAI Service?](https://learn.microsoft.com/azure/ai-services/openai/overview). Although we recommend you use the official [OpenAI client library](https://pypi.org/project/openai/) in your production code for this service, you can use the Azure AI Inference client library to easily compare the performance of OpenAI models to other models, using the same client library and Python code.
 
 The Inference client library makes services calls using REST API version `2024-05-01-preview`, as documented in [Azure AI Model Inference API](https://aka.ms/azureai/modelinference).
 
@@ -27,18 +27,17 @@ The Inference client library makes services calls using REST API version `2024-0
 ### Prerequisites
 
 * [Python 3.8](https://www.python.org/) or later installed, including [pip](https://pip.pypa.io/en/stable/).
-Studio.
 * For GitHub models
   * The AI model name, such as "gpt-4o" or "mistral-large"
   * A GitHub personal access token. [Create one here](https://github.com/settings/tokens). You do not need to give any permissions to the token. The token is a string that starts with `github_pat_`.
 * For Serverless API endpoints or Managed Compute endpoints
   * An [Azure subscription](https://azure.microsoft.com/free).
-  * An [AI Model from the catalog](https://ai.azure.com/explore/models) deployed through Azure AI Studio.
+  * An [AI Model from the catalog](https://ai.azure.com/explore/models) deployed through Azure AI Foundry.
   * The endpoint URL of your model, in of the form `https://<your-host-name>.<your-azure-region>.models.ai.azure.com`, where `your-host-name` is your unique model deployment host name and `your-azure-region` is the Azure region where the model is deployed (e.g. `eastus2`).
   * Depending on your authentication preference, you either need an API key to authenticate against the service, or Entra ID credentials. The API key is a 32-character string.
 * For Azure OpenAI (AOAI) service
   * An [Azure subscription](https://azure.microsoft.com/free).
-  * An [OpenAI Model from the catalog](https://oai.azure.com/resource/models) deployed through Azure OpenAI Studio.
+  * An [OpenAI Model from the catalog](https://oai.azure.com/resource/models) deployed through Azure AI Foundry.
   * The endpoint URL of your model, in the form `https://<your-resouce-name>.openai.azure.com/openai/deployments/<your-deployment-name>`, where `your-resource-name` is your globally unique AOAI resource name, and `your-deployment-name` is your AI Model deployment name.
   * Depending on your authentication preference, you either need an API key to authenticate against the service, or Entra ID credentials. The API key is a 32-character string.
   * An api-version. Latest preview or GA version listed in the `Data plane - inference` row in [the API Specs table](https://aka.ms/azsdk/azure-ai-inference/azure-openai-api-versions). At the time of writing, latest GA version was "2024-06-01".

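As a companion to the prerequisites in this hunk, here is a minimal sketch of calling a Serverless API or Managed Compute endpoint with key authentication via the `azure-ai-inference` package; the endpoint and key values are placeholders following the URL form described above.

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for a Serverless API / Managed Compute deployment.
client = ChatCompletionsClient(
    endpoint="https://<your-host-name>.<your-azure-region>.models.ai.azure.com",
    credential=AzureKeyCredential("<your-api-key>"),
)

# Send a simple chat request and print the model's reply.
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="How many feet are in a mile?"),
    ]
)
print(response.choices[0].message.content)
```
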
sdk/ai/azure-ai-inference/tests/README.md

Lines changed: 2 additions & 2 deletions
@@ -4,13 +4,13 @@ The instructions below are for running tests locally, on a Windows machine, agai
 
 ## Prerequisites
 
-The live tests were written against the AI models mentioned below. You will need to deploy these two in [Azure AI Studio](https://ai.azure.com/) and have the endpoint and key for each one of them.
+The live tests were written against the AI models mentioned below. You will need to deploy these two in [Azure AI Foundry](https://ai.azure.com/) and have the endpoint and key for each one of them.
 
 - `Mistral-Large` for chat completion tests, including tool tests
 - `Cohere-embed-v3-english` for embedding tests
 <!-- - `TBD` for image generation tests -->
 
-In addition, you will need to deploy a gpt-4o model in the Azure OpenAI Studio, and have the endpoint and key for it:
+In addition, you will need to deploy a gpt-4o model in the Azure AI Foundry, and have the endpoint and key for it:
 
 - `gpt-4o` on Azure OpenAI (AOAI), for chat completions tests with image input
 

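Before running the live tests, it can help to sanity-check a deployment by hand. The following is a minimal sketch against the `Cohere-embed-v3-english` endpoint; the environment variable names are assumptions for this sketch, not necessarily the ones the test framework itself reads.

```python
import os

from azure.ai.inference import EmbeddingsClient
from azure.core.credentials import AzureKeyCredential

# Assumed environment variable names (illustration only): point them at the
# Cohere-embed-v3-english deployment's endpoint and key.
client = EmbeddingsClient(
    endpoint=os.environ["EMBEDDINGS_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["EMBEDDINGS_KEY"]),
)

# Request embeddings for two phrases and print each vector's length.
response = client.embed(input=["first phrase", "second phrase"])
for item in response.data:
    print(len(item.embedding))
```
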
sdk/ai/azure-ai-resources/README.md

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ For a more complete set of Azure libraries, see https://aka.ms/azsdk/python/all.
 - Python 3.7 or later is required to use this package.
 - You must have an [Azure subscription][azure_subscription].
 - An [Azure Machine Learning Workspace][workspace].
-- An [Azure AI Studio project][ai_project].
+- An [Azure AI Foundry project][ai_project].
 
 ### Install the package
 Install the Azure AI generative package for Python with pip:

sdk/ai/azure-ai-resources/azure/ai/resources/client/_ai_client.py

Lines changed: 1 addition & 1 deletion
@@ -160,7 +160,7 @@ def from_config(
     ) -> "AIClient":
         """Returns a client from an existing Azure AI project using a file configuration.
 
-        To get the required details, you can go to the Project Overview page in the AI Studio.
+        To get the required details, you can go to the Project Overview page in AI Foundry.
 
         You can save a project's details in a JSON configuration file using this format:
 

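For reference, a minimal sketch of calling `from_config` as documented in the hunk above; the `path` keyword and the `config.json` file name are assumptions for illustration, and `DefaultAzureCredential` is one possible credential type.

```python
from azure.ai.resources.client import AIClient
from azure.identity import DefaultAzureCredential

# Assumes a project configuration JSON (here named "config.json") downloaded
# from the project's Overview page in AI Foundry; the "path" keyword is an
# assumption for illustration.
client = AIClient.from_config(
    credential=DefaultAzureCredential(),
    path="config.json",
)
```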