learn-pr/wwl-data-ai/ai-foundry-sdk/includes/02-azure-ai-foundry-sdk.md (3 additions, 3 deletions)
@@ -13,7 +13,7 @@ The core package for working with projects in the Azure AI Foundry SDK is the **
To use the Azure AI Projects library in Python, you can use the **pip** package installation utility to install the **azure-ai-projects** package from PyPI:

```
pip install azure-ai-projects
```
@@ -23,7 +23,7 @@ pip install azure-ai-projects
To use the Azure AI Projects library in C#, add the **Azure.AI.Projects** package to your C# project:

```
dotnet add package Azure.AI.Projects --prerelease
```
@@ -87,7 +87,7 @@ var projectClient = new AIProjectClient(
> [!NOTE]
> The code uses the default Azure credentials to authenticate when accessing the project. To enable this authentication, in addition to the **Azure.AI.Projects** package, you need to install the **Azure.Identity** package:
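For example, you can add the package to a C# project by using the **dotnet** CLI:

```
dotnet add package Azure.Identity
```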
A common scenario in an AI application is to connect to a generative AI model and use *prompts* to engage in a chat-based dialog with it.

While you can use the Azure OpenAI SDK to connect "directly" to a model using key-based or Microsoft Entra ID authentication, when your model is deployed in an Azure AI Foundry project you can also use the Azure AI Foundry SDK to retrieve a project client, from which you can then get an authenticated OpenAI chat client for any model deployed in the project's Azure AI Foundry resource. This approach makes it easy to write code that consumes the models deployed in your project, switching between them by simply changing the model deployment name parameter.

> [!TIP]
> You can use the OpenAI chat client provided by an Azure AI Foundry project to chat with any model deployed in the associated Azure AI Foundry resource - even non-OpenAI models, such as Microsoft Phi models.
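Because the target model is selected by a single deployment name parameter, the same OpenAI chat client can be pointed at different deployments. The following minimal Python sketch illustrates the idea; it assumes an **openai_client** obtained from the project as shown in the samples that follow, and both deployment names ("gpt-4o-model" and "phi-4-model") are illustrative:

```python
# Assumed: openai_client was obtained from the project client, as shown
# in the samples that follow. Deployment names here are illustrative.
for deployment in ["gpt-4o-model", "phi-4-model"]:
    response = openai_client.chat.completions.create(
        model=deployment,  # switch models by changing the deployment name
        messages=[{"role": "user", "content": "What is the capital of France?"}]
    )
    print(f"{deployment}: {response.choices[0].message.content}")
```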
::: zone pivot="python"

The following Python code sample uses the **get_azure_openai_client()** method in the Azure AI project's **inference** operations object to get an OpenAI client with which to chat with a model that has been deployed in the project's Azure AI Foundry resource.
```python
# Get a chat completion based on a user-provided prompt
user_prompt = input("Enter a question: ")

response = openai_client.chat.completions.create(
    model=model_deployment_name,
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": user_prompt}
    ]
)
print(response.choices[0].message.content)
```
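The fragment above assumes that **openai_client** and **model_deployment_name** have already been defined. As a minimal sketch of the preceding steps, assuming a version of the **azure-ai-projects** package that exposes the **inference** operations described above (along with the **azure-identity** and **openai** packages), and using placeholder values for the endpoint, deployment name, and API version:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# Placeholder values - replace with your own project endpoint and deployment name
project_endpoint = "https://<your-resource>.services.ai.azure.com/api/projects/<your-project>"
model_deployment_name = "gpt-4o-model"

# Connect to the Azure AI Foundry project using the default Azure credentials
project_client = AIProjectClient(
    endpoint=project_endpoint,
    credential=DefaultAzureCredential()
)

# Get an authenticated OpenAI client for the project's Azure AI Foundry resource
# (requires the openai package to be installed)
openai_client = project_client.inference.get_azure_openai_client(
    api_version="2024-10-21"  # example API version
)
```

Note that the endpoint format and client constructor shown here assume a recent version of the **azure-ai-projects** package; earlier preview versions connected by using a project connection string instead.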
@@ -103,14 +52,13 @@ except Exception as ex:
::: zone pivot="csharp"

The following C# code sample uses the **GetAzureOpenAIChatClient()** method of the Azure AI project object to get an OpenAI client with which to chat with a model that has been deployed in the project's Azure AI Foundry resource.

> [!NOTE]
> In addition to the **Azure.AI.Projects** and **Azure.Identity** packages discussed previously, the sample code shown here assumes that the **Azure.AI.OpenAI** package has been installed:
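You can add that package with the **dotnet** CLI:

```
dotnet add package Azure.AI.OpenAI
```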