1. Go to the [Azure AI Foundry](https://ai.azure.com/).
1. Open the project where the model is deployed, if it isn't already open.
1. Go to **Models + endpoints** and select the model you deployed as indicated in the prerequisites.
1. Copy the endpoint URL and the key.
:::image type="content" source="../../media/how-to/inference/serverless-endpoint-url-keys.png" alt-text="Screenshot of the option to copy endpoint URI and keys from an endpoint." lightbox="../../media/how-to/inference/serverless-endpoint-url-keys.png":::
In this scenario, we placed both the endpoint URL and key in the following environment variables:
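The variable names below are a convention of this example, not something the portal requires; any names work as long as your code reads the same ones:

```bash
export AZURE_INFERENCE_ENDPOINT="<your-endpoint-url>"
export AZURE_INFERENCE_CREDENTIAL="<your-api-key>"
```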
Once configured, create a client to connect with the chat model by using the `init_chat_model` function. For Azure OpenAI models, configure the client as indicated at [Using Azure OpenAI models](#using-azure-openai-models).
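Here's a minimal sketch of that setup, assuming the environment variables above and an illustrative deployment name of `mistral-large-2407` (replace it with the name of the model you deployed in your project):

```python
import os

from langchain.chat_models import init_chat_model

# The "azure_ai" provider resolves to the chat model class from the
# langchain-azure-ai package; extra keyword arguments are forwarded to it.
model = init_chat_model(
    "mistral-large-2407",  # illustrative deployment name
    model_provider="azure_ai",
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
)

# Quick check: send one prompt and print the model's reply.
print(model.invoke("What is the capital of France?").content)
```

This approach requires the `langchain-azure-ai` package to be installed alongside `langchain`.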