Commit a0ad6ce

Update langchain.md
1 parent b72c048 commit a0ad6ce

File tree

1 file changed: +3, -19 lines

articles/ai-foundry/how-to/develop/langchain.md

Lines changed: 3 additions & 19 deletions
@@ -316,31 +316,15 @@ for doc in results:
 
 ## Using Azure OpenAI models
 
-If you're using Azure OpenAI in Foundry Models or Foundry Models service with OpenAI models with `langchain-azure-ai` package, you might need to use `api_version` parameter to select a specific API version. The following example shows how to connect to an Azure OpenAI in Foundry Models deployment:
+If you're using Azure OpenAI models with `langchain-azure-ai` package, use the following URL:
 
 ```python
 from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
 
 llm = AzureAIChatCompletionsModel(
-    endpoint="https://<resource>.openai.azure.com/openai/deployments/<deployment-name>",
+    endpoint="https://<resource>.openai.azure.com/openai/v1",
     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    api_version="2024-05-01-preview",
-)
-```
-
-> [!IMPORTANT]
-> Check which is the API version that your deployment is using. Using a wrong `api_version` or one not supported by the model results in a `ResourceNotFound` exception.
-
-If the deployment is hosted in Azure AI Services, you can use the Foundry Models service:
-
-```python
-from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
-
-llm = AzureAIChatCompletionsModel(
-    endpoint="https://<resource>.services.ai.azure.com/models",
-    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model="<model-name>",
-    api_version="2024-05-01-preview",
+    model="gpt-4o"
 )
 ```
 
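For context, the snippet added by this commit relies on `os` being imported earlier in the article. A minimal self-contained sketch of the resulting code is shown below; it assumes `AZURE_INFERENCE_CREDENTIAL` holds an API key for the resource and that `gpt-4o` names an existing deployment, and the trailing `invoke` call is only an illustrative smoke test, not part of the article's snippet.

```python
import os

from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# Endpoint shape introduced by this commit: the /openai/v1 route of the
# Azure OpenAI resource, with the deployment selected via `model`.
llm = AzureAIChatCompletionsModel(
    endpoint="https://<resource>.openai.azure.com/openai/v1",
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],  # API key, assumed to be set
    model="gpt-4o",  # assumed deployment/model name
)

# Illustrative smoke test: LangChain chat models accept a plain string
# through the standard Runnable `invoke` method.
response = llm.invoke("Say hello in one short sentence.")
print(response.content)
```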
