
Commit c90a1b4

Update langchain.md
1 parent 3dfb487 commit c90a1b4

File tree

1 file changed: +5 −5 lines changed


articles/ai-studio/how-to/develop/langchain.md

Lines changed: 5 additions & 5 deletions
@@ -97,7 +97,7 @@ model = AzureAIChatCompletionsModel(
 )
 ```

-If your endpoint support Microsoft Entra ID, you can use the following code to create the client:
+You can use the following code to create the client if your endpoint supports Microsoft Entra ID:

 ```python
 import os
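The hunk above cuts off after `import os`, so the Entra ID snippet itself is not visible here. The distinction the changed sentence draws, key-based auth versus a Microsoft Entra ID token credential, can be sketched with an illustrative stand-in (the real class is `AzureAIChatCompletionsModel` from `langchain_azure_ai.chat_models`; the parameter names, endpoint URL, and `FakeTokenCredential` below are assumptions for illustration, not copied from the article):

```python
# Illustrative stand-in for the chat client constructor. The real class is
# AzureAIChatCompletionsModel; attribute names here are assumptions.
class ClientSketch:
    def __init__(self, endpoint, credential):
        self.endpoint = endpoint
        self.credential = credential
        # A plain string is treated as an API key; any other object is
        # assumed to be a token credential (e.g. DefaultAzureCredential).
        self.uses_entra_id = not isinstance(credential, str)


class FakeTokenCredential:
    """Stand-in for a token credential such as azure.identity.DefaultAzureCredential."""

    def get_token(self, *scopes):
        return "fake-token"


# Key-based auth: pass the key string.
key_client = ClientSketch("https://example.models.ai.azure.com", "my-api-key")

# Entra ID auth: pass a credential object instead of a key.
entra_client = ClientSketch("https://example.models.ai.azure.com", FakeTokenCredential())
```

The point of the sketch is only the shape of the call: switching to Entra ID means swapping the key string for a credential object, not changing the rest of the client setup.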
@@ -182,7 +182,7 @@ chain.invoke({"language": "italian", "text": "hi"})

 Models deployed to Azure AI Foundry support the Azure AI model inference API, which is standard across all the models. Chain multiple LLM operations based on the capabilities of each model so you can optimize for the right model based on capabilities.

-In the following example, we create 2 model clients, one is a producer and another one is a verifier. To make the distinction clear, we are using a multi-model endpoint like the [Azure AI model inference service](../../ai-services/model-inference.md) and hence we are passing the parameter `model_name` to use a `Mistral-Large` and a `Mistral-Small` model, quoting the fact that **producing content is more complex than verifying it**.
+In the following example, we create two model clients, one is a producer and another one is a verifier. To make the distinction clear, we are using a multi-model endpoint like the [Azure AI model inference service](../../ai-services/model-inference.md) and hence we are passing the parameter `model_name` to use a `Mistral-Large` and a `Mistral-Small` model, quoting the fact that **producing content is more complex than verifying it**.

 ```python
 from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
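The producer/verifier split this hunk describes (a larger model drafts, a smaller model checks) is a plain function-composition pattern. A minimal sketch with stand-in callables in place of the two `AzureAIChatCompletionsModel` clients (all names below are illustrative, not from the article):

```python
# Stand-ins for the two model clients: a larger "producer" model drafts
# content and a smaller "verifier" model checks it, mirroring the
# Mistral-Large / Mistral-Small split described in the changed paragraph.
def producer(prompt: str) -> str:
    return f"draft: {prompt}"


def verifier(draft: str) -> str:
    return f"verified({draft})"


def chain(prompt: str) -> str:
    # Chain the two operations: the producer's output feeds the verifier,
    # so each step runs on the model sized for its task.
    return verifier(producer(prompt))


result = chain("write a poem")
```

In the real article this composition would be expressed with LangChain chaining over the two clients; the sketch only shows why the expensive model sits on the producing side and the cheaper one on the verifying side.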
@@ -335,9 +335,9 @@ llm = AzureAIChatCompletionsModel(
 )
 ```

-## Debugging and torubleshooting
+## Debugging and troubleshooting

-If you need to debug your application and understand which parameters are being sent to the models in Azure AI Foundry, you can use the debug capabilities of the integration as follows:
+If you need to debug your application and understand the requests sent to the models in Azure AI Foundry, you can use the debug capabilities of the integration as follows:

 First, configure logging to the level you are interested in:

@@ -418,7 +418,7 @@ tracer = AzureAIInferenceTracer(
 )
 ```

-To configure tracing with your chain, indicate the value config in the invoke operation as a callback:
+To configure tracing with your chain, indicate the value config in the `invoke` operation as a callback:

 ```python
 chain.invoke({"topic": "living in a foreign country"}, config={"callbacks": [tracer]})
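The `config={"callbacks": [tracer]}` mechanism in this final hunk can be sketched without the Azure tracer. The classes below are illustrative stand-ins for the real `AzureAIInferenceTracer` and the LangChain chain, showing only how a callback passed through `config` gets notified on `invoke`:

```python
# Minimal stand-in for a tracer callback and a chain that honors
# config={"callbacks": [...]} the way the article's invoke call does.
class RecordingTracer:
    def __init__(self):
        self.events = []

    def on_invoke(self, inputs):
        # The real AzureAIInferenceTracer would forward this to Azure;
        # here we just record what the chain was invoked with.
        self.events.append(inputs)


class SketchChain:
    def invoke(self, inputs, config=None):
        # Notify every registered callback, then produce the output.
        for cb in (config or {}).get("callbacks", []):
            cb.on_invoke(inputs)
        return f"response for {inputs['topic']}"


tracer = RecordingTracer()
chain = SketchChain()
out = chain.invoke(
    {"topic": "living in a foreign country"},
    config={"callbacks": [tracer]},
)
```

The key point the changed sentence makes: the tracer is not wired into the chain itself but handed to each `invoke` call via `config`, so the same chain can run traced or untraced.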
