
Commit a2a1217

switch to gh code - test
1 parent 7a50a25 commit a2a1217

File tree

2 files changed: +36 -11 lines changed


.openpublishing.publish.config.json

Lines changed: 1 addition & 1 deletion
@@ -113,7 +113,7 @@
 {
   "path_to_root": "azureai-samples-main",
   "url": "https://github.com/Azure-Samples/azureai-samples",
-  "branch": "main",
+  "branch": "sgilley-doc-updates",
   "branch_mapping": {}
 },
 {

articles/ai-foundry/how-to/develop/langchain.md

Lines changed: 35 additions & 10 deletions
@@ -66,7 +66,7 @@ To use LLMs deployed in Azure AI Foundry portal, you need the endpoint and crede
 In this scenario, we placed both the endpoint URL and key in the following environment variables.
 
 > [!TIP]
-> The endpoint you copied might have extra text after /models. Delete that and stop ad /models as shown here.
+> The endpoint you copied might have extra text after /models. Delete that and stop at /models as shown here.
 
 ```bash
 export AZURE_INFERENCE_ENDPOINT="https://<resource>.services.ai.azure.com/models"
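A quick, hypothetical way to verify the tip in code (not part of the article; the credential variable name comes from a later hunk in this same diff):

```python
import os

# Per the tip: the endpoint should stop at /models.
endpoint = os.environ["AZURE_INFERENCE_ENDPOINT"]
assert endpoint.endswith("/models"), "Trim any extra text after /models"
credential = os.environ["AZURE_INFERENCE_CREDENTIAL"]  # raises KeyError if unset
```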
@@ -83,6 +83,9 @@ llm = init_chat_model(model="mistral-medium-2505", model_provider="azure_ai")
 
 You can also use the class `AzureAIChatCompletionsModel` directly.
 
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_client)]
+
+
 ```python
 import os
 from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
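The hunk cuts off after the imports; a minimal sketch of the client construction it likely leads into, assuming the environment variables above and the `mistral-medium-2505` deployment named in the hunk header:

```python
import os
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# Sketch under assumptions: endpoint and credential come from the env vars set earlier.
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="mistral-medium-2505",
)
```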
@@ -145,6 +148,8 @@ model = AzureAIChatCompletionsModel(
 
 Let's first use the model directly. `ChatModels` are instances of LangChain `Runnable`, which means they expose a standard interface for interacting with them. To call the model, we can pass in a list of messages to the `invoke` method.
 
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=human_message)]
+
 ```python
 from langchain_core.messages import HumanMessage, SystemMessage
 
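The `invoke` call itself falls outside the hunk; a plausible sketch, with hypothetical message contents:

```python
from langchain_core.messages import HumanMessage, SystemMessage

# Hypothetical messages; the sample notebook's actual text may differ.
# `model` is the chat client created earlier.
messages = [
    SystemMessage(content="Translate the following from English into Italian."),
    HumanMessage(content="hi!"),
]
response = model.invoke(messages)
print(response.content)
```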
@@ -170,6 +175,8 @@ prompt_template = ChatPromptTemplate.from_messages(
 
 As you can see from the prompt template, this chain has a `language` and `text` input. Now, let's create an output parser:
 
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_output_parser)]
+
 ```python
 from langchain_core.prompts import ChatPromptTemplate
 parser = StrOutputParser()
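For context, the chain these pieces compose into likely looks as follows. The template text is hypothetical (only the `language` and `text` inputs are confirmed), while the final `invoke` call is taken verbatim from the next hunk header:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Hypothetical template body; the inputs `language` and `text` match the article.
prompt_template = ChatPromptTemplate.from_messages(
    [("system", "Translate the following into {language}:"), ("user", "{text}")]
)
parser = StrOutputParser()

chain = prompt_template | model | parser
chain.invoke({"language": "italian", "text": "hi"})
```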
@@ -195,7 +202,9 @@ chain.invoke({"language": "italian", "text": "hi"})
 
 Models deployed to Azure AI Foundry support the Foundry Models API, which is standard across all the models. Chain multiple LLM operations based on the capabilities of each model so you can optimize for the right model based on capabilities.
 
-In the following example, we create two model clients. One is a producer and another one is a verifier. To make the distinction clear, we're using a multi-model endpoint like the [Foundry Models API](../../model-inference/overview.md) and hence we're passing the parameter `model` to use a `Mistral-Large` and a `Mistral-Small` model, quoting the fact that **producing content is more complex than verifying it**.
+In the following example, we create two model clients. One is a producer and another one is a verifier. To make the distinction clear, we're using a multi-model endpoint like the [Foundry Models API](../../model-inference/overview.md) and hence we're passing the parameter `model` to use a `Mistral-Medium` and a `Mistral-Small` model, quoting the fact that **producing content is more complex than verifying it**.
+
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_producer_verifier)]
 
 ```python
 from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
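A sketch of the two clients this paragraph describes, assuming the constructor shape shown earlier; the `Mistral-Small` deployment name is a placeholder:

```python
import os
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

producer = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="mistral-medium-2505",  # the more capable model writes the content
)
verifier = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="mistral-small",  # placeholder deployment name; verifying is the simpler task
)
```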
@@ -218,6 +227,8 @@ verifier = AzureAIChatCompletionsModel(
 
 The following example generates a poem written by an urban poet:
 
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=generate_poem)]
+
 ```python
 from langchain_core.prompts import PromptTemplate
 
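The prompt templates themselves are truncated out of the diff; hypothetical stand-ins, consistent with the `topic` input used later and the `verifier_template` name in the next hunk header:

```python
from langchain_core.prompts import PromptTemplate

# Hypothetical template texts; the real ones live in the sample notebook.
producer_template = PromptTemplate.from_template(
    "You are an urban poet. Write a short poem about: {topic}"
)
verifier_template = PromptTemplate.from_template(
    "Reply true or false only: is the following text a poem?\n\n{input}"
)
```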
@@ -241,11 +252,16 @@ verifier_template = PromptTemplate(
 
 Now let's chain the pieces:
 
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_chain)]
+
 ```python
 chain = producer_template | producer | parser | verifier_template | verifier | parser
 ```
 
-The previous chain returns the output of the step `verifier` only. Since we want to access the intermediate result generated by the `producer`, in LangChain you need to use a `RunnablePassthrough` object to also output that intermediate step. The following code shows how to do it:
+The previous chain returns the output of the step `verifier` only. Since we want to access the intermediate result generated by the `producer`, in LangChain you need to use a `RunnablePassthrough` object to also output that intermediate step.
+
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_chain_with_passthrough)]
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_multiple_outputs_chain)]
 
 ```python
 from langchain_core.runnables import RunnablePassthrough, RunnableParallel
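The completed chain is cut off mid-line in the next hunk header; a plausible reconstruction, where `verification` completes the truncated `verificatio…` and matches the key in the deleted output block below:

```python
from langchain_core.runnables import RunnablePassthrough, RunnableParallel

# Assumes producer, verifier, parser, and the two templates from the sketches above.
generate_poem = producer_template | producer | parser
verify_poem = verifier_template | verifier | parser

# Keep the producer's poem alongside the verifier's verdict.
chain = generate_poem | RunnableParallel(
    poem=RunnablePassthrough(), verification=verify_poem
)
```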
@@ -258,16 +274,12 @@ chain = generate_poem | RunnableParallel(poem=RunnablePassthrough(), verificatio
 
 To invoke the chain, identify the inputs required and provide values using the `invoke` method:
 
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=invoke_chain)]
+
 ```python
 chain.invoke({"topic": "living in a foreign country"})
 ```
 
-```output
-{
-  "peom": "...",
-  "verification: "false"
-}
-```
 
 ## Use embeddings models
 
@@ -280,6 +292,9 @@ export AZURE_INFERENCE_CREDENTIAL="<your-key-goes-here>"
 
 Then create the client:
 
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=create_embed_model_client)]
+
+
 ```python
 from langchain_azure_ai.embeddings import AzureAIEmbeddingsModel
 
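A sketch of the embeddings client the truncated block likely builds, assuming the same env vars and the same parameter spelling as the chat client; the deployment name is a placeholder:

```python
import os
from langchain_azure_ai.embeddings import AzureAIEmbeddingsModel

embed_model = AzureAIEmbeddingsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="text-embedding-3-large",  # placeholder embeddings deployment name
)
```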
@@ -292,6 +307,8 @@ embed_model = AzureAIEmbeddingsModel(
 
 The following example shows a simple example using a vector store in memory:
 
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=create_vector_store)]
+
 ```python
 from langchain_core.vectorstores import InMemoryVectorStore
 
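The next hunk header confirms the store construction; shown here with its import for completeness:

```python
from langchain_core.vectorstores import InMemoryVectorStore

# In-memory store backed by the embeddings client created above.
vector_store = InMemoryVectorStore(embed_model)
```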
@@ -300,6 +317,8 @@ vector_store = InMemoryVectorStore(embed_model)
 
 Let's add some documents:
 
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=add_documents)]
+
 ```python
 from langchain_core.documents import Document
 
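The document list itself is truncated out; hypothetical sample documents, kept consistent with the `"thud"` query used in the next hunk:

```python
from langchain_core.documents import Document

# Hypothetical sample content; the notebook's documents may differ.
documents = [
    Document(page_content="foo bar baz", metadata={"source": "demo"}),
    Document(page_content="thud is a strange word", metadata={"source": "demo"}),
]
vector_store.add_documents(documents=documents)
```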
@@ -312,6 +331,8 @@ vector_store.add_documents(documents=documents)
 
 Let's search by similarity:
 
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=search_similarity)]
+
 ```python
 results = vector_store.similarity_search(query="thud",k=1)
 for doc in results:
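The loop body falls outside the hunk; a plausible completion using standard `Document` fields:

```python
results = vector_store.similarity_search(query="thud", k=1)
for doc in results:
    # page_content and metadata are standard LangChain Document fields.
    print(f"* {doc.page_content} [{doc.metadata}]")
```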
@@ -338,6 +359,8 @@ If you need to debug your application and understand the requests sent to the mo
 
 First, configure logging to the level you are interested in:
 
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=configure_logging)]
+
 ```python
 import sys
 import logging
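A sketch of a logging setup consistent with these imports and the `handler.setFormatter(formatter)` line in the next hunk header; the logger name and format string are assumptions:

```python
import sys
import logging

# Assumed logger name: "azure" covers the Azure SDK pipeline used underneath.
logger = logging.getLogger("azure")
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler(stream=sys.stdout)
formatter = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
handler.setFormatter(formatter)
logger.addHandler(handler)
```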
@@ -362,6 +385,8 @@ handler.setFormatter(formatter)
 
 To see the payloads of the requests, when instantiating the client, pass the argument `logging_enable`=`True` to the `client_kwargs`:
 
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_client_with_logging)]
+
 ```python
 import os
 from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
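A sketch of the client with payload logging enabled, passing `logging_enable` through `client_kwargs` exactly as the paragraph describes; the other parameters follow the earlier pattern:

```python
import os
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="mistral-medium-2505",
    client_kwargs={"logging_enable": True},  # log request/response payloads
)
```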
@@ -400,7 +425,7 @@ You can configure your application to send telemetry to Azure Application Insigh
 application_insights_connection_string = "instrumentation...."
 ```
 
-2. Using the Azure AI Foundry SDK and the project connection string.
+2. Using the Azure AI Foundry SDK and the project connection string ([!INCLUDE [hub-project-name](../../includes/hub-project-name.md)]s only).
 
     1. Ensure you have the package `azure-ai-projects` installed in your environment.
 