@@ -87,17 +87,6 @@ You can also use the class `AzureAIChatCompletionsModel` directly.
[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_client)]


-```python
-import os
-from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
-
-model = AzureAIChatCompletionsModel(
-    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
-    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model="mistral-medium-2505",
-)
-```
-
> [!CAUTION]
> **Breaking change:** The parameter `model_name` was renamed to `model` in version `0.1.3`.

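If you're upgrading across this rename, the following before-and-after sketch shows the only change required (the endpoint, credential, and model values are placeholders):

```python
import os
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel

# langchain-azure-ai <= 0.1.2 used `model_name`:
# model = AzureAIChatCompletionsModel(..., model_name="mistral-medium-2505")

# From 0.1.3 onward, pass `model` instead:
model = AzureAIChatCompletionsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="mistral-medium-2505",
)
```
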
@@ -151,17 +140,6 @@ Let's first use the model directly. `ChatModels` are instances of LangChain `Run

[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=human_message)]

-```python
-from langchain_core.messages import HumanMessage, SystemMessage
-
-messages = [
-    SystemMessage(content="Translate the following from English into Italian"),
-    HumanMessage(content="hi!"),
-]
-
-model.invoke(messages)
-```
-
You can also compose operations as needed in **chains**. Let's now use a prompt template to translate sentences:

```python
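# A hedged sketch, not the article's verbatim code: the prompt takes `language`
# and `text` inputs, matching the chain.invoke({"language": ..., "text": ...})
# call shown below. The `prompt_template` name is an assumption.
from langchain_core.prompts import ChatPromptTemplate

prompt_template = ChatPromptTemplate.from_messages([
    ("system", "Translate the following from English into {language}"),
    ("user", "{text}"),
])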
@@ -178,11 +156,6 @@ As you can see from the prompt template, this chain has a `language` and `text`

[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_output_parser)]

-```python
-from langchain_core.output_parsers import StrOutputParser
-parser = StrOutputParser()
-```
-
We can now combine the template, model, and the output parser from above using the pipe (`|`) operator:

```python
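# Sketch under the assumed names above: the pipe operator feeds the prompt's
# output to the model and the model's output to the string parser.
chain = prompt_template | model | parser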
@@ -195,10 +168,6 @@ To invoke the chain, identify the inputs required and provide values using the `
chain.invoke({"language": "italian", "text": "hi"})
```

-```output
-'ciao'
-```
-
### Chaining multiple LLMs together

Models deployed to Azure AI Foundry support the Model Inference API, which is standard across all the models. Because the interface is the same across models, you can chain multiple LLM operations and route each step to the model whose capabilities best fit it.
@@ -207,21 +176,6 @@ In the following example, we create two model clients. One is a producer and ano

[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_producer_verifier)]

-```python
-from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
-
-producer = AzureAIChatCompletionsModel(
-    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
-    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model="mistral-medium-2505",
-)
-
-verifier = AzureAIChatCompletionsModel(
-    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
-    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-    model="mistral-small",
-)
-```

> [!TIP]
> Explore the model card of each model to understand its best use cases.
@@ -230,40 +184,12 @@ The following example generates a poem written by an urban poet:

[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=generate_poem)]

-```python
-from langchain_core.prompts import PromptTemplate
-
-producer_template = PromptTemplate(
-    template="You are an urban poet, your job is to come up with \
-verses based on a given topic.\n\
-Here is the topic you have been asked to generate a verse on:\n\
-{topic}",
-    input_variables=["topic"],
-)
-
-verifier_template = PromptTemplate(
-    template="You are a verifier of poems, you are tasked \
-to inspect the verses of a poem. If they contain violence or abusive language, \
-report it. Your response should be only one word, either True or False.\n\
-Here are the lyrics submitted to you:\n\
-{input}",
-    input_variables=["input"],
-)
-```
-
Now let's chain the pieces:

[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_chain)]

-```python
-chain = producer_template | producer | parser | verifier_template | verifier | parser
-```
-
The previous chain returns the output of the `verifier` step only. Since we also want to access the intermediate result generated by the `producer`, LangChain requires a `RunnablePassthrough` object to output that intermediate step as well.

-[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_chain_with_passthrough)]
-[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_multiple_outputs_chain)]
-
```python
from langchain_core.runnables import RunnablePassthrough, RunnableParallel

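# A hedged sketch rather than the article's verbatim code; the sub-chain names
# below are assumptions. The dict literal is shorthand for RunnableParallel,
# and RunnablePassthrough.assign keeps the producer's poem in the final output
# next to the verifier's verdict.
generate_poem = producer_template | producer | parser
verify_poem = verifier_template | verifier | parser

chain = {"input": generate_poem} | RunnablePassthrough.assign(verification=verify_poem)
# chain.invoke({"topic": ...}) returns both keys:
# {"input": "<the poem>", "verification": "<True or False>"}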
@@ -277,10 +203,6 @@ To invoke the chain, identify the inputs required and provide values using the `

[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=invoke_chain)]

-```python
-chain.invoke({"topic": "living in a foreign country"})
-```
-

## Use embeddings models

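Embeddings clients follow the same pattern as the chat clients above. A minimal sketch, assuming the `AzureAIEmbeddingsModel` class from the same `langchain_azure_ai` package and a placeholder embeddings deployment name:

```python
import os
from langchain_azure_ai.embeddings import AzureAIEmbeddingsModel

embed_model = AzureAIEmbeddingsModel(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
    model="text-embedding-3-large",  # placeholder deployment name
)
```

This `embed_model` is what backs the `InMemoryVectorStore` shown next.
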
@@ -318,7 +240,7 @@ vector_store = InMemoryVectorStore(embed_model)

Let's add some documents:

-[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name="add_documents)]
+[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=add_documents)]

```python
from langchain_core.documents import Document