@@ -87,17 +87,6 @@ You can also use the class `AzureAIChatCompletionsModel` directly.
[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_client)]
- ```python
- import os
- from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
-
- model = AzureAIChatCompletionsModel(
-     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
-     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-     model="mistral-medium-2505",
- )
- ```
-
> [!CAUTION]
> **Breaking change:** Parameter `model_name` was renamed `model` in version `0.1.3`.
@@ -151,17 +140,6 @@ Let's first use the model directly. `ChatModels` are instances of LangChain `Run
[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=human_message)]
- ```python
- from langchain_core.messages import HumanMessage, SystemMessage
-
- messages = [
-     SystemMessage(content="Translate the following from English into Italian"),
-     HumanMessage(content="hi!"),
- ]
-
- model.invoke(messages)
- ```
-
You can also compose operations as needed in **chains**. Let's now use a prompt template to translate sentences:
```python
@@ -178,11 +156,6 @@ As you can see from the prompt template, this chain has a `language` and `text`
[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_output_parser)]
- ```python
- from langchain_core.prompts import ChatPromptTemplate
- parser = StrOutputParser()
- ```
-
We can now combine the template, model, and the output parser from above using the pipe (`|`) operator:
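If you're curious how the pipe works, LangChain components implement Python's `__or__` method, so `a | b` returns a new component that feeds `a`'s output into `b`. Here is a minimal pure-Python sketch of that idea (illustrative names and stub components, not the actual LangChain implementation):

```python
class Runnable:
    """Toy stand-in for a LangChain Runnable: wraps a callable."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Compose: run self first, then pipe its result into `other`.
        return Runnable(lambda value: other.invoke(self.invoke(value)))


# Stub template, model, and parser standing in for the real components.
template = Runnable(lambda d: f"Translate to {d['language']}: {d['text']}")
fake_model = Runnable(lambda prompt: {"content": prompt.upper()})
parser = Runnable(lambda msg: msg["content"])

chain = template | fake_model | parser
print(chain.invoke({"language": "italian", "text": "hi"}))
# prints "TRANSLATE TO ITALIAN: HI"
```

The real prompt template, chat model, and `StrOutputParser` compose the same way: each step's output becomes the next step's input.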
```python
@@ -195,10 +168,6 @@ To invoke the chain, identify the inputs required and provide values using the `
chain.invoke({"language": "italian", "text": "hi"})
```
- ```output
- 'ciao'
- ```
-
### Chaining multiple LLMs together
Models deployed to Azure AI Foundry support the Model Inference API, which is standard across all the models. Chain multiple LLM operations based on the capabilities of each model so that each step runs on the model best suited for it.
@@ -207,21 +176,6 @@ In the following example, we create two model clients. One is a producer and ano
[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_producer_verifier)]
- ```python
- from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
-
- producer = AzureAIChatCompletionsModel(
-     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
-     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-     model="mistral-medium-2505",
- )
-
- verifier = AzureAIChatCompletionsModel(
-     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
-     credential=os.environ["AZURE_INFERENCE_CREDENTIAL"],
-     model="mistral-small",
- )
- ```
> [!TIP]
> Explore the model card of each of the models to understand the best use cases for each model.
@@ -230,40 +184,12 @@ The following example generates a poem written by an urban poet:
[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=generate_poem)]
- ```python
- from langchain_core.prompts import PromptTemplate
-
- producer_template = PromptTemplate(
-     template="You are an urban poet, your job is to come up \
- verses based on a given topic.\n\
- Here is the topic you have been asked to generate a verse on:\n\
- {topic}",
-     input_variables=["topic"],
- )
-
- verifier_template = PromptTemplate(
-     template="You are a verifier of poems, you are tasked\
- to inspect the verses of poem. If they consist of violence and abusive language\
- report it. Your response should be only one word either True or False.\n \
- Here is the lyrics submitted to you:\n\
- {input}",
-     input_variables=["input"],
- )
- ```
-
Now let's chain the pieces:
[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_chain)]
- ```python
- chain = producer_template | producer | parser | verifier_template | verifier | parser
- ```
-
The previous chain returns the output of the `verifier` step only. Since we also want to access the intermediate result generated by the `producer`, in LangChain you need to use a `RunnablePassthrough` object to output that intermediate step.
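Conceptually, `RunnablePassthrough` returns its input unchanged, while `RunnableParallel` runs several branches on the same input and collects the results in a dictionary. A minimal pure-Python sketch of that idea (illustrative names and stub functions, not the LangChain implementation):

```python
def passthrough(value):
    # Like RunnablePassthrough: return the input unchanged.
    return value


def parallel(**branches):
    # Like RunnableParallel: run every branch on the same input and
    # return a dict keyed by branch name.
    def run(value):
        return {name: fn(value) for name, fn in branches.items()}
    return run


verifier_stub = lambda poem: "True"  # stand-in for the verifier model call

step = parallel(poem=passthrough, verification=verifier_stub)
print(step("roses are red"))
# prints {'poem': 'roses are red', 'verification': 'True'}
```

In the real chain, the passthrough branch carries the producer's poem forward unchanged while the other branch sends it through the verifier.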
- [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_chain_with_passthrough)]
- [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=create_multiple_outputs_chain)]
-
```python
from langchain_core.runnables import RunnablePassthrough, RunnableParallel
@@ -277,10 +203,6 @@ To invoke the chain, identify the inputs required and provide values using the `
[!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-chat-models.ipynb?name=invoke_chain)]
- ```python
- chain.invoke({"topic": "living in a foreign country"})
- ```
-
## Use embeddings models
@@ -318,7 +240,7 @@ vector_store = InMemoryVectorStore(embed_model)
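Under the hood, a vector store ranks documents by how similar their embedding vectors are to the query vector, commonly using cosine similarity. A minimal pure-Python sketch of that ranking idea (toy three-dimensional vectors standing in for real embeddings, independent of LangChain and Azure):

```python
import math


def cosine_similarity(a, b):
    # Similarity between two embedding vectors: dot product over norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Toy "embeddings" standing in for real model output.
query = [1.0, 0.0, 0.0]
docs = {"doc_a": [0.9, 0.1, 0.0], "doc_b": [0.0, 1.0, 0.0]}

# Retrieval picks the document whose vector is closest to the query's.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)
# prints "doc_a"
```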
Let's add some documents:
- [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name="add_documents)]
+ [!notebook-python[](~/azureai-samples-main/scenarios/langchain/getting-started-with-langchain-embeddings.ipynb?name=add_documents)]
```python
from langchain_core.documents import Document