@@ -29,6 +29,10 @@ In this article, you learn how to use [Semantic Kernel](/semantic-kernel/overvie
   ```bash
   pip install semantic-kernel
   ```
+- In this example, we are working with the Azure AI model inference API, so we also install the relevant Azure dependencies. You can do it with:
+  ```bash
+  pip install semantic-kernel[azure]
+  ```
 
 ## Configure the environment
 
@@ -148,7 +152,7 @@ Alternatively, you can stream the response from the service:
 chat_history = ChatHistory()
 chat_history.add_user_message("Hello, how are you?")
 
-response = chat_completion.get_streaming_chat_message_content(
+response = chat_completion_service.get_streaming_chat_message_content(
     chat_history=chat_history,
     settings=execution_settings,
 )
@@ -167,7 +171,7 @@ You can create a long-running conversation by using a loop:
 
 ```python
 while True:
-    response = await chat_completion.get_chat_message_content(
+    response = await chat_completion_service.get_chat_message_content(
         chat_history=chat_history,
         settings=execution_settings,
     )
@@ -180,7 +184,7 @@ If you're streaming the response, you can use the following code:
 
 ```python
 while True:
-    response = chat_completion.get_streaming_chat_message_content(
+    response = chat_completion_service.get_streaming_chat_message_content(
         chat_history=chat_history,
         settings=execution_settings,
     )
@@ -209,7 +213,7 @@ The following code shows how to get embeddings from the service:
 
 ```python
 embeddings = await embedding_generation_service.generate_embeddings(
-    text=["My favorite color is blue.", "I love to eat pizza."],
+    texts=["My favorite color is blue.", "I love to eat pizza."],
 )
 
 for embedding in embeddings: