@@ -29,6 +29,10 @@ In this article, you learn how to use [Semantic Kernel](/semantic-kernel/overvie
 ```bash
 pip install semantic-kernel
 ```
+- This example uses the Azure AI model inference API, so also install the relevant Azure dependencies:
+  ```bash
+  pip install semantic-kernel[azure]
+  ```
 
 ## Configure the environment
 
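Before creating the service, the connector's configuration has to be available to the process. As a minimal sketch, assuming the connector reads its endpoint and key from environment variables (the variable names `AZURE_AI_INFERENCE_ENDPOINT` and `AZURE_AI_INFERENCE_API_KEY` are assumptions for illustration, not confirmed by this diff):

```python
import os

# Hypothetical variable names -- check the connector's settings class for the
# exact names it reads; these are assumptions for illustration only.
os.environ.setdefault(
    "AZURE_AI_INFERENCE_ENDPOINT",
    "https://<resource>.services.ai.azure.com/models",
)
os.environ.setdefault("AZURE_AI_INFERENCE_API_KEY", "<your-api-key>")

# The service constructor would then pick these up at creation time.
endpoint = os.environ["AZURE_AI_INFERENCE_ENDPOINT"]
```

In practice these values come from a `.env` file or the deployment environment rather than being set in code.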
@@ -148,7 +152,7 @@ Alternatively, you can stream the response from the service:
 chat_history = ChatHistory()
 chat_history.add_user_message("Hello, how are you?")
 
-response = chat_completion.get_streaming_chat_message_content(
+response = chat_completion_service.get_streaming_chat_message_content(
     chat_history=chat_history,
     settings=execution_settings,
 )
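The renamed call returns a streamed response that is consumed chunk by chunk. As a minimal sketch of that consumption pattern, using a stand-in async generator in place of `chat_completion_service.get_streaming_chat_message_content` so the snippet runs without the service:

```python
import asyncio

async def fake_streaming_chat(message: str):
    # Stand-in for the streaming service call: yields the reply
    # a few tokens at a time, like a real streaming API would.
    for chunk in ["I'm ", "doing ", "well, ", "thanks!"]:
        yield chunk

async def main() -> str:
    parts = []
    # Accumulate streamed chunks into the full assistant reply,
    # printing each chunk as it arrives.
    async for chunk in fake_streaming_chat("Hello, how are you?"):
        parts.append(chunk)
        print(chunk, end="")
    return "".join(parts)

reply = asyncio.run(main())
```

The real service yields message-content objects rather than plain strings, but the `async for` accumulation pattern is the same.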
@@ -167,7 +171,7 @@ You can create a long-running conversation by using a loop:
 
 ```python
 while True:
-    response = await chat_completion.get_chat_message_content(
+    response = await chat_completion_service.get_chat_message_content(
         chat_history=chat_history,
         settings=execution_settings,
     )
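Each loop iteration appends the user turn to the history, calls the service with the full history, and appends the assistant's reply, so later turns see earlier context. A minimal sketch of one iteration, with a stand-in for `chat_completion_service.get_chat_message_content` and a plain list in place of `ChatHistory`:

```python
import asyncio

history: list[tuple[str, str]] = []  # (role, text) pairs, standing in for ChatHistory

async def fake_get_chat_message_content(chat_history: list[tuple[str, str]]) -> str:
    # Stand-in for the real service call: replies with the number of user
    # turns seen so far, so the accumulating state is observable.
    user_turns = sum(1 for role, _ in chat_history if role == "user")
    return f"reply #{user_turns}"

async def chat_once(user_text: str) -> str:
    # One iteration of the `while True` loop: record the user turn,
    # call the service with the full history, record the assistant turn.
    history.append(("user", user_text))
    reply = await fake_get_chat_message_content(history)
    history.append(("assistant", reply))
    return reply

first = asyncio.run(chat_once("Hello"))
second = asyncio.run(chat_once("Tell me more"))
```

Because the whole history is passed on every call, the model can ground each reply in the earlier turns.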
@@ -180,7 +184,7 @@ If you're streaming the response, you can use the following code:
 
 ```python
 while True:
-    response = chat_completion.get_streaming_chat_message_content(
+    response = chat_completion_service.get_streaming_chat_message_content(
         chat_history=chat_history,
         settings=execution_settings,
     )
@@ -209,7 +213,7 @@ The following code shows how to get embeddings from the service:
 
 ```python
 embeddings = await embedding_generation_service.generate_embeddings(
-    text=["My favorite color is blue.", "I love to eat pizza."],
+    texts=["My favorite color is blue.", "I love to eat pizza."],
 )
 
 for embedding in embeddings:
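The corrected `texts` parameter takes a list of strings and the service returns one vector per input, which is typically used for similarity comparisons. A minimal sketch with stand-in vectors in place of the real service output:

```python
import math

# Stand-in embeddings for the two input texts; the real service returns
# one vector per element of `texts` (with many more dimensions).
embeddings = [
    [0.1, 0.3, 0.5],   # "My favorite color is blue."
    [0.2, 0.1, 0.4],   # "I love to eat pizza."
]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product of the vectors divided by the
    # product of their magnitudes; 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

score = cosine_similarity(embeddings[0], embeddings[1])
```

Texts with related meanings produce vectors with higher cosine similarity, which is the basis for semantic search over embedded documents.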