The script uses hardcoded input and output messages. In a real app you'd take input from a client application.

Let's change the script to take input from a client application and generate a system message using a prompt template.
1. Remove the last line of the script that prints a response.
1. Now define a `get_chat_response` function that takes messages and context, generates a system message using a prompt template, and calls a model. Add this code to your **chat.py** file:
```python
from azure.ai.inference.prompts import PromptTemplate

def get_chat_response(messages, context):
    # create a prompt template from an inline string (using mustache syntax)
    prompt_template = PromptTemplate.from_string(prompt_template="""
        system:
        You are an AI assistant that speaks like a techno punk rocker from 2350. Be cool but not too cool. Ya dig? Refer to the user by their first name, try to work their last name into a pun.

        The user's first name is {{first_name}} and their last name is {{last_name}}.
        """)

    # generate system message from the template, passing in the context as variables
```
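To see what the mustache placeholders in the template do, here's a rough plain-Python imitation of the substitution step. `render_template` is a hypothetical helper for illustration only; the real `PromptTemplate` handles this (and role parsing) for you.

```python
import re

def render_template(template: str, data: dict) -> str:
    # replace each {{name}} placeholder with the matching value from data
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(data[m.group(1)]), template)

system_text = "The user's first name is {{first_name}} and their last name is {{last_name}}."
print(render_template(system_text, {"first_name": "Ada", "last_name": "Lovelace"}))
# → The user's first name is Ada and their last name is Lovelace.
```

This is why the `context` dict passed into `get_chat_response` must carry a key for every `{{variable}}` used in the template.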
1. Now simulate passing information from a frontend application to this function. Add the following code to the end of your **chat.py** file. Feel free to play with the message and add your own name.
```python
response = get_chat_response(
    messages=[{"role": "user", "content": "what city has the best food in the world?"}],
    context={"first_name": "<your-first-name>", "last_name": "<your-last-name>"},
)
```
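If you want to check the wiring before calling the service, a stand-in version of the function (a sketch, not the SDK-backed `get_chat_response` above) shows the shape of the data the frontend passes in — a list of message dicts plus a context dict for the template variables:

```python
def get_chat_response_stub(messages, context):
    # stand-in: build the system message the template would produce,
    # instead of calling the model
    system = (
        f"The user's first name is {context['first_name']} "
        f"and their last name is {context['last_name']}."
    )
    return {"system_message": system, "user_message": messages[-1]["content"]}

response = get_chat_response_stub(
    messages=[{"role": "user", "content": "what city has the best food in the world?"}],
    context={"first_name": "Grace", "last_name": "Hopper"},  # illustrative names
)
print(response["system_message"])
```

Swapping the stub for the real function changes only what happens inside; the calling code stays the same.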