
Commit bc29ba9

committed update

1 parent f790160 commit bc29ba9

File tree

1 file changed (+145 −2 lines)


articles/ai-services/openai/how-to/function-calling.md

Lines changed: 145 additions & 2 deletions
@@ -6,7 +6,7 @@ author: mrbullwinkle #dereklegenzoff
 ms.author: mbullwin #delegenz
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 07/20/2023
+ms.date: 11/06/2023
 manager: nitinme
 ---

@@ -29,6 +29,8 @@ To use function calling with the Chat Completions API, you need to include two n
 
 When functions are provided, by default the `function_call` will be set to `"auto"` and the model will decide whether or not a function should be called. Alternatively, you can set the `function_call` parameter to `{"name": "<insert-function-name>"}` to force the API to call a specific function or you can set the parameter to `"none"` to prevent the model from calling any functions.
 
+# [OpenAI Python 0.28.1](#tab/python)
+
 ```python
 # Note: The openai-python library support for Azure OpenAI is in preview.
 import os
@@ -69,7 +71,7 @@ functions= [
 ]
 
 response = openai.ChatCompletion.create(
-    engine="gpt-35-turbo-0613",
+    engine="gpt-35-turbo-0613", # engine = "deployment_name"
     messages=messages,
     functions=functions,
     function_call="auto",
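The paragraph above lists the three accepted values for the `function_call` parameter. A minimal sketch of what each form looks like in a request (the request dict here is illustrative only and is not sent anywhere):

```python
# The three accepted forms of the function_call parameter, per the text above:
# "auto" (default when functions are provided), "none", or a specific function.
let_model_decide = "auto"
prevent_calls = "none"
force_function = {"name": "search_hotels"}  # must name a function from `functions`

# Illustrative request payload forcing the search_hotels function:
request_kwargs = {
    "engine": "gpt-35-turbo-0613",  # engine = "deployment_name"
    "messages": [{"role": "user", "content": "Find beachfront hotels in San Diego."}],
    "function_call": force_function,
}
print(request_kwargs["function_call"])
```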
@@ -92,6 +94,74 @@ The response from the API includes a `function_call` property if the model deter
 
 In some cases, the model may generate both `content` and a `function_call`. For example, for the prompt above the content could say something like "Sure, I can help you find some hotels in San Diego that match your criteria" along with the function_call.
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2023-10-01-preview",
+    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+messages = [
+    {"role": "user", "content": "Find beachfront hotels in San Diego for less than $300 a month with free breakfast."}
+]
+
+functions = [
+    {
+        "name": "search_hotels",
+        "description": "Retrieves hotels from the search index based on the parameters provided",
+        "parameters": {
+            "type": "object",
+            "properties": {
+                "location": {
+                    "type": "string",
+                    "description": "The location of the hotel (i.e. Seattle, WA)"
+                },
+                "max_price": {
+                    "type": "number",
+                    "description": "The maximum price for the hotel"
+                },
+                "features": {
+                    "type": "string",
+                    "description": "A comma separated list of features (i.e. beachfront, free wifi, etc.)"
+                }
+            },
+            "required": ["location"]
+        }
+    }
+]
+
+response = client.chat.completions.create(
+    model="gpt-35-turbo-0613", # model = "deployment_name"
+    messages=messages,
+    functions=functions,
+    function_call="auto",
+)
+
+print(response.choices[0].message.model_dump_json(indent=2))
+```
+
+The response from the API includes a `function_call` property if the model determines that a function should be called. The `function_call` property includes the name of the function to call and the arguments to pass to the function. The arguments are a JSON string that you can parse and use to call your function.
+
+```json
+{
+  "content": null,
+  "role": "assistant",
+  "function_call": {
+    "arguments": "{\n  \"location\": \"San Diego\",\n  \"max_price\": 300,\n  \"features\": \"beachfront, free breakfast\"\n}",
+    "name": "search_hotels"
+  }
+}
+```
+
+In some cases, the model may generate both `content` and a `function_call`. For example, for the prompt above the content could say something like "Sure, I can help you find some hotels in San Diego that match your criteria" along with the function_call.
+
+---
+
 ## Working with function calling
 
 The following section goes into additional detail on how to effectively use functions with the Chat Completions API.
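Since `arguments` is returned as a JSON string, it has to be parsed before the named function can be invoked. A minimal sketch, with assumptions labeled: `search_hotels` below is a hypothetical local stand-in for a real search backend, and the dispatch table mirrors the pattern used in the flow section of this article:

```python
import json

# Hypothetical stand-in for a real search backend.
def search_hotels(location, max_price=None, features=None):
    return f"hotels in {location} under ${max_price} with {features}"

# The function_call payload from the example response above.
function_call = {
    "name": "search_hotels",
    "arguments": "{\n  \"location\": \"San Diego\",\n  \"max_price\": 300,\n  \"features\": \"beachfront, free breakfast\"\n}",
}

# Map model-visible function names to local callables, then parse and dispatch.
available_functions = {"search_hotels": search_hotels}
args = json.loads(function_call["arguments"])
result = available_functions[function_call["name"]](**args)
print(result)
```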
@@ -106,7 +176,11 @@ If you want to describe a function that doesn't accept any parameters, use `{"ty
 
 ### Managing the flow with functions
 
+# [OpenAI Python 0.28.1](#tab/python)
+
 ```python
+# This is only a partial code example. We aren't defining an actual search_hotels function, so without further modification this code will not execute successfully. For a fully functioning example, visit our samples.
+
 response = openai.ChatCompletion.create(
     deployment_id="gpt-35-turbo-0613",
     messages=messages,
@@ -159,6 +233,75 @@ else:
     print(response["choices"][0]["message"])
 ```
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+# This is only a partial code example. We aren't defining an actual search_hotels function, so without further modification this code will not execute successfully.
+
+import json
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2023-10-01-preview",
+    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+response = client.chat.completions.create(
+    model="gpt-35-turbo-0613", # model = "deployment_name"
+    messages=messages,
+    functions=functions,
+    function_call="auto",
+)
+response_message = response.choices[0].message
+
+# Check if the model wants to call a function
+if response_message.function_call:
+
+    # Call the function. The JSON response may not always be valid so make sure to handle errors
+    function_name = response_message.function_call.name
+
+    available_functions = {
+        "search_hotels": search_hotels,
+    }
+    function_to_call = available_functions[function_name]
+
+    function_args = json.loads(response_message.function_call.arguments)
+    function_response = function_to_call(**function_args)
+
+    # Add the assistant response and function response to the messages
+    messages.append( # adding assistant response to messages
+        {
+            "role": response_message.role,
+            "function_call": {
+                "name": function_name,
+                "arguments": response_message.function_call.arguments,
+            },
+            "content": None
+        }
+    )
+    messages.append( # adding function response to messages
+        {
+            "role": "function",
+            "name": function_name,
+            "content": function_response,
+        }
+    )
+
+    # Call the API again to get the final response from the model
+    second_response = client.chat.completions.create(
+        messages=messages,
+        model="gpt-35-turbo-0613" # model = "deployment_name"
+        # optionally, you could provide functions in the second call as well
+    )
+    print(second_response.choices[0].message)
+else:
+    print(response.choices[0].message)
+
+```
+
+---
+
 In the example above, we don't do any validation or error handling so you'll want to make sure to add that to your code.
 
 For a full example of working with functions, see the [sample notebook on function calling](https://aka.ms/oai/functions-samples). You can also apply more complex logic to chain multiple function calls together, which is covered in the sample as well.
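The note above points out that these examples skip validation and error handling. One possible sketch of what that could look like, checking the model's arguments against the schema declared in `functions` before calling anything. The helper name and checks here are illustrative, not part of the OpenAI library; a production version might use the `jsonschema` package instead:

```python
import json

# Schema copied from the search_hotels definition in `functions` above.
PARAMS_SCHEMA = {
    "type": "object",
    "properties": {
        "location": {"type": "string"},
        "max_price": {"type": "number"},
        "features": {"type": "string"},
    },
    "required": ["location"],
}

# Map JSON Schema type names to Python types for basic checks.
TYPE_MAP = {"string": str, "number": (int, float)}

def validate_args(raw_arguments, schema):
    """Parse the model's arguments string and check required keys and types.

    Returns (args, error); args is None when validation fails.
    """
    try:
        args = json.loads(raw_arguments)
    except json.JSONDecodeError as exc:
        return None, f"model returned invalid JSON: {exc}"
    for key in schema["required"]:
        if key not in args:
            return None, f"missing required argument: {key}"
    for key, value in args.items():
        expected = schema["properties"].get(key)
        if expected is None:
            return None, f"unexpected argument: {key}"
        if not isinstance(value, TYPE_MAP[expected["type"]]):
            return None, f"argument {key} has wrong type"
    return args, None

good, err = validate_args('{"location": "San Diego", "max_price": 300}', PARAMS_SCHEMA)
bad, err2 = validate_args('{"max_price": 300}', PARAMS_SCHEMA)
```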
