Commit 1d7756a

committed
update
1 parent bc29ba9

File tree

1 file changed (+0 −72 lines)

articles/ai-services/openai/how-to/function-calling.md

Lines changed: 0 additions & 72 deletions
````diff
@@ -176,10 +176,7 @@ If you want to describe a function that doesn't accept any parameters, use `{"ty
 
 ### Managing the flow with functions
 
-# [OpenAI Python 0.28.1](#tab/python)
-
 ```python
-# This is only a partial code example we aren't defining an actual search_hotels function, so without further modification this code will not execute successfully. For a fully functioning example visit out samples.
 
 response = openai.ChatCompletion.create(
     deployment_id="gpt-35-turbo-0613",
@@ -233,75 +230,6 @@ else:
     print(response["choices"][0]["message"])
 ```
 
-# [OpenAI Python 1.x](#tab/python-new)
-
-```python
-# This is only a partial code example we aren't defining an actual search_hotels function, so without further modification this code will not execute successfully.
-
-import os
-from openai import AzureOpenAI
-
-client = AzureOpenAI(
-    api_key= os.getenv("AZURE_OPENAI_KEY"),
-    api_version="2023-10-01-preview",
-    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
-)
-
-response = client.chat.completions.create(
-    model="gpt-35-turbo-0613", # model = deployment_name
-    messages=messages,
-    functions=functions,
-    function_call="auto",
-)
-response_message = response.choices[0].message
-
-# Check if the model wants to call a function
-if response_message.get("function_call"):
-
-    # Call the function. The JSON response may not always be valid so make sure to handle errors
-    function_name = response_message["function_call"]["name"]
-
-    available_functions = {
-        "search_hotels": search_hotels,
-    }
-    function_to_call = available_functions[function_name]
-
-    function_args = json.loads(response_message["function_call"]["arguments"])
-    function_response = function_to_call(**function_args)
-
-    # Add the assistant response and function response to the messages
-    messages.append( # adding assistant response to messages
-        {
-            "role": response_message["role"],
-            "function_call": {
-                "name": function_name,
-                "arguments": response_message["function_call"]["arguments"],
-            },
-            "content": None
-        }
-    )
-    messages.append( # adding function response to messages
-        {
-            "role": "function",
-            "name": function_name,
-            "content": function_response,
-        }
-    )
-
-    # Call the API again to get the final response from the model
-    second_response = client.chat.completions.create(
-        messages=messages,
-        model="gpt-35-turbo-0613" # model = deployment_name
-        # optionally, you could provide functions in the second call as well
-    )
-    print(second_response.choices[0].message)
-else:
-    print(response.choices[0].message)
-
-```
-
----
-
 In the example above, we don't do any validation or error handling so you'll want to make sure to add that to your code.
 
 For a full example of working with functions, see the [sample notebook on function calling](https://aka.ms/oai/functions-samples). You can also apply more complex logic to chain multiple function calls together, which is covered in the sample as well.
````
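The validation and error handling that the article says is missing from its example can be sketched as follows. This is a hypothetical, offline sketch, not the article's code: `search_hotels` is a stub, `available_functions` is an illustrative registry, and the `message` dict only mimics the shape of the chat completion message used in the example. No API call is made.

```python
import json

# Hypothetical stub standing in for the real search_hotels function
# the article assumes; it just echoes its arguments as JSON.
def search_hotels(location, max_price=None):
    return json.dumps({"location": location, "max_price": max_price, "results": []})

# Registry of functions we actually offered to the model.
available_functions = {"search_hotels": search_hotels}

def dispatch_function_call(response_message):
    """Validate and execute a model-requested function call.

    `response_message` is assumed to be a dict shaped like the message
    in the article's example: a "function_call" key holding a function
    name and a JSON string of arguments. Returns the function's result,
    or None if the model did not request a call.
    """
    call = response_message.get("function_call")
    if not call:
        return None
    function_to_call = available_functions.get(call["name"])
    if function_to_call is None:
        # The model asked for a function we never offered it.
        raise ValueError(f"Unknown function: {call['name']}")
    try:
        # Model-generated JSON is not guaranteed to be valid.
        function_args = json.loads(call["arguments"])
    except json.JSONDecodeError as err:
        raise ValueError(f"Malformed arguments from model: {err}") from err
    return function_to_call(**function_args)

# Simulated, well-formed model message (same shape as the example's).
message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "search_hotels",
        "arguments": '{"location": "San Diego", "max_price": 300}',
    },
}
result = dispatch_function_call(message)
```

The same checks apply unchanged if you chain multiple calls in a loop: keep dispatching until the model stops returning a `function_call`, appending each function result to `messages` before the next request.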
