examples/function-calling/function_tool.py (31 additions, 1 deletion)
@@ -3,6 +3,8 @@
 import inspect
 import re
 
+import json
+
 # Extract OpenAI function calling style definitions from functions
 #
 # Generated with: Create a python function to generate the OpenAI function calling definition from a given function, getting the description, parameter type and parameter description from the function documentation, assuming the function documentation contains Sphinx-style parameter descriptions, marked with :param.
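
For illustration, here is a minimal sketch of the approach that comment describes: read a Sphinx-style docstring with inspect, pull out the description and the :param entries with re, and assemble an OpenAI-style function definition. The helper name function_to_definition, the example function get_weather, and the fixed "string" parameter type are assumptions made for this sketch, not code from the PR.

import inspect
import re

def function_to_definition(fn):
    # Split the docstring into a free-text description and the
    # Sphinx-style ":param name: text" entries that follow it.
    doc = inspect.getdoc(fn) or ""
    description = doc.split(":param")[0].strip()
    params = {}
    for name, text in re.findall(r":param\s+(\w+):\s*(.+)", doc):
        # Assumption: every parameter is treated as a string here;
        # the real example derives the type from the documentation.
        params[name] = {"type": "string", "description": text.strip()}
    return {
        "name": fn.__name__,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }

def get_weather(city):
    """Look up the current weather.

    :param city: Name of the city to query.
    """
    return f"sunny in {city}"

print(function_to_definition(get_weather))
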
@@ -36,7 +38,7 @@ def get_type(s):
 # Generate function definition schema from function definitions
 #
 # This is from llama-cpp-python, llama_chat_format.py
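
As a reference point, a per-function definition produced this way normally ends up wrapped in the OpenAI tools format. The snippet below shows only that public wrapper shape; it is not taken from llama_chat_format.py, and the get_weather entry is an invented example.

import json

# One entry in the OpenAI-style "tools" list; the inner "function" object
# is the kind of definition generated above. All values are illustrative.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "Name of the city to query.",
                    }
                },
                "required": ["city"],
            },
        },
    }
]
print(json.dumps(tools, indent=2))
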
-When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 60.0 seconds. The drive at '/mnt/data' can be used to save and persist user files.<|eot_id|><|start_header_id|>user<|end_header_id|>
-"""
 
 def main():
     import argparse
 
     parser = argparse.ArgumentParser(epilog='For more options: llama-cli --help')
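
For context, a main() built around a parser like the one above might be wired up roughly as follows. The --model flag is hypothetical and only serves to complete the sketch; only the epilog text appears in the diff.

import argparse

def main():
    parser = argparse.ArgumentParser(epilog='For more options: llama-cli --help')
    # Hypothetical flag for illustration only; not taken from the PR.
    parser.add_argument('--model', help='path to a GGUF model file')
    args = parser.parse_args()
    print(args)

if __name__ == '__main__':
    main()
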