Commit 0e55110 (1 parent: 725d405)

[0.5.10] disable parallel tool calls for OpenAI
5 files changed: +17 −4 lines

README.md (1 addition, 2 deletions)

@@ -1,5 +1,4 @@
 # Tune API
 
-Python package for building GenAI apps tightly integrated to Tune Studio.
+This is **not** the official SDK for Tune AI. This is an open source python package used to build GenAI apps at Tune AI.
 
-> The largest utils for any GenAI package ever! See [tuneapi.utils](./tuneapi/utils/__init__.py)

docs/changelog.rst (10 additions)

@@ -7,6 +7,16 @@ minor versions.
 
 All relevant steps to be taken will be mentioned here.
 
+0.5.10
+------
+
+- Remove redundant prints.
+
+0.5.9
+-----
+
+- By default set the value ``parallel_tool_calls`` in OpenAI to ``False``.
+
 0.5.8
 -----
 

docs/conf.py (1 addition, 1 deletion)

@@ -13,7 +13,7 @@
 project = "tuneapi"
 copyright = "2024, Frello Technologies"
 author = "Frello Technologies"
-release = "0.5.8"
+release = "0.5.10"
 
 # -- General configuration ---------------------------------------------------
 # https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration

pyproject.toml (1 addition, 1 deletion)

@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "tuneapi"
-version = "0.5.8"
+version = "0.5.10"
 description = "Tune AI APIs."
 authors = ["Frello Technology Private Limited <[email protected]>"]
 license = "MIT"

tuneapi/apis/model_openai.py (4 additions)

@@ -99,6 +99,7 @@ def chat(
         model: Optional[str] = None,
         max_tokens: int = 1024,
         temperature: float = 1,
+        parallel_tool_calls: bool = False,
         token: Optional[str] = None,
         extra_headers: Optional[Dict[str, str]] = None,
         **kwargs,
@@ -109,6 +110,7 @@ def chat(
             model=model,
             max_tokens=max_tokens,
             temperature=temperature,
+            parallel_tool_calls=parallel_tool_calls,
             token=token,
             extra_headers=extra_headers,
             raw=False,
@@ -126,6 +128,7 @@ def stream_chat(
         model: Optional[str] = None,
         max_tokens: int = 1024,
         temperature: float = 1,
+        parallel_tool_calls: bool = False,
         token: Optional[str] = None,
         timeout=(5, 60),
         extra_headers: Optional[Dict[str, str]] = None,
@@ -142,6 +145,7 @@ def stream_chat(
             "model": model or self.model_id,
             "stream": True,
             "max_tokens": max_tokens,
+            "parallel_tool_calls": parallel_tool_calls,
         }
         if isinstance(chats, tt.Thread) and len(chats.tools):
             data["tools"] = [
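The hunks above thread a single flag from the public `chat` / `stream_chat` signatures into the JSON request body, so that OpenAI will not emit multiple tool calls in one assistant turn unless the caller opts in. A minimal sketch of that flow under stated assumptions: the `build_payload` helper and the `default_model` value are illustrative, not tuneapi's actual API.

```python
from typing import Any, Dict, Optional


def build_payload(
    model: Optional[str] = None,
    max_tokens: int = 1024,
    parallel_tool_calls: bool = False,  # new default, mirroring the 0.5.9 change
    default_model: str = "example-model",  # hypothetical fallback model id
) -> Dict[str, Any]:
    # The flag is always included in the body, so the API sees an explicit
    # False instead of falling back to its own default (which is True).
    return {
        "model": model or default_model,
        "stream": True,
        "max_tokens": max_tokens,
        "parallel_tool_calls": parallel_tool_calls,
    }


# Callers who still want fan-out tool calls pass the flag explicitly.
payload = build_payload(parallel_tool_calls=True)
```

This keeps the behavior change in one place: every request carries the key, and the library-level default flips it off rather than relying on the provider's server-side default.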

0 commit comments