Commit 831e69f

Authored by JarbasAl, github-actions[bot], Delfshkrimm, and coderabbitai[bot]
Release 2.0.2a3 (#30)
* Update downstream dependencies for ovos-openai-plugin
* Update downstream dependencies for ovos-openai-plugin
* Update downstream dependencies for ovos-openai-plugin
* Update downstream dependencies for ovos-openai-plugin
* Update downstream dependencies for ovos-openai-plugin
* Merge pull request #19 from Delfshkrimm/patch-1: Use system_prompt in solver configuration (breaking change)
* 📝 Add docstrings to `patch-1` (#26)
  Docstrings generation was requested by @JarbasAl in #19 (comment). The following files were modified:
  * `ovos_solver_openai_persona/__init__.py`
  * `ovos_solver_openai_persona/dialog_transformers.py`
  * `ovos_solver_openai_persona/engines.py`
* Update engines.py
* Increment Version to 2.0.2a1
* Update Changelog
* Update README.md
* Update downstream dependencies for ovos-openai-plugin
* Update README.md
* Update README.md
* Update README.md
* Update release_workflow.yml
* Increment Version to 2.0.2a2
* fix: setup.py (#29)
* Increment Version to 2.0.2a3
* Update Changelog
* Update __init__.py

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Delfshkrimm <mehdi.domec@live.fr>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: JarbasAI <33701864+JarbasAl@users.noreply.github.com>
Co-authored-by: JarbasAl <JarbasAl@users.noreply.github.com>
2 parents c34ff73 + 5d22d16 commit 831e69f

File tree

9 files changed: +152 additions, −78 deletions

.github/workflows/release_workflow.yml
Lines changed: 1 addition & 1 deletion

```diff
@@ -4,10 +4,10 @@ on:
   pull_request:
     types: [closed]
     branches: [dev]
+  workflow_dispatch:

 jobs:
   publish_alpha:
-    if: github.event.pull_request.merged == true
     uses: TigreGotico/gh-automations/.github/workflows/publish-alpha.yml@master
     secrets: inherit
     with:
```

CHANGELOG.md
Lines changed: 21 additions & 5 deletions

```diff
@@ -1,16 +1,32 @@
 # Changelog

-## [2.0.1a1](https://github.com/OpenVoiceOS/ovos-openai-plugin/tree/2.0.1a1) (2025-04-10)
+## [2.0.2a3](https://github.com/OpenVoiceOS/ovos-openai-plugin/tree/2.0.2a3) (2025-05-02)

-[Full Changelog](https://github.com/OpenVoiceOS/ovos-openai-plugin/compare/V2.0.0...2.0.1a1)
+[Full Changelog](https://github.com/OpenVoiceOS/ovos-openai-plugin/compare/2.0.2a2...2.0.2a3)

 **Merged pull requests:**

-- fix: params from config [\#24](https://github.com/OpenVoiceOS/ovos-openai-plugin/pull/24) ([JarbasAl](https://github.com/JarbasAl))
+- fix: setup.py [\#29](https://github.com/OpenVoiceOS/ovos-openai-plugin/pull/29) ([JarbasAl](https://github.com/JarbasAl))

-## [V2.0.0](https://github.com/OpenVoiceOS/ovos-openai-plugin/tree/V2.0.0) (2025-02-26)
+## [2.0.2a2](https://github.com/OpenVoiceOS/ovos-openai-plugin/tree/2.0.2a2) (2025-05-02)

-[Full Changelog](https://github.com/OpenVoiceOS/ovos-openai-plugin/compare/2.0.0...V2.0.0)
+[Full Changelog](https://github.com/OpenVoiceOS/ovos-openai-plugin/compare/2.0.2a1...2.0.2a2)
+
+## [2.0.2a1](https://github.com/OpenVoiceOS/ovos-openai-plugin/tree/2.0.2a1) (2025-05-02)
+
+[Full Changelog](https://github.com/OpenVoiceOS/ovos-openai-plugin/compare/V2.0.1...2.0.2a1)
+
+**Breaking changes:**
+
+- Use system\_prompt in solver configuration \(breaking change\) [\#19](https://github.com/OpenVoiceOS/ovos-openai-plugin/pull/19) ([Delfshkrimm](https://github.com/Delfshkrimm))
+
+**Merged pull requests:**
+
+- 📝 Add docstrings to `patch-1` [\#26](https://github.com/OpenVoiceOS/ovos-openai-plugin/pull/26) ([coderabbitai[bot]](https://github.com/apps/coderabbitai))
+
+## [V2.0.1](https://github.com/OpenVoiceOS/ovos-openai-plugin/tree/V2.0.1) (2025-04-10)
+
+[Full Changelog](https://github.com/OpenVoiceOS/ovos-openai-plugin/compare/2.0.1...V2.0.1)



```

README.md
Lines changed: 7 additions & 4 deletions

````diff
@@ -18,10 +18,10 @@ To create your own persona using a OpenAI compatible server create a .json in `~
   "solvers": [
     "ovos-solver-openai-plugin"
   ],
-  "ovos-openai-plugin": {
+  "ovos-solver-openai-plugin": {
     "api_url": "https://llama.smartgic.io/v1",
     "key": "sk-xxxx",
-    "persona": "helpful, creative, clever, and very friendly."
+    "system_prompt": "You are helping assistant who gives very short and factual answers in maximum twenty words and you don't use emojis"
   }
 }
 ```
@@ -35,11 +35,11 @@ This plugins also provides a default "Remote LLama" demo persona, it points to a
 you can rewrite text dynamically based on specific personas, such as simplifying explanations or mimicking a specific tone.

 #### Example Usage:
-- **Persona:** `"rewrite the text as if you were explaining it to a 5-year-old"`
+- **`rewrite_prompt`:** `"rewrite the text as if you were explaining it to a 5-year-old"`
 - **Input:** `"Quantum mechanics is a branch of physics that describes the behavior of particles at the smallest scales."`
 - **Output:** `"Quantum mechanics is like a special kind of science that helps us understand really tiny things."`

-Examples of `persona` Values:
+Examples of `rewrite_prompt` Values:
 - `"rewrite the text as if it was an angry old man speaking"`
 - `"Add more 'dude'ness to it"`
 - `"Explain it like you're teaching a child"`
@@ -49,11 +49,14 @@ To enable this plugin, add the following to your `mycroft.conf`:
 ```json
 "dialog_transformers": {
   "ovos-dialog-transformer-openai-plugin": {
+    "system_prompt": "Your task is to rewrite text as if it was spoken by a different character",
     "rewrite_prompt": "rewrite the text as if you were explaining it to a 5-year-old"
   }
 }
 ```

+> 💡 the user utterance will be appended after `rewrite_prompt` for the actual query
+
 ## Direct Usage

 ```python
````
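The README's new tip says the user utterance is appended after `rewrite_prompt` to form the actual query sent to the model. A minimal sketch of that assembly — the helper name `build_rewrite_query` and the exact separator are illustrative assumptions, not the plugin's real code:

```python
def build_rewrite_query(rewrite_prompt: str, utterance: str) -> str:
    # Hypothetical helper: the plugin appends the user utterance after the
    # configured rewrite_prompt before sending the combined text to the LLM.
    return f"{rewrite_prompt}: {utterance}"

query = build_rewrite_query(
    "rewrite the text as if you were explaining it to a 5-year-old",
    "Quantum mechanics is a branch of physics.",
)
print(query)
```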

downstream_report.txt
Lines changed: 3 additions & 3 deletions

```diff
@@ -1,3 +1,3 @@
-ovos-openai-plugin==2.0.0
-└── ovos-persona==0.6.16 [requires: ovos-openai-plugin>=2.0.0,<3.0.0]
-    └── ovos-core==1.2.3 [requires: ovos-persona>=0.4.4,<1.0.0]
+ovos-openai-plugin==2.0.2a1
+└── ovos-persona==0.6.20a4 [requires: ovos-openai-plugin>=2.0.0,<3.0.0]
+    └── ovos-core==1.2.6a1 [requires: ovos-persona>=0.4.4,<1.0.0]
```

ovos_solver_openai_persona/__init__.py
Lines changed: 14 additions & 39 deletions

```diff
@@ -1,43 +1,19 @@
-from typing import Optional
-
+import warnings
 from ovos_solver_openai_persona.engines import OpenAIChatCompletionsSolver

-
 class OpenAIPersonaSolver(OpenAIChatCompletionsSolver):
-    """default "Persona" engine"""
-
-    def __init__(self, config=None):
-        # defaults to gpt-3.5-turbo
-        super().__init__(config=config)
-        self.default_persona = config.get("persona") or "helpful, creative, clever, and very friendly."
-
-    def get_chat_history(self, persona=None):
-        persona = persona or self.default_persona
-        initial_prompt = f"You are a helpful assistant. " \
-                         f"You give short and factual answers. " \
-                         f"You are {persona}"
-        return super().get_chat_history(initial_prompt)
-
-    # officially exported Solver methods
-    def get_spoken_answer(self, query: str,
-                          lang: Optional[str] = None,
-                          units: Optional[str] = None) -> Optional[str]:
-        """
-        Obtain the spoken answer for a given query.
-
-        Args:
-            query (str): The query text.
-            lang (Optional[str]): Optional language code. Defaults to None.
-            units (Optional[str]): Optional units for the query. Defaults to None.
-
-        Returns:
-            str: The spoken answer as a text response.
-        """
-        answer = super().get_spoken_answer(query, lang, units)
-        if not answer or not answer.strip("?") or not answer.strip("_"):
-            return None
-        return answer
-
+    def __init__(self, *args, **kwargs):
+        """
+        Initializes the solver and issues a deprecation warning.
+
+        A DeprecationWarning is raised advising to use OpenAIChatCompletionsSolver instead.
+        """
+        warnings.warn(
+            "use OpenAIChatCompletionsSolver instead",
+            DeprecationWarning,
+            stacklevel=2,
+        )
+        super().__init__(*args, **kwargs)

 # for ovos-persona
 LLAMA_DEMO = {
@@ -51,9 +27,8 @@ def get_spoken_answer(self, query: str,
     }
 }

-
 if __name__ == "__main__":
-    bot = OpenAIPersonaSolver(LLAMA_DEMO["ovos-solver-openai-plugin"])
+    bot = OpenAIChatCompletionsSolver(LLAMA_DEMO["ovos-solver-openai-plugin"])
     #for utt in bot.stream_utterances("describe quantum mechanics in simple terms"):
     #    print(utt)
     # Quantum mechanics is a branch of physics that studies the behavior of atoms and particles at the smallest scales.
```
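This change turns `OpenAIPersonaSolver` into a thin deprecation shim over its parent class. The same pattern can be sketched standalone — the class names below are stand-ins for illustration, not the plugin's real imports:

```python
import warnings

class ChatCompletionsSolver:
    # Stand-in for OpenAIChatCompletionsSolver; the real class lives in
    # ovos_solver_openai_persona.engines and talks to the API.
    def __init__(self, config=None):
        self.config = config or {}

class PersonaSolver(ChatCompletionsSolver):
    # Deprecation shim mirroring the new OpenAIPersonaSolver: identical
    # behaviour to the parent, plus a warning raised at construction time.
    def __init__(self, *args, **kwargs):
        warnings.warn("use ChatCompletionsSolver instead",
                      DeprecationWarning, stacklevel=2)
        super().__init__(*args, **kwargs)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    solver = PersonaSolver({"key": "sk-xxxx"})

print([str(w.message) for w in caught])  # → ['use ChatCompletionsSolver instead']
```

`stacklevel=2` points the warning at the caller's line rather than the shim's, which is the conventional choice for deprecation shims.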

ovos_solver_openai_persona/dialog_transformers.py
Lines changed: 16 additions & 4 deletions

```diff
@@ -7,19 +7,31 @@

 class OpenAIDialogTransformer(DialogTransformer):
     def __init__(self, name="ovos-dialog-transformer-openai-plugin", priority=10, config=None):
+        """
+        Initializes the OpenAIDialogTransformer with a name, priority, and configuration.
+
+        Creates an OpenAIChatCompletionsSolver using the provided API key, API URL, and a system prompt from the configuration or a default prompt if not specified.
+        """
         super().__init__(name, priority, config)
         self.solver = OpenAIChatCompletionsSolver({
             "key": self.config.get("key"),
             'api_url': self.config.get('api_url', 'https://api.openai.com/v1'),
             "enable_memory": False,
-            "initial_prompt": "your task is to rewrite text as if it was spoken by a different character"
+            "system_prompt": self.config.get("system_prompt") or "Your task is to rewrite text as if it was spoken by a different character"
         })

     def transform(self, dialog: str, context: dict = None) -> Tuple[str, dict]:
         """
-        Optionally transform passed dialog and/or return additional context
-        :param dialog: str utterance to mutate before TTS
-        :returns: str mutated dialog
+        Transforms the dialog string using a character-specific prompt if available.
+
+        If a prompt is provided in the context or configuration, rewrites the dialog as if spoken by a different character using the solver; otherwise, returns the original dialog unchanged.
+
+        Args:
+            dialog: The dialog string to be transformed.
+            context: Optional dictionary containing transformation context, such as a prompt or language.
+
+        Returns:
+            A tuple containing the transformed (or original) dialog and the unchanged context.
         """
         prompt = context.get("prompt") or self.config.get("rewrite_prompt")
         if not prompt:
```

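The updated `transform` resolves its prompt by precedence: a per-request prompt in the context wins over the configured `rewrite_prompt`, and with neither set the dialog passes through unchanged. A standalone sketch of that lookup order — the solver call is replaced by a tagged placeholder string, since the real rewrite requires a live LLM:

```python
from typing import Optional, Tuple

def transform(dialog: str, context: Optional[dict], config: dict) -> Tuple[str, Optional[dict]]:
    # Mirrors the lookup order in OpenAIDialogTransformer.transform:
    # a prompt in the request context wins over the configured rewrite_prompt.
    prompt = (context or {}).get("prompt") or config.get("rewrite_prompt")
    if not prompt:
        # No prompt anywhere: the dialog passes through unchanged.
        return dialog, context
    # The real plugin forwards the prompt and dialog to the solver here;
    # a tagged string stands in for the LLM rewrite.
    return f"[{prompt}] {dialog}", context

rewritten, _ = transform("Quantum mechanics is hard.",
                         None, {"rewrite_prompt": "explain it to a 5-year-old"})
untouched, _ = transform("Quantum mechanics is hard.", None, {})
print(rewritten)
print(untouched)
```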
ovos_solver_openai_persona/engines.py
Lines changed: 87 additions & 19 deletions

```diff
@@ -19,12 +19,18 @@ def __init__(self, config=None,
                  enable_tx: bool = False,
                  enable_cache: bool = False,
                  internal_lang: Optional[str] = None):
+        """
+        Initializes the OpenAICompletionsSolver with API configuration and credentials.
+
+        Raises:
+            ValueError: If the API key is not provided in the configuration.
+        """
         super().__init__(config=config, translator=translator,
-                        detector=detector, priority=priority,
-                        enable_tx=enable_tx, enable_cache=enable_cache,
-                        internal_lang=internal_lang)
+                         detector=detector, priority=priority,
+                         enable_tx=enable_tx, enable_cache=enable_cache,
+                         internal_lang=internal_lang)
         self.api_url = f"{self.config.get('api_url', 'https://api.openai.com/v1')}/completions"
-        self.engine = self.config.get("model", "text-davinci-002")  # "ada" cheaper and faster, "davinci" better
+        self.engine = self.config.get("model", "gpt-4o-mini")
         self.key = self.config.get("key")
         if not self.key:
             LOG.error("key not set in config")
@@ -94,10 +100,16 @@ def __init__(self, config=None,
                  enable_tx: bool = False,
                  enable_cache: bool = False,
                  internal_lang: Optional[str] = None):
+        """
+        Initializes the OpenAIChatCompletionsSolver with API configuration, memory settings, and system prompt.
+
+        Raises:
+            ValueError: If the API key is not provided in the configuration.
+        """
         super().__init__(config=config, translator=translator,
-                        detector=detector, priority=priority,
-                        enable_tx=enable_tx, enable_cache=enable_cache,
-                        internal_lang=internal_lang)
+                         detector=detector, priority=priority,
+                         enable_tx=enable_tx, enable_cache=enable_cache,
+                         internal_lang=internal_lang)
         self.api_url = f"{self.config.get('api_url', 'https://api.openai.com/v1')}/chat/completions"
         self.engine = self.config.get("model", "gpt-4o-mini")
         self.key = self.config.get("key")
@@ -107,10 +119,29 @@ def __init__(self, config=None,
         self.memory = config.get("enable_memory", True)
         self.max_utts = config.get("memory_size", 3)
         self.qa_pairs = []  # tuple of q+a
-        self.initial_prompt = config.get("initial_prompt", "You are a helpful assistant.")
+        if "persona" in config:
+            LOG.warning("'persona' config option is deprecated, use 'system_prompt' instead")
+        if "initial_prompt" in config:
+            LOG.warning("'initial_prompt' config option is deprecated, use 'system_prompt' instead")
+        self.system_prompt = config.get("system_prompt") or config.get("initial_prompt")
+        if not self.system_prompt:
+            self.system_prompt = "You are a helpful assistant."
+            LOG.error(f"system prompt not set in config! defaulting to '{self.system_prompt}'")

     # OpenAI API integration
     def _do_api_request(self, messages):
+        """
+        Sends a chat completion request to the OpenAI API and returns the assistant's reply.
+
+        Args:
+            messages: A list of message dictionaries representing the conversation history.
+
+        Returns:
+            The content of the assistant's reply as a string.
+
+        Raises:
+            RequestException: If the OpenAI API returns an error in the response.
+        """
         s = requests.Session()
         headers = {
             "Content-Type": "application/json",
@@ -141,6 +172,17 @@ def _do_api_request(self, messages):

     def _do_streaming_api_request(self, messages):

+        """
+        Streams response content from the OpenAI chat completions API.
+
+        Sends a POST request with the provided chat messages and yields content chunks as they are received from the streaming API. Stops iteration if an error is encountered or the response is finished.
+
+        Args:
+            messages: A list of chat message dictionaries to send as context.
+
+        Yields:
+            str: Segments of the assistant's reply as they arrive from the API.
+        """
         s = requests.Session()
         headers = {
             "Content-Type": "application/json",
@@ -179,36 +221,60 @@ def _do_streaming_api_request(self, messages):
                 continue
             yield chunk["choices"][0]["delta"]["content"]

-    def get_chat_history(self, initial_prompt=None):
+    def get_chat_history(self, system_prompt=None):
+        """
+        Builds the chat history as a list of messages, starting with a system prompt.
+
+        Args:
+            system_prompt: Optional override for the system prompt message.
+
+        Returns:
+            A list of message dictionaries representing the system prompt and the most recent user-assistant exchanges.
+        """
         qa = self.qa_pairs[-1 * self.max_utts:]
-        initial_prompt = initial_prompt or self.initial_prompt or "You are a helpful assistant."
+        system_prompt = system_prompt or self.system_prompt or "You are a helpful assistant."
         messages = [
-            {"role": "system", "content": initial_prompt},
+            {"role": "system", "content": system_prompt},
         ]
         for q, a in qa:
             messages.append({"role": "user", "content": q})
             messages.append({"role": "assistant", "content": a})
         return messages

-    def get_messages(self, utt, initial_prompt=None) -> MessageList:
-        messages = self.get_chat_history(initial_prompt)
+    def get_messages(self, utt, system_prompt=None) -> MessageList:
+        """
+        Builds a list of chat messages including the system prompt, recent conversation history, and the current user utterance.
+
+        Args:
+            utt: The current user input to be appended as the latest message.
+            system_prompt: Optional system prompt to use as the initial message.
+
+        Returns:
+            A list of message dictionaries representing the chat context for the API.
+        """
+        messages = self.get_chat_history(system_prompt)
         messages.append({"role": "user", "content": utt})
         return messages

     # abstract Solver methods
     def continue_chat(self, messages: MessageList,
                       lang: Optional[str],
                       units: Optional[str] = None) -> Optional[str]:
-        """Generate a response based on the chat history.
+        """
+        Generates a chat response using the provided message history and updates memory if enabled.
+
+        If the first message is not a system prompt, prepends the system prompt. Processes the API response and returns a cleaned answer, or None if the answer is empty or only punctuation/underscores. Updates internal memory with the latest question and answer if memory is enabled.

         Args:
-            messages (List[Dict[str, str]]): List of chat messages, each containing 'role' and 'content'.
-            lang (Optional[str]): The language code for the response. If None, will be auto-detected.
-            units (Optional[str]): Optional unit system for numerical values.
+            messages: List of chat messages with 'role' and 'content' keys.
+            lang: Optional language code for the response.
+            units: Optional unit system for numerical values.

         Returns:
-            Optional[str]: The generated response or None if no response could be generated.
+            The generated response as a string, or None if no valid response is produced.
         """
+        if messages[0]["role"] != "system":
+            messages = [{"role": "system", "content": self.system_prompt}] + messages
         response = self._do_api_request(messages)
         answer = post_process_sentence(response)
         if not answer or not answer.strip("?") or not answer.strip("_"):
@@ -218,7 +284,7 @@ def continue_chat(self, messages: MessageList,
             self.qa_pairs.append((query, answer))
         return answer

-    def stream_chat_utterances(self, messages: List[Dict[str, str]],
+    def stream_chat_utterances(self, messages: MessageList,
                                lang: Optional[str] = None,
                                units: Optional[str] = None) -> Iterable[str]:
         """
@@ -232,6 +298,8 @@ def stream_chat_utterances(self, messages: List[Dict[str, str]],
         Returns:
             Iterable[str]: An iterable of utterances.
         """
+        if messages[0]["role"] != "system":
+            messages = [{"role": "system", "content": self.system_prompt}] + messages
         answer = ""
         query = messages[-1]["content"]
         if self.memory:
```
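The engines.py changes establish a precedence among the prompt config keys (`system_prompt` over the deprecated `initial_prompt`, with a default as last resort) and guard the chat entry points so the message list always starts with a system message. A standalone sketch of that logic, using free functions in place of the solver's methods:

```python
from typing import Dict, List, Tuple

DEFAULT_PROMPT = "You are a helpful assistant."

def resolve_system_prompt(config: dict) -> str:
    # New precedence: system_prompt wins, the deprecated initial_prompt is
    # still honoured as a fallback, and a default is used as a last resort.
    return config.get("system_prompt") or config.get("initial_prompt") or DEFAULT_PROMPT

def get_chat_history(qa_pairs: List[Tuple[str, str]], max_utts: int,
                     system_prompt: str) -> List[Dict[str, str]]:
    # System prompt first, then only the most recent max_utts Q/A pairs.
    messages = [{"role": "system", "content": system_prompt}]
    for q, a in qa_pairs[-max_utts:]:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    return messages

def ensure_system_prompt(messages: List[Dict[str, str]],
                         system_prompt: str) -> List[Dict[str, str]]:
    # Guard added to continue_chat / stream_chat_utterances: prepend the
    # system prompt if the caller's history does not already start with one.
    if messages[0]["role"] != "system":
        messages = [{"role": "system", "content": system_prompt}] + messages
    return messages

sp = resolve_system_prompt({"initial_prompt": "You are terse."})
msgs = get_chat_history([("hi", "hey")], max_utts=3, system_prompt=sp)
msgs.append({"role": "user", "content": "hello"})  # the get_messages() step
print(msgs[0])
```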
Lines changed: 2 additions & 2 deletions

```diff
@@ -1,6 +1,6 @@
 # START_VERSION_BLOCK
 VERSION_MAJOR = 2
 VERSION_MINOR = 0
-VERSION_BUILD = 1
-VERSION_ALPHA = 0
+VERSION_BUILD = 2
+VERSION_ALPHA = 3
 # END_VERSION_BLOCK
```
