@@ -9,21 +9,20 @@ As long as your system can accept an input and stream an output (and can be inte
 
 Simply replace the OpenAI-compatible `completions` function in your language model with one of your own:
 
 ``` python
-def custom_language_model(openai_message):
+def custom_language_model(messages, model, stream, max_tokens):
     """
     OpenAI-compatible completions function (this one just echoes what the user said back).
+    To make it OpenAI-compatible and parsable, `choices` has to be the root property.
+    The property `delta` is used to signify streaming.
     """
-    users_content = openai_message[-1].get("content")  # Get last message's content
-
-    # To make it OpenAI-compatible, we yield this first:
-    yield {"delta": {"role": "assistant"}}
+    users_content = messages[-1].get("content")  # Get last message's content
 
     for character in users_content:
-        yield {"delta": {"content": character}}
+        yield {"choices": [{"delta": {"content": character}}]}
 
 # Tell Open Interpreter to power the language model with this function
-interpreter.llm.completion = custom_language_model
+interpreter.llm.completions = custom_language_model
 ```
 
 Then, set the following settings:
@@ -39,4 +38,4 @@ And start using it:
 
 ```
 interpreter.chat("Hi!") # Returns/displays "Hi!" character by character
-```
+```
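The shape introduced by this diff can be checked on its own, without Open Interpreter: the function is an ordinary Python generator whose chunks mimic OpenAI's streaming format, with `choices` at the root and `delta` carrying the streamed content. A minimal sketch (the `model`, `stream`, and `max_tokens` parameters are accepted but unused, matching the echo example in the diff):

```python
def custom_language_model(messages, model=None, stream=True, max_tokens=None):
    """
    OpenAI-compatible completions function (echoes the user's last message back).
    `choices` has to be the root property of each chunk; `delta` signals streaming.
    """
    users_content = messages[-1].get("content")  # Get last message's content
    for character in users_content:
        yield {"choices": [{"delta": {"content": character}}]}

# Drive the generator directly and reassemble the streamed characters:
chunks = custom_language_model([{"role": "user", "content": "Hi!"}])
reply = "".join(chunk["choices"][0]["delta"]["content"] for chunk in chunks)
print(reply)  # → Hi!
```

Because each chunk nests `delta` under `choices`, a consumer written against OpenAI's streaming responses can unpack these chunks the same way, which is why the earlier bare `{"delta": ...}` shape had to change.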