✨ feat(botservice.ts): respond to system notifications collectively
Splitting them into individual messages made responses slow.
✨ feat(package.json): add bugs and homepage URLs for issue reporting and project homepage
🐛 fix(package.json): add npm dedup to upgrade script
🔧 chore(package.json): update versions of dev dependencies
🔧 fix(botservice.ts): filter and modify messages to remove system messages and their lines
🐛 fix(botservice.ts): rename lowercase limit_tokens to uppercase LIMIT_TOKENS for clarity
✨ feat(botservice.ts): add expireMessages function to remove old messages
✨ feat: add completion and modify the last line when continuing a thread
🐛 fix: remove an unnecessary function call and add debug information to the log
🔧 fix(mm-client.ts): fix import order of packages
🔧 fix(openai-thread-completion.ts): change variable name to uppercase MAX_TOKENS
✨ feat(openai-thread-completion.ts): add usage statistics for the response
🔧 fix(process-graph-response.ts): fix import order
Added `bugs` and `homepage` URLs to package.json so users can report issues and find the project's homepage.
Added npm dedup to the upgrade script, which resolves dependency duplication and keeps package versions up to date.
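The commit does not show the script itself; a minimal sketch of what the package.json `scripts` entry might look like after this change (the `npm update` step is an assumption):

```json
{
  "scripts": {
    "upgrade": "npm update && npm dedup"
  }
}
```

`npm dedup` walks the installed dependency tree and hoists duplicated packages so that a single compatible version is shared where possible.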
Updated versions of dev dependencies, allowing the use of the latest versions of tools such as TypeScript and ESLint.
Filtered and modified the messages to remove system messages and their lines. This results in a cleaner display of messages within threads.
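The filtering logic is not shown in the commit; a hedged sketch, assuming Mattermost's convention that system posts carry a `type` beginning with `system_` (regular user posts have an empty `type`):

```typescript
// Illustrative post shape; the real Mattermost Post type has many more fields.
type Post = { type: string; message: string };

// Keep only regular posts; drop system notifications such as
// "system_join_channel" or "system_add_to_channel".
function filterSystemPosts(posts: Post[]): Post[] {
  return posts.filter(post => !post.type.startsWith("system_"));
}
```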
Renaming the variable to LIMIT_TOKENS makes the token limit clearer. The new expireMessages function removes old messages so that the total token count of the thread does not exceed that limit.
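The body of expireMessages is not shown in the commit; a minimal sketch, assuming a rough ~4-characters-per-token estimate and a system prompt kept at index 0 (both are assumptions, not the bot's actual implementation):

```typescript
// Hypothetical message shape and token budget.
type ChatMessage = { role: string; content: string };

const LIMIT_TOKENS = 2000;

// Rough token estimate: ~4 characters per token (an assumption; the real
// bot may use an actual tokenizer).
function estimateTokens(message: ChatMessage): number {
  return Math.ceil(message.content.length / 4);
}

// Drop the oldest messages (after the system prompt at index 0) until the
// total estimated token count fits under the limit.
function expireMessages(messages: ChatMessage[], limit: number = LIMIT_TOKENS): ChatMessage[] {
  const result = [...messages];
  let total = result.reduce((sum, m) => sum + estimateTokens(m), 0);
  while (total > limit && result.length > 1) {
    const removed = result.splice(1, 1)[0]; // keep the system prompt, drop the oldest message
    total -= estimateTokens(removed);
  }
  return result;
}
```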
When continuing a thread, the continueThread function is used to add completion to the reply message. The modifyLastLine function is used to modify the last line. The modified answer is then passed to the newPost function to be posted, allowing the thread to continue. Additionally, debug information is added to the log.
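The flow above can be sketched as follows. The function names come from the commit message, but their bodies are illustrative assumptions; in particular, `modifyLastLine` is shown here as appending a continuation marker, since the commit does not reveal what it actually does:

```typescript
// Hypothetical: adjust the final line of the answer, e.g. to mark that the
// thread continues. The real implementation may differ.
function modifyLastLine(answer: string, marker: string = " ..."): string {
  const lines = answer.split("\n");
  lines[lines.length - 1] += marker;
  return lines.join("\n");
}

// Illustrative orchestration: get a completion for the thread, adjust the
// last line, then post the result back so the thread can continue.
async function replyToThread(
  threadMessages: string[],
  continueThread: (msgs: string[]) => Promise<string>,
  newPost: (msg: string) => Promise<void>,
): Promise<void> {
  const answer = await continueThread(threadMessages);
  await newPost(modifyLastLine(answer));
}
```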
README.md: 10 additions & 9 deletions
```diff
@@ -25,16 +25,17 @@ or when [running the docker image](#using-the-ready-made-image) or when configur
 | MATTERMOST_TOKEN | yes |`abababacdcdcd`| The authentication token from the logged in mattermost bot |
 | OPENAI_API_KEY | yes |`sk-234234234234234234`| The OpenAI API key to authenticate with OpenAI |
 | OPENAI_MODEL_NAME | no |`gpt-3.5-turbo`| The OpenAI language model to use, defaults to `gpt-3.5-turbo`|
-| OPENAI_MAX_TOKENS | no |`2000`| The maximum number of tokens to pass to the OpenAI API, defaults to 2000 |
+| OPENAI_MAX_TOKENS | no |`2000`| The max_tokens parameter to pass to the OpenAI API, with a default value of 2000. API will answer up to this number of tokens|
 | OPENAI_TEMPERATURE | no |`0.2`| The sampling temperature to use, between 0 and 2, defaults to 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. |
 | AZURE_OPENAI_API_KEY | no |`0123456789abcdefghijklmno`| The Azure OpenAI Service API key to authoenticate |
 | AZURE_OPENAI_API_INSTANCE_NAME | no |`example-name`| The instance name on the Azure OpenAI Service |
 | AZURE_OPENAI_API_DEPLOYMENT_NAME | no |`gpt-35-turbo`| The name of the deployed model on the Azure OpenAI Service |
 | AZURE_OPENAI_API_VERSION | no |`2023-03-15-preview`| The Azure OpenAI version |
-| YFILES_SERVER_URL | no |`http://localhost:3835`| The URL to the yFiles graph service for embedding auto-generated diagrams. |
-| NODE_EXTRA_CA_CERTS | no |`/file/to/cert.crt`| a link to a certificate file to pass to node.js for authenticating self-signed certificates |
-| MATTERMOST_BOTNAME | no |`"@chatgpt"`| the name of the bot user in Mattermost, defaults to '@chatgpt' |
-| DEBUG_LEVEL | no |`TRACE`| a debug level used for logging activity, defaults to `INFO`|
+| YFILES_SERVER_URL | no |`http://localhost:3835`| The URL to the yFiles graph service for embedding auto-generated diagrams. |
+| NODE_EXTRA_CA_CERTS | no |`/file/to/cert.crt`| a link to a certificate file to pass to node.js for authenticating self-signed certificates |
+| MATTERMOST_BOTNAME | no |`"@chatgpt"`| the name of the bot user in Mattermost, defaults to '@chatgpt' |
+| MAX_PROMPT_TOKENS | no |`2000`| Maximum token count of the prompt passed to the OpenAI API. default is 2000 |
+| DEBUG_LEVEL | no |`TRACE`| a debug level used for logging activity, defaults to `INFO`|
 
 > **Note**
 > The `YFILES_SERVER_URL` is used for automatically converting text information created by the bot into diagrams.
@@ -53,17 +54,17 @@ or when [running the docker image](#using-the-ready-made-image) or when configur
 * Splitting message that are too long
 * Support GitLab AutoDevOps by test dummy
 
-## Using the docker image
+## Using the ready-made image
 
-Use your builted and pushed image.
+Use the prebuilt image from [`gitlab.on-o.com/docker/chatgpt-mattermost-bot/release`](https://gitlab.on-o.com/Docker/chatgpt-mattermost-bot/container_registry/150)
```