
Commit 029231c

🔧 chore(devcontainer.json, README.md, package.json, CohereAdapter.ts, GoogleGeminiAdapter.ts, botservice.ts, openai-wrapper.ts, postMessage.ts): update dependencies and improve code
Updated the development environment Docker image to the latest 1-22-bookworm and upgraded dependency packages to their latest versions, enhancing security and performance. Added comments and function descriptions to improve code readability and maintainability. Improved token management in OpenAI API calls and organized log outputs.

@anthropic-ai/sdk ^0.21.1 → ^0.32.1
@eslint/js ^9.3.0 → ^9.16.0
@google/generative-ai ^0.7.1 → ^0.21.0
@mattermost/client ^9.6.0 → ^10.2.0
@mattermost/types ^9.6.0 → ^10.2.0
@swc/core ^1.4.14 → ^1.10.1
@swc/helpers ^0.5.10 → ^0.5.15
@types/node ^20.12.7 → ^22.10.2
@types/node-fetch ^2.6.11 → ^2.6.12
@types/ws ^8.5.11 → ^8.5.13
cohere-ai ^7.9.4 → ^7.15.0
debug-level 3.1.4 → 3.2.1
esbuild ^0.20.2 → ^0.24.0
eslint ^9.0.0 → ^9.16.0
form-data ^4.0.0 → ^4.0.1
openai ^4.35.0 → ^4.76.1
prettier ^3.3.3 → ^3.4.2
sharp ^0.33.3 → ^0.33.5
textlint ^14.0.4 → ^14.4.0
textlint-rule-preset-jtf-style ^2.3.14 → ^3.0.0
tsx ^4.7.2 → ^4.19.2
typescript ^5.5.3 → ^5.7.2
typescript-eslint ^7.7.0 → ^8.18.0
vitest ^1.5.2 → ^2.1.8
1 parent f50687b commit 029231c
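
The "improved token management in OpenAI API calls" mentioned above comes down to picking the token-limit option per model family, as the dist/botservice.mjs hunks further down show: o1-family models take max_completion_tokens, earlier chat models take max_tokens. A minimal sketch of that pattern against the openai v4 client; the createCompletion helper and the MAX_TOKENS value are illustrative, not taken from the repository:

```typescript
import OpenAI from 'openai';

// Assumes OPENAI_API_KEY is set in the environment; MAX_TOKENS is an illustrative value.
const openai = new OpenAI();
const MAX_TOKENS = 2000;

// Hypothetical helper mirroring the branch added in dist/botservice.mjs:
// o1-family models take max_completion_tokens, older chat models take max_tokens.
async function createCompletion(
  model: string,
  messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[]
) {
  const options: OpenAI.Chat.Completions.ChatCompletionCreateParamsNonStreaming = {
    model,
    messages,
  };
  if (model.startsWith('o1')) {
    options.max_completion_tokens = MAX_TOKENS;
  } else {
    options.max_tokens = MAX_TOKENS;
  }
  return openai.chat.completions.create(options);
}
```

The diff performs the same check with currentModel.indexOf("o1") === 0 and additionally accumulates usage.prompt_tokens_details.cached_tokens into the reported totals.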

10 files changed, +4161 -2223 lines changed


.devcontainer/devcontainer.json

Lines changed: 2 additions & 3 deletions
@@ -3,8 +3,7 @@
 {
 "name": "Node.js & TypeScript",
 // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
-"image": "mcr.microsoft.com/devcontainers/typescript-node:1-20-bookworm"
-
+"image": "mcr.microsoft.com/devcontainers/typescript-node:1-22-bookworm"
 // Features to add to the dev container. More info: https://containers.dev/features.
 // "features": {},
 

@@ -22,7 +21,7 @@
 }
 // After pulling a new image,
 // ID=$(docker ps |awk '$2~/^vsc-/{print $1}')
-// docker exec -u=root $ID sh -c "apt update && apt install git-secrets connect-proxy"
+// docker exec -u=root $ID sh -c "apt update && apt install git-secrets connect-proxy netcat-openbsd"
 // docker cp ~/.ssh/config $ID:/home/node/.ssh/
 // are required
 // Check the keys with ssh-add -l

README.md

Lines changed: 3 additions & 3 deletions
@@ -20,7 +20,7 @@
 
 ![A chat window in Mattermost showing the chat between the OpenAI bot and "yGuy"](./mattermost-chat.png)
 
-The bot can talk to you like a regular mattermost user. It's like having chat.openai.com built collaboratively built into Mattermost!
+The bot can talk to you like a regular mattermost user. It's like having ``chat.openai.com`` built collaboratively built into Mattermost!
 But that's not all, you can also use it to generate images via Dall-E or diagram visualizations via a yFiles plugin!
 
 Here's how to get the bot running - it's easy if you have a Docker host.

@@ -65,7 +65,7 @@ or when [running the docker image](#using-the-ready-made-docker-image) or when c
 | COHERE_API_KEY | no | `0123456789abcdefghijklmno` | The Cohere API key to authenticate. If OPENAI_API_KEY is also set, the original OpenAI is used for vision or image generation. |
 | GOOGLE_API_KEY | no | `0123456789abcdefghijklmno` | The Gemini API key to authenticate. If OPENAI_API_KEY is also set, the original OpenAI is used for vision or image generation. Tested model is only 'gemini-1.5-pro-latest'' |
 | YFILES_SERVER_URL | no | `http://localhost:3835` | The URL to the yFiles graph service for embedding auto-generated diagrams. |
-| NODE_EXTRA_CA_CERTS | no | `/file/to/cert.crt` | a link to a certificate file to pass to node.js for authenticating self-signed certificates |
+| NODE_EXTRA_CA_CERTS | no | `/file/to/cert.crt` | a link to a certificate file to pass to ``node.js`` for authenticating self-signed certificates |
 | MATTERMOST_BOTNAME | no | `"@chatgpt"` | the name of the bot user in Mattermost, defaults to '@chatgpt' |
 | PLUGINS | no | `graph-plugin, image-plugin` | The enabled plugins of the bot. By default all plugins (grpah-plugin and image-plugin) are enabled. |
 | DEBUG_LEVEL | no | `TRACE` | a debug level used for logging activity, defaults to `INFO` |

@@ -219,7 +219,7 @@ docker compose down
 
 
 ## Deploy to Kubernetes with Helm
-The chatgpt-mattermost-bot chart deploys a containerized chatgpt-mattermost-bot instance which will connect to a running mattermost container in the same kubernetes cluster. Chart uses 'mattermost-team-edition' and the 'mattermost' namespace by default. Uses environment variables MATTERMOST_TOKEN and OPENAI_API_KEY.
+The chatgpt-mattermost-bot chart deploys a containerized chatgpt-mattermost-bot instance which will connect to a running mattermost container in the same Kubernetes cluster. Chart uses 'mattermost-team-edition' and the 'mattermost' namespace by default. Uses environment variables MATTERMOST_TOKEN and OPENAI_API_KEY.
 ```bash
 helm upgrade chatgpt-mattermost-bot ./helm/chatgpt-mattermost-bot \
 --create-namespace \

dist/botservice.mjs

Lines changed: 72 additions & 34 deletions
@@ -243,7 +243,9 @@ var CohereAdapter = class extends AIAdapter {
 } else {
 return {
 role: "assistant",
-content: chat.text
+content: chat.text,
+refusal: null
+// refusal message from the assistant
 };
 }
 }
@@ -261,7 +263,9 @@ var CohereAdapter = class extends AIAdapter {
 const message = {
 role: "assistant",
 content: null,
-tool_calls: openAItoolCalls
+tool_calls: openAItoolCalls,
+refusal: null
+// refusal message from the assistant
 };
 return message;
 }
@@ -365,7 +369,8 @@ var CohereAdapter = class extends AIAdapter {
 // src/adapters/GoogleGeminiAdapter.ts
 import {
 FinishReason,
-GoogleGenerativeAI
+GoogleGenerativeAI,
+SchemaType
 } from "@google/generative-ai";
 import { Log as Log4 } from "debug-level";
 Log4.options({ json: true, colors: true });
@@ -487,7 +492,9 @@ var GoogleGeminiAdapter = class extends AIAdapter {
 role: "assistant",
 //this.convertRoleGeminitoOpenAI(candidate.content.role),
 content,
-tool_calls: toolCalls
+tool_calls: toolCalls,
+refusal: null
+// refusal message from the assistant
 }
 });
 });
@@ -524,7 +531,9 @@ var GoogleGeminiAdapter = class extends AIAdapter {
 //example:
 };
 }
-const parameters = tool.function.parameters;
+let parameters = tool.function.parameters;
+this.convertType(tool, parameters);
+parameters = this.workaroundObjectNoParameters(parameters);
 functionDeclarations.push({
 name: tool.function.name,
 description: tool.function.description,
@@ -533,10 +542,31 @@ var GoogleGeminiAdapter = class extends AIAdapter {
 });
 return geminiTool;
 }
+workaroundObjectNoParameters(parameters) {
+if (parameters?.type === SchemaType.OBJECT && Object.keys(parameters?.properties).length === 0) {
+parameters = void 0;
+}
+return parameters;
+}
+convertType(tool, parameters) {
+const typeMapping = {
+object: SchemaType.OBJECT,
+string: SchemaType.STRING,
+number: SchemaType.NUMBER,
+integer: SchemaType.INTEGER,
+boolean: SchemaType.BOOLEAN,
+array: SchemaType.ARRAY
+};
+const paramType = tool.function.parameters?.type;
+if (paramType && typeMapping[paramType]) {
+parameters.type = typeMapping[paramType];
+}
+}
 createContents(messages) {
 const currentMessages = [];
 messages.forEach(async (message) => {
 switch (message.role) {
+// To Google ["user", "model", "function", "system"]
 case "system":
 currentMessages.push({
 role: "user",
@@ -558,6 +588,7 @@ var GoogleGeminiAdapter = class extends AIAdapter {
 break;
 case "tool":
 case "function":
+//Deprecated
 default:
 log3.error(`getChatHistory(): ${message.role} not yet support.`, message);
 break;
@@ -762,29 +793,12 @@ function registerChatPlugin(plugin) {
 });
 }
 async function continueThread(messages, msgData) {
-openAILog.trace(
-"messsages: ",
-JSON.parse(JSON.stringify(messages)).map(
-// deep copy via serialization
-(message) => {
-if (typeof message.content !== "string") {
-message.content?.map((content) => {
-const url = shortenString(content.image_url?.url);
-if (url) {
-;
-content.image_url.url = url;
-}
-return content;
-});
-}
-return message;
-}
-)
-);
+logMessages(messages);
 const NO_MESSAGE = "Sorry, but it seems I found no valid response.";
+const promptTokensDetails = { cached_tokens: 0 };
 let aiResponse = {
 message: NO_MESSAGE,
-usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 },
+usage: { prompt_tokens: 0, completion_tokens: 0, prompt_tokens_details: promptTokensDetails, total_tokens: 0 },
 model: ""
 };
 let maxChainLength = 7;
@@ -797,6 +811,7 @@ async function continueThread(messages, msgData) {
 if (usage && aiResponse.usage) {
 aiResponse.usage.prompt_tokens += usage.prompt_tokens;
 aiResponse.usage.completion_tokens += usage.completion_tokens;
+aiResponse.usage.prompt_tokens_details.cached_tokens += usage?.prompt_tokens_details?.cached_tokens ? usage.prompt_tokens_details.cached_tokens : 0;
 aiResponse.usage.total_tokens += usage.total_tokens;
 }
 if (responseMessage.function_call) {
@@ -879,6 +894,24 @@ async function continueThread(messages, msgData) {
 }
 return aiResponse;
 }
+function logMessages(messages) {
+openAILog.trace(
+"messages: ",
+// deep copy via serialization
+JSON.parse(JSON.stringify(messages)).map((message) => {
+if (typeof message.content !== "string") {
+message.content?.forEach((content) => {
+const url = shortenString(content.image_url?.url);
+if (url) {
+;
+content.image_url.url = url;
+}
+});
+}
+return message;
+})
+);
+}
 async function createChatCompletion(messages, functions2 = void 0) {
 let useTools = true;
 let currentOpenAi = openai;
@@ -902,19 +935,19 @@ async function createChatCompletion(messages, functions2 = void 0) {
 const chatCompletionOptions = {
 model: currentModel,
 messages,
-max_tokens: MAX_TOKENS,
-//TODO: derive the maximum from the message token count. Responses get longer, but things like translation finish in one pass
 temperature
 };
+if (currentModel.indexOf("o1") === 0) {
+chatCompletionOptions.max_completion_tokens = MAX_TOKENS;
+} else {
+chatCompletionOptions.max_tokens = MAX_TOKENS;
+}
 if (functions2 && useTools) {
 if (model.indexOf("gpt-3") >= 0) {
 chatCompletionOptions.functions = functions2;
 chatCompletionOptions.function_call = "auto";
 } else {
-chatCompletionOptions.tools = [];
-functions2?.forEach((funciton) => {
-chatCompletionOptions.tools?.push({ type: "function", function: funciton });
-});
+chatCompletionOptions.tools = functions2.map((func) => ({ type: "function", function: func }));
 chatCompletionOptions.tool_choice = "auto";
 }
 }
@@ -1390,7 +1423,11 @@ Error: ${e.message}`;
 let message = `
 ${SYSTEM_MESSAGE_HEADER} `;
 if (usage) {
-message += ` Prompt:${usage.prompt_tokens} Completion:${usage.completion_tokens} Total:${usage.total_tokens}`;
+message += ` Prompt:${usage.prompt_tokens} Completion:${usage.completion_tokens} `;
+if (usage.prompt_tokens_details?.cached_tokens) {
+message += `Cached:${usage.prompt_tokens_details.cached_tokens} `;
+}
+message += `Total:${usage.total_tokens}`;
 }
 if (model2) {
 message += ` Model:${model2}`;
@@ -1653,10 +1690,11 @@ async function isMessageIgnored(msgData, meId, previousPosts) {
 }
 function isUnuseImages(meId, previousPosts) {
 for (let i = previousPosts.length - 1; i >= 0; i--) {
-if (previousPosts[i].props.bot_images === "stopped") {
+const post = previousPosts[i];
+if (post.props.bot_images === "stopped") {
 return true;
 }
-if (previousPosts[i].user_id === meId || previousPosts[i].message.includes(name)) {
+if (post.user_id === meId || post.message.includes(name)) {
 return false;
 }
 }
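
As a reading aid for the GoogleGeminiAdapter hunks above: the new convertType and workaroundObjectNoParameters methods map JSON-Schema type strings from OpenAI-style tool definitions onto the SchemaType enum of @google/generative-ai and drop empty object schemas. A minimal standalone sketch of the same conversion; the toGeminiParameters helper and the JsonSchemaLike shape are illustrative names, not part of the repository:

```typescript
import { SchemaType } from '@google/generative-ai';

// JSON-Schema type strings (as used in OpenAI tool definitions) mapped to Gemini's enum.
const typeMapping: Record<string, SchemaType> = {
  object: SchemaType.OBJECT,
  string: SchemaType.STRING,
  number: SchemaType.NUMBER,
  integer: SchemaType.INTEGER,
  boolean: SchemaType.BOOLEAN,
  array: SchemaType.ARRAY,
};

// Minimal shape of an OpenAI-style parameters schema (illustrative, not an SDK type).
interface JsonSchemaLike {
  type?: string;
  properties?: Record<string, unknown>;
  required?: string[];
}

// Convert the type string to the Gemini enum and drop empty object schemas,
// mirroring the convertType/workaroundObjectNoParameters pair added in the diff above.
function toGeminiParameters(
  parameters?: JsonSchemaLike
): (JsonSchemaLike & { type?: SchemaType }) | undefined {
  if (!parameters) return undefined;
  const mapped = parameters.type ? typeMapping[parameters.type] : undefined;
  if (mapped === SchemaType.OBJECT && Object.keys(parameters.properties ?? {}).length === 0) {
    // Gemini rejects an OBJECT schema with no properties, so omit parameters entirely.
    return undefined;
  }
  return { ...parameters, type: mapped };
}
```

As in the diff, an object schema with no properties is omitted entirely rather than sent to Gemini as an empty OBJECT.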
