Implement token counting on server side (✓ Sandbox Passed)#20
sweep-ai-deprecated[bot] wants to merge 7 commits into endless from
Conversation
--------- Co-authored-by: sweep-ai[bot] <128439645+sweep-ai[bot]@users.noreply.github.com>
Sandbox Executions
Check src/utils/tiktoken-server.ts with contents:
Check src/pages/api/generate.ts with contents:
Hi there! 👋 Thanks for opening a PR. 🎉 To get the most out of Senior Dev, please sign up in our Web App, connect your GitHub account, and add/join your organization CNSeniorious000. After that, you will receive code reviews beginning on your next opened PR. 🚀
❌ Deploy Preview for endless-chat failed.
Pull Request Report
Hey there! I've created a report for the pull request based on the commit history. Let's dive in!
Changes
Suggestions
```typescript
// Add this function at the top of the file
const trimMessages = (messages: ChatMessage[], maxTokens: number, minMessages: number) => {
  let trimmedMessages = [...messages];
  let tokenCount = countTokensServer(null, trimmedMessages).total;
  while (tokenCount > maxTokens && trimmedMessages.length > minMessages) {
    trimmedMessages.shift();
    tokenCount = countTokensServer(null, trimmedMessages).total;
  }
  return trimmedMessages;
};

// Update the post function
export const post: APIRoute = async ({ request }) => {
  // ...
  const trimmedMessages = trimMessages(messages, maxTokens, minMessages);
  // ...
};
```
Bugs
Improvements
Rating
I would rate the code a 7 out of 10. The code is generally readable and performs well. However, there are a few areas that could be refactored for better readability, such as extracting the token counting and trimming logic into a separate function. Overall, the code seems to be secure and error handling is in place. That's it for the pull request report! Let me know if you need any further assistance. Have a great day! 😄
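The trimming loop suggested in the report above can be exercised on its own. The sketch below is illustrative only: the `ChatMessage` shape and `countTokensServer` are hypothetical stand-ins, since the PR's real `countTokensServer` from `src/utils/tiktoken-server.ts` is not shown on this page; tokens are approximated here as one per four characters.

```typescript
// Hypothetical stand-ins — the real types/helpers live in the PR's source,
// which this page does not show.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Crude stand-in for the PR's token counter: ~1 token per 4 characters.
const countTokensServer = (_model: null, messages: ChatMessage[]) => {
  const total = messages.reduce((sum, m) => sum + Math.ceil(m.content.length / 4), 0);
  return { total };
};

const trimMessages = (messages: ChatMessage[], maxTokens: number, minMessages: number) => {
  let trimmedMessages = [...messages];
  let tokenCount = countTokensServer(null, trimmedMessages).total;
  // Drop the oldest message until the budget fits or the floor is reached.
  while (tokenCount > maxTokens && trimmedMessages.length > minMessages) {
    trimmedMessages.shift();
    tokenCount = countTokensServer(null, trimmedMessages).total;
  }
  return trimmedMessages;
};

const history: ChatMessage[] = [
  { role: "user", content: "a".repeat(400) },      // ~100 tokens
  { role: "assistant", content: "b".repeat(400) }, // ~100 tokens
  { role: "user", content: "c".repeat(40) },       // ~10 tokens
];

const trimmed = trimMessages(history, 120, 1);
console.log(trimmed.length); // → 2 (the oldest message was dropped)
```

Note the `minMessages` floor: even when the budget cannot be met, at least that many of the most recent messages survive, which keeps the request from becoming empty.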
❌ Deploy Preview for chat-for-free failed.
❌ Failed to make changes
Please join our Discord to report this issue.
Bugs
The regular expression is not closed.
Other Suggestions
- In src/pages/api/generate.ts, consider extracting the token counting and trimming logic into a separate function for better readability and maintainability. Here's a suggested refactoring:
```typescript
// Add this function at the top of the file
const trimMessages = (messages: ChatMessage[], maxTokens: number, minMessages: number) => {
  let trimmedMessages = [...messages];
  let tokenCount = countTokensServer(null, trimmedMessages).total;
  while (tokenCount > maxTokens && trimmedMessages.length > minMessages) {
    trimmedMessages.shift();
    tokenCount = countTokensServer(null, trimmedMessages).total;
  }
  return trimmedMessages;
};

// Update the post function
export const post: APIRoute = async ({ request }) => {
  // ...
  const trimmedMessages = trimMessages(messages, maxTokens, minMessages);
  // ...
};
```
❌ Failed to make changes
Please join our Discord to report this issue.
❌ Failed to make changes
Please join our Discord to report this issue.
Force-pushed 117c9ef to cd542b7
Force-pushed 6956558 to 7489e3f
PR Feedback: 👎
Description
This pull request implements token counting on the server side.
Summary
- Modified src/pages/api/generate.ts to include token counting logic.
- Modified src/utils/tiktoken-server.ts to handle token counting and initialization of Tiktoken.

Fixes #17.
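The summary says tiktoken-server.ts handles token counting, but its contents are not shown on this page. As a rough sketch of what such a helper could look like: everything below is assumed, not taken from the PR — `encode()` is a dependency-free stub that splits on whitespace (a real implementation would use a tiktoken encoder), and the per-message overhead constant is the kind of adjustment OpenAI's chat format requires, with the exact value varying by model.

```typescript
// Hypothetical sketch only — not the PR's actual src/utils/tiktoken-server.ts.
interface ChatMessage {
  role: string;
  content: string;
}

// Stub encoder: one "token" per whitespace-separated word. A real version
// would delegate to a tiktoken encoding instead.
const encode = (text: string): number[] =>
  text.split(/\s+/).filter(Boolean).map((_, i) => i);

// Chat-formatted messages carry a small fixed overhead on top of their
// encoded content; 4 is a commonly cited per-message constant, but it
// varies by model, so treat it as an assumption.
const TOKENS_PER_MESSAGE = 4;

const countTokensServer = (prompt: string | null, messages: ChatMessage[]) => {
  let total = prompt ? encode(prompt).length : 0;
  for (const m of messages)
    total += TOKENS_PER_MESSAGE + encode(m.content).length;
  return { total };
};

console.log(countTokensServer(null, [{ role: "user", content: "hello world" }]).total); // → 6
```

Doing this counting on the server, as the PR title describes, means the token budget is enforced even for clients that never ran the count themselves.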
🎉 Latest improvements to Sweep:
Use the rope library to refactor Python! Check out Large Language Models are Bad at Refactoring Code. To have Sweep refactor your code, try sweep: Refactor <your_file>.py!
💡 To get Sweep to edit this pull request, you can: