155 game of life llm feedback #159
Conversation
- Set default seed value to 42 for consistency
- Enhance prompt instructions for clarity
- Improve content extraction logic from text
- Ensure generation number is displayed and incremented

- Introduce `.state-block` class for styling state displays
- Add iteration labels above each state in the history
- Ensure each state is wrapped in code blocks for clarity
- Improve UI by grouping iteration labels with state blocks

- Insert a line break for better readability
- No functional changes made
- Related to code formatting improvements

- Extract content before </think> with removeThink function
- Simplify extractDisplayContent by using removeThink
- Comment out unnecessary console logs
- Add detailed log for message generation
- Ensure currentState is cleaned before processing
🚀 PR Preview deployed! Preview URL: https://pollinations.github.io/hive/pr-159/
Testing .mentat
Testing ai-chat (static HTML app)
Testing graphics-editor (static HTML app)
Testing llm-feedback (static HTML app)
Testing millionaire-game (static HTML app)
Testing placeholder-generator (Node.js app): installing dependencies; no tests found; building
Testing pollinations-image-show (Node.js app): installing dependencies; no tests found; building
Testing prompt-guessing-game (Node.js app): installing dependencies; no tests found; building
Testing svg-feedback (static HTML app): ❌ HTML tests failed for svg-feedback
Testing tarot-reader (Node.js app): installing dependencies; no tests found; building
I've reviewed the changes to the LLM feedback system. The code organization improvements and addition of thinking steps are good changes. I've suggested some enhancements around input validation, error handling, and LLM instruction clarity that would further improve the robustness and user experience of the system.
function extractDisplayContent(text) {
// Then check for content before </think>
function removeThink(text) {
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Good refactoring to extract removeThink into a separate function for better code organization and reusability. However, consider adding input validation to handle cases where text might be undefined or null.
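A guard along these lines would cover that case (a sketch only, assuming removeThink returns the text after the closing tag, as elsewhere in this PR):

```javascript
// Sketch: validate input before splitting on the closing tag.
// Assumes the existing behavior of returning the text after </think>.
function removeThink(text) {
  if (typeof text !== 'string') return '';
  const parts = text.split('</think>');
  return parts.length > 1 ? parts[1].trim() : text;
}
```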
console.log('Generating text... for message:', JSON.stringify(messages, null, 2));
const response = await fetch('https://text.pollinations.ai/', {
Consider adding more specific error handling around the API call. While you're logging the messages for debugging, it would be helpful to display user-friendly error messages in the UI when text generation fails. This would improve the user experience.
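One way to sketch this (the endpoint URL is from the PR; the status element id and request shape are assumptions, not part of this PR):

```javascript
// Sketch: wrap the text-generation call and surface failures to the user.
async function generateText(messages) {
  try {
    const response = await fetch('https://text.pollinations.ai/', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages }),
    });
    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }
    return await response.text();
  } catch (error) {
    console.error('Text generation failed:', error);
    // Hypothetical UI hook; guarded so the sketch also runs outside a browser.
    if (typeof document !== 'undefined') {
      const status = document.getElementById('status');
      if (status) status.textContent = 'Failed to generate text. Please try again.';
    }
    throw error;
  }
}
```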
- If there is an input state, transform it gradually according to the prompt.
- If there is no input state, use the prompt to come up with a starting state. Be creative
- Maintain consistent style and theme
- Before outputting anything spend some time thinking between <think> </think> tags. Please think step by step maybe decomposing the job.
The new thinking instruction is a good addition for better LLM responses. However, consider adding an example of the <think> tag usage in the instructions to make it clearer for the LLM. For example:
<think>
1. First, I'll analyze the current state
2. Then, I'll apply the transformation rules
3. Finally, I'll format the output
</think>
Overall the changes look good and improve code organization and reliability. Here are a few specific comments:
The changes look solid and improve both code quality and functionality. The separation of concerns with the new text processing functions makes the code more maintainable, and the improved logging will help with debugging. The changes appear ready to merge.
function extractDisplayContent(text) {
// Then check for content before </think>
function removeThink(text) {
Good refactoring to separate this into its own function. One suggestion would be to also handle the opening tag to be more robust - in case there's content after the closing tag. Something like:
const thinkPattern = /<think>[\s\S]*?<\/think>/;
return text.replace(thinkPattern, '').trim();
console.log('Generating text... for message:', JSON.stringify(messages, null, 2));
Good addition for debugging, but consider wrapping this in a debug flag or environment variable since it could log sensitive prompt data in production.
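Since this is a static HTML app with no process.env, a lightweight option is a query-string flag (a sketch; the `?debug` parameter is an assumption, not part of this PR):

```javascript
// Sketch: gate verbose logging behind a debug flag read from the URL,
// e.g. https://…/llm-feedback/?debug enables logging.
const DEBUG = typeof location !== 'undefined'
  ? new URLSearchParams(location.search).has('debug')
  : false;

function debugLog(...args) {
  if (DEBUG) console.log(...args);
}

// Usage: only logs when the page is loaded with ?debug set.
debugLog('Generating text... for message:', JSON.stringify([{ role: 'user' }], null, 2));
```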
- If there is an input state, transform it gradually according to the prompt.
- If there is no input state, use the prompt to come up with a starting state. Be creative
- Maintain consistent style and theme
- Before outputting anything spend some time thinking between <think> </think> tags. Please think step by step maybe decomposing the job.
Nice addition to the format instructions. This should help get more structured and thoughtful responses from the model.
There are some issues that need to be addressed; see the inline comments below.
These changes would improve error handling, input validation, and clarity of the LLM instructions. Additionally, the failing tests for svg-feedback need to be addressed, but since those files aren't part of this PR's changes, that should be handled in a separate issue.
function extractDisplayContent(text) {
// Then check for content before </think>
function removeThink(text) {
Consider adding input validation and error handling:
function removeThink(text) {
    if (!text) return '';
    try {
        const thinkMatch = text.split('</think>');
        if (thinkMatch.length > 1) {
            return thinkMatch[1].trim();
        }
        return text;
    } catch (error) {
        console.error('Error processing think tags:', error);
        return text;
    }
}
console.log('Generating text... for message:', JSON.stringify(messages, null, 2));
Add error handling and user feedback:
try {
    console.log('Generating text... for message:', JSON.stringify(messages, null, 2));
    const response = await fetch('https://text.pollinations.ai/', {
        // ... existing config
    });
    if (!response.ok) {
        throw new Error(`HTTP error! status: ${response.status}`);
    }
    // ... process response
} catch (error) {
    console.error('Text generation failed:', error);
    elements.start.textContent = 'Failed to generate text. Please try again.';
    throw error;
}

- If there is an input state, transform it gradually according to the prompt.
- If there is no input state, use the prompt to come up with a starting state. Be creative
- Maintain consistent style and theme
- Before outputting anything spend some time thinking between <think> </think> tags. Please think step by step maybe decomposing the job.
Consider adding an example in the instructions to make it clearer for the LLM:
Example:
<think>
1. First, I'll analyze the current state
2. Then, I'll apply the transformation rules
3. Finally, I'll format the output
</think>
This will help ensure consistent and well-structured responses.
No description provided.