The vscode-ai-chat-analyzer is a Visual Studio Code extension designed to analyze AI chat responses after completion and suggest updates to the AGENTS.md file based on the accuracy of the responses. This tool aims to enhance the quality of AI interactions by providing actionable insights for improving agent responses.
- Interactive Chat Participant: Use `@agentsfeedback` in VS Code chat to provide natural language feedback
- Slash Commands: Quick actions with `/learn`, `/stop`, `/remember`, and `/show` commands
- Conversation History: Automatically includes previous turns from the conversation for context
- Reference Support: Include `#selection`, `#file`, and `#editor` references in your feedback
- Intelligent Analysis: Uses LLM-powered analyzers to generate actionable suggestions for improving AGENTS.md
- Multiple Analyzer Options:
  - GitHub Copilot (default): Uses VS Code's integrated Copilot models (`gpt-4o-mini` by default)
  - LM Studio: Uses local models for privacy-focused analysis
- Feedback Integration: Automatically captures upvote/downvote feedback from chat responses
- Configurable: Switch analyzers and models on the fly without reloading
- Few-Shot Prompting: Uses static examples to guide LLM suggestions
- Token Limit Protection: Validates that prompts don't exceed model context windows
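The token-limit check above can be sketched as a small pure helper. The 4-characters-per-token heuristic, the function names, and the window sizes below are illustrative assumptions, not the extension's actual implementation:

```typescript
// Rough token estimate: ~4 characters per token (a common heuristic,
// not a real tokenizer -- an assumption for illustration only).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Hypothetical context-window sizes per model; real limits vary.
const CONTEXT_WINDOWS: Record<string, number> = {
  "gpt-4o-mini": 128_000,
  "local-default": 8_192,
};

// Returns true if the assembled prompt fits the model's window,
// leaving headroom for the model's own response.
function fitsContextWindow(
  prompt: string,
  model: string,
  responseBudget = 1024
): boolean {
  const limit = CONTEXT_WINDOWS[model] ?? 4096;
  return estimateTokens(prompt) + responseBudget <= limit;
}
```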
- Clone the repository:

  ```
  git clone https://github.com/Keep-Social-Dev/GP-Feed-Service-Post.git
  ```

- Navigate to the project directory:

  ```
  cd vscode-ai-chat-analyzer
  ```

- Install the dependencies:

  ```
  npm install
  ```
The primary way to use this extension is through the `@agentsfeedback` chat participant in VS Code's chat panel:

```
@agentsfeedback always create unit tests when adding new functions
@agentsfeedback stop hallucinating JavaScript syntax in Go files
@agentsfeedback remember to use dependency injection in this project
```
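The few-shot prompting listed in the features can be illustrated with a small prompt builder that prepends static examples before the user's new feedback. The example pairs and names below are hypothetical stand-ins, not the extension's actual prompts:

```typescript
// Static few-shot examples pairing user feedback with the kind of
// AGENTS.md suggestion the analyzer should produce (illustrative only).
const FEW_SHOT_EXAMPLES: Array<{ feedback: string; suggestion: string }> = [
  {
    feedback: "always create unit tests when adding new functions",
    suggestion: "- Add unit tests alongside every new function.",
  },
  {
    feedback: "stop hallucinating JavaScript syntax in Go files",
    suggestion: "- Use idiomatic Go syntax only; never mix in JavaScript.",
  },
];

// Builds the analyzer prompt: examples first, then the new feedback.
function buildPrompt(feedback: string): string {
  const shots = FEW_SHOT_EXAMPLES.map(
    (ex) => `Feedback: ${ex.feedback}\nSuggestion: ${ex.suggestion}`
  ).join("\n\n");
  return `${shots}\n\nFeedback: ${feedback}\nSuggestion:`;
}
```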
| Command | Usage | Purpose |
|---|---|---|
| `/learn` | `@agentsfeedback /learn always create tests` | Reinforce positive patterns |
| `/stop` | `@agentsfeedback /stop suggesting JS in Go` | Correct bad behaviors |
| `/remember` | `@agentsfeedback /remember use pytest not unittest` | Add project-specific rules |
| `/show` | `@agentsfeedback /show` | View current AGENTS.md content |
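Internally, a request like `@agentsfeedback /stop suggesting JS in Go` resolves to a command plus free-text feedback. A minimal parser for that shape might look like the following; `parseFeedback` and its return type are hypothetical sketches, not the extension's real API (in VS Code's chat API, the resolved slash command actually arrives on `request.command`):

```typescript
type SlashCommand = "learn" | "stop" | "remember" | "show";

interface ParsedFeedback {
  command: SlashCommand | null; // null when no slash command was given
  text: string;                 // remaining natural-language feedback
}

const KNOWN_COMMANDS: SlashCommand[] = ["learn", "stop", "remember", "show"];

// Splits "/stop suggesting JS in Go" into
// { command: "stop", text: "suggesting JS in Go" }.
function parseFeedback(prompt: string): ParsedFeedback {
  const trimmed = prompt.trim();
  const match = trimmed.match(/^\/(\w+)\s*(.*)$/s);
  if (match && KNOWN_COMMANDS.includes(match[1] as SlashCommand)) {
    return { command: match[1] as SlashCommand, text: match[2].trim() };
  }
  return { command: null, text: trimmed };
}
```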
You can include additional context using VS Code's reference syntax:
- `#selection` - Include currently selected code
- `#file:path/to/file.ts` - Reference a specific file
- `#editor` - Include the current editor content
Example:
```
@agentsfeedback /stop #selection This pattern causes memory leaks, always use cleanup functions
```
The chat participant automatically includes previous turns from the same @agentsfeedback conversation, allowing for follow-up refinements:
```
@agentsfeedback always validate input parameters
@agentsfeedback also add specific validation for email formats
```
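Folding prior turns into the analyzer prompt can be sketched as below. The `Turn` shape is a simplified stand-in for VS Code's `ChatContext.history` (which yields `ChatRequestTurn`/`ChatResponseTurn` objects), and the flattening function is illustrative, not the extension's actual code:

```typescript
// Simplified stand-in for a chat turn; the real VS Code API exposes
// ChatRequestTurn / ChatResponseTurn objects via context.history.
interface Turn {
  role: "user" | "assistant";
  text: string;
}

// Flattens prior turns plus the current request into one prompt string,
// keeping only the most recent `maxTurns` turns to bound prompt size.
function buildConversationPrompt(
  history: Turn[],
  current: string,
  maxTurns = 6
): string {
  const recent = history.slice(-maxTurns);
  const lines = recent.map((t) => `${t.role}: ${t.text}`);
  lines.push(`user: ${current}`);
  return lines.join("\n");
}
```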
- Open Visual Studio Code
- Go to the Extensions view (`Ctrl+Shift+X`)
- Search for `vscode-ai-chat-analyzer` and install it
- The extension activates automatically and registers the `@agentsfeedback` chat participant
Before the extension is published to the VS Code Marketplace, you can build and test it locally:
1. Install dependencies:

   ```
   npm install
   ```

2. Compile the TypeScript code:

   ```
   npm run compile
   ```

3. Run the extension in development mode:

   - Open this project in VS Code
   - Press `F5` (or Run > Start Debugging); this opens a new "Extension Development Host" window with your extension loaded
   - In the new window, you can test your extension commands via the Command Palette (`Ctrl+Shift+P`)

4. Run tests:

   ```
   # Unit tests (with mocks)
   npm test

   # Integration tests (requires Copilot/LM Studio)
   npm run test:integration
   ```

5. Watch mode for development:

   ```
   npm run watch
   ```

   This will automatically recompile your code when you make changes.
Configure the analyzer in VS Code settings:

```json
{
  "aiChatAnalyzer.analyzer": "copilot", // or "lmstudio"
  "aiChatAnalyzer.copilotModel": "gpt-5-mini", // Model family for Copilot
  "aiChatAnalyzer.lmStudioEndpoint": "http://localhost:1234",
  "aiChatAnalyzer.lmStudioModel": "" // Optional: specific model name; otherwise uses the currently loaded model. Suggested: ibm/granite-4-h-tiny
}
```

GitHub Copilot (Recommended):
- Uses VS Code's integrated Language Model API
- Requires GitHub Copilot subscription
- Default model: `gpt-4o-mini`
- Fresh inference on each request (no caching)
LM Studio (Privacy-focused):
- Uses local models running in LM Studio
- No external API calls or data sharing
- Requires LM Studio running with server mode enabled (found in the "Developer" tab in LM Studio's sidebar)
- No caching (fresh inference each time)
- Configure endpoint and model in settings
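LM Studio's server mode exposes an OpenAI-compatible HTTP API, so the analyzer can talk to it with an ordinary chat-completions request. The sketch below builds such a request under that assumption; `/v1/chat/completions` is the standard OpenAI-compatible route, while the function name and system prompt are illustrative, not the extension's actual code:

```typescript
interface ChatMessage {
  role: "system" | "user";
  content: string;
}

// Builds an OpenAI-compatible chat-completions payload for LM Studio.
// Leaving `model` empty lets LM Studio use whichever model is loaded.
function buildLmStudioRequest(prompt: string, model = "") {
  const messages: ChatMessage[] = [
    { role: "system", content: "Suggest concise AGENTS.md improvements." },
    { role: "user", content: prompt },
  ];
  return {
    url: "http://localhost:1234/v1/chat/completions",
    body: { model, messages, temperature: 0.2, stream: false },
  };
}

// Usage (requires LM Studio running in server mode):
// const { url, body } = buildLmStudioRequest("stop suggesting JS in Go");
// const res = await fetch(url, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```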
To install the extension locally without publishing to the marketplace:
1. Install `vsce` (VS Code Extension Manager):

   ```
   npm install -g @vscode/vsce
   ```

2. Package the extension:

   ```
   vsce package
   ```

   This creates a `.vsix` file in your project directory.

3. Install the VSIX file:

   - In VS Code, go to the Extensions view (`Ctrl+Shift+X`)
   - Click the `...` menu at the top
   - Select "Install from VSIX..."
   - Choose the generated `.vsix` file
To contribute to the development of this extension:
- Clone the repository and set up your development environment as described above.
- Make your changes and ensure the tests in the `test` directory pass.
- Submit a pull request with a clear description of your changes.
This project is licensed under the MIT License. See the LICENSE file for details.
- Thanks to the contributors and the open-source community for their support and resources.