We release patches for security vulnerabilities. The following versions are currently supported with security updates:
| Version | Supported |
|---|---|
| 0.0.x | ✅ |
The Prompt Evaluator team takes security bugs seriously. We appreciate your efforts to responsibly disclose your findings and will make every effort to acknowledge your contributions.
Please do not report security vulnerabilities through public GitHub issues.
Instead, please report them via email to syamsasi99@gmail.com.
Please include the following information in your report:
- Type of vulnerability (e.g., XSS, SQL injection, command injection, etc.)
- Full paths of source file(s) related to the manifestation of the issue
- Location of the affected source code (tag/branch/commit or direct URL)
- Step-by-step instructions to reproduce the issue
- Proof-of-concept or exploit code (if possible)
- Impact of the issue, including how an attacker might exploit it
- Your contact information for follow-up questions
When you report a security issue, you can expect:
- Acknowledgment: We will acknowledge receipt of your report within 48 hours
- Assessment: We will assess the issue and determine its severity and impact
- Updates: We will keep you informed about our progress
- Resolution: We will work on a fix and release it as soon as possible
- Credit: We will credit you in the release notes (unless you prefer to remain anonymous)
Our target response timeline is:

- Initial response: Within 48 hours
- Severity assessment: Within 7 days
- Fix development: Depends on complexity (typically 7-30 days)
- Security advisory: Published after fix is released
For API keys:

- Never commit API keys to version control
- Use environment variables for sensitive data
- Rotate API keys regularly
- Use API keys with minimal required permissions
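The environment-variable practice can be sketched as follows (the variable name `OPENAI_API_KEY` and the helper `getApiKey` are illustrative, not part of Prompt Evaluator):

```javascript
// Read an API key from the environment rather than hardcoding it.
// The variable name used here is only an example.
function getApiKey(envVar = "OPENAI_API_KEY") {
  const key = process.env[envVar];
  if (!key) {
    throw new Error(
      `Missing ${envVar}; export it in your shell or a local .env file that is never committed.`
    );
  }
  return key;
}
```

Keeping keys out of source files means a leaked repository never leaks the credential, and rotating a key only requires updating the environment, not the code.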
For general usage and data privacy:

- Review generated YAML before running evaluations
- Be cautious with custom providers and verify endpoints
- Monitor API usage and costs
- Keep the application updated to the latest version
- Be aware of what is sent to LLM APIs: prompts and datasets go to external providers
- Avoid including PII (Personally Identifiable Information) in test datasets
- Review LLM provider privacy policies before use
- Use security testing features to detect potential vulnerabilities in your prompts
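To illustrate the PII point, a naive redaction pass over dataset text might look like this (the patterns and the `redactPII` helper are illustrative and far from exhaustive; this is not a built-in Prompt Evaluator feature):

```javascript
// Scrub a few common PII patterns from text before it leaves the machine.
// These regexes are examples only; real PII detection needs much more care.
const PII_PATTERNS = [
  { name: "email", re: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { name: "ssn", re: /\b\d{3}-\d{2}-\d{4}\b/g },
];

function redactPII(text) {
  return PII_PATTERNS.reduce(
    (out, { name, re }) => out.replace(re, `[REDACTED_${name.toUpperCase()}]`),
    text
  );
}
```

Running a pass like this over a test dataset before an evaluation reduces what an external provider ever sees.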
For installation:

- Download installers only from official sources:
  - GitHub Releases: https://github.com/syamsasi99/prompt-evaluator/releases
- Verify installer integrity if checksums are provided
- Keep your operating system updated
- Use antivirus software on Windows
Prompt Evaluator is built with Electron. We follow Electron security best practices:
- Context isolation is enabled
- Node.js integration is disabled in renderer
- Remote module is disabled
- WebSecurity is enabled
- IPC communication is validated
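A sketch of what those practices look like in Electron's `webPreferences`, plus a minimal allow-list check for IPC channels (the channel names and helper below are illustrative, not Prompt Evaluator's actual channels):

```javascript
// The hardened renderer settings described above, as passed to BrowserWindow.
const secureWebPreferences = {
  contextIsolation: true,  // renderer is isolated from Electron internals
  nodeIntegration: false,  // no Node.js APIs in the renderer
  webSecurity: true,       // same-origin policy enforced
};

// Validate IPC traffic by rejecting messages on unknown channels
// before any handler runs. Channel names here are examples.
const ALLOWED_CHANNELS = new Set(["project:open", "project:save", "eval:run"]);

function isAllowedIpcChannel(channel) {
  return ALLOWED_CHANNELS.has(channel);
}
```

An allow-list like this, checked in the main process, prevents a compromised renderer from invoking arbitrary privileged handlers.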
We regularly monitor and update dependencies to address known vulnerabilities:
`npm audit`

When using LLM providers:
- API keys are stored locally on your system
- Prompts and datasets are sent to external LLM APIs
- Responses are received and displayed
- We recommend reviewing provider terms of service and privacy policies
The application requires file system access for:
- Saving/loading project files
- Storing evaluation history
- Running Promptfoo CLI
All file operations are restricted to user-selected directories.
Prompt Evaluator includes OWASP LLM security testing capabilities:
- LLM01: Prompt Injection detection
- LLM02: Insecure Output Handling tests
- LLM03: Training Data Poisoning checks
- LLM06: Sensitive Information Disclosure detection
- LLM07: Insecure Plugin Design tests
- LLM09: Overreliance checks
These tests help you identify potential security vulnerabilities in your prompts and LLM applications.
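As a toy illustration of what an LLM01-style check looks for (the real tests are far more thorough; these patterns and the `looksLikeInjection` helper are examples only):

```javascript
// Naive keyword heuristic for flagging possible prompt injection in input.
// Production scanners use many more signals than a few regexes.
const INJECTION_MARKERS = [
  /ignore (all )?(previous|prior) instructions/i,
  /disregard (the )?system prompt/i,
  /you are now/i,
];

function looksLikeInjection(text) {
  return INJECTION_MARKERS.some((re) => re.test(text));
}
```

Even a crude filter like this catches the most common copy-pasted injection phrases, which is why layered checks remain useful alongside structured testing.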
Built-in validation includes:

- Input validation on all user-provided data
- YAML configuration validation before execution
- Provider configuration validation
- Dataset schema validation
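A minimal sketch of what configuration validation might check on a parsed config (the field names follow Promptfoo's general shape, `prompts` and `providers`, but the specific checks and the `validateEvalConfig` helper are illustrative):

```javascript
// Check a parsed evaluation config before handing it to the CLI.
// Returns { valid, errors } rather than throwing, so callers can report all issues.
function validateEvalConfig(config) {
  const errors = [];
  if (!Array.isArray(config.prompts) || config.prompts.length === 0) {
    errors.push("config.prompts must be a non-empty array");
  }
  if (!Array.isArray(config.providers) || config.providers.length === 0) {
    errors.push("config.providers must be a non-empty array");
  }
  for (const p of config.providers ?? []) {
    if (typeof p === "string" && !/^[\w.-]+:/.test(p)) {
      errors.push(`provider "${p}" is missing an id prefix (e.g. "openai:")`);
    }
  }
  return { valid: errors.length === 0, errors };
}
```

Collecting every error before failing gives the user one actionable report instead of a fix-rerun loop.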
We follow a coordinated disclosure process:
- Reporter notifies us of the vulnerability
- We confirm and assess the vulnerability
- We develop and test a fix
- We release the fix in a new version
- We publish a security advisory with details
- Public disclosure after users have had time to update (typically 30 days)
Security updates will be:
- Released as soon as possible
- Announced in release notes
- Highlighted in GitHub Security Advisories
- Communicated through project channels
We currently do not have a bug bounty program, but we deeply appreciate security researchers who responsibly disclose vulnerabilities. We will:
- Publicly acknowledge your contribution (if desired)
- Credit you in release notes
- Keep you informed throughout the process
For security concerns, contact:
Email: syamsasi99@gmail.com
For general questions or non-security issues, please use:
- GitHub Issues: https://github.com/syamsasi99/prompt-evaluator/issues
- GitHub Discussions: https://github.com/syamsasi99/prompt-evaluator/discussions
Thank you for helping keep Prompt Evaluator and its users safe!