Live Demo | Quick Start | FAQ | Chrome Extension
Development Docs | Vercel Deployment Guide | MCP Deployment Guide | DeepWiki Docs | ZRead Docs
Prompt Optimizer is a powerful AI prompt optimization tool that helps you write better AI prompts and improve the quality of AI outputs. It supports four usage methods: web application, desktop application, Chrome extension, and Docker deployment.
1. Hard-Nosed Reviewer: Turn Agreement into Useful Critique
Starting from a minimal English role prompt, optimization pushes a small model away from generic pushback and toward a clearer, more structured review that surfaces weak assumptions, evidence gaps, and concrete revision advice.
2. Marketplace Bargaining Reply: Let Variables Change the Strategy
With a single reusable prompt template, you can swap in item details, price anchors, buyer offers, tone, and negotiation goals for different marketplace conversations. After optimization, the same small model does a better job turning those variables into a clearer, more transaction-ready reply instead of a generic helper-style response.
3. Text-to-Image: Optimize a One-Line Idea into a More Directable Key Visual Prompt
This is not just prompt expansion. Starting from a vague one-line idea, Prompt Optimizer adds clearer subject cues, spatial relationships, and mood anchors. The left side is simply “a floating library in the night sky,” while the optimized version gives the model a more directed fantasy composition that feels closer to a reusable key visual than a lucky generic image.
- 🎯 Intelligent Optimization: One-click prompt optimization with multi-round iterative improvements to enhance AI response accuracy
- 📝 Dual Mode Optimization: Support for both system prompt optimization and user prompt optimization to meet different usage scenarios
- 🔄 Analysis and Compare Evaluation: Supports analysis, single-result evaluation, and multi-result compare evaluation to help determine whether a prompt has truly improved
- 🤖 Multi-model Integration: Support for mainstream AI models including OpenAI, Gemini, DeepSeek, Zhipu AI, SiliconFlow, etc.
- 🖼️ Image Generation: Support for Text-to-Image (T2I) and Image-to-Image (I2I) with models like Gemini, Seedream
- 📊 Advanced Testing Mode: Context variable management, multi-turn conversation testing, Function Calling support
- 🔒 Secure Architecture: Pure client-side processing with direct data interaction with AI service providers, bypassing intermediate servers
- 📱 Multi-platform Support: Available as web application, desktop application, Chrome extension, and Docker deployment
- 🔐 Access Control: Password protection feature for secure deployment
- 🧩 MCP Protocol Support: Supports Model Context Protocol (MCP), enabling integration with MCP-compatible AI applications like Claude Desktop
- 🖼️ Text-to-Image (T2I): Generate images from text prompts
- 🎨 Image-to-Image (I2I): Transform and optimize images based on local files
- 🔌 Multi-model Support: Integrated with mainstream image generation models like Gemini, Seedream
- ⚙️ Model Parameters: Support model-specific parameter configuration (size, style, etc.)
- 📥 Preview & Download: Real-time preview of generated results with download support
- 📊 Context Variable Management: Custom variables, batch replacement, variable preview
- 💬 Multi-turn Conversation Testing: Simulate real conversation scenarios to test prompt performance in multi-turn interactions
- 🛠️ Function Calling Support: Integrated tool calling for OpenAI and Gemini models
- 🔍 Analysis and Evaluation Pipeline: Supports analysis, evaluation, compare evaluation, and evaluation-driven smart rewrite in text modes
For detailed usage instructions, please refer to the Image Mode Documentation
Direct access: https://prompt.always200.com
This is a pure frontend project with all data stored locally in your browser and never uploaded to any server, making the online version both safe and reliable to use.
Method 1: One-click deployment to your own Vercel:
Method 2: Fork the project and import to Vercel (Recommended):
- First fork the project to your GitHub account
- Then import the project to Vercel
- This allows tracking of source project updates for easy syncing of new features and fixes
- Configure environment variables:
  - `ACCESS_PASSWORD`: Set an access password to enable access restriction
  - `VITE_OPENAI_API_KEY` etc.: Configure API keys for the various AI service providers (the CLI example below shows one way to set these)
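If you prefer the command line, the same variables can be set with the Vercel CLI; a minimal sketch, assuming you have the CLI installed and linked to your project (values are prompted for interactively):

```bash
# Set deployment-time environment variables via the Vercel CLI
vercel env add ACCESS_PASSWORD production
vercel env add VITE_OPENAI_API_KEY production

# Redeploy so the new variables take effect
vercel --prod
```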
For more detailed deployment steps and important notes, please check:
Download the latest version from GitHub Releases. We provide both installer and archive formats for each platform.
- Installer (Recommended): such as `*.exe`, `*.dmg`, `*.AppImage`, etc. Strongly recommended as it supports automatic updates.
- Archive: such as `*.zip`. Extract and use, but cannot auto-update.
Core Advantages of Desktop Application:
- ✅ No CORS Limitations: As a native desktop application, it completely eliminates browser Cross-Origin Resource Sharing (CORS) issues. This means you can directly connect to any AI service provider's API, including locally deployed Ollama or commercial APIs with strict security policies, for the most complete and stable functional experience.
- ✅ Automatic Updates: Versions installed through installers (like `.exe`, `.dmg`) can automatically check for and install the latest version.
- ✅ Independent Operation: No browser dependency, providing faster response and better performance.
- Install from Chrome Web Store (may not be the latest version due to approval delays): Chrome Web Store
- Click the icon to open the Prompt Optimizer
<details>
<summary>Click to view Docker deployment commands</summary>

```bash
# Run container (default configuration)
docker run -d -p 8081:80 --restart unless-stopped --name prompt-optimizer linshen/prompt-optimizer

# Run container with an API key and access control
# (ACCESS_USERNAME is optional and defaults to "admin"; ACCESS_PASSWORD sets the access password)
docker run -d -p 8081:80 \
  -e VITE_OPENAI_API_KEY=your_key \
  -e ACCESS_USERNAME=your_username \
  -e ACCESS_PASSWORD=your_password \
  --restart unless-stopped \
  --name prompt-optimizer \
  linshen/prompt-optimizer
```
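Once the container is running, a quick smoke test (assuming password protection is enabled; swap in your own credentials):

```bash
# Expect an HTTP 200 response once the container is healthy
curl -u your_username:your_password -I http://localhost:8081
```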
</details>
### 6. Docker Compose Deployment
<details>
<summary>Click to view Docker Compose deployment steps</summary>
```bash
# 1. Clone the repository
git clone https://github.com/linshenkx/prompt-optimizer.git
cd prompt-optimizer
# 2. Optional: Create .env file for API keys and authentication
cat > .env << EOF
# API Key Configuration
VITE_OPENAI_API_KEY=your_openai_api_key
VITE_GEMINI_API_KEY=your_gemini_api_key
VITE_DEEPSEEK_API_KEY=your_deepseek_api_key
VITE_ZHIPU_API_KEY=your_zhipu_api_key
VITE_SILICONFLOW_API_KEY=your_siliconflow_api_key
# Basic Authentication (Password Protection)
ACCESS_USERNAME=your_username # Optional, defaults to "admin"
ACCESS_PASSWORD=your_password # Set access password
EOF
# 3. Start the service
docker compose up -d
# 4. View logs
docker compose logs -f
# 5. Access the service
#    Web Interface: http://localhost:8081
#    MCP Server:    http://localhost:8081/mcp
```
</details>
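Upgrading a Compose deployment later follows the standard Docker Compose workflow:

```bash
# Pull the latest image and recreate the container
docker compose pull
docker compose up -d
```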
You can also directly edit the docker-compose.yml file to customize your configuration:
<details>
<summary>Click to view docker-compose.yml example</summary>

```yaml
services:
  prompt-optimizer:
    # Use Docker Hub image
    image: linshen/prompt-optimizer:latest
    container_name: prompt-optimizer
    restart: unless-stopped
    ports:
      - "8081:80"  # Web application port (MCP server accessible via /mcp path)
    environment:
      - VITE_OPENAI_API_KEY=your_openai_key
      - VITE_GEMINI_API_KEY=your_gemini_key
      # Access Control (Optional)
      - ACCESS_USERNAME=admin
      - ACCESS_PASSWORD=your_password
```
</details>

<details>
<summary>Click to view MCP Server usage instructions</summary>
Prompt Optimizer now supports the Model Context Protocol (MCP), enabling integration with AI applications that support MCP such as Claude Desktop.
When running via Docker, the MCP Server automatically starts and can be accessed via http://ip:port/mcp.
MCP Server requires API key configuration to function properly. Main MCP-specific configurations:
```bash
# MCP Server Configuration
MCP_DEFAULT_MODEL_PROVIDER=openai  # Options: openai, gemini, anthropic, deepseek, siliconflow, zhipu, dashscope, openrouter, modelscope, custom
MCP_LOG_LEVEL=info                 # Log level
```

In a Docker environment, the MCP Server runs alongside the web application. You can access the MCP service through the same port as the web application at the `/mcp` path.
For example, if you map the container's port 80 to port 8081 on the host:
```bash
docker run -d -p 8081:80 \
  -e VITE_OPENAI_API_KEY=your-openai-key \
  -e MCP_DEFAULT_MODEL_PROVIDER=openai \
  --name prompt-optimizer \
  linshen/prompt-optimizer
```

The MCP Server will then be accessible at http://localhost:8081/mcp.
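To verify the endpoint is reachable, you can send an MCP `initialize` request by hand; a hedged sketch (headers and payload follow the MCP Streamable HTTP convention, and the exact response shape depends on the server version):

```bash
# Send a JSON-RPC initialize request to the MCP endpoint
curl -s -X POST http://localhost:8081/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"curl-check","version":"0.0.0"}}}'
```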
To use Prompt Optimizer in Claude Desktop, you need to add the service configuration to Claude Desktop's configuration file.
1. Find Claude Desktop's configuration directory:
   - Windows: `%APPDATA%\Claude\services`
   - macOS: `~/Library/Application Support/Claude/services`
   - Linux: `~/.config/Claude/services`

2. Edit or create the `services.json` file, adding the following content:
```json
{
  "services": [
    {
      "name": "Prompt Optimizer",
      "url": "http://localhost:8081/mcp"
    }
  ]
}
```

Make sure to replace `localhost:8081` with the actual address and port where you've deployed Prompt Optimizer.
- optimize-user-prompt: Optimize user prompts to improve LLM performance
- optimize-system-prompt: Optimize system prompts to improve LLM performance
- iterate-prompt: Iteratively improve mature prompts based on specific requirements
For more detailed information, please refer to the MCP Server User Guide.

</details>
<details>
<summary>Click to view API key configuration methods</summary>
- Click the "⚙️Settings" button in the upper right corner
- Select the "Model Management" tab
- Click on the model you need to configure (such as OpenAI, Gemini, DeepSeek, etc.)
- Enter the corresponding API key in the configuration box
- Click "Save"
Supported models: OpenAI, Gemini, DeepSeek, Zhipu AI, SiliconFlow, Custom API (OpenAI compatible interface)
In addition to API keys, you can configure advanced LLM parameters for each model individually. These parameters are configured through a field called llmParams, which allows you to specify any parameters supported by the LLM SDK in key-value pairs for fine-grained control over model behavior.
Advanced LLM Parameter Configuration Examples:
- OpenAI/Compatible APIs: `{"temperature": 0.7, "max_tokens": 4096, "timeout": 60000}` (see the wire-level sketch after this list)
- Gemini: `{"temperature": 0.8, "maxOutputTokens": 2048, "topP": 0.95}`
- DeepSeek: `{"temperature": 0.5, "top_p": 0.9, "frequency_penalty": 0.1}`
For more detailed information about llmParams configuration, please refer to the LLM Parameters Configuration Guide.
Configure environment variables through the -e parameter when deploying with Docker:
```bash
-e VITE_OPENAI_API_KEY=your_key
-e VITE_GEMINI_API_KEY=your_key
-e VITE_DEEPSEEK_API_KEY=your_key
-e VITE_ZHIPU_API_KEY=your_key
-e VITE_SILICONFLOW_API_KEY=your_key

# Multiple Custom Models Configuration (Unlimited Quantity)
-e VITE_CUSTOM_API_KEY_ollama=dummy_key
-e VITE_CUSTOM_API_BASE_URL_ollama=http://localhost:11434/v1
-e VITE_CUSTOM_API_MODEL_ollama=qwen2.5:7b
```

📖 Detailed Configuration Guide: See the Multiple Custom Models Documentation for complete configuration methods and advanced usage.

</details>
For detailed documentation, see Development Documentation
<details>
<summary>Click to view local development commands</summary>

```bash
# 1. Clone the project
git clone https://github.com/linshenkx/prompt-optimizer.git
cd prompt-optimizer

# 2. Install dependencies
pnpm install

# 3. Start development server
pnpm dev        # Main development command: build core/ui and run web app
pnpm dev:web    # Run web app only
pnpm dev:fresh  # Complete reset and restart of the development environment
```
</details>

- Basic feature development
- Web application release
- Chrome extension release
- Internationalization support
- Support for system prompt optimization and user prompt optimization
- Desktop application release
- MCP service release
- Advanced mode: Variable management, context testing, function calling
- Image generation: Text-to-Image (T2I) and Image-to-Image (I2I) support
- Prompt favorites and template management
- Support for workspace/project management
For detailed project status, see Project Status Document
- Documentation Index - Index of all documentation
- Technical Development Guide - Technology stack and development specifications
- LLM Parameters Configuration Guide - Detailed guide for advanced LLM parameter configuration
- Project Structure - Detailed project structure description
- Project Status - Current progress and plans
- Product Requirements - Product requirements document
- Vercel Deployment Guide - Detailed instructions for Vercel deployment
<details>
<summary>Click to view frequently asked questions</summary>
Q1: Why does the connection fail after I configure the API key?

A: Most connection failures are caused by Cross-Origin Resource Sharing (CORS) issues. As this project is a pure frontend application, browsers block direct access to API services from different origins for security reasons. Model services will reject direct requests from browsers if CORS policies are not correctly configured.
Q2: How do I solve connection issues with a local Ollama model?

A: Ollama fully supports the OpenAI standard interface; just configure the correct CORS policy:

- Set the environment variable `OLLAMA_ORIGINS=*` to allow requests from any origin
- If issues persist, also set `OLLAMA_HOST=0.0.0.0:11434` to listen on all interfaces (see the launch example below)
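A minimal way to apply both settings when launching Ollama from a shell (`OLLAMA_ORIGINS` and `OLLAMA_HOST` are standard Ollama environment variables):

```bash
# Start Ollama with a permissive CORS policy so the browser app can call it directly
OLLAMA_ORIGINS=* OLLAMA_HOST=0.0.0.0:11434 ollama serve
```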
Q3: How to solve CORS issues with commercial APIs (such as Nvidia's DS API, ByteDance's Volcano API)?
A: These platforms typically have strict CORS restrictions. Recommended solutions:
1. Use the Desktop Application (Most Recommended)
   - As a native application, the desktop app has no CORS restrictions
   - It can connect directly to any API service, including locally deployed models
   - It provides the most complete and stable feature experience
   - Download from GitHub Releases

2. Use a Self-deployed API Proxy Service (professional solution; see the OneAPI sketch below)
   - Deploy an open-source API aggregation/proxy tool such as OneAPI or NewAPI
   - Configure it as a custom API endpoint in settings
   - Request flow: Browser → Proxy service → Model service provider
   - Full control over security policies and access permissions
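As a sketch of the proxy option, OneAPI's published Docker image can be started like this (image name and default port are taken from the OneAPI project; adjust the host, volume, and security settings for your environment):

```bash
# Run a self-hosted OneAPI proxy (listens on port 3000 by default)
docker run -d -p 3000:3000 --restart unless-stopped --name one-api justsong/one-api

# Then add a custom model in Prompt Optimizer pointing at:
#   http://your-host:3000/v1
```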
Note: All web versions (including online version, Vercel deployment, Docker deployment) are pure frontend applications and subject to browser CORS restrictions. Only the desktop version or using an API proxy service can solve CORS issues.
Q4: I have correctly configured CORS policies for my local model (like Ollama), why can't I still connect using the online version?
A: This is caused by the browser's Mixed Content security policy. For security reasons, browsers block secure HTTPS pages (like the online version) from sending requests to insecure HTTP addresses (like your local Ollama service).
Solutions: To bypass this limitation, you need to have the application and API under the same protocol (e.g., both HTTP). We recommend the following approaches:
- Use the desktop version: Desktop applications have no browser restrictions and are the most stable and reliable way to connect to local models
- Use Docker deployment (HTTP): Access via `http://localhost:8081`; both the app and local Ollama then use HTTP
- Use the Chrome extension: Extensions can bypass some security restrictions in certain situations
Q5: Why can't the macOS desktop app be opened (reported as damaged or from an unidentified developer)?

A: This is because the application has not been signed with an Apple Developer certificate. Due to the high cost of Apple Developer accounts, the desktop application is currently unsigned.
Solution: Run the following command in Terminal to remove the quarantine attribute:
```bash
# For installed applications
xattr -rd com.apple.quarantine /Applications/PromptOptimizer.app

# For downloaded .dmg files (run before installation)
xattr -rd com.apple.quarantine ~/Downloads/PromptOptimizer-*.dmg
```

After running the command, you can open the application normally.

</details>
<details>
<summary>Click to view contribution guidelines</summary>
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some feature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Tip: When developing with the Cursor tool, it is recommended to do the following before committing:
- Use the "CodeReview" rule for review
- Check according to the review report format:
- Overall consistency of changes
- Code quality and implementation method
- Test coverage
- Documentation completeness
- Optimize based on the review results before submitting

</details>
Thanks to all the developers who have contributed to this project!
This project was partly inspired by LangGPT in prompt engineering and structured prompt design. Thanks to the LangGPT project and community for their open-source sharing and continued exploration.
This project is licensed under AGPL-3.0.
In simple terms: You can freely use, modify, and commercialize this project, but if you turn it into a website or service for others, you must share your source code.
<details>
<summary>👉 Click for detailed explanation</summary>
What you can do:
- ✅ Personal use, learning, and research
- ✅ Internal company use (not offering public services)
- ✅ Modify code for commercial projects
- ✅ Charge for products or services
What you must do:
- 📖 If distributing software or offering network services, disclose source code
- 📝 Preserve original author's copyright notices
Core principle: Commercial use is allowed, but not closed-source.

</details>
If this project is helpful to you, please consider giving it a Star ⭐️
- Submit an Issue
- Create a Pull Request
- Join the discussion group