Note: This is a fork of the original Perplexica project by @ItzCrazyKns. It is maintained separately, with additional features, improvements, and a local-build installation that keeps it fully independent of external registries.
- Planning & Execution: Step-by-step search planning and execution
- Real-time Progress: See exactly what steps are being executed in real-time
- Enhanced UI: Beautiful step-by-step interface in the Steps tab
- Better Debugging: Clear error messages and execution tracking
- Multiple Search Modes: Web, Academic, YouTube, Reddit, Wolfram Alpha, and Writing Assistant
There are 3 different ways to run Perplexify depending on your needs:
Best for: Regular users who want to run Perplexify as a service
Setup:
- Install Docker Desktop from here
- Clone the repository:
git clone https://github.com/Kamran1819G/Perplexify.git
cd Perplexify
- Copy the config file:
cp sample.config.toml config.toml
- Start Perplexify:
docker compose up --build
- Open http://localhost:3000
✅ Pros: Easy setup, production-ready, isolated environment
❌ Cons: Slower startup, no hot reloading
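Once docker compose up --build finishes, a quick sanity check that the stack is running (a minimal sketch; the service names come from the compose file in the repository):

```sh
# List the compose services and their current status
docker compose ps
# Confirm the web UI answers on the default port
curl -I http://localhost:3000
```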
Best for: Developers who want to contribute or modify the code
Setup:
- Install Docker Desktop and Node.js/yarn
- Clone the repository:
  git clone https://github.com/Kamran1819G/Perplexify.git
  cd Perplexify
- Copy the config file:
  cp sample.config.toml config.toml
- Start the development environment:
  - Linux/macOS: ./dev.sh
  - Windows: dev.bat
✅ Pros: Fast hot reloading, easy debugging, instant code changes
❌ Cons: Requires Node.js/yarn installation
Best for: Advanced users who want full control over the setup
Setup:
- Install Node.js and SearXNG, and configure SearXNG (a quick reachability check is sketched after these steps)
- Clone the repository:
git clone https://github.com/Kamran1819G/Perplexify.git
cd Perplexify
- Copy and configure the config file:
cp sample.config.toml config.toml # Edit config.toml with your settings
- Install dependencies and start:
yarn install && yarn build && yarn start
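Before starting Perplexify, it helps to confirm that your SearXNG instance responds. A minimal check, assuming SearXNG listens on port 8080 and has the JSON output format enabled in its settings:

```sh
# Should print the start of a JSON payload with search results if SearXNG is reachable
curl -s "http://localhost:8080/search?q=perplexify&format=json" | head -c 300
```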
✅ Pros: Full control, no Docker dependency
❌ Cons: Complex setup, manual dependency management
Use Case | Recommended Option | Why? |
---|---|---|
Just want to use Perplexify | Option 1: Production Docker | Easiest setup, works out of the box |
Want to contribute code | Option 2: Development Setup | Fast development with hot reloading |
Advanced user, no Docker | Option 3: Manual Installation | Full control over the environment |
Testing/Evaluation | Option 1: Production Docker | Quick to get started |
Custom modifications | Option 2: Development Setup | Easy to modify and test changes |
- Overview
- Preview
- Features
- Installation
- Using as a Search Engine
- Using Perplexify's API
- Expose Perplexify to a network
- One-Click Deployment
- Support Us
- Contribution
- Help and Support
- Development vs Production Compose Files
Perplexify is an open-source, AI-powered search engine that goes deep into the internet to find answers. Inspired by Perplexity AI, it is an open-source alternative that not only searches the web but also understands your questions. It uses advanced machine learning techniques such as similarity search and embeddings to refine results, and it provides clear answers with sources cited.
By using SearXNG to stay current and fully open source, Perplexify ensures you always get the most up-to-date information without compromising your privacy.
This Fork: This repository is a fork of the original Perplexica project with additional features, improvements, and a local-build installation system that is fully independent of external registries. We maintain this fork separately to provide enhanced functionality while staying true to the original project's goals.
Want to know more about its architecture and how it works? You can read it here.
- Local LLMs: You can make use of local LLMs such as Llama3 and Mixtral using Ollama.
- Advanced Search Modes: Choose from three powerful search modes tailored to different needs:
Feature | Quick Search | Pro Search | Ultra Search |
---|---|---|---|
Search Agents | 1 | 4-6 | 12 Parallel |
Max Sources | 15 | 25 | 50 |
Research Depth | Basic | Advanced | PhD-Level |
Cross-Validation | ❌ | Limited | ✅ Full Loops |
Dynamic Replanning | ❌ | ❌ | ✅ Every 45s |
Expert Sourcing | ❌ | ❌ | ✅ Enhanced |
Research Time | ~10s | 2-4min | 2-4min+ |
Context Analysis | Basic | Good | Comprehensive |
Best For | Quick answers | In-depth research | Academic/Professional research |
- Fast web search with immediate results
- Perfect for simple queries and quick fact-checking
- Single search agent for rapid response
- Deep research with comprehensive analysis
- Multiple search queries for thorough coverage
- Enhanced source ranking and analysis
- PhD-level research with parallel agents and cross-validation
- 12 parallel research agents working simultaneously
- Cross-validation loops to verify information accuracy
- Dynamic replanning every 45 seconds based on findings
- Comprehensive research covering 8-12 specialized angles:
  - Contextual Foundation & Historical Context
  - Expert Perspectives & Comparative Analysis
  - Technical Deep-Dive & Case Studies
  - Future Implications & Critical Assessment
- Web Search: Perplexify searches across the entire web to find the best and most relevant results for your queries.
- Current Information: Some search tools serve outdated information because they rely on data gathered by crawling bots, converted into embeddings, and stored in an index. Perplexify instead uses SearXNG, a metasearch engine, to fetch results and then reranks them to surface the most relevant sources, ensuring you always get the latest information without the overhead of daily data updates.
- API: Integrate Perplexify into your existing applications and make use of its capabilities.
It has many more features like image and video search. Some of the planned features are mentioned in upcoming features.
Perplexify supports a wide range of AI models and providers, giving you flexibility to choose the best model for your needs:
- OpenAI: GPT-4, GPT-3.5-turbo, and more
- Anthropic: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
- Google Gemini: Latest Gemini 2.5 Pro, 2.5 Flash, 2.5 Flash-Lite, and more
- Groq: Ultra-fast inference with various models
- DeepSeek: Advanced reasoning models
- LM Studio: Local model hosting
- Ollama: Local models like Llama3, Mixtral, and more
- OpenRouter: Access to multiple model providers
- Custom OpenAI: Self-hosted or custom OpenAI-compatible endpoints
- OpenAI: text-embedding-ada-002, text-embedding-3-small, text-embedding-3-large
- Google Gemini: Text Embedding 004, Embedding 001
- Transformers: Local embedding models
- LM Studio: Local embedding models
Perplexify now supports the latest Gemini 2.5 models from Google:
- Gemini 2.5 Pro: Most powerful thinking model for complex reasoning
- Gemini 2.5 Flash: Best price-performance balance
- Gemini 2.5 Flash-Lite: Most cost-efficient for high-volume tasks
For detailed information about Gemini models, see Gemini Models Documentation.
There are 3 different installation methods for Perplexify. Choose the one that best fits your needs:
The easiest way to get started. Everything runs in Docker containers.
Quick Start:
git clone https://github.com/Kamran1819G/Perplexify.git
cd Perplexify
cp sample.config.toml config.toml
docker compose up --build
Best for: Regular users, quick setup, production use
Details: See the Docker Setup Guide
Hybrid approach: SearXNG in Docker + Next.js on host for fast development.
Quick Start:
git clone https://github.com/Kamran1819G/Perplexify.git
cd Perplexify
cp sample.config.toml config.toml
./dev.sh # Linux/macOS
# or
dev.bat # Windows
Best for: Developers, contributors, custom modifications
Details: See the Development Guide
Full manual setup without Docker dependencies.
Setup:
- Install Node.js and SearXNG, and configure SearXNG
- Clone the repository and copy config file
- Run
yarn install && yarn build && yarn start
Best for: Advanced users, full control, no Docker dependency
Details: See the Installation Documentation
- New users: Start with Option 1 (Production Docker)
- Contributors: Use Option 2 (Development Setup)
- Advanced users: Choose Option 3 (Manual Installation)
See the installation documentation for more information, such as how to update.
If you're encountering an Ollama connection error, it is likely due to the backend being unable to connect to Ollama's API. To fix this issue you can:
- Check your Ollama API URL: Ensure that the API URL is correctly set in the settings menu.
- Update the API URL based on your OS:
  - Windows: Use http://host.docker.internal:11434
  - Mac: Use http://host.docker.internal:11434
  - Linux: Use http://<private_ip_of_host>:11434
  Adjust the port number if you're using a different one.
- Linux Users - Expose Ollama to the Network (see the sketch below):
  - Inside /etc/systemd/system/ollama.service, add Environment="OLLAMA_HOST=0.0.0.0", then restart Ollama with systemctl restart ollama. For more information, see the Ollama docs.
  - Ensure that the port (default is 11434) is not blocked by your firewall.
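On a Linux host, those steps might look like the following. This is a minimal sketch that assumes Ollama is managed by systemd and listens on the default port 11434:

```sh
# Open an override for the Ollama unit (or edit /etc/systemd/system/ollama.service directly)
sudo systemctl edit ollama
# and add under [Service]:
#   Environment="OLLAMA_HOST=0.0.0.0"

# Reload systemd and restart Ollama so it listens on all interfaces
sudo systemctl daemon-reload
sudo systemctl restart ollama

# From another machine on the network, check that the API answers
curl http://<private_ip_of_host>:11434/api/tags
```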
If you wish to use Perplexify as an alternative to traditional search engines like Google or Bing, or if you want to add a shortcut for quick access from your browser's search bar, follow these steps:
- Open your browser's settings.
- Navigate to the 'Search Engines' section.
- Add a new site search with the following URL: http://localhost:3000/?q=%s. Replace localhost with your IP address or domain name, and 3000 with the port number if Perplexify is not hosted locally.
- Click the add button. Now you can use Perplexify directly from your browser's search bar.
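For example, if Perplexify were reachable at 192.168.1.42 on port 3000 (an illustrative address), the site-search URL would be http://192.168.1.42:3000/?q=%s.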
Perplexify also provides an API for developers looking to integrate its powerful search engine into their own applications. You can run searches, use multiple models and get answers to your queries.
For more details, check out the full documentation here.
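As a rough illustration, a search request might look like the call below. The endpoint path and field names here are assumptions based on typical Perplexica-style deployments; consult the API documentation for the exact schema:

```sh
# Hypothetical example: POST a query to a locally running instance
curl -X POST http://localhost:3000/api/search \
  -H "Content-Type: application/json" \
  -d '{"query": "What is SearXNG?", "focusMode": "webSearch"}'
```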
Perplexify runs on Next.js and handles all API requests. It works right away on the same network and stays accessible even with port forwarding.
⚠️ Note: One-click deployment services may not work with the current local build setup. These services typically expect pre-built Docker images from registries, but this project uses local builds for complete independence.
Use the provided Docker Compose file for reliable deployment:
# Production deployment
NODE_ENV=production docker-compose up --build -d
For cloud platforms that support local builds:
- Railway: Connect your GitHub repo and use docker-compose.deploy.yaml
- Render: Use the deployment compose file with build context
- DigitalOcean App Platform: Supports Docker Compose with local builds
Use the provided Kubernetes template:
# Apply the deployment template
kubectl apply -f deploy-template.yaml
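After applying the template, a quick way to watch the deployment come up (the resource name and label below are assumptions; check deploy-template.yaml for the actual values):

```sh
# Watch the rollout and list the pods created by the template
kubectl rollout status deployment/perplexify
kubectl get pods -l app=perplexify
```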
For comprehensive deployment instructions, see the Deployment Guide.
Perplexify is built on the idea that AI and large language models should be easy for everyone to use. If you find bugs or have ideas, please share them via GitHub Issues. For more information on contributing to Perplexify, read the CONTRIBUTING.md file to learn how you can contribute.
We welcome contributions to add new languages or improve existing translations! Perplexify currently supports 12 languages including RTL support for Arabic.
- Want to add a new language? Check out our Language Contribution Guide
- Current languages: English, Spanish, French, German, Italian, Portuguese, Russian, Japanese, Korean, Chinese, Arabic, Hindi
- Need help? Join our Discord community for translation support
Your contributions help make Perplexify accessible to users worldwide!
If you have any questions or feedback, please feel free to reach out to us. You can create an issue on GitHub or join our Discord server. There, you can connect with other users, share your experiences and reviews, and receive more personalized help. Click here to join the Discord server. To discuss matters outside of regular support, feel free to contact me on Discord at Kamran1819G.
Thank you for exploring Perplexify, the AI-powered search engine designed to enhance your search experience. We are constantly working to improve Perplexify and expand its capabilities. We value your feedback and contributions which help us make Perplexify even better. Don't forget to check back for updates and new features!
For comprehensive Docker setup instructions, including:
- Development environment with live updates
- Production deployment
- Troubleshooting common issues
- Performance optimization tips
- External services integration (Ollama, LM Studio)
Read the complete Docker Setup Guide