
πŸš€ Perplexify - An AI-powered search engine πŸ”Ž

Note: This is a fork of the original Perplexica project by @ItzCrazyKns. This fork is maintained separately, with additional features and improvements, and it is built locally so installation does not depend on external registries.

✨ New Features

πŸ”„ Search Orchestrator

  • Planning & Execution: Step-by-step search planning and execution
  • Real-time Progress: See exactly what steps are being executed in real-time
  • Enhanced UI: Beautiful step-by-step interface in the Steps tab
  • Better Debugging: Clear error messages and execution tracking
  • Multiple Search Modes: Web, Academic, YouTube, Reddit, Wolfram Alpha, and Writing Assistant

πŸš€ How to Run Perplexify

There are 3 different ways to run Perplexify depending on your needs:

🟒 Option 1: Production Docker (Recommended for End Users)

Best for: Regular users who want to run Perplexify as a service

Setup:

  1. Install Docker Desktop from here
  2. Clone the repository:
    git clone https://github.com/Kamran1819G/Perplexify.git
    cd Perplexify
  3. Copy the config file:
    cp sample.config.toml config.toml
  4. Start Perplexify:
    docker compose up --build
  5. Open http://localhost:3000

✅ Pros: Easy setup, production-ready, isolated environment
❌ Cons: Slower startup, no hot reloading
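
The config.toml copied in step 3 is where providers and endpoints are configured. The keys below are a hedged sketch based on the upstream Perplexica sample; sample.config.toml in this repository is the authoritative reference:

```toml
# Hypothetical excerpt — check sample.config.toml for the actual keys and defaults.
[GENERAL]
PORT = 3000
SIMILARITY_MEASURE = "cosine"

[API_KEYS]
OPENAI = ""    # leave empty to use local models via Ollama

[API_ENDPOINTS]
OLLAMA = "http://host.docker.internal:11434"
```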


πŸ› οΈ Option 2: Development Setup (Recommended for Contributors)

Best for: Developers who want to contribute or modify the code

Setup:

  1. Install Docker Desktop and Node.js/yarn

  2. Clone the repository:

    git clone https://github.com/Kamran1819G/Perplexify.git
    cd Perplexify
  3. Copy the config file:

    cp sample.config.toml config.toml
  4. Start development environment:

    Linux/macOS:

    ./dev.sh

    Windows:

    dev.bat
  5. Open http://localhost:3000

✅ Pros: Fast hot reloading, easy debugging, instant code changes
❌ Cons: Requires Node.js/yarn installation


πŸ”§ Option 3: Manual Installation (Advanced Users)

Best for: Advanced users who want full control over the setup

Setup:

  1. Install Node.js and SearXNG, then configure SearXNG
  2. Clone the repository:
    git clone https://github.com/Kamran1819G/Perplexify.git
    cd Perplexify
  3. Copy and configure the config file:
    cp sample.config.toml config.toml
    # Edit config.toml with your settings
  4. Install dependencies and start:
    yarn install
    yarn build
    yarn start

✅ Pros: Full control, no Docker dependency
❌ Cons: Complex setup, manual dependency management


🎯 Which Option Should You Choose?

| Use Case | Recommended Option | Why? |
| --- | --- | --- |
| Just want to use Perplexify | Option 1: Production Docker | Easiest setup, works out of the box |
| Want to contribute code | Option 2: Development Setup | Fast development with hot reloading |
| Advanced user, no Docker | Option 3: Manual Installation | Full control over the environment |
| Testing/Evaluation | Option 1: Production Docker | Quick to get started |
| Custom modifications | Option 2: Development Setup | Easy to modify and test changes |


Overview

Perplexify is an open-source, AI-powered search engine that digs deep into the internet to find answers. Inspired by Perplexity AI, it not only searches the web but also understands your questions. It uses techniques such as similarity search and embeddings to refine results, and it provides clear answers with cited sources.

Because it queries SearXNG at search time and is fully open source, Perplexify ensures you always get the most up-to-date information without compromising your privacy.
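
The similarity-search idea can be illustrated with plain cosine similarity over embedding vectors. This is a minimal sketch, not Perplexify's actual implementation; the function names and the `Source` shape are hypothetical:

```typescript
// Minimal sketch of embedding-based reranking (hypothetical, for illustration).
// In practice the embeddings come from an embedding model; here they are given.

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface Source { url: string; embedding: number[]; }

// Rank sources by similarity to the query embedding, most relevant first.
function rerank(queryEmbedding: number[], sources: Source[]): Source[] {
  return [...sources].sort(
    (x, y) =>
      cosineSimilarity(queryEmbedding, y.embedding) -
      cosineSimilarity(queryEmbedding, x.embedding),
  );
}
```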

This Fork: This repository is a fork of the original Perplexica project with additional features and improvements, and it builds locally so installation does not depend on external registries. We maintain this fork separately to provide enhanced functionality while staying true to the original project's goals.

Want to know more about its architecture and how it works? You can read it here.


Features

  • Local LLMs: You can make use of local LLMs such as Llama 3 and Mixtral using Ollama.
  • Advanced Search Modes: Choose from three powerful search modes tailored to different needs:

πŸ” Search Modes Comparison

| Feature | Quick Search ⚡ | Pro Search ✨ | Ultra Search 🧠 |
| --- | --- | --- | --- |
| Search Agents | 1 | 4-6 | 12 Parallel |
| Max Sources | 15 | 25 | 50 |
| Research Depth | Basic | Advanced | PhD-Level |
| Cross-Validation | ❌ | Limited | ✅ Full Loops |
| Dynamic Replanning | ❌ | ❌ | ✅ Every 45s |
| Expert Sourcing | ❌ | ✅ | ✅ Enhanced |
| Research Time | ~10s | 2-4min | 2-4min+ |
| Context Analysis | Basic | Good | Comprehensive |
| Best For | Quick answers | In-depth research | Academic/Professional research |

⚑ Quick Search

  • Fast web search with immediate results
  • Perfect for simple queries and quick fact-checking
  • Single search agent for rapid response

✨ Pro Search

  • Deep research with comprehensive analysis
  • Multiple search queries for thorough coverage
  • Enhanced source ranking and analysis

🧠 Ultra Search (New!)

  • PhD-level research with parallel agents and cross-validation

  • 12 parallel research agents working simultaneously

  • Cross-validation loops to verify information accuracy

  • Dynamic replanning every 45 seconds based on findings

  • Comprehensive research covering 8-12 specialized angles:

    • Contextual Foundation & Historical Context
    • Expert Perspectives & Comparative Analysis
    • Technical Deep-Dive & Case Studies
    • Future Implications & Critical Assessment
  • Web Search: Perplexify searches across the entire web to find the best and most relevant results for your queries.

  • Current Information: Some search tools serve outdated results because they rely on data from crawling bots, converted into embeddings and stored in an index. Perplexify instead queries SearXNG, a metasearch engine, at search time and reranks the results to surface the most relevant sources, ensuring you always get the latest information without the overhead of daily data updates.

  • API: Integrate Perplexify into your existing applications and make use of its capabilities.

It has many more features like image and video search. Some of the planned features are mentioned in upcoming features.

πŸ€– Supported AI Models

Perplexify supports a wide range of AI models and providers, giving you flexibility to choose the best model for your needs:

🧠 Large Language Models (LLMs)

  • OpenAI: GPT-4, GPT-3.5-turbo, and more
  • Anthropic: Claude 3.5 Sonnet, Claude 3 Haiku, Claude 3 Opus
  • Google Gemini: Latest Gemini 2.5 Pro, 2.5 Flash, 2.5 Flash-Lite, and more
  • Groq: Ultra-fast inference with various models
  • DeepSeek: Advanced reasoning models
  • LM Studio: Local model hosting
  • Ollama: Local models like Llama3, Mixtral, and more
  • OpenRouter: Access to multiple model providers
  • Custom OpenAI: Self-hosted or custom OpenAI-compatible endpoints

πŸ”€ Embedding Models

  • OpenAI: text-embedding-ada-002, text-embedding-3-small, text-embedding-3-large
  • Google Gemini: Text Embedding 004, Embedding 001
  • Transformers: Local embedding models
  • LM Studio: Local embedding models

πŸ†• Latest Gemini 2.5 Models

Perplexify now supports the latest Gemini 2.5 models from Google:

  • Gemini 2.5 Pro: Most powerful thinking model for complex reasoning
  • Gemini 2.5 Flash: Best price-performance balance
  • Gemini 2.5 Flash-Lite: Most cost-efficient for high-volume tasks

For detailed information about Gemini models, see Gemini Models Documentation.

Installation

There are 3 different installation methods for Perplexify. Choose the one that best fits your needs:

🟒 Option 1: Production Docker (Recommended for End Users)

The easiest way to get started. Everything runs in Docker containers.

Quick Start:

git clone https://github.com/Kamran1819G/Perplexify.git
cd Perplexify
cp sample.config.toml config.toml
docker compose up --build

✅ Best for: Regular users, quick setup, production use
πŸ“– Details: See the Docker Setup Guide

πŸ› οΈ Option 2: Development Setup (Recommended for Contributors)

Hybrid approach: SearXNG in Docker + Next.js on host for fast development.

Quick Start:

git clone https://github.com/Kamran1819G/Perplexify.git
cd Perplexify
cp sample.config.toml config.toml
./dev.sh  # Linux/macOS
# or
dev.bat   # Windows

✅ Best for: Developers, contributors, custom modifications
πŸ“– Details: See the Development Guide

πŸ”§ Option 3: Manual Installation (Advanced Users)

Full manual setup without Docker dependencies.

Setup:

  1. Install Node.js and SearXNG, then configure SearXNG
  2. Clone the repository and copy config file
  3. Run yarn install && yarn build && yarn start

✅ Best for: Advanced users, full control, no Docker dependency
πŸ“– Details: See the Installation Documentation

🎯 Recommendation

  • New users: Start with Option 1 (Production Docker)
  • Contributors: Use Option 2 (Development Setup)
  • Advanced users: Choose Option 3 (Manual Installation)

See the installation documentation for more information like updating, etc.

Ollama Connection Errors

If you're encountering an Ollama connection error, it is likely due to the backend being unable to connect to Ollama's API. To fix this issue you can:

  1. Check your Ollama API URL: Ensure that the API URL is correctly set in the settings menu.

  2. Update API URL Based on OS:

    • Windows: Use http://host.docker.internal:11434
    • Mac: Use http://host.docker.internal:11434
    • Linux: Use http://<private_ip_of_host>:11434

    Adjust the port number if you're using a different one.

  3. Linux Users - Expose Ollama to Network:

    • Inside /etc/systemd/system/ollama.service, add Environment="OLLAMA_HOST=0.0.0.0". Then restart Ollama with systemctl restart ollama. For more information, see the Ollama docs

    • Ensure that the port (default is 11434) is not blocked by your firewall.
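
The Linux step above can also be done as a systemd drop-in override instead of editing the unit file directly, which is the standard systemd pattern (run `sudo systemctl edit ollama` and add the fragment below):

```ini
# Drop-in override, e.g. /etc/systemd/system/ollama.service.d/override.conf
# Created via: sudo systemctl edit ollama
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

After saving, apply it with `sudo systemctl daemon-reload && sudo systemctl restart ollama`.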

Using as a Search Engine

If you wish to use Perplexify as an alternative to traditional search engines like Google or Bing, or if you want to add a shortcut for quick access from your browser's search bar, follow these steps:

  1. Open your browser's settings.
  2. Navigate to the 'Search Engines' section.
  3. Add a new site search with the following URL: http://localhost:3000/?q=%s. Replace localhost with your IP address or domain name, and 3000 with the port number if Perplexify is not hosted locally.
  4. Click the add button. Now, you can use Perplexify directly from your browser's search bar.
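
The %s in that URL is replaced by your URL-encoded query. The same substitution can be done programmatically; here is a small sketch (the `perplexifyUrl` helper is hypothetical, for illustration):

```typescript
// Build a Perplexify query URL the way a browser's %s site search does.
// Swap the base URL for your own host and port if not hosted locally.
function perplexifyUrl(query: string, base = "http://localhost:3000"): string {
  return `${base}/?q=${encodeURIComponent(query)}`;
}

// perplexifyUrl("open source search engines")
// → "http://localhost:3000/?q=open%20source%20search%20engines"
```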

Using Perplexify's API

Perplexify also provides an API for developers looking to integrate its powerful search engine into their own applications. You can run searches, use multiple models and get answers to your queries.

For more details, check out the full documentation here.
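
As a sketch of what an integration might look like, here is a hypothetical request builder. The /api/search endpoint and the field names follow upstream Perplexica's API docs and may differ in this fork, so verify everything against the linked documentation:

```typescript
// Hypothetical shape of a Perplexify search request (verify against the docs).
interface SearchRequest {
  query: string;
  focusMode: string;            // e.g. "webSearch"
  history?: [string, string][]; // optional prior conversation turns
}

function buildSearchRequest(query: string, focusMode = "webSearch"): string {
  const body: SearchRequest = { query, focusMode };
  return JSON.stringify(body);
}

// Sending it could look like:
// await fetch("http://localhost:3000/api/search", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: buildSearchRequest("What is SearXNG?"),
// });
```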

Expose Perplexify to network

Perplexify runs on Next.js and handles all API requests. It is reachable from other devices on the same network out of the box, and it remains accessible when exposed through port forwarding.

One-Click Deployment

⚠️ Note: One-click deployment services may not work with the current local build setup. These services typically expect pre-built Docker images from registries, but this project uses local builds for complete independence.

Alternative Deployment Options

Option 1: Manual Deployment (Recommended)

Use the provided Docker Compose file for reliable deployment:

# Production deployment
NODE_ENV=production docker-compose up --build -d

Option 2: Cloud Platform Deployment

For cloud platforms that support local builds:

  • Railway: Connect your GitHub repo and use docker-compose.deploy.yaml
  • Render: Use the deployment compose file with build context
  • DigitalOcean App Platform: Supports Docker Compose with local builds

Option 3: Kubernetes Deployment

Use the provided Kubernetes template:

# Apply the deployment template
kubectl apply -f deploy-template.yaml

πŸ“– For comprehensive deployment instructions, see the Deployment Guide

Contribution

Perplexify is built on the idea that AI and large language models should be easy for everyone to use. If you find bugs or have ideas, please share them via GitHub Issues. For more information on contributing, read the CONTRIBUTING.md file to learn how you can contribute to Perplexify.

🌍 Language Contributions

We welcome contributions to add new languages or improve existing translations! Perplexify currently supports 12 languages including RTL support for Arabic.

  • Want to add a new language? Check out our Language Contribution Guide
  • Current languages: English, Spanish, French, German, Italian, Portuguese, Russian, Japanese, Korean, Chinese, Arabic, Hindi
  • Need help? Join our Discord community for translation support

Your contributions help make Perplexify accessible to users worldwide! 🌍

Help and Support

If you have any questions or feedback, please feel free to reach out to us. You can create an issue on GitHub or join our Discord server. There, you can connect with other users, share your experiences and reviews, and receive more personalized help. Click here to join the Discord server. To discuss matters outside of regular support, feel free to contact me on Discord at Kamran1819G.

Thank you for exploring Perplexify, the AI-powered search engine designed to enhance your search experience. We are constantly working to improve Perplexify and expand its capabilities. We value your feedback and contributions which help us make Perplexify even better. Don't forget to check back for updates and new features!

πŸ“š Docker Documentation

For comprehensive Docker setup instructions, including:

  • Development environment with live updates
  • Production deployment
  • Troubleshooting common issues
  • Performance optimization tips
  • External services integration (Ollama, LM Studio)

πŸ“– Read the complete Docker Setup Guide