- **Pages and Routes (`src/app`)**: Next.js app directory structure with page components.
  - Main app routes include: home (`/`), chat (`/c`), discover (`/discover`), and library (`/library`).
- **API Routes (`src/app/api`)**: Server endpoints implemented with Next.js route handlers.
- **Backend Logic (`src/lib`)**: Contains all the backend functionality, including search, database, and API logic.
  - The search system lives in `src/lib/agents/search`.
  - The search pipeline is split into classification, research, widgets, and writing.
  - Database functionality is in `src/lib/db`.
  - Chat model and embedding model providers are in `src/lib/models/providers`, and models are loaded via `src/lib/models/registry.ts`.
  - Prompt templates are in `src/lib/prompts`.
  - SearXNG integration is in `src/lib/searxng.ts`.
  - Upload search lives in `src/lib/uploads`.
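The classification → research/widgets → writing split can be sketched roughly as below. This is an illustrative sketch only: `classify`, `research`, `runWidgets`, and `writeAnswer` are hypothetical names, not Perplexica's actual exports — see `src/lib/agents/search` for the real code.

```typescript
// Illustrative sketch of the four-stage search pipeline. All names here
// are hypothetical stand-ins for the real code in src/lib/agents/search.

type Classification = { needsResearch: boolean };

async function classify(query: string): Promise<Classification> {
  // A real classifier would call an LLM; here we fake the decision.
  return { needsResearch: query.length > 0 };
}

async function research(query: string): Promise<string[]> {
  // A real researcher would gather web/academic/upload results.
  return [`result for: ${query}`];
}

async function runWidgets(query: string): Promise<string[]> {
  // Widgets produce structured UI cards alongside research.
  return [`widget for: ${query}`];
}

async function writeAnswer(query: string, sources: string[]): Promise<string> {
  return `Answer to "${query}" based on ${sources.length} source(s).`;
}

async function answer(query: string): Promise<string> {
  const cls = await classify(query);
  // Research and widgets can run in parallel when research is needed.
  const [sources] = cls.needsResearch
    ? await Promise.all([research(query), runWidgets(query)])
    : [[] as string[]];
  return writeAnswer(query, sources);
}
```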
### Where to make changes

If you are not sure where to start, use this section as a map.

- **Search behavior and reasoning**
  - `src/lib/agents/search` contains the core chat and search pipeline.
  - `classifier.ts` decides whether research is needed and what should run.
  - `researcher/` gathers information in the background.
- **Add or change a search capability**
  - Research tools (web, academic, discussions, uploads, scraping) live in `src/lib/agents/search/researcher/actions`.
  - Tools are registered in `src/lib/agents/search/researcher/actions/index.ts`.
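A new tool follows the register-in-one-place pattern described above. The shape below is a hypothetical sketch — the real interface and registry live in `src/lib/agents/search/researcher/actions`, and the names here (`ResearchAction`, `weatherAction`) are invented for illustration:

```typescript
// Hypothetical sketch of a research tool and its registration.
// The real types and registry are in src/lib/agents/search/researcher/actions.

interface ResearchAction {
  name: string;
  description: string; // used when deciding which tools to run
  run(query: string): Promise<string[]>;
}

const weatherAction: ResearchAction = {
  name: "weather",
  description: "Look up current weather for a location.",
  async run(query) {
    // A real tool would call an external API here.
    return [`weather results for: ${query}`];
  },
};

// actions/index.ts-style registry (hypothetical):
const actions: Record<string, ResearchAction> = {
  [weatherAction.name]: weatherAction,
};
```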
- **Add or change widgets**
  - Widgets live in `src/lib/agents/search/widgets`.
  - Widgets run in parallel with research and show structured results in the UI.
- **Model integrations**
  - Providers live in `src/lib/models/providers`.
  - Add new providers there and wire them into the model registry so they show up in the app.
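Wiring a provider into a registry can look roughly like the sketch below. This is a hypothetical shape, not Perplexica's actual API — the real code is in `src/lib/models/providers` and `src/lib/models/registry.ts`:

```typescript
// Hypothetical sketch of registering a model provider. The real
// implementation lives in src/lib/models/providers and registry.ts.

interface Provider {
  id: string;
  name: string;
  chatModels: { key: string; displayName: string }[];
}

const registry = new Map<string, Provider>();

function registerProvider(p: Provider): void {
  registry.set(p.id, p);
}

// Example registration with placeholder values:
registerProvider({
  id: "example-provider",
  name: "Example",
  chatModels: [{ key: "example-model", displayName: "Example Model" }],
});
```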
- **Architecture docs**
  - High level overview: `docs/architecture/README.md`
  - High level flow: `docs/architecture/WORKING.md`
## API Documentation

Perplexica includes API documentation for programmatic access.

- **Search API**: For detailed documentation, see `docs/API/SEARCH.md`.
## Setting Up Your Environment

Before diving into coding, setting up your local environment is key. Here's what you need to do:

1. Run `npm install` to install all dependencies.
2. Use `npm run dev` to start the application in development mode.
3. Open http://localhost:3000 and complete the setup in the UI (API keys, models, search backend URL, etc.).

Database migrations are applied automatically on startup.

For full installation options (Docker and non-Docker), see the installation guide in the repository README.

**Please note**: Docker configurations are present for setting up production environments, whereas `npm run dev` is used for development purposes.
---

**README.md**
🤖 **Support for all major AI providers** - Use local LLMs through Ollama or connect to OpenAI, Anthropic Claude, Google Gemini, Groq, and more. Mix and match models based on your needs.

⚡ **Smart search modes** - Choose Speed Mode when you need quick answers, Balanced Mode for everyday searches, or Quality Mode for deep research.

🧭 **Pick your sources** - Search the web, discussions, or academic papers. More sources and integrations are in progress.

🧩 **Widgets** - Helpful UI cards that show up when relevant, like weather, calculations, stock prices, and other quick lookups.

🔍 **Web search powered by SearxNG** - Access multiple search engines while keeping your identity private. Support for Tavily and Exa coming soon for even better results.
There are mainly 2 ways of installing Perplexica - with Docker, and without Docker.

Perplexica can be easily run using Docker. Simply run the following command:

```
docker run -d -p 3000:3000 -v perplexica-data:/home/perplexica/data --name perplexica itzcrazykns1337/perplexica:latest
```

This will pull and start the Perplexica container with the bundled SearxNG search engine. Once running, open your browser and navigate to http://localhost:3000. You can then configure your settings (API keys, models, etc.) directly in the setup screen.

If you already have SearxNG running, you can use the slim version of Perplexica:
---

**docs/API/SEARCH.md**
Use the `id` field as the `providerId` and the `key` field from the models array.

### Request

The API accepts a JSON object in the request body, where you define the enabled search `sources`, chat models, embedding models, and your query.

#### Request Body Structure
        "key": "text-embedding-3-large"
      },
      "optimizationMode": "speed",
      "sources": ["web"],
      "query": "What is Perplexica",
      "history": [
        ["human", "Hi, how are you?"],
### Request Parameters

- **`chatModel`** (object, required): Defines the chat model to be used for the query. To get available providers and models, send a GET request to `http://localhost:3000/api/providers`.
  - `providerId` (string): The UUID of the provider. You can get this from the `/api/providers` endpoint response.
  - `key` (string): The model key/identifier (e.g., `gpt-4o-mini`, `llama3.1:latest`). Use the `key` value from the provider's `chatModels` array, not the display name.
- **`embeddingModel`** (object, required): Defines the embedding model for similarity-based searching. To get available providers and models, send a GET request to `http://localhost:3000/api/providers`.
  - `providerId` (string): The UUID of the embedding provider. You can get this from the `/api/providers` endpoint response.
  - `key` (string): The embedding model key (e.g., `text-embedding-3-large`, `nomic-embed-text`). Use the `key` value from the provider's `embeddingModels` array, not the display name.
- **`sources`** (array, required): Which search sources to enable. Available values:
- **`optimizationMode`** (string, optional): Specifies the optimization mode to control the balance between performance and quality. Available modes:
  - `speed`: Prioritize speed and return the fastest answer.
  - `balanced`: Provide a balanced answer with good speed and reasonable quality.
  - `quality`: Prioritize answer quality (may be slower).
- **`query`** (string, required): The search query or question.
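Putting the documented parameters together, a request body can be built like this. The `providerId` values below are placeholders — fetch real ones from `GET /api/providers` on your instance:

```typescript
// Build a /api/search request body following the documented parameters.
// The providerId values are placeholder UUIDs, not real ones.

const body = {
  chatModel: {
    providerId: "00000000-0000-0000-0000-000000000000", // placeholder
    key: "gpt-4o-mini",
  },
  embeddingModel: {
    providerId: "00000000-0000-0000-0000-000000000000", // placeholder
    key: "text-embedding-3-large",
  },
  sources: ["web"],
  optimizationMode: "balanced",
  query: "What is Perplexica",
};

// To send it against a running instance (not executed here):
// fetch("http://localhost:3000/api/search", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```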
The response from the API includes both the final message and the sources used to generate it.

      "message": "Perplexica is an innovative, open-source AI-powered search engine designed to enhance the way users search for information online. Here are some key features and characteristics of Perplexica:\n\n- **AI-Powered Technology**: It utilizes advanced machine learning algorithms to not only retrieve information but also to understand the context and intent behind user queries, providing more relevant results [1][5].\n\n- **Open-Source**: Being open-source, Perplexica offers flexibility and transparency, allowing users to explore its functionalities without the constraints of proprietary software [3][10].",
      "sources": [
        {
          "content": "Perplexica is an innovative, open-source AI-powered search engine designed to enhance the way users search for information online.",
          "metadata": {
            "title": "What is Perplexica, and how does it function as an AI-powered search ...",
---

**docs/architecture/README.md**

Perplexica is a Next.js application that combines an AI chat experience with search.

For a high level flow, see [WORKING.md](WORKING.md). For deeper implementation details, see [CONTRIBUTING.md](../../CONTRIBUTING.md).

## Key components

1. **User Interface**
   - A web-based UI that lets users chat, search, and view citations.
2. **API Routes**
   - `POST /api/chat` powers the chat UI.
   - `POST /api/search` provides a programmatic search endpoint.
   - `GET /api/providers` lists available providers and model keys.
3. **Agents and Orchestration**
   - The system classifies the question first.
   - It can run research and widgets in parallel.
   - It generates the final answer and includes citations.
4. **Search Backend**
   - A meta search backend is used to fetch relevant web results when research is enabled.
5. **LLMs (Large Language Models)**
   - Used for classification, writing answers, and producing citations.
6. **Embedding Models**
   - Used for semantic search over user-uploaded files.
7. **Storage**
   - Chats and messages are stored so conversations can be reloaded.
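Embedding-based semantic search over uploads boils down to ranking text chunks by vector similarity. A minimal cosine-similarity sketch (illustrative only, not Perplexica's actual code):

```typescript
// Minimal cosine-similarity ranking, the core idea behind semantic
// search over uploaded files. Illustrative only.

function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function cosineSimilarity(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Rank document chunks against a query embedding, most similar first.
function rank(
  queryEmbedding: number[],
  chunks: { text: string; embedding: number[] }[],
) {
  return [...chunks].sort(
    (x, y) =>
      cosineSimilarity(queryEmbedding, y.embedding) -
      cosineSimilarity(queryEmbedding, x.embedding),
  );
}
```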