Your personal, AI-powered RAG knowledge base.
(Built with Next.js, Pinecone, Jina AI, and LLaMA 3.1)
Sec2ndBrain isn't just a note-taking app; it's an intelligent retrieval-augmented generation (RAG) platform for your personal data. It allows you to save notes, YouTube videos, and Twitter posts, and then chat with your knowledge base.
It uses:
- Jina AI: high-fidelity embeddings
- Pinecone: high-speed vector search
- Groq (LLaMA 3.1): query optimization and human-like response generation
This end-to-end TypeScript project demonstrates a scalable, modern AI stack.
> 💡 (Insert your app demo GIF here; essential for portfolio/showcases)
- 🧠 AI-Powered RAG Search: Chat directly with your data. Fetches context from your personal content for accurate, sourced answers.
- 🔍 Query Optimization: LLaMA 3.1 refines user queries before search (e.g., "find js video" → "show videos related to JavaScript tutorials").
- 💾 Unified Content Management: Manage notes, YouTube links, and Twitter posts, all in one place.
- ☁️ Cloud Media Storage: Profile photo uploads and CDN delivery with Cloudinary.
- 🔐 Secure Auth & Sharing:
  - JWT-based auth (httpOnly cookies)
  - Shareable profile links
  - Instant revocation
- 🛡️ End-to-End Type Safety: 100% TypeScript for a scalable, maintainable, reliable codebase.
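The JWT-plus-httpOnly-cookie scheme above boils down to HS256 signing and strict cookie flags. As a hedged illustration (not the project's actual code; `jsonwebtoken` would normally do the signing), here is what that mechanism looks like using only Node's `crypto`, along with the options an Express handler would pass to `res.cookie()`:

```typescript
// Minimal sketch of HS256 JWT signing/verification (what `jsonwebtoken`
// does under the hood) plus illustrative httpOnly cookie options.
// All names here are hypothetical, not the project's real identifiers.
import { createHmac } from "node:crypto";

const b64url = (buf: Buffer) => buf.toString("base64url");

export function signJwt(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  return `${header}.${body}.${sig}`;
}

export function verifyJwt(token: string, secret: string): Record<string, unknown> | null {
  const [header, body, sig] = token.split(".");
  const expected = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  if (sig !== expected) return null; // tampered token or wrong secret
  return JSON.parse(Buffer.from(body, "base64url").toString());
}

// Options an Express route would hand to res.cookie("token", jwt, cookieOptions):
export const cookieOptions = {
  httpOnly: true,                  // not readable by client-side JS (XSS hardening)
  secure: true,                    // sent over HTTPS only
  sameSite: "strict" as const,     // CSRF mitigation
  maxAge: 7 * 24 * 60 * 60 * 1000, // 7 days
};
```

Revocation is then just clearing the cookie server-side, which is why shareable links can be cut off instantly.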
| Layer | Technology | Purpose |
|---|---|---|
| Frontend | Next.js + TypeScript | Client-side rendering, routing, and UI |
| Backend | Node.js + Express | REST API, business logic, and orchestration |
| Database | MongoDB | Store user and content data |
| Authentication | JWT (httpOnly Cookies) | Secure, stateless auth |
| File Storage | Cloudinary | Profile photo uploads & CDN |
| Vector DB | Pinecone | Store and query text embeddings |
| Embeddings | Jina AI | Generate semantic embeddings |
| LLM | LLaMA 3.1 (Groq) | Query optimization and response generation |
```mermaid
flowchart TD
    subgraph Ingestion Flow
        A[User Adds Content] --> B[Express API];
        B --> C[Save to MongoDB];
        B --> D[Jina AI API] -- Embedding --> E[Pinecone DB];
    end

    subgraph RAG Search Flow
        F[User Queries] --> G[Next.js App];
        G --> H[Express API];
        H -- 1. Optimize --> I[Groq/LLaMA 3.1];
        I -- 2. Embed --> D;
        D -- 3. Vector Search --> E;
        E -- 4. Get Context --> H;
        H -- 5. Generate Answer --> I;
        I -- 6. AI Response --> G;
    end
```
When a user adds content, it's processed in two parallel paths:
- MongoDB: the raw text/link is saved as the primary record
- Pinecone: the text is embedded with Jina AI and stored under the user's namespace
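The ingestion snippet below leans on a `jina.embedText` helper that isn't shown. A minimal sketch of it is given here; the endpoint URL, model name, and response shape are assumptions based on Jina's OpenAI-style embeddings API, so verify them against your account:

```typescript
// Hypothetical `jina.embedText` wrapper around Jina's embeddings endpoint.
const JINA_URL = "https://api.jina.ai/v1/embeddings";

// Pure helper that builds the request payload (easy to unit-test).
export function buildEmbedRequest(text: string, model = "jina-embeddings-v3") {
  return { model, input: [text] };
}

export async function embedText(text: string): Promise<number[]> {
  const res = await fetch(JINA_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.JINA_API_KEY}`,
    },
    body: JSON.stringify(buildEmbedRequest(text)),
  });
  if (!res.ok) throw new Error(`Jina embedding failed: ${res.status}`);
  const json = await res.json();
  return json.data[0].embedding; // one input in, one vector out
}

export const jina = { embedText };
```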
```typescript
// 1. Store the raw content in MongoDB
const content = await Content.create({ userId, text: "..." });

// 2. Embed the text with Jina
const embedding = await jina.embedText(content.text);

// 3. Upsert the vector into the user's Pinecone namespace
await pinecone.upsert({
  namespace: userId,
  vectors: [
    {
      id: content._id.toString(), // Pinecone ids must be strings
      values: embedding,
      metadata: { text: content.text, type: content.type },
    },
  ],
});
```

The RAG process unfolds as:
1. Optimize Query: LLaMA 3.1 rephrases the search
2. Embed Query: Jina AI converts it to a vector
3. Retrieve Context: Pinecone finds the top-K matches
4. Generate Answer: LLaMA 3.1 synthesizes the final output
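`groq.optimize` and `groq.generate` in the search snippet are thin wrappers that aren't shown in full. A hedged sketch against Groq's OpenAI-compatible chat completions endpoint follows; the model id and system prompt are illustrative assumptions, not the project's actual values:

```typescript
// Hypothetical Groq wrapper (OpenAI-compatible chat completions API).
const GROQ_URL = "https://api.groq.com/openai/v1/chat/completions";

type ChatMessage = { role: string; content: string };

// Pure helper: the messages array used for query rewriting (unit-testable).
export function optimizeMessages(rawQuery: string): ChatMessage[] {
  return [
    {
      role: "system",
      content:
        "Rewrite the user's search query to be explicit and retrieval-friendly. " +
        "Reply with the rewritten query only.",
    },
    { role: "user", content: rawQuery },
  ];
}

async function chat(messages: ChatMessage[]): Promise<string> {
  const res = await fetch(GROQ_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
    },
    // "llama-3.1-8b-instant" is an assumed model id; swap in the one you use.
    body: JSON.stringify({ model: "llama-3.1-8b-instant", messages }),
  });
  if (!res.ok) throw new Error(`Groq request failed: ${res.status}`);
  const json = await res.json();
  return json.choices[0].message.content;
}

export const groq = {
  optimize: (q: string) => chat(optimizeMessages(q)),
  generate: (prompt: string) => chat([{ role: "user", content: prompt }]),
};
```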
```typescript
// 1. Optimize the query
const optimizedQuery = await groq.optimize(searchQuery);

// 2. Embed the optimized query
const queryEmbedding = await jina.embedText(optimizedQuery);

// 3. Retrieve context from the user's namespace
const results = await pinecone.query({
  namespace: userId,
  vector: queryEmbedding,
  topK: 5,
  includeMetadata: true,
});

// 4. Generate the answer from the retrieved context
const context = results.matches.map((r) => r.metadata.text).join("\n");
const aiResponse = await groq.generate(
  `Using this context:\n${context}\n\nAnswer the user's question: ${searchQuery}`
);
```

Follow these steps to set up the project locally.
- Node.js (v18+)
- MongoDB Atlas account
- Pinecone account
- Cloudinary account
- API keys for Jina AI and Groq
```bash
# Clone the repository
git clone https://github.com/your-username/sec2ndbrain.git
cd sec2ndbrain

# Install root dependencies
npm install

# Install client dependencies
cd client
npm install

# Install server dependencies
cd ../server
npm install
```

Create a `.env` file inside `/server` and fill in values based on `.env.example`:
```env
# MongoDB
MONGO_URI=your_mongodb_connection_string

# Authentication
JWT_SECRET=your_super_secret_jwt_key

# Cloudinary
CLOUDINARY_CLOUD_NAME=your_cloud_name
CLOUDINARY_API_KEY=your_api_key
CLOUDINARY_API_SECRET=your_api_secret

# AI Services
PINECONE_API_KEY=your_pinecone_key
JINA_API_KEY=your_jina_api_key
GROQ_API_KEY=your_groq_api_key
```

```bash
# Start backend (from /server)
npm run dev

# Start frontend (from /client)
npm run dev
```

- Multi-modal Embeddings: add support for images (screenshots, diagrams, etc.)
- Chat Interface: Transform search bar into persistent chat
- Auto-Summarization: Summarize long texts or YouTube videos
- Content Analytics: Dashboard for most-searched or top topics
- Team Workspaces: Shared Pinecone namespaces for collaboration
Developed by: Prateek Singh
This project is licensed under the MIT License.
