A dual-language (Go + Python) cognitive prediction engine using LLMs.
VI-Sense is an intelligent system designed to predict what’s on a user’s mind by continuously asking context-aware questions. It establishes a real-time session with the user, processes each answer, and uses an LLM-powered reasoning loop to generate the next best question, ultimately converging on a prediction.
The system is built with:
- Golang (WebSocket server + gRPC client)
- Python (LLM inference service using LangChain + Groq API)
- Redis (session storage)
- MongoDB (user data persistence)
- The mobile app connects to the Go service over WebSockets.
- Go creates a unique session per game and stores session metadata in Redis.
- For each user answer, the Go server sends the data to the Python LLM service over gRPC, using the shared `.proto` contracts.
- Python receives the context and generates the next question using LangChain + Groq.
- Python returns the generated question via gRPC.
- Go pushes it to the client over WebSockets in real-time.
- Redis stores all Q/A pairs per session.
- As context grows, the LLM becomes more confident and predicts the final intent or thought.
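The flow above can be sketched on the Go side as a per-session Q/A history that is flattened into the context sent to the LLM service. This is an illustrative sketch only: the `Session` and `QAPair` names and the context format are assumptions, not the actual implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// QAPair holds one question asked by the LLM and the user's answer.
// Names here are illustrative; the real session model may differ.
type QAPair struct {
	Question string
	Answer   string
}

// Session mirrors the per-game metadata that Redis would store.
type Session struct {
	ID    string
	Pairs []QAPair
}

// AddAnswer records a question together with the user's answer.
func (s *Session) AddAnswer(question, answer string) {
	s.Pairs = append(s.Pairs, QAPair{Question: question, Answer: answer})
}

// BuildContext flattens the Q/A history into the prompt context
// that would be forwarded to the Python LLM service over gRPC.
func (s *Session) BuildContext() string {
	var b strings.Builder
	for i, p := range s.Pairs {
		fmt.Fprintf(&b, "Q%d: %s\nA%d: %s\n", i+1, p.Question, i+1, p.Answer)
	}
	return b.String()
}

func main() {
	s := &Session{ID: "game-123"}
	s.AddAnswer("Is it a living thing?", "Yes")
	s.AddAnswer("Is it an animal?", "No")
	fmt.Print(s.BuildContext())
}
```

Because the whole history is replayed each turn, the LLM's context (and confidence) grows monotonically until it is ready to make a prediction.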
| Component | Technology |
|---|---|
| Go Service | Echo HTTP Framework, WebSockets, gRPC Client |
| Python Service | LangChain, Groq API, gRPC Server |
| LLM Provider | Groq (Mixtral, Llama, etc.) |
| Databases | MongoDB (user data), Redis (session storage) |
```
VI-Sense-backend/
│── Go/                  # WebSocket + gRPC client
│   ├── cmd/             # Entry point + database
│   ├── internal/
│   ├── pkg/
│   └── proto/
│
│── Python/              # LLM microservice
│   ├── app/
│   ├── proto/
│   ├── Dockerfile
│   └── main.py
│
└── README.md
```
Create a `.env` file in both the Go and Python services:

```env
MONGODB_URI=
REDIS_URL=
DATABASE_NAME=
PORT=
LLM_PORT=
LLM_HOST=
GROQ_API_KEY=
```
```bash
git clone https://github.com/suhas-developer07/VI-Sense-backend.git
cd Go
make run    # or: go run ./cmd
```

Starts:
- WebSocket server
- gRPC client
- Redis integration
```bash
cd Python
pip install -r requirements.txt
python main.py
```

Starts:
- gRPC server
- LangChain pipeline
- Groq-powered question generator
- Mobile App connects → Go WebSocket
- Go creates a session → stores in Redis
- User answers → Go sends to Python via gRPC
- LLM → generates next question
- Python → Go → User
- Loop continues until prediction
Currently deployed using:
- Railway.app for both Go and Python services
- Internal networking for gRPC communication

Live at: https://sixthsense-production.up.railway.app/
- Real-time question-answer loop
- Context-aware LLM prompting
- Predictive reasoning engine
- Redis-backed session storage
- Microservice architecture (Go ↔ Python)
- Scalable WebSocket infrastructure
- LangChain prompt orchestration
- gRPC high-performance communication
Licensed under the Apache License 2.0.
See LICENSE file for more details.
Pull requests are welcome! For major changes, please open an issue to discuss what you would like to improve.
If you like VI-Sense, please ⭐ the repository on GitHub — it helps a lot!