Voice-first journaling app that demonstrates how Redis Agent Memory Server can power long-term memory, session continuity, and semantic retrieval for a personal assistant that remembers what you said across conversations.
- Demo Objectives
- Tech Stack
- Prerequisites
- Getting Started
- Google Calendar Setup
- Screenshots
- Architecture
- Project Structure
- Usage
- Docker Commands Reference
- Cloud Deployment
- Resources
- Maintainers
- License

## Demo Objectives

- Voice capture and transcription with Sarvam AI for voice-first journaling
- Long-term memory storage with Redis Agent Memory Server for cross-session recall
- Working memory continuity by persisting session conversation turns
- Intent-aware responses using a RedisVL semantic router for log, chat, and calendar flows
- Voice playback and summaries using streaming TTS plus optional Ollama-backed response generation
- Mood and schedule context through mood logging and optional Google Calendar integration
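The working-memory objective above can be pictured with a minimal in-process sketch. The real app delegates this to Redis Agent Memory Server; the class and method names below are assumptions for illustration, not project code.

```python
# Illustrative in-process model of working-memory continuity.
# The real app delegates this to Redis Agent Memory Server; the
# class and method names here are assumptions for the sketch.
from collections import defaultdict


class WorkingMemory:
    """Keeps conversation turns per session so follow-ups stay in context."""

    def __init__(self) -> None:
        self._sessions = defaultdict(list)

    def append_turn(self, session_id: str, role: str, text: str) -> None:
        # Each turn is stored in order under its session id.
        self._sessions[session_id].append({"role": role, "text": text})

    def turns(self, session_id: str) -> list:
        # Returns a copy; unknown sessions yield an empty history.
        return list(self._sessions[session_id])
```

Reusing the same `session_id` across requests is what lets a follow-up like "and what about yesterday?" resolve against earlier turns.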
## Tech Stack

| Layer | Technology | Purpose |
|---|---|---|
| Memory | Redis Agent Memory Server | Long-term and working memory management |
| Database | Redis Cloud or Redis Stack | Journal storage, indexes, and vector-backed retrieval |
| Voice | Sarvam AI | Speech-to-text and text-to-speech |
| Backend | FastAPI | API endpoints for journaling, chat, mood, and calendar |
| Frontend | Next.js 16 + React 19 | Voice journal UI |
| Intent Routing | RedisVL + OpenAI embeddings | Semantic intent detection |
| Response Generation | Ollama (optional) | Natural-language journal answers |
| Deployment | Docker Compose | Local containerized development and demos |
## Prerequisites

- Python 3.11+
- Node.js 18+
- Docker and Docker Compose
- Redis instance reachable by the Agent Memory Server
- Sarvam AI API key
- OpenAI API key for embeddings and Agent Memory Server extraction
- Optional: Ollama if you want richer local journal responses
- Optional: Google Calendar OAuth credentials for schedule features
## Getting Started

```bash
git clone https://github.com/bhavana-giri/voice_ai_redis_memory_demo.git
cd voice_ai_redis_memory_demo
```

Copy the backend and frontend environment templates:
```bash
cp .env.example .env
cp frontend/.env.local.example frontend/.env.local
```

Recommended minimum `.env` values:
```bash
SARVAM_API_KEY=your_sarvam_api_key_here
REDIS_URL=redis://default:password@your-redis-host:port
MEMORY_SERVER_URL=http://localhost:8000
OPENAI_API_KEY=sk-your_openai_api_key_here
NEXT_PUBLIC_API_URL=http://localhost:8080
CORS_ORIGINS=http://localhost:3000
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2
```

Environment notes by runtime:
| Variable | Local development | Docker on localhost |
|---|---|---|
| MEMORY_SERVER_URL | http://localhost:8000 | http://memory-server:8000 for the backend container |
| NEXT_PUBLIC_API_URL | http://localhost:8080 | http://localhost:8080 |
| CORS_ORIGINS | http://localhost:3000 | http://localhost:3000 |
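As a convenience, a small startup check like the following (illustrative, not part of the repo) can confirm the required variables are set before launching the backend:

```python
# Illustrative startup check (not part of the repo): confirm the
# required .env values are present before launching the backend.
import os

# Names match the .env template above.
REQUIRED = ["SARVAM_API_KEY", "REDIS_URL", "MEMORY_SERVER_URL", "OPENAI_API_KEY"]


def missing_vars(env=os.environ) -> list:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]


if __name__ == "__main__":
    missing = missing_vars()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
```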
For local frontend development, keep `frontend/.env.local` aligned with the backend URL:

```bash
NEXT_PUBLIC_API_URL=http://localhost:8080
NEXT_PUBLIC_GOOGLE_CLIENT_ID=your_google_oauth_client_id_here
```

Start the memory server against your Redis instance:
```bash
docker run -p 8000:8000 \
  -e REDIS_URL=redis://default:<password>@<your-redis-host>:<port> \
  -e OPENAI_API_KEY=<your-openai-api-key> \
  redislabs/agent-memory-server:latest \
  agent-memory api --host 0.0.0.0 --port 8000 --task-backend=asyncio
```

The repo now includes a Compose-based deployment path similar to the reference project:
```bash
docker compose up --build
```

Services:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8080
- Agent Memory Server: http://localhost:8000
Notes:
- The frontend image bakes in `NEXT_PUBLIC_API_URL` at build time.
- The frontend image also bakes in `NEXT_PUBLIC_GOOGLE_CLIENT_ID` at build time.
- Set `NEXT_PUBLIC_GOOGLE_CLIENT_ID` in `.env` before running `docker compose up --build`, even if `GOOGLE_CLIENT_ID` is already set for the backend.
- The backend container overrides `MEMORY_SERVER_URL` to the Compose service hostname.
- Ollama is not started by Compose; if it is unavailable, the backend falls back to simpler text responses.
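The Ollama fallback behavior noted above can be sketched as follows. The function names and fallback text are assumptions, not the project's real code; only the probe against Ollama's `/api/tags` model-listing endpoint mirrors a real API call.

```python
# Illustrative fallback pattern: use Ollama when it is reachable,
# otherwise answer with a plain template. Function names and the
# fallback text are assumptions, not the project's real code.
import urllib.error
import urllib.request


def ollama_available(ollama_url: str = "http://localhost:11434") -> bool:
    """Probe Ollama's model-listing endpoint to see if the service is up."""
    try:
        urllib.request.urlopen(f"{ollama_url}/api/tags", timeout=1)
        return True
    except (urllib.error.URLError, OSError):
        return False


def generate_reply(prompt: str, ollama_url: str = "http://localhost:11434") -> str:
    if ollama_available(ollama_url):
        # A real implementation would call Ollama's generation API here.
        return f"(rich reply for: {prompt})"
    # Simple fallback when Ollama is down or absent.
    return f"Noted: {prompt}"
```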
### Backend

```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python -m uvicorn api.main:app --host 0.0.0.0 --port 8080
```

### Frontend
```bash
cd frontend
npm install
npm run dev
```

## Google Calendar Setup

Calendar support is optional and uses Google Calendar API OAuth, not an iCal URL.
- Create OAuth desktop credentials in Google Cloud Console.
- Save the downloaded file as `credentials.json` in the project root.
- Start the backend and call a calendar route once.
- Complete the browser-based OAuth consent flow to generate `token.json`.
If `credentials.json` or `token.json` is missing, the calendar API returns an empty list and the rest of the app still works.
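That graceful degradation can be sketched with a hypothetical helper (not the repo's actual `src/calendar_client.py`, which may differ in detail):

```python
# Hypothetical sketch of the graceful calendar fallback; the real
# logic lives in src/calendar_client.py and may differ in detail.
from pathlib import Path


def load_events(credentials_path: str = "credentials.json",
                token_path: str = "token.json") -> list:
    """Return calendar events, or an empty list when OAuth files are missing."""
    if not (Path(credentials_path).exists() and Path(token_path).exists()):
        # Degrade gracefully: journaling and chat keep working even
        # when calendar credentials were never configured.
        return []
    ...  # a real implementation would call the Google Calendar API here
```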
## Architecture

```mermaid
flowchart LR
    USER[User Voice Input] --> FE[Next.js Frontend]
    FE --> BE[FastAPI Backend]
    BE --> STT[Sarvam STT]
    BE --> TTS[Sarvam TTS]
    BE --> AGENT[VoiceJournalAgent]
    AGENT --> ROUTER[RedisVL Intent Router]
    AGENT <--> AMS[Redis Agent Memory Server]
    AGENT --> REDIS[(Redis Journal Data)]
    AGENT --> OLLAMA[Ollama Optional]
    AGENT --> CAL[Google Calendar Optional]
    TTS --> FE
```
- The frontend captures typed or recorded input and sends it to FastAPI.
- The backend transcribes audio with Sarvam when needed.
- The semantic router decides whether the request is a journal log, a journal recall query, or a calendar question.
- The agent reads working memory and long-term memory from Redis Agent Memory Server.
- The response is generated with Ollama when available, or with a simple fallback when it is not.
- The backend streams TTS audio back to the frontend for playback.
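The routing step in the flow above can be illustrated with a toy keyword router. The real project uses RedisVL semantic routing over OpenAI embeddings, so everything below is a simplified stand-in, not the actual router:

```python
# Toy stand-in for the RedisVL semantic router described above.
# The real router uses OpenAI embeddings and semantic similarity;
# this keyword version only illustrates the three intent buckets.
def route_intent(text: str) -> str:
    lowered = text.lower()
    if any(word in lowered for word in ("meeting", "schedule", "calendar")):
        return "calendar"          # schedule questions
    if lowered.startswith(("today i", "i feel", "log ")):
        return "journal_log"       # new journal entries
    return "journal_chat"          # recall queries over past entries
```

A semantic router generalizes this idea: instead of keyword matches, each intent is represented by reference phrases, and the closest embedding decides the bucket.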
## Project Structure

```
voice_ai_redis_memory_demo/
├── api/
│   └── main.py
├── src/
│   ├── analytics.py
│   ├── audio_handler.py
│   ├── calendar_client.py
│   ├── intent_router.py
│   ├── journal_manager.py
│   ├── journal_store.py
│   ├── memory_client.py
│   └── voice_agent.py
├── frontend/
│   ├── src/
│   │   ├── app/
│   │   ├── components/
│   │   └── types/
│   ├── .env.local.example
│   └── package.json
├── docker/
│   ├── Dockerfile.backend
│   └── Dockerfile.frontend
├── docker-compose.yml
├── requirements.txt
└── README.md
```
## Usage

- Record a voice journal entry from the modal, or type directly in chat.
- Save a mood snapshot from the dashboard header.
- Ask questions such as "What did I say about work this week?" or "Do I have meetings today?"
- Reuse the same chat session to benefit from working memory continuity.
- Review the sidebar schedule and journal feed in the frontend.
## Docker Commands Reference

Use these Compose commands for the local container workflow:
```bash
docker compose up --build
docker compose up -d
docker compose logs -f backend
docker compose logs -f frontend
docker compose logs -f memory-server
docker compose down
```

If you update frontend environment values such as `NEXT_PUBLIC_API_URL`, rebuild the frontend image so the new value is baked into the Next.js bundle.
## Cloud Deployment

This repository currently documents local development and Docker Compose deployment only.
Unlike the reference dealership demo, this project does not yet include a terraform/ directory or a cloud deployment guide. If you want to deploy it to a cloud VM or container platform, the main pieces to externalize are:
- a Redis instance reachable by the Agent Memory Server
- the Agent Memory Server service
- the FastAPI backend service
- the Next.js frontend with the correct `NEXT_PUBLIC_API_URL`
- secrets for Sarvam, OpenAI, and optional Google Calendar OAuth credentials
## Maintainers

- Bhavana Giri — @bhavana-giri
## License

This repository does not currently include a top-level LICENSE file. Add one before redistributing the project if you need explicit license terms.
