redis-developer/voice-ai-redis-memory-demo
Voice Journal with Redis Agent Memory Server


Voice-first journaling app that demonstrates how Redis Agent Memory Server can power long-term memory, session continuity, and semantic retrieval for a personal assistant that remembers what you said across conversations.

Voice Journal App

Demo Objectives

  • Voice capture and transcription with Sarvam AI for voice-first journaling
  • Long-term memory storage with Redis Agent Memory Server for cross-session recall
  • Working memory continuity by persisting session conversation turns
  • Intent-aware responses using a RedisVL semantic router for log, chat, and calendar flows
  • Voice playback and summaries using streaming TTS plus optional Ollama-backed response generation
  • Mood and schedule context through mood logging and optional Google Calendar integration
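The split between working memory (session turns) and long-term memory (cross-session facts) can be pictured with two simple record shapes. The field names below are hypothetical and only illustrate the concept; the actual Agent Memory Server schema differs:

```python
from dataclasses import dataclass, field

@dataclass
class WorkingMemoryTurn:
    """One conversation turn, kept for continuity within the current session."""
    session_id: str
    role: str      # "user" or "assistant"
    content: str

@dataclass
class LongTermMemory:
    """A durable fact extracted from conversation, recallable across sessions."""
    user_id: str
    text: str
    topics: list = field(default_factory=list)

# Within a session, turns give the assistant short-term continuity...
turn = WorkingMemoryTurn("session-1", "user", "Work was stressful today")
# ...while extracted facts persist after the session ends.
fact = LongTermMemory("user-1", "User finds work stressful", topics=["work", "mood"])
```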

Tech Stack

| Layer | Technology | Purpose |
| --- | --- | --- |
| Memory | Redis Agent Memory Server | Long-term and working memory management |
| Database | Redis Cloud or Redis Stack | Journal storage, indexes, and vector-backed retrieval |
| Voice | Sarvam AI | Speech-to-text and text-to-speech |
| Backend | FastAPI | API endpoints for journaling, chat, mood, and calendar |
| Frontend | Next.js 16 + React 19 | Voice journal UI |
| Intent Routing | RedisVL + OpenAI embeddings | Semantic intent detection |
| Response Generation | Ollama (optional) | Natural-language journal answers |
| Deployment | Docker Compose | Local containerized development and demos |

Prerequisites

  • Python 3.11+
  • Node.js 18+
  • Docker and Docker Compose
  • Redis instance reachable by the Agent Memory Server
  • Sarvam AI API key
  • OpenAI API key for embeddings and Agent Memory Server extraction
  • Optional: Ollama if you want richer local journal responses
  • Optional: Google Calendar OAuth credentials for schedule features

Getting Started

1. Clone the Repository

git clone https://github.com/bhavana-giri/voice_ai_redis_memory_demo.git
cd voice_ai_redis_memory_demo

2. Environment Configuration

Copy the backend and deployment environment template:

cp .env.example .env
cp frontend/.env.local.example frontend/.env.local

Recommended minimum .env values:

SARVAM_API_KEY=your_sarvam_api_key_here
REDIS_URL=redis://default:password@your-redis-host:port
MEMORY_SERVER_URL=http://localhost:8000
OPENAI_API_KEY=sk-your_openai_api_key_here
NEXT_PUBLIC_API_URL=http://localhost:8080
CORS_ORIGINS=http://localhost:3000
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2
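Before starting the services, it can help to verify the required settings are present. The short stdlib-only check below uses the variable names from the template above; the script itself is not part of the repo:

```python
import os

# Minimum variables from the .env template above.
REQUIRED = ["SARVAM_API_KEY", "REDIS_URL", "MEMORY_SERVER_URL", "OPENAI_API_KEY"]

def missing_vars(env=os.environ):
    """Return the required settings that are absent or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# Example with a partial configuration:
partial = {"SARVAM_API_KEY": "sk-...", "REDIS_URL": "redis://localhost:6379"}
print(missing_vars(partial))  # -> ['MEMORY_SERVER_URL', 'OPENAI_API_KEY']
```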

Environment notes by runtime:

| Variable | Local development | Docker on localhost |
| --- | --- | --- |
| MEMORY_SERVER_URL | http://localhost:8000 | http://memory-server:8000 (for the backend container) |
| NEXT_PUBLIC_API_URL | http://localhost:8080 | http://localhost:8080 |
| CORS_ORIGINS | http://localhost:3000 | http://localhost:3000 |

For local frontend development, keep frontend/.env.local aligned with the backend URL:

NEXT_PUBLIC_API_URL=http://localhost:8080
NEXT_PUBLIC_GOOGLE_CLIENT_ID=your_google_oauth_client_id_here

3. Start Agent Memory Server

Start the memory server against your Redis instance:

docker run -p 8000:8000 \
  -e REDIS_URL=redis://default:<password>@<your-redis-host>:<port> \
  -e OPENAI_API_KEY=<your-openai-api-key> \
  redislabs/agent-memory-server:latest \
  agent-memory api --host 0.0.0.0 --port 8000 --task-backend=asyncio
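A quick way to confirm the server came up is a readiness probe. The health path below is an assumption for illustration (check the Agent Memory Server docs for the actual route); the rest is stdlib Python:

```python
import urllib.request
import urllib.error

def memory_server_ready(base_url: str, path: str = "/v1/health") -> bool:
    """Return True if the memory server answers at the (assumed) health route."""
    url = base_url.rstrip("/") + path
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print(memory_server_ready("http://localhost:8000"))
```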

4. Run with Docker

The repo now includes a Compose-based deployment path similar to the reference project:

docker compose up --build

Services (as referenced by the Compose log commands below):

  • memory-server — Redis Agent Memory Server
  • backend — FastAPI API
  • frontend — Next.js UI

Notes:

  • The frontend image bakes in NEXT_PUBLIC_API_URL at build time.
  • The frontend image also bakes in NEXT_PUBLIC_GOOGLE_CLIENT_ID at build time.
  • Set NEXT_PUBLIC_GOOGLE_CLIENT_ID in .env before running docker compose up --build, even if GOOGLE_CLIENT_ID is already set for the backend.
  • The backend container overrides MEMORY_SERVER_URL to the Compose service hostname.
  • Ollama is not started by Compose; if it is unavailable, the backend falls back to simpler text responses.
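The Ollama fallback behavior described above can be sketched as follows. `POST /api/generate` is Ollama's real generation endpoint, but the function name and fallback text are illustrative, not the repo's actual code:

```python
import json
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434"

def generate_reply(prompt: str, model: str = "llama3.2") -> str:
    """Ask Ollama for a response; degrade to a simple template if unreachable."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL + "/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.load(resp)["response"]
    except (urllib.error.URLError, OSError):
        # Ollama not running: fall back to a plain acknowledgement.
        return f"Noted your entry: {prompt[:80]}"

print(generate_reply("Today was a good day"))
```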

5. Run for Development

Backend

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python -m uvicorn api.main:app --host 0.0.0.0 --port 8080

Frontend

cd frontend
npm install
npm run dev

Google Calendar Setup

Calendar support is optional and uses the Google Calendar API with OAuth credentials, not an iCal URL.

  1. Create OAuth desktop credentials in Google Cloud Console.
  2. Save the downloaded file as credentials.json in the project root.
  3. Start the backend and call a calendar route once.
  4. Complete the browser-based OAuth consent flow to generate token.json.

If credentials.json or token.json is missing, the calendar API returns an empty list and the rest of the app still works.
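That graceful degradation can be sketched with a hypothetical helper (the repo's `src/calendar_client.py` is the real implementation):

```python
import os

def upcoming_events(credentials_path: str = "credentials.json",
                    token_path: str = "token.json") -> list:
    """Return calendar events, or an empty list when OAuth files are missing."""
    if not (os.path.exists(credentials_path) and os.path.exists(token_path)):
        return []  # calendar disabled; the rest of the app keeps working
    # ...a real implementation would call the Google Calendar API here...
    raise NotImplementedError("wire up googleapiclient here")

print(upcoming_events("/nonexistent/credentials.json"))  # -> []
```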

Screenshots

Voice Journal App

Architecture

flowchart LR
    USER[User Voice Input] --> FE[Next.js Frontend]
    FE --> BE[FastAPI Backend]
    BE --> STT[Sarvam STT]
    BE --> TTS[Sarvam TTS]
    BE --> AGENT[VoiceJournalAgent]
    AGENT --> ROUTER[RedisVL Intent Router]
    AGENT <--> AMS[Redis Agent Memory Server]
    AGENT --> REDIS[(Redis Journal Data)]
    AGENT --> OLLAMA[Ollama Optional]
    AGENT --> CAL[Google Calendar Optional]
    TTS --> FE

Architecture Flow

  1. The frontend captures typed or recorded input and sends it to FastAPI.
  2. The backend transcribes audio with Sarvam when needed.
  3. The semantic router decides whether the request is a journal log, a journal recall query, or a calendar question.
  4. The agent reads working memory and long-term memory from Redis Agent Memory Server.
  5. The response is generated with Ollama when available, or with a simple fallback when it is not.
  6. The backend streams TTS audio back to the frontend for playback.
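Steps 3-5 above can be condensed into a skeleton turn handler. The keyword matcher below is a toy stand-in for the real RedisVL embedding router, and every name here is illustrative:

```python
def route_intent(text: str) -> str:
    """Toy stand-in for the semantic router: pick log / recall / calendar."""
    lowered = text.lower()
    if any(w in lowered for w in ("meeting", "schedule", "calendar")):
        return "calendar"
    if any(w in lowered for w in ("what did i", "remind me what", "recall")):
        return "recall"
    return "log"

def handle_turn(text: str, working_memory: list) -> str:
    """One agent turn: route the intent, consult memory, reply, persist the turn."""
    intent = route_intent(text)
    working_memory.append({"role": "user", "content": text})
    if intent == "calendar":
        reply = "Checking your schedule..."
    elif intent == "recall":
        reply = f"Searching past entries ({len(working_memory)} turns in session)..."
    else:
        reply = "Saved to your journal."
    working_memory.append({"role": "assistant", "content": reply})
    return reply

session = []
print(handle_turn("Do I have meetings today?", session))   # -> Checking your schedule...
print(handle_turn("What did I say about work?", session))
```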

Project Structure

voice_ai_redis_memory_demo/
├── api/
│   └── main.py
├── src/
│   ├── analytics.py
│   ├── audio_handler.py
│   ├── calendar_client.py
│   ├── intent_router.py
│   ├── journal_manager.py
│   ├── journal_store.py
│   ├── memory_client.py
│   └── voice_agent.py
├── frontend/
│   ├── src/
│   │   ├── app/
│   │   ├── components/
│   │   └── types/
│   ├── .env.local.example
│   └── package.json
├── docker/
│   ├── Dockerfile.backend
│   └── Dockerfile.frontend
├── docker-compose.yml
├── requirements.txt
└── README.md

Usage

  1. Record a voice journal entry from the modal, or type directly in chat.
  2. Save a mood snapshot from the dashboard header.
  3. Ask questions such as "What did I say about work this week?" or "Do I have meetings today?"
  4. Reuse the same chat session to benefit from working memory continuity.
  5. Review the sidebar schedule and journal feed in the frontend.

Docker Commands Reference

Use these Compose commands for the local container workflow:

docker compose up --build
docker compose up -d
docker compose logs -f backend
docker compose logs -f frontend
docker compose logs -f memory-server
docker compose down

If you update frontend environment values such as NEXT_PUBLIC_API_URL, rebuild the frontend image so the new value is baked into the Next.js bundle.

Cloud Deployment

This repository currently documents local development and Docker Compose deployment only.

Unlike the reference dealership demo, this project does not yet include a terraform/ directory or a cloud deployment guide. If you want to deploy it to a cloud VM or container platform, the main pieces to externalize are:

  • a Redis instance reachable by the Agent Memory Server
  • the Agent Memory Server service
  • the FastAPI backend service
  • the Next.js frontend with the correct NEXT_PUBLIC_API_URL
  • secrets for Sarvam, OpenAI, and optional Google Calendar OAuth credentials

License

This repository does not currently include a top-level LICENSE file. Add one before redistributing the project if you need explicit license terms.

