
ModelMemz

A lightweight CLI chat tool that lets you swap between LLMs while keeping a short-term memory of the conversation.

I built this because I noticed that most LLM APIs are stateless - they don’t remember anything between messages unless you manually include the full conversation history. I wanted something lightweight that could simulate memory across model calls, and also let me switch between different LLMs mid-conversation without losing context. This project keeps a running memory of recent messages and feeds them into each API call so the model can respond more naturally, like a real conversation.

What it does

  1. Stores all messages in chat_history.json.
  2. Replays the last N turns to whichever model you choose, so each model sees the same context.
  3. Lets you switch models by typing a letter (A-C).
  4. Generates responses using Groq’s Chat Completions API under the hood (a minimal sketch of the loop follows below).
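
Each call simply rebuilds the message list from the saved history before hitting the API. The sketch below shows how this could look with the Groq Python SDK; chat_history.json comes from the project, while N_TURNS, load_history, chat, and the default model are illustrative assumptions rather than the exact code in main.py.

```python
# Minimal sketch of the memory-replay loop (illustrative; not the exact main.py code).
import json
import os
from pathlib import Path

from groq import Groq

HISTORY_FILE = Path("chat_history.json")   # all messages are persisted here
N_TURNS = 10                               # how many recent messages to replay per call

client = Groq(api_key=os.environ["GROQ_API_KEY"])

def load_history() -> list[dict]:
    """Read the full conversation from disk (empty list on first run)."""
    return json.loads(HISTORY_FILE.read_text()) if HISTORY_FILE.exists() else []

def chat(user_text: str, model: str = "llama-3.3-70b-versatile") -> str:
    history = load_history()
    history.append({"role": "user", "content": user_text})

    # Replay only the last N turns, so whichever model is active sees the same context.
    reply = client.chat.completions.create(
        model=model,
        messages=history[-N_TURNS:],
    ).choices[0].message.content

    history.append({"role": "assistant", "content": reply})
    HISTORY_FILE.write_text(json.dumps(history, indent=2))
    return reply
```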

Built-in models

Key   Model ID
A     gemma2-9b-it
B     llama-3.3-70b-versatile
C     llama3-8b-8192
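
Switching is just a lookup from the typed letter to a model ID; because the shared history is replayed on every call, the new model picks up the conversation where the previous one left off. A rough sketch (the dictionary and function names here are assumptions for illustration):

```python
# Illustrative key-to-model mapping; main.py may structure this differently.
MODELS = {
    "A": "gemma2-9b-it",
    "B": "llama-3.3-70b-versatile",
    "C": "llama3-8b-8192",
}

current_model = MODELS["A"]

def maybe_switch(user_input: str) -> bool:
    """Switch the active model when the user types A, B, or C.

    Context is preserved because the shared chat_history.json is replayed
    into every request regardless of which model answers it.
    """
    global current_model
    key = user_input.strip().upper()
    if key in MODELS:
        current_model = MODELS[key]
        return True
    return False
```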

Requirements

pip install requests groq
export GROQ_API_KEY="your-real-groq-key"

Then start a chat session:

python main.py

Demo:

[Screenshot of a sample chat session]

Made by Yash Thapliyal 2025
