Ever wondered what happens if you put two LLMs on a message bus and leave them unsupervised? This repo. This happens.
Alice (a curious marine biologist) and Bob (a philosophical software engineer) chat with each other through Apache Kafka, guided by a tiny LangGraph workflow and powered by OpenAI. It’s like walkie‑talkies for nerdy AIs.
- Two independent agents that talk to each other through Kafka topics
- Small, readable LangGraph state machine: listen → reason → respond → (maybe) continue
- PII Monitor Agent: Real-time observer that detects personally identifiable information violations
- Real-time Visualization: Streamlit dashboard and LangGraph graph visualization
- LangSmith Observability: Full tracing and monitoring with LangSmith for LLMOps
- Pluggable personas for delightful chaos
- Bounded conversations via `max_turns` so your GPUs (and patience) survive
- Production-grade structured logging (no more `print()` statements)
- Environment-based configuration with sensible defaults
- Modular architecture for easy extension
- Each agent has a Kafka `consumer` (to listen) and a `producer` (to reply).
- Both agents run the same three-step graph (sketched below):
  - listen: pull the latest message from their topic
  - reason: build context and ask the LLM what to say next
  - respond: publish to the other agent's topic and increment the turn counter
- The conversation ends when `turn_count >= max_turns`.
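Here's a minimal, illustrative sketch of that loop in LangGraph. The node bodies are stubs; the real implementations in `src/a2a_conversation/` wire in the Kafka consumer/producer and the LLM:

```python
from typing import TypedDict

from langgraph.graph import END, StateGraph


class ConversationState(TypedDict):
    message: str      # latest message pulled off Kafka
    response: str     # what the LLM decided to say
    turn_count: int   # incremented on every respond
    max_turns: int    # conversation budget


def listen(state: ConversationState) -> dict:
    # Real node: poll the Kafka consumer. Stubbed here.
    return {"message": "<latest message from the topic>"}


def reason(state: ConversationState) -> dict:
    # Real node: build persona + history context and call the LLM.
    return {"response": f"reply to: {state['message']}"}


def respond(state: ConversationState) -> dict:
    # Real node: publish to the other agent's topic.
    return {"turn_count": state["turn_count"] + 1}


def should_continue(state: ConversationState) -> str:
    # Loop back to listen until the turn budget is spent.
    return "listen" if state["turn_count"] < state["max_turns"] else END


builder = StateGraph(ConversationState)
builder.add_node("listen", listen)
builder.add_node("reason", reason)
builder.add_node("respond", respond)
builder.set_entry_point("listen")
builder.add_edge("listen", "reason")
builder.add_edge("reason", "respond")
builder.add_conditional_edges("respond", should_continue)
graph = builder.compile()

final = graph.invoke({"message": "", "response": "", "turn_count": 0, "max_turns": 3})
print(final["turn_count"])  # 3
```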
Code to peek at: the `src/a2a_conversation/` package with modular components.
- Python 3.11+ recommended
  - Note: `pyproject.toml` currently says `>=3.14`. If you don't have a time machine, 3.11–3.12 work fine with the listed libs. Adjust if needed.
- A running Kafka broker at `localhost:9092`
- An OpenAI API key (`OPENAI_API_KEY`)
With uv (fast, nice, recommended):

```bash
# if you have uv installed
uv sync
```

Copy `.env.example` to `.env` and configure:

```bash
cp .env.example .env
```

Edit `.env` with your settings:
```bash
# Required
OPENAI_API_KEY=sk-...

# Optional (defaults shown)
LLM_MODEL=gpt-4
LLM_TEMPERATURE=0.8
LLM_MAX_TOKENS=500
KAFKA_BOOTSTRAP_SERVERS=localhost:9092
KAFKA_ALICE_TOPIC=alice-messages
KAFKA_BOB_TOPIC=bob-messages
MAX_TURNS=12
LOG_LEVEL=INFO
LOG_FILE=logs/a2a_conversation.log
```

All settings have sensible defaults and can be configured via environment variables.
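Under the hood this pattern is just environment lookups with fallbacks. Here's an illustrative reading of the defaults above (the package's actual config object may name things differently):

```python
import os

# Illustrative: how the defaults above map to plain environment lookups.
MAX_TURNS = int(os.getenv("MAX_TURNS", "12"))
LLM_MODEL = os.getenv("LLM_MODEL", "gpt-4")
LLM_TEMPERATURE = float(os.getenv("LLM_TEMPERATURE", "0.8"))
LLM_MAX_TOKENS = int(os.getenv("LLM_MAX_TOKENS", "500"))
KAFKA_BOOTSTRAP_SERVERS = os.getenv("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092")
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
```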
Run everything with a single command:

```bash
# Make sure your .env file has OPENAI_API_KEY set
docker-compose up
```

This starts both Kafka and the application in containers with health checks and proper networking.

To run in detached mode:

```bash
docker-compose up -d
```

View logs:

```bash
docker-compose logs -f a2a-conversation
```

Stop everything:

```bash
docker-compose down
```

Alternatively, run without Docker Compose. First, start Kafka:

```bash
docker run -d --name kafka -p 9092:9092 apache/kafka:latest
```

Then run the application locally:
```bash
# Using Python
python -m src.a2a_conversation.main

# Or with uv
uv run python -m src.a2a_conversation.main
```

What happens:

- Bob boots into listening mode and subscribes to `alice-messages`.
- Alice generates the first message via the LLM and publishes to `bob-messages`.
- The two take turns until `max_turns` is hit.
- Structured logs stream with timestamps, log levels, and context (console + file).
Default topics (from the code; a wiring sketch follows the list):

- Alice sends on: `alice-messages`, listens on: `bob-messages`
- Bob sends on: `bob-messages`, listens on: `alice-messages`
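If you want to poke the bus yourself, here's a minimal sketch of Alice's side of that wiring with kafka-python, using the default topic names. The message schema is illustrative, not the agents' actual format:

```python
import json

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
consumer = KafkaConsumer(
    "bob-messages",  # Alice listens on Bob's topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Publish on Alice's outbound topic (Bob consumes this one).
producer.send("alice-messages", {"sender": "Alice", "text": "Hello, Bob!"})
producer.flush()

# Block until something arrives from Bob, then print it.
for record in consumer:
    print(f"{record.value['sender']}: {record.value['text']}")
    break
```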
The project includes three visualization options with different dependency requirements:
Generate Mermaid diagrams and view a text-based graph structure:

```bash
uv run python examples/visualize_graph.py
```

This creates a `conversation_graph.mmd` file you can:
- View at mermaid.live
- Embed in GitHub markdown (renders automatically)
- Use in documentation
First, install the optional dashboard dependencies:

```bash
uv sync --extra dashboard
# or with pip
pip install -e ".[dashboard]"
```

Then launch the interactive dashboard to monitor conversations in real time:

```bash
streamlit run src/a2a_conversation/dashboard.py
```

Features:
- Live conversation timeline with color-coded messages
- Statistics dashboard showing message counts and lengths
- State graph visualization of the LangGraph workflow
- Auto-refresh to see new messages as they arrive
- Full conversation history with detailed state information
The dashboard will open in your browser at http://localhost:8501
Note: On Python 3.14, you may need cmake to build pyarrow (a Streamlit dependency). Consider using Python 3.11-3.12 for the dashboard.
Generate a static visualization of the LangGraph state machine:
```python
from a2a_conversation.agents.conversational_agent import ConversationalAgent
from a2a_conversation.utils.visualization import visualize_graph, print_graph_structure

# After creating an agent
alice = ConversationalAgent(...)

# Generate a PNG visualization
visualize_graph(alice.graph, "conversation_graph.png")

# Or generate a Mermaid diagram
visualize_graph(alice.graph, "conversation_graph.mmd", format="mermaid")

# Print the text structure
print_graph_structure(alice.graph)
```

Prerequisites for static PNG/SVG visualization (optional):

```bash
# Install the system dependency first
# macOS
brew install graphviz

# Ubuntu/Debian
sudo apt-get install graphviz graphviz-dev

# Then install the optional Python package
uv pip install -e ".[viz]"
# or with pip
pip install -e ".[viz]"
```

Note: The Streamlit dashboard works without pygraphviz - it uses Mermaid diagrams instead!
This project includes comprehensive LangSmith integration for production-grade observability and LLMOps.
- Get your LangSmith API key from smith.langchain.com
- Add to your `.env`:

  ```bash
  LANGCHAIN_TRACING_V2=true
  LANGCHAIN_API_KEY=your-langsmith-api-key-here
  LANGCHAIN_PROJECT=a2a-conversation
  ```

- Run your agents - tracing happens automatically!
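As a quick sanity check: once those variables are set, any LangChain-backed call is traced with no code changes. For example:

```python
from langchain_openai import ChatOpenAI

# With LANGCHAIN_TRACING_V2=true in the environment, this call is traced
# automatically and shows up under the LANGCHAIN_PROJECT project in LangSmith.
llm = ChatOpenAI(model="gpt-4", temperature=0.8)
print(llm.invoke("Say hi to Bob.").content)
```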
Conversational Agents:
- `listen_for_message`: Kafka message consumption with timing
- `reason_with_llm`: LLM reasoning with context, turn count, and conversation history
- `send_response`: message publishing with metadata
- `start_conversation` / `run_agent`: full conversation lifecycle

PII Monitor Agent:

- `check_for_pii`: PII detection analysis with categories checked (a toy version is sketched after this list)
- `process_message`: message processing pipeline with violation counts
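To make `check_for_pii` concrete, here's a toy regex-based version. The categories and patterns below are illustrative, not the agent's actual rule set:

```python
import re

# Hypothetical, minimal PII patterns for illustration only.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def check_for_pii(text: str) -> dict[str, list[str]]:
    """Return {category: matches} for every category that fires."""
    return {
        name: pattern.findall(text)
        for name, pattern in PII_PATTERNS.items()
        if pattern.search(text)
    }


print(check_for_pii("Reach me at alice@example.com or 555-867-5309"))
# {'email': ['alice@example.com'], 'phone': ['555-867-5309']}
```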
Trace Metadata Includes:
- Agent names and roles (initiator/listener)
- Turn counts and conversation length
- Message lengths and response times
- PII detection results (types, severity, violations)
- Conversation history context
Once enabled, view your traces at: https://smith.langchain.com
You'll see:
- Full conversation flows with timing breakdowns
- LLM calls with prompts, completions, and token usage
- Agent decision paths through the LangGraph state machine
- PII detection results with security insights
- Performance metrics for each conversation turn
```python
from a2a_conversation.utils.observability import ConversationTracer

tracer = ConversationTracer(project_name="my-custom-project")

# Log custom metrics
tracer.log_turn_metrics(
    agent_name="Alice",
    turn_count=5,
    message_length=128,
    response_time_ms=450.2,
)

# Trace custom operations
with tracer.trace_operation("custom_operation", tags=["analysis"]) as outputs:
    # Your code here
    outputs["result"] = "success"
```

- Debug faster: See exactly what your agents are thinking and saying
- Optimize performance: Identify slow LLM calls and bottlenecks
- Monitor quality: Track PII violations and conversation health
- Collaborate: Share traces with your team
- Analyze patterns: Use LangSmith's analytics to understand agent behavior
Set `LANGCHAIN_TRACING_V2=false` in your `.env` or remove the variable entirely.
All configuration is via environment variables (see `.env.example`):

- `MAX_TURNS`: how long they chat before touching grass
- `LLM_MODEL`, `LLM_TEMPERATURE`, `LLM_MAX_TOKENS`: control the LLM behavior
- `KAFKA_BOOTSTRAP_SERVERS`: point to your Kafka cluster
- `LOG_LEVEL`: DEBUG, INFO, WARNING, ERROR, CRITICAL
- `LANGCHAIN_TRACING_V2`, `LANGCHAIN_API_KEY`, `LANGCHAIN_PROJECT`: LangSmith observability settings
- Personas: edit `src/a2a_conversation/main.py` → `create_agent_personas()` (pirates? pastry chefs? pirate pastry chefs? see the sketch after this list)
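For illustration, a persona is just a system prompt. `create_agent_personas()` might look roughly like this; the actual return shape in `main.py` may differ:

```python
# Hypothetical shape, for illustration only.
def create_agent_personas() -> dict[str, str]:
    return {
        "Alice": (
            "You are Alice, a curious marine biologist. "
            "Ask probing questions and sneak in octopus facts."
        ),
        "Bob": (
            "You are Bob, a philosophical software engineer. "
            "Answer thoughtfully and occasionally quote Dijkstra."
        ),
    }
```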
- "ModuleNotFoundError: langgraph/langchain-openai/kafka-python" → dependencies not installed; run
uv syncorpip install. - "NoBrokersAvailable" → Kafka isn't up or not on
localhost:9092. Start Docker container or configureKAFKA_BOOTSTRAP_SERVERS. - "OpenAI Authentication Error" → set
OPENAI_API_KEYin.env. - Messages not flowing? Ensure both topics are implicitly created by the first
send(Kafka does this by default in many setups) or pre-create them. - Python version woes: if 3.14 jokes stop being funny, use 3.11–3.12.
- Check logs: Console output and
logs/a2a_conversation.log(configurable viaLOG_FILE).
- Why Kafka for two agents?
  - Because real systems scale, and because it's fun. You can split agents across machines, add more listeners, or log every message like a responsible adult.
- Do I need LangGraph?
  - You don't need it the way you don't need seatbelts. It keeps the flow explicit and debuggable.
- Can I change the model provider?
  - Yes. Swap out `ChatOpenAI` for your favorite langchain-compatible chat model (sketch below).
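A sketch of that swap, assuming the corresponding provider package (e.g. `langchain-anthropic`) is installed:

```python
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_openai import ChatOpenAI

# Any langchain-compatible chat model slots in; the agents only need
# something with .invoke(). Anthropic is shown as one example provider.
llm: BaseChatModel = ChatOpenAI(model="gpt-4", temperature=0.8)

# e.g. (requires the langchain-anthropic package):
# from langchain_anthropic import ChatAnthropic
# llm = ChatAnthropic(model="claude-3-5-sonnet-20241022", temperature=0.8)
```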
PRs welcome. If your change causes Alice and Bob to discuss the heat death of the universe for 10,000 turns, please also include snacks.
This project is offered as-is with many emojis and few guarantees. Use responsibly and hydrate frequently.