[Junjo Server](https://github.com/mdrideout/junjo-server) is an optional, free, open-source companion telemetry visualization platform for debugging Junjo workflows.

**Quick Start:**

```bash
# Create docker-compose.yml (see docs for full example)
# Start services
docker compose up -d

# Access UI at http://localhost:5153
```

**Features:**

- Interactive graph visualization with execution path tracking
- State step debugging - see every state change in chronological order
- LLM decision tracking and trace timeline
- Multi-execution comparison
- Built specifically for graph-based AI workflows

**Architecture:** Three-service Docker setup (backend, ingestion service, frontend) that runs on minimal resources (1GB RAM, shared vCPU).

See the [Junjo Server documentation](https://python-api.junjo.ai/junjo_server.html) for complete setup and configuration.
      - "50051:50051"  # OTel data ingestion (your app connects here)
      - "50052:50052"  # Internal gRPC
    volumes:
      - ./.dbdata/badgerdb:/dbdata/badgerdb
    env_file: .env
    networks:
      - junjo-network

  junjo-server-frontend:
    image: mdrideout/junjo-server-frontend:latest
    ports:
      - "5153:80"  # Web UI
    env_file: .env
    networks:
      - junjo-network
    depends_on:
      - junjo-server-backend
      - junjo-server-ingestion

networks:
  junjo-network:
    driver: bridge
**Start the services:**

.. code-block:: bash

   # Create .env file (see Configuration section below)
   cp .env.example .env

   # Start all services
   docker compose up -d

   # Access the UI
   open http://localhost:5153

Resource Requirements
---------------------

Junjo Server is designed to run on minimal resources:

- **CPU**: Single shared vCPU is sufficient
- **RAM**: 1GB minimum
- **Storage**: Uses SQLite, DuckDB, and BadgerDB (all embedded databases)

This makes it affordable to deploy on small cloud VMs.

Configuration
=============

Step 1: Generate an API Key
----------------------------

1. Open Junjo Server UI at http://localhost:5153
2. Navigate to Settings → API Keys
3. Create a new API key
4. Copy the key to your environment
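Step 4 can be made robust in application code. A minimal sketch, assuming the key is exported as ``JUNJO_SERVER_API_KEY`` (the variable name used in this document's troubleshooting tips; adjust to your deployment), that fails loudly instead of sending unauthenticated telemetry:

```python
import os


def load_junjo_api_key() -> str:
    """Read the Junjo Server API key from the environment.

    Raises instead of returning an empty string, so a misconfigured
    deployment fails at startup rather than silently dropping telemetry.
    """
    key = os.environ.get("JUNJO_SERVER_API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            "JUNJO_SERVER_API_KEY is not set. Create a key in the Junjo "
            "Server UI (Settings -> API Keys) and export it first."
        )
    return key
```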
Create an OpenTelemetry configuration file:

.. code-block:: python

   # Configure Junjo Server exporter
   junjo_exporter = JunjoServerOtelExporter(
       host="localhost",  # Junjo Server ingestion service host
       port="50051",      # Port 50051 receives OpenTelemetry data
       api_key=api_key,
       insecure=True      # Use False in production with TLS
   )
You can use Junjo Server alongside other platforms:

Platforms like Jaeger, Grafana, Honeycomb, etc. will receive all Junjo spans with their custom attributes, though they won't have Junjo Server's specialized workflow visualization.

Architecture Details
====================

Junjo Server uses a three-service architecture for scalability and reliability:

.. code-block:: text

   Your Application (Junjo Python Library)
       ↓ (sends OTel spans via gRPC)
   Ingestion Service :50051
       ↓ (writes to BadgerDB WAL)
       ↓ (backend polls via internal gRPC :50052)
   Backend Service :1323
       ↓ (stores in SQLite + DuckDB)
       ↓ (serves HTTP API)
   Frontend :5153
       (web UI)

**Port Reference:**

- **50051**: Public gRPC - Your application sends telemetry here
- **50052**: Internal gRPC - Backend reads from ingestion service
- **50053**: Internal gRPC - Backend server communication
- **1323**: Public HTTP - API server
- **5153**: Public HTTP - Web UI
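The port list above can be sanity-checked from the machine running your application with a stdlib-only probe (hostname and ports below are this document's defaults; only the public ports are expected to be reachable from outside the Docker network):

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Public Junjo Server ports from the reference above.
for name, port in [("ingestion gRPC", 50051), ("backend API", 1323), ("web UI", 5153)]:
    state = "reachable" if port_open("localhost", port) else "NOT reachable"
    print(f"{name} (:{port}): {state}")
```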
Troubleshooting
===============

No data appearing in Junjo Server
----------------------------------

- Verify API key is set correctly: ``echo $JUNJO_SERVER_API_KEY``
- Check services are running: ``docker compose ps``
- Ensure ingestion service is accessible on port 50051
- Look for connection errors in your application logs
- Check ingestion service logs: ``docker compose logs junjo-server-ingestion``

Missing LLM data
----------------

Performance issues
------------------

- Use sampling for high-volume workflows
- The ingestion service uses BadgerDB as a write-ahead log for durability
- Backend polls and indexes data asynchronously
- See `Junjo Server repository <https://github.com/mdrideout/junjo-server>`_ for tuning options