WorkCenter is a Kubernetes-native operations layer for Claude Code agent teams. The Go backend watches ~/.claude/ state files via fsnotify, broadcasts changes over WebSocket/SSE, and serves a D3 force-directed "living brain" dashboard as a single embedded HTML file. It targets ARM64 (Raspberry Pi clusters) and is deployed via ArgoCD.
Two binaries:
- `cmd/dashboard/` — HTTP server, WebSocket hub, REST API, embedded frontend (this is the product)
- `cmd/workcenter/` — orchestrator binary (stub, not yet implemented)
make build-local # compile both binaries for host arch → dist/
make build # cross-compile for linux/arm64 → dist/
make test # go test ./...
make lint # go vet ./...
make docker      # docker build -t workcenter:latest .

Verify before committing:

go build ./... && go vet ./...

CI expects both `go build ./...` and `go vet ./...` to pass cleanly with zero output.
cmd/dashboard/main.go Entry point. Wires watcher → hub → handler → mux.
internal/watcher/watcher.go fsnotify watcher for ~/.claude/teams/ and ~/.claude/tasks/.
Caches state in-memory, emits Event structs to subscribers.
internal/ws/hub.go WebSocket hub. Upgrades /ws connections, broadcasts flattened
events to all clients. FlattenEvent() merges Event.Data into
top-level JSON alongside type and team.
internal/api/handler.go REST handlers. All routes registered in RegisterRoutes().
Legacy routes at /api/*, versioned at /api/v1/*.
internal/state/ State read/write layer (stub).
internal/orchestrator/ Agent lifecycle management (stub).
pkg/sdk/ Public Go SDK (stub).
web/embed.go go:embed directive — embeds *.html *.js *.css from web/.
web/index.html The brain interface. Single file, ~2400 lines, vanilla JS + D3.
web/canvas-engine.js D3 force simulation + Canvas renderer (imported by index.html).
web/ui-layer.js UI overlays: tooltip, drawer, feed, HUD.
web/api-bridge.js WebSocket/REST data bridge + demo mode.
web/ui-styles.css All CSS for UI overlays.
~/.claude/ files → fsnotify → Watcher.handleEvent() → Subscriber callbacks
→ Handler.BroadcastEvent() → Hub.Broadcast() (WebSocket) + SSE channels
→ Browser: WebSocket /ws or SSE /api/v1/events
Events are flattened — Event.Data fields merged into top level:
{ "type": "team_update", "team": "...", "members": [...], "tasks": [...] }
{ "type": "task_update", "taskId": "...", "status": "...", "owner": "...", "team": "..." }
{ "type": "message", "from": "...", "to": "...", "body": "...", "ts": "...", "team": "..." }
{ "type": "agent_status", "agent": "...", "state": "idle|working|waiting|done", "team": "..." }

| Method | Path | Description |
|---|---|---|
| GET | /health | Health check |
| GET | /api/v1/teams | List teams with member/task counts |
| GET | /api/v1/teams/{team}/tasks | List tasks for a team |
| POST | /api/v1/spawn | Spawn agent {team, role, prompt} |
| POST | /api/v1/nudge | Send message {agent, team, message} |
| DELETE | /api/v1/agent | Kill agent {agent, team} |
| GET | /api/v1/events | SSE stream |
| POST | /api/v1/decompose | Proxy to Anthropic API {objective} |
| GET | /ws | WebSocket upgrade |
Legacy routes (no /v1/ prefix) still work for backwards compatibility.
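The event flattening described above can be sketched as follows. This is a minimal illustration, not the actual `ws.FlattenEvent()` implementation; the `Event` field names beyond `Type`, `Team`, and `Data` are taken from the schema examples above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event mirrors the shape the watcher produces: a type, a team, and nested data.
type Event struct {
	Type string
	Team string
	Data map[string]any
}

// flattenEvent merges Event.Data into a top-level object alongside
// "type" and "team", matching the flattened schema the frontend expects.
func flattenEvent(e Event) map[string]any {
	out := make(map[string]any, len(e.Data)+2)
	for k, v := range e.Data {
		out[k] = v
	}
	out["type"] = e.Type
	out["team"] = e.Team
	return out
}

func main() {
	e := Event{
		Type: "task_update",
		Team: "alpha",
		Data: map[string]any{"taskId": "t1", "status": "done", "owner": "lead"},
	}
	b, _ := json.Marshal(flattenEvent(e))
	fmt.Println(string(b))
}
```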
~/.claude/teams/{name}/config.json TeamConfig: name, members[], createdAt
~/.claude/teams/{name}/messages.jsonl Append-only message log
~/.claude/teams/{name}/inboxes/ Per-agent message inboxes
~/.claude/tasks/{team}/{id}.json TaskFile: id, subject, status, owner, blockedBy, blocks
- Standard Go. No linter beyond `go vet`. No custom style rules.
- Flat package structure. Each package in `internal/` does one thing. No sub-packages unless there's a real reason.
- Concrete types over interfaces. Only define interfaces at consumption boundaries, not on the provider side. The watcher, hub, and handler are concrete structs.
- Errors returned, never panicked. `log.Fatalf` only in `main()` for startup failures.
- JSON everywhere. All API responses are `application/json`. Errors are `{"error": "..."}`.
- http.ServeMux with Go 1.22 patterns. Routes use `"METHOD /path"` format. Path params via `r.PathValue()`.
- No frameworks. stdlib `net/http` + gorilla/websocket + fsnotify. That's it.
- Generics used sparingly. `readJSON[T]()` in watcher is the pattern — only where it eliminates real duplication.
- Files: lowercase, no underscores (e.g., `handler.go`, `hub.go`)
- Packages: single word, lowercase (`watcher`, `ws`, `api`, `state`)
- Types: PascalCase, exported (`Handler`, `Hub`, `Watcher`, `TeamConfig`, `TaskFile`)
- Constructors: `New()` or `NewX()` (e.g., `api.New()`, `ws.NewHub()`, `watcher.New()`)
- Handler methods: `handle<Action>`, unexported, registered in `RegisterRoutes()`
- Request/response types: defined adjacent to the handler that uses them
- Subscriber pattern for watcher events. `Watcher.Subscribe(func(Event))` — callbacks invoked synchronously on the watcher goroutine. Keep them fast.
- Mutex-guarded state. Watcher uses `stateMu` (RWMutex) for cached teams/tasks, separate from `mu` for the subscriber list. Hub uses `mu` for the client set. Handler uses `sseMu` for SSE channels.
- SSE with channel-per-client. Buffered channel (64); slow clients get dropped (`select { case ch <- msg: default: }`).
- FlattenEvent for frontend compat. The watcher produces `Event{Type, Team, Data}` with nested Data. `ws.FlattenEvent()` merges Data fields into top-level JSON. Both WebSocket and SSE use this.
- go:embed for frontend. `web/embed.go` embeds all html/js/css. The mux serves `web.FS` at `/`. No build step for the frontend.
- Flag-based config. `--addr` and `--claude-dir` flags. No env var config except `ANTHROPIC_API_KEY` (read in the decompose handler only).
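The channel-per-client drop pattern above can be sketched in isolation like this. The type and method names (`sseBroker`, `subscribe`, `broadcast`) are illustrative, not the names used in `internal/api/handler.go`.

```go
package main

import (
	"fmt"
	"sync"
)

// sseBroker sketches the channel-per-client SSE pattern: each client
// gets a buffered channel, and a full buffer means the client is
// skipped rather than blocking the broadcaster.
type sseBroker struct {
	mu      sync.Mutex
	clients map[chan string]struct{}
}

func newSSEBroker() *sseBroker {
	return &sseBroker{clients: make(map[chan string]struct{})}
}

func (b *sseBroker) subscribe() chan string {
	ch := make(chan string, 64) // buffered at 64, per the pattern above
	b.mu.Lock()
	b.clients[ch] = struct{}{}
	b.mu.Unlock()
	return ch
}

func (b *sseBroker) broadcast(msg string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for ch := range b.clients {
		select {
		case ch <- msg: // fast client: delivered
		default: // slow client: dropped, broadcaster never blocks
		}
	}
}

func main() {
	b := newSSEBroker()
	ch := b.subscribe()
	b.broadcast(`{"type":"agent_status","agent":"a1","state":"working"}`)
	fmt.Println(<-ch)
}
```

The non-blocking `select` is the whole point: one stalled browser tab must not back-pressure every other dashboard client.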
- Do not add a web framework (gin, echo, chi, fiber). The stdlib mux is sufficient.
- Do not add a config file or config library (viper, envconfig). Flags + env vars only.
- Do not add an ORM or database. State is the filesystem. The watcher IS the data layer.
- Do not add middleware chains. If you need CORS or logging, add it as a simple wrapper in main.go, not an abstraction.
- Do not create interfaces for internal types. `*watcher.Watcher`, `*ws.Hub`, `*api.Handler` are passed as concrete pointers. Interfaces only if you need to mock in tests.
- Do not add build tooling for the frontend. No webpack, no npm, no bundler. The brain interface is vanilla JS files embedded directly. If you need a library, load it from a CDN.
- Do not split handler.go into multiple files. All handlers live in one file, registered in one place. This makes the API surface scannable.
- Do not change the WebSocket event schema. The frontend and backend must agree on the flattened event format. Changing it breaks live dashboards.
- Do not use gorilla/mux. The project uses `gorilla/websocket` only. Routing is stdlib.
- Do not add vendor/. Dependencies are managed via `go.mod`/`go.sum`, not vendored.
- Image: `forgejo.local/rpi-cluster/claude-dashboard:latest` or `workcenter:latest`
- Dockerfile: Multi-stage, `golang:1.22-alpine` → `gcr.io/distroless/static:nonroot`, linux/arm64
- K8s namespace: `claude-agents`
- Storage: iSCSI PVC (`iscsi-retain` StorageClass), mounted at `/data/.claude` on the dashboard pod (read-only)
- Service: NodePort 30801
- External: `https://workcenter.holm.chat` via Cloudflare Tunnel
- Env: `ANTHROPIC_API_KEY` for the decompose endpoint (optional)
- Flag: `--claude-dir=/data/.claude` to point the watcher at the PVC mount
- Single HTML file, ~2400 lines. Canvas-based rendering with D3 v7 force simulation.
- External deps (CDN only): D3 v7, IBM Plex Mono font
- Demo mode activates automatically when backend is unreachable
- Node colors: blue=#00d4ff (objective), amber=#ff9500 (active), green=#00ff88 (done), red=#ff3366 (blocked), gray=#445566 (pending)
- The animation loop runs at 60fps via requestAnimationFrame
- All REST calls go to /api/v1/* endpoints
- Objective decomposition calls /api/v1/decompose (server-side proxy, no client-side API key)