This document presents real-world scenarios where emerge excels, complete with problem descriptions, solutions, code examples, and expected outcomes. Each use case shows how emerge's decentralized synchronization solves actual problems better than traditional approaches. See Alternatives for comparisons with other solutions.
Your SaaS platform has 200 microservices making calls to an expensive third-party API with a rate limit of 100 requests per second. Without coordination, services hit rate limits and get throttled.
- Central rate limiter: Single point of failure, added latency
- Fixed time slots: Wastes capacity, rigid scheduling
- Token bucket: Complex distributed state management
```go
// Each microservice runs this; client is shared by the service.
client := emerge.MinimizeAPICalls(scale.Medium) // goal presets: ../concepts/goals.md
client.Start(ctx)

func (s *Service) CallExpensiveAPI(data []Request) {
    // Accumulate requests instead of sending them immediately.
    s.pendingRequests = append(s.pendingRequests, data...)

    // Once the swarm has converged, flush everything in one batch.
    if client.IsConverged() && len(s.pendingRequests) > 0 {
        // All services batch at the same time.
        batch := s.createBatch(s.pendingRequests)
        response := s.apiClient.BatchCall(batch)
        s.processBatchResponse(response)
        s.pendingRequests = nil
    }
}
```
- 80% reduction in API calls through batching
- No rate limit violations - synchronized services stay under limit
- Resilient - continues working even if some services fail
- Cost savings - Fewer API calls = lower bills
```
Before Emerge:
Service A: --x--x--x--x--x-- (many individual calls)
Service B: -x--x--x--x--x--- (hitting rate limits)
Service C: ---x--x--x--x--x- (getting throttled)

With Emerge:
Service A: --------XXXX----- (batched calls)
Service B: --------XXXX----- (synchronized)
Service C: --------XXXX----- (under rate limit)
```
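The example above leaves `createBatch` undefined. A minimal sketch, assuming the third-party API accepts batched calls up to some maximum payload size (`maxBatchSize` is an illustrative constant, not part of emerge):

```go
// maxBatchSize is an assumed cap on requests per batched call; the
// real limit depends on the third-party API.
const maxBatchSize = 500

// createBatch drains up to maxBatchSize pending requests into a single
// batched call, so each synchronized window costs one API call instead
// of one call per request.
func (s *Service) createBatch(pending []Request) []Request {
    if len(pending) > maxBatchSize {
        return pending[:maxBatchSize]
    }
    return pending
}
```

Note that the example clears `pendingRequests` after each flush; a production version would carry any overflow beyond `maxBatchSize` into the next window.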
You have 1000 battery-powered sensors that need to report data while maximizing battery life. Constant reporting drains batteries, but you need regular updates.
- Always-on: Batteries die in days
- Fixed schedule: Wastes energy, misses events
- Central coordinator: Doesn't scale, single point of failure
```go
// Each sensor runs this
client := emerge.SaveEnergy(scale.Large)
client.Start(ctx)

type Sensor struct {
    batteryLevel float64
    data         []Reading
}

func (s *Sensor) ManagePower() {
    for {
        // Collect readings with minimal power.
        s.collectData()

        // Synchronize for batch transmission.
        if client.IsConverged() && s.batteryLevel > 20 {
            // All sensors transmit together.
            s.transmitData()   // high-power operation
            s.enterSleepMode() // save energy
        }

        // Sparse activity pattern.
        time.Sleep(5 * time.Second)
    }
}
```
- 10x battery life improvement
- Predictable data collection windows
- Adaptive to battery levels - low-battery sensors conserve more (see the sketch after the comparison below)
- Self-organizing - no central coordinator needed
```
Traditional (Always-on):
Battery: 100% ████████████████████
Day 1: 80% ████████████████
Day 2: 60% ████████████
Day 3: 40% ████████
Day 4: 20% ████
Day 5: 0% Dead

Emerge (Synchronized):
Battery: 100% ████████████████████
Week 1: 90% ██████████████████
Week 2: 80% ████████████████
Week 3: 70% ██████████████
Week 4: 60% ████████████
```
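The results claim low-battery sensors conserve more, but the loop above sleeps a fixed 5 seconds. A minimal sketch of one way to stretch the duty cycle as the battery drains (the thresholds and multipliers are illustrative assumptions, not part of emerge):

```go
// sleepInterval lengthens the duty cycle as the battery drains, so a
// low-battery sensor samples and transmits less often. The thresholds
// are illustrative.
func (s *Sensor) sleepInterval() time.Duration {
    base := 5 * time.Second
    switch {
    case s.batteryLevel > 50:
        return base // healthy: normal sampling rate
    case s.batteryLevel > 20:
        return 2 * base // conserve: halve the sampling rate
    default:
        return 6 * base // critical: report rarely, stay alive
    }
}
```

Replacing the fixed `time.Sleep(5 * time.Second)` with `time.Sleep(s.sleepInterval())` would give each sensor battery-aware pacing.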
You run 50 trading bots across different exchanges. When they all try to execute trades simultaneously, they cause slippage and compete against each other, reducing profits.
- Market impact: Bots compete, moving prices against you
- Slippage: Simultaneous orders cause price movements
- Complexity: Managing inter-bot communication
```go
// Coordinate bots to trade at different times
client := emerge.DistributeLoad(scale.Small)
client.Start(ctx)

type TradingBot struct {
    exchange string
    strategy Strategy
}

func (bot *TradingBot) ExecuteTrades() {
    for {
        opportunity := bot.strategy.FindOpportunity()
        if opportunity != nil {
            // Anti-synchronize to avoid competition: low coherence
            // means the bots are well distributed in time.
            if client.Coherence() < 0.3 {
                // Each bot trades at a different moment.
                bot.executeTrade(opportunity)
                log.Printf("Bot %s executed trade", bot.exchange)
            }
        }
        time.Sleep(100 * time.Millisecond)
    }
}
```
- 30% reduction in slippage
- Better fill prices - bots don't compete
- Market impact minimized - trades distributed over time
- Higher profits - less self-competition
```
Without Emerge (All bots trade together):
Exchange A: ████████ (high slippage)
Exchange B: ████████ (competing bids)
Exchange C: ████████ (moving market)
Impact: -$5000 per day from self-competition

With Emerge (Distributed timing):
Exchange A: ██ ██ ██ ██ (spread out)
Exchange B: ██ ██ ██ (no competition)
Exchange C: ██ ██ ██ ██ (minimal impact)
Impact: +$3000 per day from better fills
```
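The bot above only trades while `client.Coherence()` is below 0.3. When the swarm drifts back toward synchrony, one simple reaction is to back off with random jitter until it spreads out again. A sketch of that idea (the jitter values are illustrative; `client` is the shared emerge client from the example):

```go
// waitUntilDistributed blocks until the swarm is spread out enough
// (coherence below 0.3) to trade without self-competition. The random
// jitter keeps bots from re-clustering by all retrying in lockstep.
func (bot *TradingBot) waitUntilDistributed() {
    for client.Coherence() >= 0.3 {
        jitter := time.Duration(rand.Intn(400)) * time.Millisecond
        time.Sleep(100*time.Millisecond + jitter)
    }
}
```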
Your multiplayer game needs to synchronize events across 100 players with minimal latency. Traditional client-server architecture introduces lag and doesn't scale well.
- Server bottleneck: All updates go through server
- Latency: Round-trip to server for every action
- Scaling costs: More players = bigger servers
```go
// Each game client runs this
client := emerge.MinimizeLatency(scale.Tiny)
client.Start(ctx)

type GameClient struct {
    playerID string
    state    GameState
}

func (gc *GameClient) SyncGameEvents() {
    ticker := time.NewTicker(50 * time.Millisecond) // 20 FPS
    defer ticker.Stop()

    for range ticker.C {
        // Quick synchronization for a real-time feel.
        if client.IsConverged() {
            // All players update together.
            gc.processLocalInputs()
            gc.broadcastState()
            gc.renderFrame()
        }
    }
}
```
- <100ms latency for state sync
- Smooth gameplay - synchronized frame updates
- P2P architecture - reduced server costs
- Scales horizontally - add players without server upgrades
```
Client-Server:
Player → Server → Other Players = 150ms average

Emerge P2P:
Player ←→ Nearby Players = 50ms average

Perceived Responsiveness:
Traditional: "Laggy", "Rubber-banding"
Emerge: "Smooth", "Responsive"
```
Your factory has 20 robotic stations that need to coordinate for efficient production flow. Without synchronization, bottlenecks form and throughput drops.
- Central PLC: Expensive, single point of failure
- Fixed timing: Can't adapt to variations
- Complex programming: Ladder logic maintenance nightmare
```go
// Each robot station runs this
client := emerge.MaintainRhythm(scale.Tiny)
client.Start(ctx)

type RobotStation struct {
    stationID int
    workQueue []WorkItem
}

func (rs *RobotStation) ProcessItems() {
    for {
        // Maintain a steady production rhythm.
        if client.IsConverged() && len(rs.workQueue) > 0 {
            item := rs.workQueue[0]
            rs.processWorkItem(item)
            rs.passToNextStation(item)
            rs.workQueue = rs.workQueue[1:]
        }
        time.Sleep(1 * time.Second) // production beat
    }
}
```
- 25% throughput increase - optimal flow
- Reduced bottlenecks - stations stay synchronized
- Adaptive pacing - adjusts to slowest station
- Simplified maintenance - no complex PLC logic
```
Uncoordinated:
Station 1: ████ ████ ████ (waiting)
Station 2: ████████ ██ (bottleneck)
Station 3: ████ ████ ████ (irregular)
Output: 120 units/hour

Emerge Synchronized:
Station 1: ████ ████ ████ ████ (steady)
Station 2: ████ ████ ████ ████ (matched)
Station 3: ████ ████ ████ ████ (smooth)
Output: 150 units/hour
```
Self-driving cars approaching an intersection need to coordinate crossing without traffic lights or central control.
- Traffic lights: Fixed timing, inefficient
- Central controller: Single point of failure, latency
- Complex negotiation: Vehicles must communicate extensively
```go
// Each vehicle runs this
client := emerge.ReachConsensus(scale.Small)
client.Start(ctx)

type AutonomousVehicle struct {
    vehicleID string
    position  Position
    intention Intention // straight, left, right
}

func (av *AutonomousVehicle) ApproachIntersection() {
    // Form consensus on crossing order.
    for !av.hasPassedIntersection() {
        distance := av.distanceToIntersection()

        if distance < 50 && client.IsConverged() { // within approach range
            // Vehicles reach consensus on order.
            crossingOrder := av.calculateCrossingOrder()
            if av.isMyTurn(crossingOrder) {
                av.crossIntersection()
            } else {
                av.adjustSpeed() // slow down or speed up
            }
        }
        time.Sleep(100 * time.Millisecond)
    }
}
```
- 40% reduction in wait times
- No collisions - consensus prevents conflicts
- Smooth traffic flow - no stop-and-go
- Works without infrastructure - no traffic lights needed
```
Traditional Traffic Light:
North: ████STOP████STOP████
South: ████STOP████STOP████
East: STOP████STOP████STOP
West: STOP████STOP████STOP
Average wait: 45 seconds

Emerge Coordination:
North: ███ ███ ███ ███ (continuous flow)
South: ███ ███ ███ ███ (coordinated gaps)
East: ███ ███ ███ ███ (smooth merging)
West: ███ ███ ███ ███ (no full stops)
Average wait: 27 seconds
```
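`calculateCrossingOrder` is only named in the example above. One deterministic sketch: sort the approaching vehicles by distance, breaking ties by ID, so every vehicle derives the same order from the same observed state (`nearbyVehicles` is a hypothetical helper returning the vehicles currently approaching):

```go
// calculateCrossingOrder sorts approaching vehicles by distance to the
// intersection, breaking ties by vehicle ID. Because every vehicle runs
// the same deterministic rule over the same observed set, they agree on
// an order without extra negotiation. nearbyVehicles is a hypothetical
// helper returning the vehicles currently approaching.
func (av *AutonomousVehicle) calculateCrossingOrder() []string {
    vehicles := av.nearbyVehicles()
    sort.Slice(vehicles, func(i, j int) bool {
        di := vehicles[i].distanceToIntersection()
        dj := vehicles[j].distanceToIntersection()
        if di != dj {
            return di < dj
        }
        return vehicles[i].vehicleID < vehicles[j].vehicleID
    })

    order := make([]string, len(vehicles))
    for i, v := range vehicles {
        order[i] = v.vehicleID
    }
    return order
}
```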
Your data pipeline has 200 workers processing streams. Uncoordinated checkpointing causes performance hiccups and storage spikes.
- Checkpoint storms: All workers checkpoint simultaneously
- Storage overload: Sudden write spikes
- Recovery complexity: Inconsistent checkpoint times
```go
// Coordinate checkpointing across workers
client := emerge.Custom().
    WithGoal(goal.MinimizeAPICalls). // treat checkpoints like API calls
    WithScale(scale.Medium).
    WithTargetCoherence(0.9).
    Build()
client.Start(ctx)

type DataWorker struct {
    workerID   string
    checkpoint Checkpoint
    processed  int64
}

// ProcessStream assumes DataStream is a channel of records.
func (dw *DataWorker) ProcessStream(stream DataStream) {
    for data := range stream {
        dw.processData(data)
        dw.processed++

        // Coordinate checkpointing.
        if dw.processed%1000 == 0 && client.IsConverged() {
            // All workers checkpoint together.
            dw.saveCheckpoint()
            log.Printf("Worker %s checkpointed at %d",
                dw.workerID, dw.processed)
        }
    }
}
```
- 50% reduction in storage IOPS spikes
- Consistent recovery points across workers
- Predictable performance - no random hiccups
- Simplified recovery - all checkpoints aligned
```
Uncoordinated:
Storage IOPS: ▁█▂▁▁█▁▃█▁▁▂█▁ (random spikes)
Performance: ▅▁▄▅▅▁▅▃▁▅▅▂▁▅ (unpredictable)

Emerge Coordinated:
Storage IOPS: ▁▁▁█▁▁▁█▁▁▁█▁▁ (predictable)
Performance: ▅▅▅▃▅▅▅▃▅▅▅▃▅▅ (consistent)
```
A hospital has 500 medical devices requiring periodic maintenance. Uncoordinated maintenance causes equipment shortages and staff overtime.
- Manual scheduling: Error-prone, time-consuming
- Fixed schedules: Doesn't adapt to usage patterns
- Equipment conflicts: Multiple devices offline simultaneously
```go
// Distribute maintenance to avoid conflicts
client := emerge.AdaptToTraffic(scale.Large)
client.Start(ctx)

type MedicalDevice struct {
    deviceID    string
    deviceType  string
    usageHours  int
    maintenance MaintenanceSchedule
}

func (md *MedicalDevice) ScheduleMaintenance() {
    for {
        if md.needsMaintenance() {
            // Adapt to hospital traffic patterns: low coherence means
            // devices are well distributed, so windows won't collide.
            if client.Coherence() < 0.4 {
                // Schedule during a low-usage period.
                window := md.findNextWindow()
                md.notifyMaintenanceTeam()
                log.Printf("Device %s scheduled for %v",
                    md.deviceID, window)
            }
        }
        time.Sleep(1 * time.Hour)
    }
}
```
- Zero equipment conflicts - maintenance distributed
- 30% reduction in overtime costs
- Better equipment availability - strategic scheduling
- Adaptive to usage - responds to patterns
```
Traditional (Conflicting schedules):
MRI Units: ████ ████ (both offline)
Ventilators: ████████ (shortage!)
X-Ray: ████████ (multiple down)
Staff stress: HIGH, Equipment availability: 60%

Emerge (Distributed maintenance):
MRI Units: ██ ██ (staggered)
Ventilators: ██ ██ ██ (always available)
X-Ray: ██ ██ ██ (distributed)
Staff stress: LOW, Equipment availability: 85%
```
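`findNextWindow` is left undefined above. A minimal sketch that scans the coming week hour by hour and picks the quietest slot (`usageByHour` is a hypothetical lookup of average demand per hour of day, e.g. built from historical telemetry):

```go
// usageByHour is a hypothetical map from hour-of-day to average
// hospital demand, populated elsewhere from historical data.
var usageByHour map[int]float64

// findNextWindow scans the next week of hourly windows and returns the
// start of the one with the lowest historical usage.
func (md *MedicalDevice) findNextWindow() time.Time {
    now := time.Now()
    best := now.Add(1 * time.Hour)
    lowest := math.MaxFloat64
    for h := 1; h <= 7*24; h++ {
        candidate := now.Add(time.Duration(h) * time.Hour)
        if usage := usageByHour[candidate.Hour()]; usage < lowest {
            lowest = usage
            best = candidate
        }
    }
    return best
}
```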
Your CDN has 100 edge servers that need to refresh cached content. Simultaneous refreshes overload origin servers and cause cache misses.
- Origin overload: All edges pull simultaneously
- Cache thundering herd: Mass invalidation causes misses
- Bandwidth spikes: Network congestion
```go
// Coordinate cache refreshes
client := emerge.Custom().
    WithGoal(goal.MinimizeAPICalls).
    WithScale(scale.Medium).
    WithTargetCoherence(0.7). // partial sync for gradual refresh
    Build()
client.Start(ctx)

type EdgeServer struct {
    serverID string
    cache    Cache
    origin   OriginServer
}

func (es *EdgeServer) RefreshCache() {
    for {
        staleContent := es.cache.GetStaleContent()
        if len(staleContent) > 0 {
            // Coordinate refresh timing: partial coherence means the
            // edges refresh in waves rather than all at once.
            coherence := client.Coherence()
            if coherence > 0.5 && coherence < 0.8 {
                es.refreshContent(staleContent)
                log.Printf("Edge %s refreshed %d items",
                    es.serverID, len(staleContent))
            }
        }
        time.Sleep(30 * time.Second)
    }
}
```
- 90% reduction in origin server spikes
- Smooth bandwidth usage - no congestion
- Better cache hit rates - gradual invalidation
- Improved user experience - fewer cache misses
```
Traditional (Thundering herd):
Origin Load: ▁▁▁▁█████▁▁▁▁█████▁▁▁ (massive spikes)
Cache Hits: ████▁▁▁▁▁████▁▁▁▁▁████ (drops to zero)
Bandwidth: ▁▁▁▁█████▁▁▁▁█████▁▁▁ (network congestion)

Emerge (Coordinated refresh):
Origin Load: ▂▂▂▃▄▄▄▃▂▂▂▃▄▄▄▃▂▂▂ (smooth waves)
Cache Hits: ████████▇▇████████▇▇██ (maintained)
Bandwidth: ▃▃▃▄▄▅▅▄▄▃▃▃▄▄▅▅▄▄▃▃ (distributed)
```
Musicians in different locations want to play together online, but network latency makes traditional synchronization impossible.
- Network latency: 50-200ms delays break timing
- Jitter: Variable delays make it worse
- Complexity: Requires expensive specialized hardware
```go
// Musicians synchronize to a shared beat
client := emerge.MaintainRhythm(scale.Tiny)
client.Start(ctx)

type Musician struct {
    instrument string
    tempo      int // BPM
    latency    time.Duration
}

func (m *Musician) PlayMusic(score MusicScore) {
    // Measure network latency so it can be compensated for.
    m.measureLatency()

    beatInterval := time.Minute / time.Duration(m.tempo)
    ticker := time.NewTicker(beatInterval)
    defer ticker.Stop()

    for beat := range ticker.C {
        if client.IsConverged() {
            // All musicians are on the same beat despite latency.
            nextNote := score.GetNote(beat)
            // Play slightly ahead to compensate for latency.
            m.playWithCompensation(nextNote, m.latency)
        }
    }
}
```
- Synchronized performance despite latency
- No specialized hardware required
- Adaptive to network conditions
- Musicians stay in rhythm
```
Traditional (Direct play with latency):
Musician A: ♪ ♪ ♪ ♪ (on time)
Musician B: ♪ ♪ ♪ ♪ (50ms behind)
Musician C: ♪ ♪ ♪ (100ms behind)
Result: Cacophony, unplayable

Emerge (Synchronized with compensation):
Musician A: ♪ ♪ ♪ ♪ (adjusted)
Musician B: ♪ ♪ ♪ ♪ (synchronized)
Musician C: ♪ ♪ ♪ ♪ (in rhythm)
Result: Harmonious, ensemble playing
```
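`playWithCompensation` is only named above. The idea is to trigger each note early by the measured latency so its audible onset lands on the shared beat; a minimal sketch (`nextBeatTime` and `emitNote` are hypothetical helpers standing in for clock lookup and audio output):

```go
// playWithCompensation sleeps until just before the shared beat, then
// emits the note early by the measured network latency so the audible
// onset lands on the beat itself.
func (m *Musician) playWithCompensation(note Note, latency time.Duration) {
    onset := m.nextBeatTime().Add(-latency)
    time.Sleep(time.Until(onset))
    m.emitNote(note)
}
```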
| If You Need To... | Use This Goal | Expected Benefit |
|---|---|---|
| Batch operations to reduce costs | MinimizeAPICalls | 80% reduction in API calls |
| Spread work across resources | DistributeLoad | Even load distribution |
| Coordinate agreement | ReachConsensus | Consensus without voting |
| Optimize for speed | MinimizeLatency | <100ms coordination |
| Conserve resources | SaveEnergy | 10x battery life |
| Maintain steady rhythm | MaintainRhythm | Predictable timing |
| Handle failures gracefully | RecoverFromFailure | Self-healing behavior |
| Adapt to changing conditions | AdaptToTraffic | Dynamic optimization |
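Each row maps to a one-line preset constructor, as used throughout the examples; when you need to tune the target, the `Custom()` builder from the checkpointing and CDN examples takes an explicit goal. A configuration sketch (`goal.DistributeLoad` is assumed to exist by analogy with `goal.MinimizeAPICalls`):

```go
// Preset: one line per goal, as in the examples above.
client := emerge.DistributeLoad(scale.Small)
client.Start(ctx)

// Builder: same goal with explicit tuning. A low target coherence
// keeps agents spread apart, matching the Coherence() < 0.3 check
// in the trading-bot example.
custom := emerge.Custom().
    WithGoal(goal.DistributeLoad). // assumed by analogy with goal.MinimizeAPICalls
    WithScale(scale.Small).
    WithTargetCoherence(0.3).
    Build()
custom.Start(ctx)
```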
- FAQ - Common questions answered
- Getting Started - Quick start guide
- Goals - Detailed goal descriptions
- Goal-Directed - How emerge pursues goals
- Scales - Configuration sizes
- Alternatives - Comparison with other approaches
- Examples - Interactive simulation