Commit 119cdb2

RFC#0192 - ensures workers do not get unnecessarily killed
1 parent 7cf7165 commit 119cdb2

File tree

3 files changed (+231, -0 lines)


README.md

Lines changed: 1 addition & 0 deletions

@@ -69,3 +69,4 @@ See [mechanics](mechanics.md) for more detail.
 | RFC#182 | [Allow remote references to .taskcluster.yml files processed by Taskcluster-GitHub](rfcs/0182-taskcluster-yml-remote-references.md) |
 | RFC#189 | [Batch APIs for task definition, status and index path](rfcs/0189-batch-task-apis.md) |
 | RFC#191 | [Worker Manager launch configurations](rfcs/0191-worker-manager-launch-configs.md) |
+| RFC#192 | [`minCapacity` ensures workers do not get unnecessarily killed](rfcs/0192-min-capacity-ensures-workers-do-not-get-unnecessarily-killed.md) |
rfcs/0192-min-capacity-ensures-workers-do-not-get-unnecessarily-killed.md

Lines changed: 229 additions & 0 deletions

@@ -0,0 +1,229 @@
# RFC 192 - `minCapacity` ensures workers do not get unnecessarily killed
* Comments: [#192](https://github.com/taskcluster/taskcluster-rfcs/pull/192)
* Proposed by: @johanlorenzo

# Summary

Optimize worker pools with `minCapacity >= 1` by introducing minimum-capacity workers that avoid unnecessary shutdown/restart cycles, preserving caches and reducing task wait times.
## Motivation

Currently, workers in pools with `minCapacity >= 1` exhibit wasteful behavior:

1. **Cache Loss**: Workers shut down after the idle timeout (600 seconds for decision pools), losing valuable caches:
   - VCS repositories and history
   - Package manager caches (npm, pip, cargo, etc.)
   - Container images and layers

2. **Provisioning Delays**: New worker provisioning [takes ~75 seconds on average for decision pools](https://taskcluster.github.io/mozilla-history/worker-metrics), during which tasks must wait

3. **Resource Waste**: The current cycle of shutdown → detection → spawn → provision → register wastes compute resources and increases task latency

4. **Violation of `minCapacity` Intent**: `minCapacity >= 1` suggests these pools should always have capacity available, but the current implementation allows temporary capacity gaps
# Details

## Current Behavior Analysis

**Affected Worker Pools:**
- Direct `minCapacity: 1`: [`infra/build-decision`](https://github.com/mozilla-releng/fxci-config/blob/43c18aab0826244e369b16a964637b6c411c7760/worker-pools.yml#L220), [`code-review/bot-gcp`](https://github.com/mozilla-releng/fxci-config/blob/43c18aab0826244e369b16a964637b6c411c7760/worker-pools.yml#L3320)
- Keyed `minCapacity: 1`: [`gecko-1/decision-gcp`, `gecko-3/decision-gcp`](https://github.com/mozilla-releng/fxci-config/blob/43c18aab0826244e369b16a964637b6c411c7760/worker-pools.yml#L976), and [pools matching `(app-services|glean|mozillavpn|mobile|mozilla|translations)-1`](https://github.com/mozilla-releng/fxci-config/blob/43c18aab0826244e369b16a964637b6c411c7760/worker-pools.yml#L1088)

**Current Implementation Issues:**
- Worker-manager enforces `minCapacity` by spawning new workers when capacity drops below the threshold
- Generic-worker shuts down after `afterIdleSeconds` regardless of `minCapacity` requirements
- A gap exists between worker shutdown and replacement detection/provisioning
## Proposed Solution

### Core Concept: `minCapacity` Workers

Workers fulfilling `minCapacity >= 1` requirements should receive significantly longer idle timeouts to preserve caches.

### Implementation Approach

#### 1. Worker Identification and Tagging

**Worker Provisioning Logic:**
- Worker-manager determines a worker's idle timeout configuration at spawn time based on current pool capacity
- Uses the existing [`launch_config_id` system](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/data.js#L313-L362) to ensure per-variant capacity tracking
- **No Special Replacement Logic**: Failed workers are handled through normal provisioning cycles, not immediate replacement
#### 2. Lifecycle-Based Worker Configuration

Workers are assigned `minCapacity` or overflow roles at launch time and never change configuration during their lifetime. This approach works within Taskcluster's architectural constraints, where worker-manager cannot communicate configuration changes to running workers.

**Launch-Time Configuration:**
Worker-manager determines a worker's role at spawn time based on current pool capacity needs. Workers fulfilling `minCapacity` requirements receive the pool's configured `minCapacityIdleTimeoutSecs` (0 for indefinite runtime) and are tagged with the 'min-capacity' role. Additional workers beyond `minCapacity` receive standard idle timeouts (default 600 seconds) and are tagged as 'overflow' workers. A sketch of this decision follows.
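To make the launch-time split concrete, here is a minimal sketch of how worker-manager might pick a role and idle timeout at spawn time. The `assignWorkerRole` helper is hypothetical, not an existing worker-manager function; only `minCapacityIdleTimeoutSecs` and the 600-second default come from this proposal.

```js
// Hypothetical sketch: choose a worker's role and idle timeout at spawn time.
const DEFAULT_IDLE_TIMEOUT_SECS = 600; // standard idle timeout for overflow workers

function assignWorkerRole({ runningMinCapacityWorkers, minCapacity, minCapacityIdleTimeoutSecs }) {
  if (runningMinCapacityWorkers < minCapacity) {
    // This worker fulfills minCapacity: long (or, with 0, indefinite) idle timeout.
    return {
      role: 'min-capacity',
      afterIdleSeconds: minCapacityIdleTimeoutSecs,
    };
  }
  // Extra demand-driven capacity keeps the standard idle timeout.
  return { role: 'overflow', afterIdleSeconds: DEFAULT_IDLE_TIMEOUT_SECS };
}
```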
**Immutable Worker Behavior:**
- Workers receive their complete configuration at startup
- No runtime configuration changes
- Worker role and idle timeout never change during a worker's lifetime
- `minCapacity` requirements are fulfilled through worker replacement, not reconfiguration
#### 3. Pool Configuration

**New Configuration Options:**
Pools will have a new `minCapacityIdleTimeoutSecs` parameter that enables `minCapacity` worker behavior. Setting this to 0 means `minCapacity` workers will run indefinitely (no idle timeout), providing maximum cache preservation.
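For illustration, a pool entry might look like the following. This is a sketch: the structure is heavily simplified from the real worker-pools.yml format, and only `minCapacityIdleTimeoutSecs` is new.

```yaml
# Illustrative only: structure simplified from fxci-config's worker-pools.yml;
# minCapacityIdleTimeoutSecs is the single field this RFC adds.
gecko-1/decision-gcp:
  config:
    minCapacity: 1
    maxCapacity: 50
    # 0 = minCapacity workers never shut down for idleness;
    # overflow workers keep the standard 600-second idle timeout.
    minCapacityIdleTimeoutSecs: 0
```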
**Validation:**
- Fail pool provisioning entirely if the configuration is invalid (e.g., `minCapacity > maxCapacity`)
- Require `minCapacityIdleTimeoutSecs >= 0` if `minCapacity` workers are desired
- Setting `minCapacityIdleTimeoutSecs = 0` enables indefinite runtime for maximum cache preservation
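A minimal sketch of these checks, assuming a hypothetical `validatePoolConfig` helper (worker-manager's actual pool-config validation lives in its schema layer):

```js
// Hypothetical helper illustrating the validation rules above.
function validatePoolConfig({ minCapacity, maxCapacity, minCapacityIdleTimeoutSecs }) {
  if (minCapacity > maxCapacity) {
    // Invalid configuration: fail pool provisioning entirely.
    throw new Error(`minCapacity (${minCapacity}) exceeds maxCapacity (${maxCapacity})`);
  }
  if (minCapacityIdleTimeoutSecs !== undefined && minCapacityIdleTimeoutSecs < 0) {
    throw new Error('minCapacityIdleTimeoutSecs must be >= 0');
  }
}
```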
#### 4. Enhanced Provisioning Logic

**Provisioning Strategy:**
The enhanced estimator uses the existing `minCapacity` enforcement logic but provisions workers with different idle timeout configurations based on pool needs. When total running capacity falls below `minCapacity`, new workers are spawned with indefinite runtime (`minCapacityIdleTimeoutSecs = 0`) to serve as long-lived `minCapacity` workers. Additional workers beyond `minCapacity` are spawned with standard idle timeouts. This demand-based approach avoids over-provisioning and works within the existing estimator logic.

**Current Estimator Implementation:** The [Estimator.simple() method](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/estimator.js#L7-L84) enforces `minCapacity` as a floor at [lines 35-39](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/estimator.js#L35-L39)
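To illustrate, here is a rough sketch of how the split could layer onto that floor logic. `planSpawns` is hypothetical, and it assumes one unit of capacity per worker for simplicity:

```js
// Hypothetical sketch: split desired spawns into long-lived minCapacity
// workers and standard overflow workers (assumes capacity 1 per worker).
function planSpawns({ runningCapacity, desiredCapacity, minCapacity }) {
  const spawns = [];
  // Top up long-lived minCapacity workers first.
  const minDeficit = Math.max(0, minCapacity - runningCapacity);
  for (let i = 0; i < minDeficit; i++) {
    spawns.push({ role: 'min-capacity', afterIdleSeconds: 0 });
  }
  // Any remaining demand is served by overflow workers with the
  // standard idle timeout.
  const covered = Math.max(runningCapacity, minCapacity);
  const overflowDeficit = Math.max(0, desiredCapacity - covered);
  for (let i = 0; i < overflowDeficit; i++) {
    spawns.push({ role: 'overflow', afterIdleSeconds: 600 });
  }
  return spawns;
}
```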
**Capacity Management:**
- **minCapacity increases**: Spawn additional `minCapacity` workers with indefinite runtime
- **minCapacity decreases**: Forcefully terminate excess `minCapacity` workers (see Capacity Reduction Trade-offs)
- **Worker failures**: Failed workers are handled through normal provisioning cycles when total capacity falls below `minCapacity`
- **Multi-region**: Pool-wide capacity management (any region can fulfill `minCapacity`)
**Capacity Reduction Trade-offs:**
When `minCapacity` decreases, excess `minCapacity` workers must be forcefully terminated since:
- Worker-manager cannot communicate with running workers to reduce their idle timeouts
- Waiting for the natural idle timeout could take hours (defeating responsive capacity management)
- Forceful termination provides immediate capacity adjustment but loses cache benefits
#### 5. Forceful Termination for Capacity Management

**Technical Capability:**
Worker-manager has direct cloud provider API access and can forcefully terminate instances:
- **GCP**: [`workerId = instanceId` mapping](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/providers/google.js#L341), uses the [`compute.instances.delete()` API](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/providers/google.js#L181-L185)
- **AWS**: [`workerId = instanceId` mapping](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/providers/aws.js#L243), uses the [`TerminateInstancesCommand` API](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/providers/aws.js#L393-L395)
- **Azure**: [Worker ID maps to VM name](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/providers/azure/index.js#L302), uses the [VM deletion API](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/providers/azure/index.js#L1249-L1254)

**Implementation:**
Worker-manager already has the capability to forcefully terminate workers: it marks them as being in the STOPPING state in the database, then makes direct cloud provider API calls to terminate the instances. This process works across all supported cloud providers (GCP, AWS, Azure) using their respective termination APIs.
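In outline, the flow looks roughly like this. The helper is hypothetical and the names only approximate the removeWorker implementations linked below:

```js
// Hypothetical sketch of the termination flow; names approximate the
// linked removeWorker implementations rather than copying them exactly.
async function terminateExcessWorker({ db, worker, provider }) {
  // Mark the worker STOPPING in the database first, so the provisioner
  // stops counting it toward the pool's running capacity.
  await worker.update(db, { state: 'stopping' }); // Worker.states.STOPPING
  // Then delete the instance through the cloud provider's API
  // (compute.instances.delete on GCP, TerminateInstancesCommand on AWS,
  // VM deletion on Azure).
  await provider.removeWorker({ worker, reason: 'capacity reduction' });
}
```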
**Implementation References:**
- [Worker.states.STOPPING definition](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/data.js#L858-L863)
- [AWS removeWorker implementation](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/providers/aws.js#L382-L418)
- [Google removeWorker implementation](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/providers/google.js#L168-L192)
- [Azure removeWorker implementation](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/providers/azure/index.js#L1221-L1233)
**Trade-offs:**
- ✅ **Immediate capacity response**: `minCapacity` changes take effect in ~30 seconds
- ❌ **Cache loss**: Defeats the primary optimization goal when terminating active workers
- ⚠️ **Selective termination**: Prioritize terminating idle workers when possible
#### 6. Enhanced Health Monitoring

**MinCapacity Worker Health Checks:**
- More aggressive health monitoring for `minCapacity` workers
- Faster detection of unresponsive `minCapacity` workers
- Automatic detection and fallback if `minCapacity` workers cause issues
#### 7. Monitoring and Alerting

**Implementation Location:**
New metrics will be added to [services/worker-manager/src/monitor.js](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/monitor.js#L377) following the existing pattern established in [lines 246-377](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/monitor.js#L246-L377). These metrics will be displayed in the Taskcluster UI status dashboard for human operators and exposed via the Prometheus `/metrics` endpoint so that external monitoring systems can build custom dashboards and alerting.

**New Prometheus Metrics:**

1. **MinCapacity Worker Count** - Gauge tracking the number of workers fulfilling `minCapacity` requirements, labeled by worker state (running/stopping/requested)

2. **Overflow Worker Count** - Gauge tracking the number of workers beyond `minCapacity` requirements, labeled by worker state

3. **MinCapacity Worker Terminations** - Counter tracking the total number of `minCapacity` workers terminated, labeled by termination reason (capacity-reduction/health-check/idle-timeout)

4. **MinCapacity Deficit Duration** - Histogram measuring time spent below the `minCapacity` threshold, with buckets covering different time ranges (1 second to 10 minutes)
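As a sketch, the four metrics could be declared as follows. This uses plain prom-client for illustration, and the metric names are invented here; the real definitions would follow monitor.js's registration pattern:

```js
// Illustrative declarations using prom-client directly; names are invented
// here, and real definitions would go through monitor.js's helpers.
const client = require('prom-client');

const minCapacityWorkers = new client.Gauge({
  name: 'worker_manager_min_capacity_workers',
  help: 'Workers fulfilling minCapacity requirements',
  labelNames: ['workerPool', 'state'], // running / stopping / requested
});

const overflowWorkers = new client.Gauge({
  name: 'worker_manager_overflow_workers',
  help: 'Workers beyond minCapacity requirements',
  labelNames: ['workerPool', 'state'],
});

const minCapacityTerminations = new client.Counter({
  name: 'worker_manager_min_capacity_terminations_total',
  help: 'minCapacity workers terminated',
  labelNames: ['workerPool', 'reason'], // capacity-reduction / health-check / idle-timeout
});

const minCapacityDeficitDuration = new client.Histogram({
  name: 'worker_manager_min_capacity_deficit_seconds',
  help: 'Time spent below the minCapacity threshold',
  buckets: [1, 5, 15, 30, 60, 120, 300, 600], // 1 second to 10 minutes
});
```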
**Metrics Integration:**
Metrics will be exposed in [estimator.js](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/estimator.js#L89-L92) using the existing `exposeMetrics()` pattern to report worker counts by pool and role during each provisioning cycle.
**Alerting Rules:**
Two key alerting rules will monitor `minCapacity` worker health: a MinCapacityDeficit alert, triggered when a worker pool stays below `minCapacity` for more than 30 seconds, and a HighMinCapacityTerminations alert, triggered when the termination rate exceeds 0.1 workers per 5-minute window for more than 2 minutes.
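Expressed as Prometheus alerting rules, under the hypothetical metric names from the sketch above (the deficit gauge is likewise invented for illustration), this might look like:

```yaml
# Illustrative alerting rules; metric names match the sketch above, and the
# deficit gauge (worker_manager_min_capacity_deficit_workers) is hypothetical.
groups:
  - name: worker-manager-min-capacity
    rules:
      - alert: MinCapacityDeficit
        # Pool has been below minCapacity for more than 30 seconds.
        expr: worker_manager_min_capacity_deficit_workers > 0
        for: 30s
      - alert: HighMinCapacityTerminations
        # More than 0.1 terminations per 5-minute window, sustained for 2 minutes.
        expr: rate(worker_manager_min_capacity_terminations_total[5m]) * 300 > 0.1
        for: 2m
```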
## Success Measurement

**Primary Metrics:**

1. **Task Wait Time Reduction** - Measure the 95th percentile task pending-to-running duration before and after implementation, by worker pool

2. **Worker Spawn Frequency Reduction** - Track the rate of new worker requests relative to total capacity changes to measure churn reduction

3. **MinCapacity Worker Stability** - Calculate the percentage of time pools maintain `minCapacity` without deficits

**Secondary Metrics:**

4. **Cache Preservation Effectiveness** - Compare the 90th percentile worker uptime between `minCapacity` and overflow workers to measure cache retention

5. **Termination Frequency Impact** - Monitor the rate of `minCapacity` worker terminations per hour

6. **Capacity Response Time** - Average time to fulfill `minCapacity` requirements after capacity deficits occur

**Success Criteria:**
- **Task wait time**: 20% reduction in P95 pending→running time
- **Worker churn**: 50% reduction in worker spawn frequency for `minCapacity` pools
- **Cache effectiveness**: 80% of `minCapacity` workers have >1 hour uptime
- **Deficit resolution**: <60 seconds average time to resolve `minCapacity` deficits
- **Termination cost**: <5% of `minCapacity` workers terminated for capacity reduction per day
**Trade-off Evaluation:**
A cost-benefit analysis will compare the benefit of reduced task wait times (time saved per task multiplied by task throughput) against the cost of cache loss from forced terminations (termination rate multiplied by average cache warmup time). A back-of-envelope version of this calculation is sketched below.
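A minimal sketch of that comparison, where every input is a measured quantity rather than something this RFC prescribes:

```js
// Back-of-envelope trade-off check: positive means the wait-time savings
// outweigh the cache-warmup cost of forced terminations.
function netBenefitSecondsPerDay({
  waitTimeSavedSecsPerTask, // measured pending→running improvement per task
  tasksPerDay,
  terminationsPerDay,       // forced terminations of minCapacity workers
  cacheWarmupSecs,          // average time to rebuild a warm cache
}) {
  const benefit = waitTimeSavedSecsPerTask * tasksPerDay;
  const cost = terminationsPerDay * cacheWarmupSecs;
  return benefit - cost;
}
```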
## Rollout Strategy

**Phase 1**: Single Pool Testing
- Enable on one stable test pool (e.g., `gecko-1/decision-gcp` for the try branch)
- Monitor success metrics, cache preservation, and termination frequency
- Validate that pool benefits outweigh the costs of occasional forceful termination

**Phase 2**: Gradual Expansion
- Roll out to remaining decision pools with stable `minCapacity` requirements
- Prioritize pools where `minCapacity` changes are infrequent
- Monitor the trade-off between cache benefits and capacity management responsiveness

**Phase 3**: Full Deployment
- Enable for all eligible pools after validating net positive impact
- Continue monitoring optimization effectiveness across different usage patterns
- Consider disabling for pools where `minCapacity` changes are too frequent
## Error Handling and Edge Cases

**Worker Lifecycle Management:**
- **Pool reconfiguration**: Capacity changes trigger worker replacement, not reconfiguration
- **Graceful transitions**: When possible, only terminate idle workers to preserve active caches
- **Resource allocation**: `minCapacity` workers are mixed with overflow workers on the same infrastructure
**Capacity Reduction Strategy:**
When `minCapacity` decreases, a hybrid approach minimizes cache loss: idle excess workers are terminated first, preserving active caches. If additional capacity reduction is needed, busy workers are terminated immediately, oldest workers first, to maximize cache preservation for recently started workers. A sketch of this ordering follows.
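A minimal sketch of that selection order, assuming a hypothetical `pickWorkersToTerminate` helper and an `idle` flag on each worker record:

```js
// Hypothetical helper: choose which excess workers to terminate when
// minCapacity decreases — idle workers first, then the oldest busy ones.
function pickWorkersToTerminate(workers, excessCount) {
  const oldestFirst = (a, b) => a.created - b.created;
  const idle = workers.filter(w => w.idle).sort(oldestFirst);
  const busy = workers.filter(w => !w.idle).sort(oldestFirst);
  // Idle workers carry the least cache-loss cost; among busy workers,
  // terminating the oldest preserves caches on recently started ones.
  return [...idle, ...busy].slice(0, excessCount);
}
```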
## Compatibility Considerations

**Backward Compatibility:**
- Opt-in via `minCapacityIdleTimeoutSecs` configuration
- Existing pools continue current behavior unless explicitly enabled
- No changes to generic-worker's core idle timeout mechanism

**API Changes:**
- Enhanced worker pool configuration schema
- Existing cloud provider termination APIs used for capacity management

**Security Implications:**
- No security changes - same authentication and authorization flows
- Longer-lived workers maintain the same credential rotation schedule

**Performance Implications:**
- **Positive**: Reduced task wait times, preserved caches, fewer API calls
- **Negative**: Slightly higher resource usage for idle `minCapacity` workers
- **Net**: Expected improvement in overall system efficiency
# Implementation

<Once the RFC is decided, these links will provide readers a way to track the
implementation through to completion, and to know if they are running a new
enough version to take advantage of this change. It's fine to update this
section using short PRs or pushing directly to master after the RFC is
decided>

* <link to tracker bug, issue, etc.>
* <...>
* Implemented in Taskcluster version ...

rfcs/README.md

Lines changed: 1 addition & 0 deletions

@@ -57,3 +57,4 @@
 | RFC#182 | [Allow remote references to .taskcluster.yml files processed by Taskcluster-GitHub](0182-taskcluster-yml-remote-references.md) |
 | RFC#189 | [Batch APIs for task definition, status and index path](0189-batch-task-apis.md) |
 | RFC#191 | [Worker Manager launch configurations](0191-worker-manager-launch-configs.md) |
+| RFC#192 | [`minCapacity` ensures workers do not get unnecessarily killed](0192-min-capacity-ensures-workers-do-not-get-unnecessarily-killed.md) |
