1 change: 1 addition & 0 deletions README.md
@@ -69,3 +69,4 @@ See [mechanics](mechanics.md) for more detail.
| RFC#182 | [Allow remote references to .taskcluster.yml files processed by Taskcluster-GitHub](rfcs/0182-taskcluster-yml-remote-references.md) |
| RFC#189 | [Batch APIs for task definition, status and index path](rfcs/0189-batch-task-apis.md) |
| RFC#191 | [Worker Manager launch configurations](rfcs/0191-worker-manager-launch-configs.md) |
| RFC#192 | [`minCapacity` ensures workers do not get unnecessarily killed](rfcs/0192-min-capacity-ensures-workers-do-not-get-unnecessarily-killed.md) |
199 changes: 199 additions & 0 deletions rfcs/0192-min-capacity-ensures-workers-do-not-get-unnecessarily-killed.md
@@ -0,0 +1,199 @@
# RFC 192 - `minCapacity` ensures workers do not get unnecessarily killed
* Comments: [#192](https://github.com/taskcluster/taskcluster-rfcs/pull/192)
* Proposed by: @johanlorenzo

# Summary

Optimize worker pools with `minCapacity >= 1` by introducing long-lived minCapacity workers that avoid unnecessary shutdown/restart cycles, preserving caches and reducing task wait times.

## Motivation

Currently, workers in pools with `minCapacity >= 1` exhibit wasteful behavior:

1. **Cache Loss**: Workers shut down after idle timeout (600 seconds for decision pools), losing valuable caches:
- VCS repositories and history
- Package manager caches (npm, pip, cargo, etc.)
- Container images and layers

2. **Provisioning Delays**: New worker provisioning [takes ~75 seconds average for decision pools](https://taskcluster.github.io/mozilla-history/worker-metrics), during which tasks must wait

3. **Resource Waste**: The current cycle of shutdown → detection → spawn → provision → register wastes compute resources and increases task latency

4. **Violation of `minCapacity` Intent**: `minCapacity >= 1` suggests these pools should always have capacity available, but the current implementation allows temporary capacity gaps

# Details

## Current Behavior Analysis

**Affected Worker Pools:**
- Direct `minCapacity: 1`: [`infra/build-decision`](https://github.com/mozilla-releng/fxci-config/blob/43c18aab0826244e369b16a964637b6c411c7760/worker-pools.yml#L220), [`code-review/bot-gcp`](https://github.com/mozilla-releng/fxci-config/blob/43c18aab0826244e369b16a964637b6c411c7760/worker-pools.yml#L3320)
- Keyed `minCapacity: 1`: [`gecko-1/decision-gcp`, `gecko-3/decision-gcp`](https://github.com/mozilla-releng/fxci-config/blob/43c18aab0826244e369b16a964637b6c411c7760/worker-pools.yml#L976), and [pools matching `(app-services|glean|mozillavpn|mobile|mozilla|translations)-1`](https://github.com/mozilla-releng/fxci-config/blob/43c18aab0826244e369b16a964637b6c411c7760/worker-pools.yml#L1088)

**Current Implementation Issues:**
- Worker-manager enforces `minCapacity` by spawning new workers when capacity drops below threshold
- Generic-worker shuts down after `idleTimeoutSecs` regardless of `minCapacity` requirements
- Gap exists between worker shutdown and replacement detection/provisioning

## Proposed Solution

### Core Concept: `minCapacity` Workers

Workers fulfilling `minCapacity >= 1` requirements should receive an indefinite idle timeout, rather than the standard 600 seconds, so that their caches are preserved.

### Implementation Approach

#### 1. Worker Identification and Tagging

**Worker Provisioning Logic:**
- Worker-manager determines worker idle timeout configuration at spawn time based on current pool capacity
- **No Special Replacement Logic**: Failed workers are handled through normal provisioning cycles, not immediate replacement

#### 2. Lifecycle-Based Worker Configuration

Workers are identified with a boolean flag at launch time indicating whether they fulfill `minCapacity` requirements. This approach works within Taskcluster's architectural constraints, where worker-manager cannot communicate configuration changes to running workers.

**Launch-Time Configuration:**
Worker-manager determines worker type at spawn time based on current pool capacity needs. Workers fulfilling `minCapacity` requirements are marked with a boolean flag and receive indefinite idle timeout (0 seconds). Additional workers beyond `minCapacity` receive standard idle timeouts (default 600 seconds).
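A minimal sketch of this spawn-time decision, using hypothetical names (`PoolConfig`, `buildWorkerConfig`, `runningMinCapacityWorkers`) rather than actual worker-manager internals:

```ts
// Hypothetical sketch: worker-manager decides a worker's idle timeout at
// spawn time. Workers that fill the minCapacity floor get an indefinite
// timeout (0); workers above that floor keep the pool's standard timeout.

interface PoolConfig {
  minCapacity: number;
  maxCapacity: number;
  enableMinCapacityWorkers: boolean;
  standardIdleTimeoutSecs: number; // e.g. 600 for decision pools
}

interface WorkerSpawnConfig {
  isMinCapacityWorker: boolean; // boolean flag baked in at launch time
  idleTimeoutSecs: number;      // 0 == run indefinitely when idle
}

function buildWorkerConfig(
  pool: PoolConfig,
  runningMinCapacityWorkers: number,
): WorkerSpawnConfig {
  const fillsMinCapacity =
    pool.enableMinCapacityWorkers &&
    runningMinCapacityWorkers < pool.minCapacity;

  return {
    isMinCapacityWorker: fillsMinCapacity,
    idleTimeoutSecs: fillsMinCapacity ? 0 : pool.standardIdleTimeoutSecs,
  };
}
```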

**Immutable Worker Behavior:**
- Workers receive their complete configuration at startup
- No runtime configuration changes
- Worker idle timeout never changes during worker lifetime
- `minCapacity` requirements fulfilled through worker replacement, not reconfiguration

#### 3. Pool Configuration

**New Configuration Options:**
Pools will have a new boolean flag `enableMinCapacityWorkers` that enables minCapacity worker behavior. When enabled, workers fulfilling minCapacity requirements will run indefinitely (idle timeout set to 0), providing maximum cache preservation.
Contributor:

I wonder if we really want an extra flag for this. For me, `minCapacity` is already there. Unless someone wants to keep the current behavior, I think we can just not introduce a new flag. Just thinking out loud.

@JohanLorenzo (Author), Oct 7, 2025:

I was thinking of a new flag in order to roll out the changes. But maybe there's another way I didn't think of. I'm open to other options 🙂


**Validation:**
- Fail pool provisioning entirely if the configuration is invalid (e.g., `minCapacity > maxCapacity`)
- Boolean flag is optional and defaults to `false` for backward compatibility
- When `enableMinCapacityWorkers` is `true`, minCapacity workers receive indefinite runtime (idle timeout = 0)
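
For illustration, a pool definition using the new flag might look like the sketch below (expressed as a TypeScript literal for consistency with the other examples). Only `enableMinCapacityWorkers` is new; `minCapacity`/`maxCapacity` already exist in worker-manager pool configuration, and the nesting of generic-worker's `idleTimeoutSecs` under `workerConfig` is an assumption about where the standard timeout would continue to live:

```ts
// Hypothetical pool configuration sketch.
const decisionPoolConfig = {
  minCapacity: 1,
  maxCapacity: 50,                  // illustrative value
  enableMinCapacityWorkers: true,   // proposed flag; defaults to false
  workerConfig: {
    genericWorker: {
      config: {
        idleTimeoutSecs: 600,       // standard workers; minCapacity workers get 0 at spawn
      },
    },
  },
};
```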

#### 4. Enhanced Provisioning Logic

**Provisioning Strategy:**
The enhanced estimator uses existing `minCapacity` enforcement logic but provisions workers with different idle timeout configurations based on pool needs. When total running capacity falls below minCapacity and `enableMinCapacityWorkers` is enabled, new workers are spawned with indefinite runtime (idle timeout = 0) to serve as long-lived minCapacity workers. Additional workers beyond `minCapacity` are spawned with standard idle timeouts. This demand-based approach avoids over-provisioning and works within existing estimator logic.

**Current Estimator Implementation:** [Estimator.simple() method](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/estimator.js#L7-L84) enforces `minCapacity` as a floor at [lines 35-39](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/estimator.js#L35-L39)
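
A rough sketch of how the enhanced split could sit on top of the existing estimate; the function and field names below are hypothetical, and `desiredCapacity` stands for whatever the current `simple()` estimator returns:

```ts
// Hypothetical sketch of the enhanced provisioning split. The existing
// estimator already enforces minCapacity as a floor; this only decides how
// many of the workers about to be spawned should be long-lived minCapacity
// workers vs. standard workers.

interface ProvisionPlan {
  minCapacityWorkersToSpawn: number; // spawned with idleTimeoutSecs = 0
  standardWorkersToSpawn: number;    // spawned with the standard idle timeout
}

function planProvisioning(
  desiredCapacity: number,
  runningCapacity: number,
  runningMinCapacityWorkers: number,
  pool: { minCapacity: number; enableMinCapacityWorkers: boolean },
): ProvisionPlan {
  const toSpawn = Math.max(0, desiredCapacity - runningCapacity);
  const minCapacityShortfall = pool.enableMinCapacityWorkers
    ? Math.max(0, pool.minCapacity - runningMinCapacityWorkers)
    : 0;

  const minCapacityWorkersToSpawn = Math.min(toSpawn, minCapacityShortfall);
  return {
    minCapacityWorkersToSpawn,
    standardWorkersToSpawn: toSpawn - minCapacityWorkersToSpawn,
  };
}
```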

**Capacity Management:**
- **minCapacity increases**: Spawn additional minCapacity workers with indefinite runtime when `enableMinCapacityWorkers` is enabled
- **minCapacity decreases**: Use quarantine-based two-phase removal to safely terminate excess minCapacity workers (see Forceful Termination section)
- **Worker failures**: Failed workers are handled through normal provisioning cycles when total capacity falls below minCapacity
- **Pending tasks check**: Workers should not be terminated if there are pending tasks they could execute

**Capacity Reduction Strategy:**
When `minCapacity` decreases, excess minCapacity workers are removed using a two-phase quarantine approach to prevent task claim expiration. Workers with indefinite idle timeouts cannot have their configuration changed at runtime, so they must be terminated and replaced with standard workers when capacity needs to decrease.

#### 5. Worker Removal Using Quarantine Mechanism

**Existing Quarantine System:**
Taskcluster's Queue service provides a [quarantine mechanism](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/queue/src/api.js#L2492-L2545) that prevents workers from claiming new tasks while keeping them alive. When a worker is quarantined:
- The worker cannot claim new tasks from the queue
- The worker's capacity is not counted toward pool capacity
- The worker remains alive until the quarantine period expires
- This prevents task claim expiration errors when removing workers

**Two-Phase Removal Process:**

When minCapacity workers need to be removed (e.g., when `minCapacity` decreases or launch configuration changes), worker-manager uses a two-phase quarantine-based approach:

**Phase 1 - Quarantine (First Scanner Run):**
1. Worker-manager counts all minCapacity workers in the pool
2. When the count exceeds the pool's `minCapacity` setting, identify excess workers by selecting the oldest first.
3. Call the Queue service's [`quarantineWorker` API](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/queue/src/api.js#L2519-L2525) for each excess worker
4. Set `quarantineInfo` to 'minCapacity reduction' to document the reason
5. Quarantined workers immediately stop claiming new tasks

**Phase 2 - Termination (Next Scanner Run):**
1. Worker-manager checks each quarantined worker for three conditions:
- Worker is marked as a minCapacity worker (boolean flag is true)
- Worker was quarantined with reason 'minCapacity reduction'
- Worker is not currently running any tasks
2. If all conditions are met, forcefully terminate the worker via cloud provider API
3. If conditions are not met, wait for the next scanner run
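
The two phases could be sketched roughly as follows. The `queue.quarantineWorker` call is the Queue API referenced above; the worker fields used here (`isMinCapacityWorker`, `quarantineInfo`, `runningTasks`) and the helper names are illustrative assumptions, not existing worker-manager data-model fields:

```ts
// Hypothetical sketch of the two-phase removal as run from the worker scanner.

interface WorkerInfo {
  provisionerId: string;
  workerType: string;
  workerGroup: string;
  workerId: string;
  created: Date;
  isMinCapacityWorker: boolean; // assumed boolean flag set at launch time
  quarantineInfo?: string;      // assumed to surface the quarantine reason
  runningTasks: number;         // assumed count of currently claimed tasks
}

// Phase 1: quarantine the oldest excess minCapacity workers.
async function quarantineExcess(queue: any, minCapacity: number, workers: WorkerInfo[]) {
  const minCapWorkers = workers
    .filter(w => w.isMinCapacityWorker)
    .sort((a, b) => a.created.getTime() - b.created.getTime()); // oldest first
  const excess = minCapWorkers.slice(0, Math.max(0, minCapWorkers.length - minCapacity));

  for (const w of excess) {
    // Queue API referenced above; the quarantine duration is a placeholder.
    await queue.quarantineWorker(w.provisionerId, w.workerType, w.workerGroup, w.workerId, {
      quarantineUntil: new Date(Date.now() + 24 * 60 * 60 * 1000).toJSON(),
      quarantineInfo: 'minCapacity reduction',
    });
  }
}

// Phase 2 (next scanner run): forcefully terminate quarantined workers that
// meet all three conditions listed above.
async function terminateQuarantined(
  provider: { removeWorker: (args: { worker: WorkerInfo; reason: string }) => Promise<void> },
  workers: WorkerInfo[],
) {
  for (const w of workers) {
    if (
      w.isMinCapacityWorker &&
      w.quarantineInfo === 'minCapacity reduction' &&
      w.runningTasks === 0
    ) {
      await provider.removeWorker({ worker: w, reason: 'minCapacity reduction' });
    }
  }
}
```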

**Alternative Approach:**
Immediate removal on the first scan is simpler to implement but carries the risk of task claim expiration if a worker picks up a task at the moment it's being terminated. The two-phase approach is preferred because it guarantees safe removal.

**Technical Capability:**
Worker-manager has direct cloud provider API access for forceful termination:
- [Worker.states.STOPPING definition](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/data.js#L858-L863)
- [AWS removeWorker](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/providers/aws.js#L382-L418)
- [Google removeWorker](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/providers/google.js#L168-L192)
- [Azure removeWorker](https://github.com/taskcluster/taskcluster/blob/d7fbf0ee9e0d93e079cc7ff069eaceecfd7d29ec/services/worker-manager/src/providers/azure/index.js#L1221-L1233)

#### 6. Monitoring

Existing worker-manager metrics are sufficient for monitoring this feature. The existing `*_queue_running_workers` metrics already track active workers by pool. Implementation-specific monitoring details will be determined during development.

## Rollout Strategy

**Phase 1**: Single Pool Testing
- Enable on one stable test pool (e.g., `gecko-1/decision-gcp` for the try branch)
- Monitor success metrics, cache preservation, and termination frequency
- Validate that pool benefits outweigh costs of occasional forceful termination

**Phase 2**: Gradual Expansion
- Roll out to remaining decision pools with stable `minCapacity` requirements
- Prioritize pools where `minCapacity` changes are infrequent
- Monitor trade-off between cache benefits and capacity management responsiveness

**Phase 3**: Full Deployment
- Enable for all eligible pools after validating net positive impact
- Continue monitoring optimization effectiveness across different usage patterns
- Consider disabling for pools where `minCapacity` changes are too frequent

## Error Handling and Edge Cases

**Worker Lifecycle Management:**
- **Pool reconfiguration**: Capacity changes trigger worker replacement, not reconfiguration
Contributor:

I think this is a very important edge case for the long-running (or forever-running) workers. If a launch config is changed, removed, etc. (either for cost reasons, or because we want to upgrade the instance type or the image itself), what do we do?

My intuition says that if a launch config is being archived (not present in the new config) and there are long-running workers created from it, those workers should be killed.

@JohanLorenzo (Author):

I added a section to cover this edge case. Let me know what you think!

- **Graceful transitions**: When possible, only terminate idle workers to preserve active caches
- **Resource allocation**: minCapacity workers are mixed with other workers on the same infrastructure

**Launch Configuration Changes:**
When a launch configuration is changed, removed, or archived, all workers created from the old configuration must be terminated and replaced:
- If a launch configuration is archived (not present in new configuration), identify all long-running workers created from it
- Use the two-phase quarantine process to safely terminate these workers
- Worker-manager will spawn new workers using the updated launch configuration
- This ensures workers always run with current configuration and prevents indefinite use of outdated configurations
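
A small sketch of how archived launch configurations could feed the same two-phase process; the `launchConfigId` field on workers and the helper name are assumptions for illustration:

```ts
// Hypothetical sketch: find long-running minCapacity workers whose launch
// configuration has been archived, so the two-phase quarantine process can
// replace them with workers built from the current configuration.
function workersFromArchivedConfigs(
  activeLaunchConfigIds: Set<string>,
  workers: { workerId: string; launchConfigId: string; isMinCapacityWorker: boolean }[],
) {
  return workers.filter(
    w => w.isMinCapacityWorker && !activeLaunchConfigIds.has(w.launchConfigId),
  );
}
```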

**Capacity Reduction Strategy:**
When `minCapacity` decreases, excess minCapacity workers are removed using the two-phase quarantine process to prevent task claim expiration. Idle workers are prioritized for termination to preserve active caches when possible.

## Compatibility Considerations

**Backward Compatibility:**
- Opt-in via `enableMinCapacityWorkers` boolean flag (defaults to `false`)
- Existing pools continue current behavior unless explicitly enabled
- No changes to generic-worker's core idle timeout mechanism

**Future Direction:**
Once this behavior is proven stable and beneficial, it may become the default behavior for all pools with `minCapacity >= 1`. The boolean flag provides a transition period to validate the approach before making it the standard.

**API Changes:**
- Enhanced worker pool configuration schema
- Existing cloud provider termination APIs used for capacity management

**Security Implications:**
- No security changes - same authentication and authorization flows
- Longer-lived workers maintain same credential rotation schedule

**Performance Implications:**
- **Positive**: Reduced task wait times, preserved caches, fewer API calls
- **Negative**: Slightly higher resource usage for idle `minCapacity` workers
- **Net**: Expected improvement in overall system efficiency

# Implementation

<Once the RFC is decided, these links will provide readers a way to track the
implementation through to completion, and to know if they are running a new
enough version to take advantage of this change. It's fine to update this
section using short PRs or pushing directly to master after the RFC is
decided>

* <link to tracker bug, issue, etc.>
* <...>
* Implemented in Taskcluster version ...
1 change: 1 addition & 0 deletions rfcs/README.md
@@ -57,3 +57,4 @@
| RFC#182 | [Allow remote references to .taskcluster.yml files processed by Taskcluster-GitHub](0182-taskcluster-yml-remote-references.md) |
| RFC#189 | [Batch APIs for task definition, status and index path](0189-batch-task-apis.md) |
| RFC#191 | [Worker Manager launch configurations](0191-worker-manager-launch-configs.md) |
| RFC#192 | [`minCapacity` ensures workers do not get unnecessarily killed](0192-min-capacity-ensures-workers-do-not-get-unnecessarily-killed.md) |