diff --git a/source/_static/images/deployment-planner-diagram.png b/source/_static/images/deployment-planner-diagram.png
new file mode 100644
index 0000000000..a041ff0b26
Binary files /dev/null and b/source/_static/images/deployment-planner-diagram.png differ
diff --git a/source/adminguide/deployment_planners.rst b/source/adminguide/deployment_planners.rst
new file mode 100644
index 0000000000..64cee50d90
--- /dev/null
+++ b/source/adminguide/deployment_planners.rst
@@ -0,0 +1,99 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements. See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership. The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License. You may obtain a copy of the License at
+   http://www.apache.org/licenses/LICENSE-2.0
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied. See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+
+Deployment Planners
+======================
+
+
+Deployment planners determine *how and where instances* are placed across clusters within a zone.
+A planner builds and orders a *list of candidate clusters* based on a placement strategy such as available capacity, user dispersion, or pod concentration.
+This ordered list is then passed to the *host allocator*, which attempts to deploy the instance following the planner’s priority order.
+
+Administrators can configure the global setting ``vm.deployment.planner`` to define the default deployment planner for the environment.
+This can also be overridden per *Compute Offering*, allowing flexible control over how instances are distributed across the infrastructure.
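
To make the planner/allocator hand-off concrete, here is a minimal, hypothetical Python sketch. It is *not* CloudStack's actual Java implementation; all names and data structures are invented for illustration, and the host allocator is collapsed into a single capacity check. It shows the flow described above: a strategy orders the candidate clusters, and deployment walks that list in priority order.

```python
# Illustrative sketch of the planner -> host-allocator flow.
# NOT CloudStack code; every name here is hypothetical.

def order_clusters_by_free_capacity(clusters):
    """A FirstFit-style strategy: clusters with the most free capacity first."""
    return sorted(clusters, key=lambda c: c["free_mb"], reverse=True)

def deploy(instance_mb, clusters, order_strategy):
    """Walk the planner's ordered cluster list; the first cluster whose
    (simplified) host allocation succeeds wins."""
    for cluster in order_strategy(clusters):
        if cluster["free_mb"] >= instance_mb:   # stand-in for host allocation
            cluster["free_mb"] -= instance_mb
            return cluster["name"]
    return None  # no capacity anywhere in the zone

clusters = [
    {"name": "c1", "free_mb": 4096},
    {"name": "c2", "free_mb": 16384},
]
print(deploy(8192, clusters, order_clusters_by_free_capacity))  # c2
```

In the real system the host allocator applies its own heuristics inside each cluster (see the allocator algorithms later in this guide); the sketch only models the ordering hand-off.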
+
+Available Planners
+------------------
+
+FirstFitPlanner
+~~~~~~~~~~~~~~~
+
+The ``FirstFitPlanner`` ranks all clusters in a zone by their *available (free) capacity*, placing clusters with the most available resources at the top of the list.
+This approach prioritizes capacity-driven placement, ensuring efficient utilization of resources across the zone.
+
+UserDispersingPlanner
+~~~~~~~~~~~~~~~~~~~~~
+
+The ``UserDispersingPlanner`` aims to *spread a user’s instances across multiple clusters*, reducing the impact of any single cluster failure on that user.
+
+#. The planner counts the number of instances in the *Running* or *Starting* state for the user’s account in each cluster.
+#. Clusters are sorted in **ascending order** by this count, so clusters with fewer of the user’s instances are preferred.
+#. The global setting ``vm.user.dispersion.weight`` (default: ``1``) controls how strongly dispersion affects ordering:
+
+   * ``1``: Ranking is based entirely on dispersion.
+   * ``< 1``: Available capacity has more influence on placement decisions.
+
+Lowering the dispersion weight strikes a balance between *even distribution* and *efficient capacity usage*.
+
+UserConcentratedPodPlanner
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ``UserConcentratedPodPlanner`` focuses on *pod-level affinity*, preferring pods where the user already has active instances.
+
+#. The planner identifies all pods in the zone that contain *Running* instances for the user’s account.
+#. Pods are sorted in **descending order** by the number of user instances, so pods with more of the user’s instances come first.
+#. Clusters from these pods are then added to the top of the list in that order, biasing deployment toward pods where the user is already active.
+#. Clusters within each pod are *not* further sorted by capacity or instance count.
+#. If no pods contain user instances, the cluster order remains unchanged.
+
+Summary of Planner Behavior
+---------------------------
+
+.. list-table::
+   :header-rows: 1
+
+   * - Planner
+     - Placement Focus
+     - Ordering Criteria
+     - Typical Use Case
+   * - FirstFitPlanner
+     - Capacity
+     - Descending by available resources
+     - Capacity-optimized or general-purpose placement
+   * - UserDispersingPlanner
+     - Dispersion
+     - Ascending by user instance count (optionally weighted with capacity)
+     - Distribute user instances evenly across clusters
+   * - UserConcentratedPodPlanner
+     - Pod Affinity
+     - Descending by user instance count per pod
+     - Keep user instances within the same pod for locality or data proximity
+
+Pod-Level vs Cluster-Level Allocation
+-------------------------------------
+
+When ``apply.allocation.algorithm.to.pods`` is ``true``:
+   The allocation algorithm (for example, *FirstFit*) is applied at *pod granularity* first.
+   The planner evaluates and ranks pods according to the allocation heuristics; for *FirstFit*, that means prioritizing pods with more available capacity according to the FirstFit capacity checks.
+   After pods are ordered, the planner considers clusters *inside each pod*, typically evaluating clusters within the selected pod in order (or applying cluster-level heuristics only within that pod).
+   In other words, *pod-level ordering happens before cluster selection*.
+
+When ``apply.allocation.algorithm.to.pods`` is ``false`` (the default in many deployments):
+   The allocation algorithm operates at the *cluster level* across the entire zone.
+
+|deployment-planner-diagram.png|
+
+.. |deployment-planner-diagram.png| image:: /_static/images/deployment-planner-diagram.png
+   :alt: Deployment Planner Diagram
diff --git a/source/adminguide/index.rst b/source/adminguide/index.rst
index 040ad1cd67..1c12ffa9a1 100644
--- a/source/adminguide/index.rst
+++ b/source/adminguide/index.rst
@@ -138,6 +138,7 @@ Managing VM and Volume Allocation
 .. toctree::
    :maxdepth: 4
 
+   deployment_planners
    host_and_storage_tags
    arch_types
    vm_volume_allocators
diff --git a/source/adminguide/vm_volume_allocators.rst b/source/adminguide/vm_volume_allocators.rst
index c15ebd8796..dded93ec03 100644
--- a/source/adminguide/vm_volume_allocators.rst
+++ b/source/adminguide/vm_volume_allocators.rst
@@ -37,14 +37,14 @@ VM allocator supports following algorithms to select a host in the cluster:
 .. cssclass:: table-striped table-bordered table-hover
 
 ============================= ========================
-Algorithm                     Description
+Algorithm                     Description
 ============================= ========================
-random                        Selects a host in the cluster randomly.
-firstfit                      Selects the first available host in the cluster.
+random                        Selects a host in the cluster randomly.
+firstfit                      Selects the first available host in the cluster.
 userdispersing                Selects the host running least instances for the account, aims to spread out the instances belonging to a single user account.
-userconcentratedpod_random    Selects the host randomly aiming to keep all instances belonging to single user account in same pod.
-userconcentratedpod_firstfit  Selects the first suitable host from a pod running most instances for the user.
-firstfitleastconsumed         Selects the first host after sorting eligible hosts by least allocated resources (such as CPU or RAM).
+userconcentratedpod_random    Behaves the same as the random algorithm.
+userconcentratedpod_firstfit  Behaves the same as the firstfit algorithm.
+firstfitleastconsumed         Selects the first host after sorting eligible hosts by least allocated resources (such as CPU or RAM).
 ============================= ========================
 
 Use global configuration parameter:
@@ -62,14 +62,14 @@ Volume allocator supports following algorithms to select a host in the cluster:
 .. cssclass:: table-striped table-bordered table-hover
 
 ============================= ========================
-Algorithm                     Description
+Algorithm                     Description
 ============================= ========================
-random                        Selects a storage pool in the cluster randomly.
-firstfit                      Selects the first available storage pool in the cluster.
-userdispersing                Selects the storage pool running least instances for the account, aims to spread out the instances belonging to a single user account.
-userconcentratedpod_random    Selects the storage pool randomly aiming to keep all instances belonging to single user account in same pod.
-userconcentratedpod_firstfit  Selects the first suitable pool from a pod running most instances for the user.
-firstfitleastconsumed         Selects the first storage pool after sorting eligible pools by least allocated resources.
+random                        Selects a storage pool in the cluster randomly.
+firstfit                      Selects the first available storage pool in the cluster.
+userdispersing                Selects the storage pool running least instances for the account, aims to spread out the instances belonging to a single user account.
+userconcentratedpod_random    Behaves the same as the random algorithm.
+userconcentratedpod_firstfit  Behaves the same as the firstfit algorithm.
+firstfitleastconsumed         Selects the first storage pool after sorting eligible pools by least allocated resources.
 ============================= ========================
 
 .. note::
@@ -98,11 +98,11 @@ Key: `host.capacityType.to.order.clusters`
 .. cssclass:: table-striped table-bordered table-hover
 
 ========= ========================
-Value     Behavior
+Value     Behavior
 ========= ========================
-CPU       Prioritizes resources with the most available CPU.
-RAM       Prioritizes resources with the most available memory.
-COMBINED  Uses a weighted formula to balance CPU and RAM in prioritization.
+CPU       Prioritizes resources with the most available CPU.
+RAM       Prioritizes resources with the most available memory.
+COMBINED  Uses a weighted formula to balance CPU and RAM in prioritization.
 ========= ========================
 
 **Additional Configuration for COMBINED**
@@ -132,8 +132,9 @@ Example Configuration
 Above config prioritizes CPU at 70% weight and RAM at 30% when ranking pods, clusters, and hosts.
 
 .. note::
-   - `host.capacityType.to.order.clusters` is only respected for host ordering when:
+   - `host.capacityType.to.order.clusters` is only respected for cluster/host ordering when:
 
      .. code:: bash
+
        vm.deployment.planner: FirstFitPlanner, UserDispersingPlanner (when vm.user.dispersion.weight is < 1)
        vm.allocation.algorithm: firstfitleastconsumed
 
    - When using COMBINED, make sure to tune cpu.to.memory.capacity.weight to reflect your environment’s resource constraints and workload profiles.
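
As a rough illustration of what a COMBINED ranking does, the following hypothetical Python sketch applies a 70/30 CPU-to-RAM weighting (mirroring the ``cpu.to.memory.capacity.weight`` example above) to two hosts. The exact formula CloudStack uses internally is not reproduced here; the function name and the linear weighted-sum form are assumptions for illustration only.

```python
# Hedged sketch: how a COMBINED-style weighting might rank hosts.
# The linear weighted sum below is an assumed, simplified model,
# not CloudStack's internal formula.

def combined_score(free_cpu_pct, free_ram_pct, cpu_weight=0.7):
    """Higher score = ranked earlier. cpu_weight mirrors the idea of
    cpu.to.memory.capacity.weight (0.7 => CPU counts 70%, RAM 30%)."""
    return cpu_weight * free_cpu_pct + (1 - cpu_weight) * free_ram_pct

hosts = {
    "host-a": (80, 20),   # lots of free CPU, little free RAM
    "host-b": (30, 90),   # little free CPU, lots of free RAM
}
ranked = sorted(hosts, key=lambda h: combined_score(*hosts[h]), reverse=True)
print(ranked)  # ['host-a', 'host-b']: 0.7*80 + 0.3*20 = 62 beats 0.7*30 + 0.3*90 = 48
```

Note that lowering ``cpu_weight`` to, say, ``0.3`` would flip the ordering in this example, which is exactly why the note above recommends tuning the weight to match your environment's resource constraints.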