Description
ISSUE TYPE
- Bug Report
COMPONENT NAME
CLOUDSTACK VERSION
CONFIGURATION
OS / ENVIRONMENT
SUMMARY
We have zone1 with two clusters, e.g. cluster01 and cluster02,
where cluster01 has 3 hypervisors of the same h/w model and cluster02 has one hypervisor of a different h/w model.
I enabled DRS in global settings and then disabled it for cluster02 via the cluster setting "drs.automatic.enable -- false", but kept it enabled for cluster01.
In the above scenario the DRS plan failed with the below logs -
2024-02-08 10:26:33,532 DEBUG [c.c.s.ManagementServerImpl] (VMSchedulerPollTask:ctx-0bba1590) (logid:f0f7966b) Hosts having capacity and suitable for migration: [Host {"id":25,"name":"node-cluster01","type":"Routing","uuid":"5d145861-e4ad-4f94-a805-266711321d59"}, Host {"id":40,"name":"node-cluster02","type":"Routing","uuid":"66b47d2a-b047-452c-b5d3-65c160666b50"}, Host {"id":48,"name":"node-cluster01","type":"Routing","uuid":"da1be373-cd08-4ae1-948f-7592daabb3fc"}]
2024-02-08 10:26:33,535 ERROR [o.a.c.c.ClusterDrsServiceImpl] (VMSchedulerPollTask:ctx-0bba1590) (logid:f0f7966b) Unable to generate DRS plans for cluster Cluster-Z01 [id=5366c5fb-0ed0-4caf-b2c7-93ebea15a717]
If I disable the host from cluster02, DRS works as expected and migrates VMs based on load. But when both clusters are present and all nodes in them are enabled, it fails to generate a plan.
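Note that in the log above, hosts from both cluster01 and cluster02 appear in the "Hosts having capacity and suitable for migration" list while the plan is being generated for a single cluster. A minimal sketch (plain Python, not CloudStack's actual code; host IDs are taken from the log, the filtering helper is hypothetical) of what restricting candidates to the cluster being planned would look like:

```python
# Hypothetical sketch: when generating a DRS plan for one cluster,
# candidate destination hosts should be limited to that cluster.
# Host ids/names mirror the "Hosts having capacity" log line above.
hosts = [
    {"id": 25, "name": "node-cluster01", "cluster": "cluster01"},
    {"id": 40, "name": "node-cluster02", "cluster": "cluster02"},
    {"id": 48, "name": "node-cluster01", "cluster": "cluster01"},
]

def candidates_for(cluster, hosts):
    """Keep only migration candidates that belong to the given cluster
    (cluster02 has drs.automatic.enable=false, so its host should never
    show up in cluster01's plan)."""
    return [h for h in hosts if h["cluster"] == cluster]

# Only hosts 25 and 48 remain for cluster01's plan
print([h["id"] for h in candidates_for("cluster01", hosts)])
```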
Below are cluster level settings for reference from DB.
| 621 | 1  | drs.automatic.enable   | true  |
| 622 | 1  | drs.automatic.interval | 10    |
| 623 | 1  | drs.imbalance          | 0.4   |
| 624 | 16 | drs.automatic.enable   | false |
I even tried keeping it disabled in global settings and then enabling it for just one cluster in the cluster settings.
Our use case is multiple clusters in the same zone with different types of h/w.
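For reference, instead of reading the DB directly, the effective cluster-scoped values can also be checked and set through the API. A CloudMonkey sketch (the cluster UUID is a placeholder; listConfigurations/updateConfiguration accept a clusterid for cluster-scoped settings):

```
# Show the DRS setting as seen at cluster scope
# (replace <cluster-uuid> with the cluster's UUID)
list configurations clusterid=<cluster-uuid> name=drs.automatic.enable

# Disable automatic DRS only for that cluster
update configuration clusterid=<cluster-uuid> name=drs.automatic.enable value=false
```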
STEPS TO REPRODUCE
EXPECTED RESULTS