When a *follower* is down for some time, it may fall out of the recovery window, i.e. the log may no longer include all the records needed to bring it up to date. In order to recover this server you need to:

HA deployment requires the filesystem view of most datastores (by default in ``/var/lib/one/datastores/``) to be the same on all Front-ends. It is necessary to set up a shared filesystem over the datastore directories. This document doesn't cover configuration and deployment of the shared filesystem; it is left completely up to the cloud administrator.
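
Setting up the shared filesystem is left to the administrator, but as a purely illustrative sketch, an NFS-based setup could export the datastore directory from a storage server to all Front-ends. The network ``192.168.0.0/24`` and host name ``nfs-server`` below are hypothetical placeholders:

.. code::

   # /etc/exports on the NFS server (illustrative values)
   /var/lib/one/datastores 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)

   # /etc/fstab entry on each Front-end (hypothetical server name)
   nfs-server:/var/lib/one/datastores /var/lib/one/datastores nfs defaults 0 0
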

The Raft algorithm can be tuned by several parameters in the configuration file.

Any change in these parameters can lead to unexpected behavior during fail-over and result in whole-cluster malfunction. After any configuration change, always check the crash scenarios for the correct behavior.

Compatibility with the Earlier HA
=================================

In OpenNebula <= 5.2, HA was configured using a classic active-passive approach with Pacemaker and Corosync. While this still works for OpenNebula > 5.2, it is not the recommended way to set up a cluster. However, if you are coming from an earlier version, you may continue using that HA method. It is documented in the Front-end HA Setup guide of the OpenNebula 5.2 documentation.

To synchronize files, you can use the ``onezone serversync`` command. It is designed to help administrators sync OpenNebula's configuration across High Availability (HA) nodes and fix lagging nodes in HA environments. The command first checks for inconsistencies between local and remote configuration files inside the ``/etc/one/`` directory. If inconsistencies are found, the local version of a file is replaced by the remote version, and only the affected service is restarted. Whole configuration files are replaced, with the sole exception of ``/etc/one/oned.conf``. For this file, the local ``FEDERATION`` configuration is maintained, but the rest of the content is overwritten. Before replacing any file, a backup is made inside ``/etc/one/``.

.. warning:: Only use this option between HA nodes, never across federated nodes.

This is the list of files that will be checked and replaced:

Individual files:

- ``/etc/one/monitord.conf``
- ``/etc/one/oneflow-server.conf``
- ``/etc/one/onegate-server.conf``

Folders:

- ``/etc/one/fireedge``
- ``/etc/one/auth``
- ``/etc/one/hm``
- ``/etc/one/schedulers``
- ``/etc/one/vmm_exec``

.. note:: Any file inside the above folders that does not exist on the remote server (such as backups) will *not* be removed.
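
As a quick illustration of this kind of inconsistency check (a sketch only; the paths and values below are invented for the example, and this is not the ``onezone serversync`` implementation), configuration drift between two directory trees can be spotted with a recursive ``diff``:

.. code:: bash

   # Two throwaway directories standing in for a local and a remote /etc/one
   mkdir -p /tmp/one-local /tmp/one-remote
   echo 'MONITORING_INTERVAL_HOST = 180' > /tmp/one-local/monitord.conf
   echo 'MONITORING_INTERVAL_HOST = 60'  > /tmp/one-remote/monitord.conf

   # diff exits non-zero when the trees differ
   diff -r /tmp/one-local /tmp/one-remote || echo "configuration drift detected"
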

By default the VMM driver is configured to allow more than one action to be executed per Host. Make sure the parameter ``-p`` is added to the driver executable. This is done in ``/etc/one/oned.conf``, in the ``VM_MAD`` configuration section:

.. code::

   VM_MAD = [
       NAME       = "kvm",
       EXECUTABLE = "one_vmm_exec",
       ARGUMENTS  = "-t 15 -r 0 kvm -p",
       DEFAULT    = "vmm_exec/vmm_exec_kvm.conf",
       TYPE       = "kvm" ]

Additionally, also in ``/etc/one/oned.conf``, increase the value of the ``MAX_ACTIONS_PER_HOST`` parameter (default = ``1``), for example:

.. code::

   MAX_ACTIONS_PER_HOST = 10

To increase the maximum number of allowed actions per cluster, increase the value of the ``MAX_ACTIONS_PER_CLUSTER`` parameter (default = ``30``).
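
For example, to allow up to 100 simultaneous actions per cluster (an illustrative value, also set in ``/etc/one/oned.conf``):

.. code::

   MAX_ACTIONS_PER_CLUSTER = 100
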

After changing ``/etc/one/oned.conf``, restart the main OpenNebula service:

.. prompt:: bash $ auto

   $ sudo systemctl restart opennebula

Additionally, if you are using the Rank Scheduler, you will need to change the configuration to let the scheduler deploy more than one VM per Host. In the file ``/etc/one/schedulers/rank.conf``, change the value of the ``MAX_HOST`` parameter. For example, to let the scheduler submit 10 VMs per Host:

.. code::

   MAX_HOST = 10

Changes in ``rank.conf`` do not require a restart.