
Load Balancing

Introduction

The virtual router provides two open-source solutions for load balancing TCP services:

  1. Keepalived + LVS (IPVS).
  2. HAProxy.

In this guide we will assume the following configuration:

        public network
      ┌───────────────   Users
      │
   ┌──┴─┐  10.0.1.1
┌──┤eth0├────────┐
│  └────┘        │   LB service (http:80)
│ Virtual Router │
│                │   10.0.1.1:80 ┌────► 172.20.0.104:8080
│ Load Balancer  │               │
│  ┌────┐        │               └────► 172.20.0.105:8080
└──┤eth1├────────┘               
   └─┬──┘                        
     │   private network
     └────┬─────────────────┬───
     172.20.0.104      172.20.0.105
       ┌──┴──┐           ┌──┴──┐
       │ LB1 │           │ LB2 │
       └─────┘           └─────┘

Note

The Load Balancing service can be provisioned in a failover setup by activating the associated keepalived services; see more details here.

Additionally we will consider two LB deployment modes:

  • Static, when the number of LB servers is fixed and known
  • Dynamic, when LB servers can be added/removed from the backend

IP Address Placeholders

To configure the LB services you need to define the public (user-facing) IP. This IP is only known once the Virtual Router is deployed and the IP assignment is resolved. Usually, you can refer to the public-facing IP of the Virtual Router as <ETH0_EP0>.

For advanced scenarios, OpenNebula provides you with some convenient placeholders that are resolved automatically at runtime:

  • <ETHx_IPy> means "interpolate (in-place) the y-th IP of the x-th NIC"
  • <ETHx_VIPy> means "interpolate (in-place) the y-th Virtual IP of the x-th NIC"
  • <ETHx_EPy> means "interpolate (in-place) the y-th EndPoint of the x-th NIC".

Values of ETHx_IPy and ETHx_VIPy are always merged together to produce ETHx_EPy. When ETHx_VIPy is undefined, ETHx_EPy is set to ETHx_IPy as a fallback; that way you always have valid "endpoints" even in non-HA scenarios (no VIPs defined).
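
For instance, in the topology above eth0 of the Virtual Router gets the IP 10.0.1.1 and no Virtual IP is defined, so the first endpoint simply falls back to the first IP. A minimal sketch of how the placeholder resolves under that assumption:

CONTEXT = [
  ...
  ONEAPP_VNF_LB0_IP = "<ETH0_EP0>", # No VIP defined: <ETH0_EP0> falls back to <ETH0_IP0>, i.e. 10.0.1.1
  ...
]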

Note

Virtual IP refers to the IP assigned to the NIC of the virtual router when FLOATING_IP has been set.
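
For example, a Virtual Router NIC that requests a floating IP could be defined as in the sketch below; the allocated Virtual IP is then exposed through the <ETH0_VIPy> placeholders and merged into the <ETH0_EPy> endpoints:

NIC = [
  NETWORK     = "public",
  FLOATING_IP = "YES"  # the Virtual IP leased here becomes available as <ETH0_VIP0>
]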

Static LBs

Keepalived LVS

To configure keepalived LVS you need to define:

  • The front-end IP and port where the service is exposed: ONEAPP_VNF_LBx_IP, ONEAPP_VNF_LBx_PORT
  • The IP and port of the backend servers, set at deployment time: ONEAPP_VNF_LBx_SERVERy_HOST, ONEAPP_VNF_LBx_SERVERy_PORT
  • The protocol and forwarding method that keepalived will use to load balance the service (e.g. ONEAPP_VNF_LBx_PROTOCOL, ONEAPP_VNF_LBx_METHOD)

Note

Each variable is indexed per balanced service (LB0, LB1, etc.), and each backend server is indexed as well (SERVER0, SERVER1, etc.).

For example, to create a static LVS-based TCP LB for the topology above, you can use:

CONTEXT = [
  ...
  ONEAPP_VNF_LB_ENABLED = "YES",

  ONEAPP_VNF_LB0_IP        = "<ETH0_EP0>", # Interpolate the first "endpoint".
  ONEAPP_VNF_LB0_PORT      = "80",
  ONEAPP_VNF_LB0_PROTOCOL  = "TCP",
  ONEAPP_VNF_LB0_METHOD    = "NAT",
  ONEAPP_VNF_LB0_SCHEDULER = "rr", # "Round-robin".

  ONEAPP_VNF_LB0_SERVER0_HOST = "172.20.0.104",
  ONEAPP_VNF_LB0_SERVER0_PORT = "8080",
  ONEAPP_VNF_LB0_SERVER1_HOST = "172.20.0.105",
  ONEAPP_VNF_LB0_SERVER1_PORT = "8080",
  ...
]

As you can see in the example above, LB0 is defined by providing its IP/port pair together with some extra settings; then two (static) backends are defined for LB0.
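
Following the same indexing scheme, a second balanced service would simply use the next LB index. The snippet below is purely illustrative (hypothetical port and backend ports):

CONTEXT = [
  ...
  ONEAPP_VNF_LB1_IP   = "<ETH0_EP0>",
  ONEAPP_VNF_LB1_PORT = "443",

  ONEAPP_VNF_LB1_SERVER0_HOST = "172.20.0.104",
  ONEAPP_VNF_LB1_SERVER0_PORT = "8443",
  ...
]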

HAProxy

Similarly, for HAProxy you need to define:

  • The front-end IP and port where the service is exposed: ONEAPP_VNF_HAPROXY_LBx_IP, ONEAPP_VNF_HAPROXY_LBx_PORT
  • The IP and port of the backend servers: ONEAPP_VNF_HAPROXY_LBx_SERVERy_HOST, ONEAPP_VNF_HAPROXY_LBx_SERVERy_PORT

For example, to create an HAProxy-based TCP LB for the topology above, you can use:

CONTEXT = [
  ...
  ONEAPP_VNF_HAPROXY_ENABLED = "YES",

  ONEAPP_VNF_HAPROXY_LB0_IP   = "<ETH0_EP0>", # Interpolate the first "endpoint".
  ONEAPP_VNF_HAPROXY_LB0_PORT = "80",

  ONEAPP_VNF_HAPROXY_LB0_SERVER0_HOST = "172.20.0.104",
  ONEAPP_VNF_HAPROXY_LB0_SERVER0_PORT = "8080",
  ONEAPP_VNF_HAPROXY_LB0_SERVER1_HOST = "172.20.0.105",
  ONEAPP_VNF_HAPROXY_LB0_SERVER1_PORT = "8080",
  ...
]

Dynamic LBs

Important

To dynamically add and remove LB backends, the Virtual Router cannot be instantiated as a standalone VM. It has to be an OpenNebula Virtual Router or part of an OpenNebula OneFlow service.

Important

To dynamically add and remove LB backends, the OpenNebula OneGate service needs to be configured.

The procedure is as follows:

  1. Create a new Virtual Machine that includes the LB IP and port pairs in its template, and BACKEND = YES in its CONTEXT attribute.
  2. Update the Virtual Router context section with the new LB server data.

Keepalived LVS

Let's assume you have already deployed a Virtual Router in static mode with backends 172.20.0.104 and 172.20.0.105, as described above. To add a new backend:

  1. Create a new Virtual Machine; for example, you could use the following template:
NAME = MyLB
...
NIC = [ NETWORK = "private" ]

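# The following attributes are read by the Virtual Router through OneGate;
# $NIC[IP, NETWORK="private"] resolves to this VM's IP in the "private" network.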
ONEGATE_LB0_IP = "<ETH0_EP0>"
ONEGATE_LB0_PORT = 80
ONEGATE_LB0_SERVER_HOST = "$NIC[IP, NETWORK=\"private\"]"
ONEGATE_LB0_SERVER_PORT = 8080

CONTEXT = [
  NETWORK = YES,
  SSH_PUBLIC_KEY = "$USER[SSH_KEY]",
  BACKEND = YES,
  ...
]

Note

The dynamic variable ONEGATE_LB0_SERVER_HOST does not contain the server index; this differs from the static definition ONEAPP_VNF_LB0_SERVER0_HOST.

Important

The ONEGATE_LBx_IP / ONEGATE_LBx_PORT pair in the VM template must match an LB statically defined in the Virtual Router. If they do not match, no change will be made.
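
For the example in this guide, the matching values are taken verbatim from the two templates above:

# On the Virtual Router (static definition):
ONEAPP_VNF_LB0_IP   = "<ETH0_EP0>",
ONEAPP_VNF_LB0_PORT = "80",

# Must be matched by the backend VM (dynamic registration):
ONEGATE_LB0_IP   = "<ETH0_EP0>"
ONEGATE_LB0_PORT = 80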

  2. Update the Context of the Virtual Router to include the new LB server:
CONTEXT = [
  ...
  ONEAPP_VNF_LB0_SERVER3_HOST = "172.20.0.106",
  ONEAPP_VNF_LB0_SERVER3_PORT = "8080",
  ...
]

HAProxy

Equivalently, you'll need to:

  1. Create a VM that includes the following attributes in its template:
NAME = MyLB
...
NIC = [ NETWORK = "private" ]

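# As above, these attributes are read by the Virtual Router through OneGate.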
ONEGATE_HAPROXY_LB0_IP = "<ETH0_EP0>"
ONEGATE_HAPROXY_LB0_PORT = 80
ONEGATE_HAPROXY_LB0_SERVER_HOST = "$NIC[IP, NETWORK=\"private\"]"
ONEGATE_HAPROXY_LB0_SERVER_PORT = 8080

CONTEXT = [
  NETWORK = YES,
  SSH_PUBLIC_KEY = "$USER[SSH_KEY]",
  BACKEND = YES,
  ...
]
  2. Update the Virtual Router context with the information of the new LB server:
CONTEXT = [
  ...
  ONEAPP_VNF_HAPROXY_LB0_SERVER3_HOST = "172.20.0.106",
  ONEAPP_VNF_HAPROXY_LB0_SERVER3_PORT = "8080",
  ...
]

Virtual Router Standalone vs OneFlow Modes

The Virtual Router appliance can be deployed in three different ways:

  1. As a proper OpenNebula Virtual Router instance.
  2. As a standalone VM, but inside an OneFlow service instance.
  3. As a standalone VM (not supported).

When running both types of LBs (static and dynamic) in modes 1 and 2, the context and OneGate interface are exactly the same (with the important caveat that mode 1 requires backends to have the CONTEXT = [..., BACKEND = "YES", ...] flag defined).

Under the hood the source of updates is different: in both cases OneGate is used, but in mode 1 all VNETs attached to the VR are scanned (recursively), whereas in mode 2 OneFlow provides the API responses.

In that sense mode 1 is superior to mode 2: a "proper" VR can reverse-proxy to any VM in its attached VNETs (including VMs running in OneFlow instances), while mode 2 only allows backends from within its own OneFlow service.
