Nginx Ingress Controller without LoadBalancer #56
-
The readme states
I was able to reconfigure the Nginx Ingress Helm chart to deploy the controller service as a NodePort, but now I guess I'm missing a piece of configuration (probably something to add to kube-vip). Do you have an example of how to expose a service on the public IP address, the same way it is done for port 6443? I'm using your great module on Infomaniak for a private project and I can't afford to pay an extra 13.- for the additional IP and the OpenStack load balancer.
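For reference, the reconfiguration described here amounts to something like the following minimal sketch (release name and namespace are illustrative; the full working configuration is in the last reply below):

// Minimal sketch only: deploy the controller service as a NodePort instead of a LoadBalancer.
resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true

  values = [
    <<-EOT
    controller:
      service:
        type: NodePort   # exposed on a high port of every node instead of a cloud LoadBalancer
    EOT
  ]
}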
-
@UncleSamSwiss Thanks for pointing that out. That sentence is meant to be read slightly differently: there is no need for an OpenStack load balancer to make the control plane (the servers in rke2) highly available. However, due to the nature of the module with an isolated/private network, it is strongly recommended to have a load balancer for any incoming network connection. You might be able to get around that by attaching a floating IP to a single node and trying this; however, you would lose the high availability if that node goes down.
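A minimal sketch of that workaround, assuming the openstack Terraform provider, a public pool named "ext-floating1" and an existing openstack_compute_instance_v2.server resource (all three names are illustrative, not part of the module):

// Illustrative only: allocate a floating IP and pin it to a single node.
// The pool name and instance reference are assumptions, adapt them to your setup.
resource "openstack_networking_floatingip_v2" "ingress" {
  pool = "ext-floating1"
}

resource "openstack_compute_floatingip_associate_v2" "ingress" {
  floating_ip = openstack_networking_floatingip_v2.ingress.address
  instance_id = openstack_compute_instance_v2.server.id // traffic only reaches this one node
}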
-
@zifeo I got it working, but not without some "manual intervention" (i.e. not using Terraform). The reason being that I couldn't set the
-
By the way, I'm wondering why this approach is not "highly available"?
From what I understand, kube-vip should now be able to do the fail-over (but not the load-balancing) expected from a load balancer.
-
Thanks to the suggestions by @zifeo, I was able to get it working with these resources in a new Terraform project (not the one creating the cluster):

data "openstack_networking_secgroup_v2" "rke2_cluster_server" {
  name = "rke2-cluster-server" // change this to the name of your secgroup (based on the name of the cluster)
}

// allow HTTP and HTTPS from anywhere to the server nodes
resource "openstack_networking_secgroup_rule_v2" "ingress_http" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 80
  port_range_max    = 80
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = data.openstack_networking_secgroup_v2.rke2_cluster_server.id
}

resource "openstack_networking_secgroup_rule_v2" "ingress_https" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 443
  port_range_max    = 443
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = data.openstack_networking_secgroup_v2.rke2_cluster_server.id
}

resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  version          = "4.9.0"
  namespace        = "global-ingress-nginx"
  create_namespace = true
  recreate_pods    = true

  values = [
    <<-EOT
    controller:
      kind: DaemonSet                  # one controller pod per selected node
      service:
        type: NodePort
      hostPort:
        enabled: true                  # bind 80/443 directly on the node
      hostNetwork: true
      ingressClassResource:
        default: true
      watchIngressWithoutClass: true
      nodeSelector:
        node-role.kubernetes.io/master: 'true'
      tolerations:
        - key: CriticalAddonsOnly
          operator: Exists
          effect: NoExecute
        - key: node.cloudprovider.kubernetes.io/uninitialized
          value: 'true'
          effect: NoSchedule
    EOT
  ]
}
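Once the controller is running with hostNetwork on the server nodes, any Ingress resource becomes reachable on ports 80/443 of the cluster's public/floating IP. A minimal sketch of such an Ingress, assuming the kubernetes Terraform provider and a hypothetical my-app service behind the hypothetical host app.example.com:

// Illustrative only: host, namespace and backend service are hypothetical.
resource "kubernetes_ingress_v1" "my_app" {
  metadata {
    name      = "my-app"
    namespace = "default"
  }

  spec {
    ingress_class_name = "nginx" // optional here, since the chart values above mark the class as default

    rule {
      host = "app.example.com" // DNS record pointing at the cluster's public/floating IP

      http {
        path {
          path      = "/"
          path_type = "Prefix"

          backend {
            service {
              name = "my-app"
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}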