Commit ee75a7a

Native haproxy consul integration (#486)

* haproxy: use native dns instead of consul-template
* drop consul-template!
* feat(loadbalancer): working on noble
* move local dev to noble
* configurable backend count
* remove demo backends counts
* add some notes to docs

Co-authored-by: Jacob Coffee <[email protected]>

1 parent 33c6994 commit ee75a7a

File tree

9 files changed: +63 −111 lines changed

Vagrantfile

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ SERVERS = [
   {:name => "docs", :codename => "noble"},
   {:name => "downloads", :codename => "noble"},
   {:name => "hg", :codename => "noble"},
-  {:name => "loadbalancer", :ports => [20000, 20001, 20002, 20003, 20004, 20005, 20010, 20011]},
+  {:name => "loadbalancer", :codename => "noble", :ports => [20000, 20001, 20002, 20003, 20004, 20005, 20010, 20011]},
   "mail",
   "moin",
   "planet",

docs/guides/haproxy-registration-guide.md

Lines changed: 33 additions & 4 deletions

@@ -6,10 +6,12 @@ Register a service with haproxy
    vagrant up salt-master
    vagrant up loadbalancer
    ```
+
 2. In the local repository, create a new state/directory to manage files for your service:
    ```console
    touch salt/base/salt.sls
    ```
+
 3. Additionally, add an `nginx` configuration state and `consul` service state that exposes that directory over HTTP:
    - This configuration might look similar to an existing haproxy service like `letsencrypt`
    ```yaml
@@ -55,21 +57,48 @@ Register a service with haproxy
    }
    ~
    ```
-5. Prepare an SSH configuration file to access the host with native ssh commands:
+
+5. Add an entry in `pillar/base/haproxy.sls` to create the haproxy configuration:
+   ```
+   letsencrypt-well-known:
+     domains: []
+     verify_host: salt.psf.io
+     check: "GET /.well-known/acme-challenge/sentinel HTTP/1.1\\r\\nHost:\\ salt.psf.io"
+   ```
+
+   This will render given the template in `salt/haproxy/config/haproxy.cfg.jinja` to create
+   a service which has two "slots" that will be filled based on the DNS resolution of the consul
+   service registered in step 3.
+
+6. Prepare an SSH configuration file to access the host with native ssh commands:
    ```console
    vagrant ssh-config salt-master loadbalancer >> vagrant-ssh
    ```
-6. Open an SSH session with port forwarding to the haproxy status page:
+
+7. Open an SSH session with port forwarding to the haproxy status page:
    ```console
    ssh -L 4646:127.0.0.1:4646 -F vagrant-ssh loadbalancer
    ```
    - Open [`http://localhost:4646/haproxy?stats`][loadbalancer] to see ``haproxy`` status
-7. In a new window run:
+
+   ![](images/haproxy-service.png)
+
+   You will see the two "slots" registered in haproxy, with one host found via Consul DNS.
+
+   - Green indicates the host exists, was resolved, and is passing health check.
+   - Brown indicates that a slot does not have a host, in other words not enough hosts were resolved, so it is reserved in "maintenance" state.
+   - Red would indicate that a host exists, was resolved, and is failing health check.
+
+8. In a new window run:
    ```console
    ssh -F sshconfig -L 8500:127.0.0.1:8500 salt-master
    ```
    - Open [`http://localhost:8500/ui/vagrant/services`][consul] to see what ``consul`` services are registered
+
+   ![](images/consul-service.png)
+
+   You can browse to see what services have been registered, and what nodes are advertising that service.
+
 [//]: # (Quicklink targets)
 [loadbalancer]: <http://localhost:4646/haproxy?stats>
-[consul]: <http://localhost:8500/ui/vagrant/services>
+[consul]: <http://localhost:8500/ui/vagrant/services>
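The "slots" behaviour described in step 7 can be sketched in a few lines of Python (a hypothetical model, not code from this repo): haproxy's `server-template` pre-allocates a fixed number of server slots, fills them from the hosts Consul DNS resolves, and parks any leftover slots in maintenance.

```python
def slot_states(slots, resolved, healthy):
    """Model haproxy server-template slots: resolved hosts fill slots
    (UP when passing health checks, DOWN when failing); slots with no
    resolved host stay in MAINT (shown brown on the stats page)."""
    states = []
    for i in range(slots):
        if i < len(resolved):
            host = resolved[i]
            states.append((host, "UP" if healthy.get(host) else "DOWN"))
        else:
            states.append((None, "MAINT"))
    return states

# Two slots, one host resolved via Consul DNS and passing its check:
print(slot_states(2, ["10.0.0.5"], {"10.0.0.5": True}))
# → [('10.0.0.5', 'UP'), (None, 'MAINT')]
```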

pillar/base/haproxy.sls

Lines changed: 2 additions & 0 deletions

@@ -99,12 +99,14 @@ haproxy:
     domains: []
     verify_host: salt.psf.io
     check: "GET /.well-known/acme-challenge/sentinel HTTP/1.1\\r\\nHost:\\ salt.psf.io"
+    backends: 1
 
   publish-files:
     domains:
       - salt-public.psf.io
     verify_host: salt.psf.io
     check: "GET /salt-server-list.rst HTTP/1.1\\r\\nHost:\\ salt-public.psf.io"
+    backends: 1
 
 redirects:
   cheeseshop.python.org:
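The new `backends` key feeds the `server-template` directive in `salt/haproxy/config/haproxy.cfg.jinja`. A small Python sketch (hypothetical helper; `dc` stands in for the `pillar.dc` value, `"vagrant"` in local dev) of how the slot count and Consul SRV name are derived from a pillar entry:

```python
def server_template_args(service, config, dc):
    # Mirrors the Jinja template: slot count comes from the `backends`
    # pillar key (defaulting to 2), and the DNS name is Consul's
    # _<service>._tcp SRV form for the datacenter.
    slots = config.get("backends", 2)
    srv = f"_{service}._tcp.service.{dc}.consul"
    return slots, srv

print(server_template_args("letsencrypt-well-known", {"backends": 1}, "vagrant"))
# → (1, '_letsencrypt-well-known._tcp.service.vagrant.consul')
```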

salt/consul/init.sls

Lines changed: 0 additions & 45 deletions

@@ -7,7 +7,6 @@ consul-pkgs:
   pkg.installed:
     - pkgs:
       - consul
-      - consul-template
 
 consul:
   file.managed:
@@ -115,50 +114,6 @@ consul:
     - group: consul
     - require:
       - pkg: consul-pkgs
-
-
-consul-template:
-  pkg.installed: []
-
-  cmd.run:
-    - name: consul-template -config /etc/consul-template.d -once
-    - require:
-      - pkg: consul-pkgs
-      - service: consul
-    - onchanges:
-      - file: /etc/consul-template.d/*.json
-      - file: /usr/share/consul-template/templates/*
-
-  file.managed:
-    - name: /lib/systemd/system/consul-template.service
-    - source: salt://consul/init/consul-template.service
-    - mode: "0644"
-
-  service.running:
-    - enable: True
-    - restart: True
-    - require:
-      - pkg: consul-pkgs
-      - service: consul
-    - watch:
-      - file: consul-template
-      - file: /etc/consul-template.d/*.json
-      - file: /usr/share/consul-template/templates/*
-
-
-/etc/consul-template.d/base.json:
-  file.managed:
-    - source: salt://consul/etc/consul-template/base.json
-    - user: root
-    - group: root
-    - mode: "0644"
-
-
-/usr/share/consul-template/templates/:
-  file.directory:
-    - user: root
-    - group: consul
-
 {% endif %}

salt/haproxy/config/haproxy.cfg.jinja

Lines changed: 25 additions & 4 deletions

@@ -1,6 +1,11 @@
 {% set haproxy = salt["pillar.get"]("haproxy", {}) -%}
 {% set psf_internal = salt["pillar.get"]("psf_internal_network") -%}
 
+resolvers consul
+    nameserver consul 127.0.0.1:8600
+    accepted_payload_size 8192
+    hold valid 5s
+
 global
     log /dev/log local0
     log /dev/log local1 notice
@@ -215,7 +220,25 @@ backend redirect
 {% for service, config in haproxy.services.items() %}
 backend {{ service }}
 {% if config.get("check") -%}
+{% if grains["oscodename"] != "noble" -%}
     option httpchk {{ config.check }}
+{%- else -%}
+    # Noble Config using the newer http-check syntax
+    # We need to split the check into parts to handle the extra things
+    # ...maybe there is a better way to do this?
+    {% set check_parts = config.check.split(' ', 2) -%}
+    {% set method = check_parts[0] -%}
+    {% set path = check_parts[1] -%}
+    {% if check_parts|length > 2 -%}
+    {% set extra = check_parts[2].split('\r\n') -%}
+    {% set version = extra[0] -%}
+    {% set headers = extra[1:] -%}
+    {% endif -%}
+    http-check send meth {{ method }} uri {{ path }} ver {{ version }}
+    {%- for header in headers %}
+    http-check send hdr {{ header.replace(':\\ ', ': ') }}
+    {%- endfor %}
+{%- endif %}
 {%- endif %}
 
 # http://gnuterrypratchett.com/
@@ -230,8 +253,7 @@ backend {{ service }}
     {{ item }}
 {% endfor -%}
 
-{{ "{{" }}range service "{{ service }}@{{ pillar.dc }}" "any"}}
-{% raw %}server {{.Node}} {{.Address}}:{{.Port}}{% endraw %}{% if config.get("check", True) %} check{% if config.get("sni", False)%} check-sni {{ config.get("sni") }}{% endif %}{% if config.get("sni", False)%} sni str({{ config.get("sni") }}){% endif %}{% endif %}{% if config.get("tls", True) %} ssl force-tlsv12 verifyhost {{ config.get("verify_host", service + ".psf.io") }} ca-file {{ config.get("ca-file", "PSF_CA.pem") }}{% endif %}{{ "{{end}}" }}
+    server-template backend {{ config.get("backends", 2) }} _{{ service }}._tcp.service.{{ pillar.dc }}.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 {% if config.get("check", True) %} check{% if config.get("sni", False)%} check-sni {{ config.get("sni") }}{% endif %}{% if config.get("sni", False)%} sni str({{ config.get("sni") }}){% endif %}{% endif %}{% if config.get("tls", True) %} ssl force-tlsv12 verifyhost {{ config.get("verify_host", service + ".psf.io") }} ca-file {{ config.get("ca-file", "PSF_CA.pem") }}{% endif %}
 
 {% endfor %}
 
@@ -248,8 +270,7 @@ listen {{ name }}
     {{ line }}
 {% endfor %}
 
-{{ "{{" }}range service "{{ config.service }}@{{ pillar.dc }}"}}
-{% raw %}server {{.Node}} {{.Address}}:{{.Port}} check{{end}}{% endraw %}{% if config.get("send_proxy", False) %} send-proxy{% endif %}
+    server-template backend {{ config.get("backends", 2) }} _{{ config.service }}._tcp.service.{{ pillar.dc }}.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check {% if config.get("send_proxy", False) %} send-proxy{% endif %}
 
 {% endfor %}
salt/haproxy/init.sls

Lines changed: 2 additions & 20 deletions

@@ -27,12 +27,12 @@ haproxy:
     - reload: True
     - require:
       - pkg: haproxy
-      - cmd: consul-template
       - service: rsyslog
     - watch:
       - file: /etc/ssl/private/*.pem
       - file: /etc/haproxy/fastly_token
       - file: /etc/haproxy/our_domains
+      - file: /etc/haproxy/haproxy.cfg
 
 
 /etc/haproxy/fastly_token:
@@ -56,31 +56,13 @@ haproxy:
     - pkg: haproxy
 
 
-/usr/share/consul-template/templates/haproxy.cfg:
+/etc/haproxy/haproxy.cfg:
   file.managed:
     - source: salt://haproxy/config/haproxy.cfg.jinja
     - template: jinja
     - user: root
     - group: root
     - mode: "0644"
-    - require:
-      - pkg: consul-pkgs
-
-
-/etc/consul-template.d/haproxy.json:
-  file.managed:
-    - source: salt://consul/etc/consul-template/template.json.jinja
-    - template: jinja
-    - context:
-        source: /usr/share/consul-template/templates/haproxy.cfg
-        destination: /etc/haproxy/haproxy.cfg
-        command: service haproxy reload
-    - user: root
-    - group: root
-    - mode: "0640"
-    - require:
-      - pkg: consul-pkgs
-
 
 /usr/local/bin/haproxy-ocsp:
 {% if ocsp %}
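For orientation, here is roughly what the Salt-managed `/etc/haproxy/haproxy.cfg` comes out as for a service with `backends: 1` after this change (an illustrative rendering, not taken from the repo; the datacenter is assumed to be `vagrant` and the long `server-template` line is wrapped for readability):

```
resolvers consul
    nameserver consul 127.0.0.1:8600
    accepted_payload_size 8192
    hold valid 5s

backend letsencrypt-well-known
    http-check send meth GET uri /.well-known/acme-challenge/sentinel ver HTTP/1.1
    http-check send hdr Host: salt.psf.io
    server-template backend 1 _letsencrypt-well-known._tcp.service.vagrant.consul
        resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4
        check ssl force-tlsv12 verifyhost salt.psf.io ca-file PSF_CA.pem
```

Because haproxy itself now re-resolves the Consul SRV records via the `resolvers consul` section, backend membership updates no longer require consul-template to rewrite the config; Salt only reloads haproxy when the rendered template file actually changes.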

salt/postgresql/server/init.sls

Lines changed: 0 additions & 37 deletions

@@ -62,9 +62,6 @@ postgresql-server:
     - require:
       - cmd: postgresql-psf-cluster
       - file: {{ postgresql.config_dir }}/conf.d
-{% if salt["match.compound"](pillar["roles"]["postgresql-replica"]["pattern"]) %}
-      - cmd: consul-template
-{% endif %}
     - watch:
       - file: {{ postgresql.config_file }}
       - file: {{ postgresql.ident_file }}
@@ -95,7 +92,6 @@ postgresql-psf-cluster:
       - file: postgresql-data
 {% if salt["match.compound"](pillar["roles"]["postgresql-replica"]["pattern"]) %}
       - file: /etc/ssl/certs/PSF_CA.pem
-      - file: /etc/consul.d/service-postgresql.json
       - service: consul
 {% endif %}
 
@@ -222,36 +218,3 @@ replicator:
     - mode: "0644"
     - require:
       - pkg: consul-pkgs
-
-
-{% if salt["match.compound"](pillar["roles"]["postgresql-replica"]["pattern"]) %}
-
-/usr/share/consul-template/templates/recovery.conf:
-  file.managed:
-    - source: salt://postgresql/server/configs/recovery.conf.jinja
-    - template: jinja
-    - user: postgres
-    - group: postgres
-    - mode: "0640"
-    - show_diff: False
-    - require:
-      - pkg: consul-template
-      - cmd: postgresql-psf-cluster
-      - file: {{ postgresql.config_dir }}
-
-
-/etc/consul-template.d/postgresql-recovery.json:
-  file.managed:
-    - source: salt://consul/etc/consul-template/template.json.jinja
-    - template: jinja
-    - context:
-        source: /usr/share/consul-template/templates/recovery.conf
-        destination: {{ postgresql.recovery_file }}
-        command: "chgrp postgres {{ postgresql.recovery_file }} && chmod 640 {{ postgresql.recovery_file }} && service postgresql restart"
-    - user: root
-    - group: root
-    - mode: "0640"
-    - require:
-      - pkg: consul-template
-
-{% endif %}

Comments (0)