Conversation
kevinmcconnell
left a comment
@polarctos thanks for the PR.
I was curious: could you describe the setup where you would use this? I'd normally expect the metrics to be consumed and routed internally, whereas the regular service traffic is externally accessible, which makes me wonder how common it is that the same cert setup would be useful for both cases.
Happy to ship this if it'll be solving a common need; I'd just like to understand more about when this would be useful. Thanks!
Serving the metrics via HTTPS as well is useful in setups with a stricter security goal, or simply a security policy that all traffic on the network must be encrypted. To make a service externally accessible, another load balancer will usually sit in front of a Kamal setup with multiple servers; I agree in that regard. The metrics, by contrast, are scraped individually, e.g. by a Prometheus instance on the internal network.
Force-pushed from cf57a42 to 3e4be77
Adds a config parameter `--metrics-https` (or env `METRICS_HTTPS`) that serves the metrics via HTTPS instead of HTTP. It reuses the TLS server certificates of this proxy's existing configured services.

The metrics enablement parameter was named just `--metrics-port` rather than `--metrics-http-port`, so this is just an additional `bool` config flag to toggle HTTPS. To match the general port config parameter names, it could also have been named `--metrics-https-port`; on the other hand, I see no benefit in allowing the metrics to run on two ports, one for HTTPS and one for HTTP.

To avoid introducing more config parameters for a metrics server certificate and key file, the flag simply reuses the already configured matching service certificates.
Started a discussion about this here: #101 (comment)