Description
Perhaps this is just a request for guidance (if I am wrong), but I can't work out how to gracefully get a NiFi v2 container online behind an OpenShift Route because of the "Invalid SNI" issue.
Issue (697) gave some hints using an Ingress, but I haven't found a way to do the same with Routes.
From tracing through the current solution, what I think is happening is:
- The operator creates a `StatefulSet` which forces `NODE_ADDRESS` to exist during startup, but builds the value itself based on an internal address. This overwrites attempts to set `NODE_ADDRESS` using a `ConfigMap`.
- The `nifi.properties` file loads `NODE_ADDRESS` into both `nifi.cluster.node.address` and `nifi.web.https.host`, with the whole file being drawn from a `ConfigMap`.
- Attempts to edit `nifi.properties` in the `ConfigMap` are overwritten by the operator.
- Attempts to edit the `StatefulSet` to adjust the CLI setting `NODE_ADDRESS` are overwritten by the operator. (TBF, I expected those; just noting them.)
- As opposed to the above, attempts to add new environment variables into the `ConfigMap` are *not* overwritten.
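For context, the two properties in question end up populated from the same value, so the pod only knows its internal address. A rough sketch of the relevant `nifi.properties` lines (the exact templating the operator uses may differ by version):

```properties
# Both properties are filled from the same NODE_ADDRESS value,
# which the operator derives from the pod's internal DNS name.
nifi.cluster.node.address=${NODE_ADDRESS}
nifi.web.https.host=${NODE_ADDRESS}
```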
Would it be as simple as allowing `nifi.web.https.host` to be overridden by a new variable, like `PUBLIC_ADDRESS`, that we can set in the `ConfigMap`? I gather this host mismatch is what is causing Jetty to reject the traffic originating from the public route.
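As a sketch of what I mean (`PUBLIC_ADDRESS` is a hypothetical variable name, not an existing operator feature, and the hostname is a placeholder):

```yaml
# Hypothetical: an environment-variable override the operator would
# honor instead of clobbering, used to template nifi.web.https.host.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nifi-env-overrides
data:
  # The hostname exposed to clients via the OpenShift Route
  PUBLIC_ADDRESS: nifi.apps.example.com
```

When unset, the operator could keep its current behavior of deriving the value from the internal pod address.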
For NiFi v1.27.0, using a passthrough Route "just worked", but both upgrading to v2 and a fresh v2 install fail. It's unclear to me why the internal pod wants to perform this validation at all... and TBH I would love to disable it, but maybe it adds value. My last resort will be tracking down an admin who knows cert-fu and trying to switch to a Re-encrypt route, but I'm not looking forward to that.
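For reference, the passthrough Route that worked against v1.27.0 was shaped roughly like this (hostname, service name, and port name are placeholders for my environment):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: nifi
spec:
  host: nifi.apps.example.com   # placeholder public hostname
  to:
    kind: Service
    name: nifi                  # placeholder NiFi service name
  port:
    targetPort: https
  tls:
    termination: passthrough    # TLS terminated by the NiFi pod itself
```

With passthrough, the router never decrypts the traffic, so the SNI the client sends (the public hostname) arrives at Jetty unchanged, which is presumably where the v2 validation trips up.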