Cluster Configuration Changes in Apache NiFi 2.x

Dmitriy Myasnikov edited this page Jul 21, 2025 · 1 revision

Apache NiFi 2.x introduces new features for running clusters on Kubernetes: Kubernetes Leases for leader election and ConfigMaps for cluster state storage via the Kubernetes ConfigMap State Provider (see the admin guide for details on both). Together, these features allow an Apache NiFi cluster to run without deploying Apache ZooKeeper to the Kubernetes environment.

Configuration

To enable these new features, you need to:

  1. Create a service account (if it does not already exist).
  2. Create the necessary roles.
  3. Create the necessary role bindings.
  4. Set the serviceAccountName and environment variables in your Deployment or StatefulSet.

Service Account and Roles

Create a service account (all names here and below are for example purposes):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nifi-sa
  namespace: <your-namespace>
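Equivalently, the service account can be created with kubectl (the name nifi-sa is an example, as above):

```shell
# Create the example service account in the target namespace
kubectl create serviceaccount nifi-sa -n <your-namespace>
```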

Then create the roles. Note: the Apache NiFi documentation does not list the patch verb as required for leases, but omitting it causes errors when the cluster actually runs (observed at least as of version 2.4.0):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nifi-lease-role
  namespace: <your-namespace>
rules:
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
  - get
  - update
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nifi-configmap-role
  namespace: <your-namespace>
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update

Next, create the role bindings:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nifi-lease-role-binding
  namespace: <your-namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nifi-lease-role
subjects:
- kind: ServiceAccount
  name: nifi-sa
  namespace: <your-namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nifi-configmap-role-binding
  namespace: <your-namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nifi-configmap-role
subjects:
- kind: ServiceAccount
  name: nifi-sa
  namespace: <your-namespace>

You can also use kubectl to create the same resources:

# create leases role
kubectl create role nifi-lease-role --verb=create --verb=get --verb=update --verb=patch --resource=leases -n <your-namespace>
# create configmap role
kubectl create role nifi-configmap-role --verb=create --verb=delete --verb=get --verb=list --verb=patch --verb=update --resource=configmaps -n <your-namespace>
# create rolebinding for leases role
kubectl create rolebinding nifi-lease-role-binding --role=nifi-lease-role --serviceaccount=<your-namespace>:nifi-sa -n <your-namespace>
# create rolebinding for configmap role
kubectl create rolebinding nifi-configmap-role-binding --role=nifi-configmap-role --serviceaccount=<your-namespace>:nifi-sa -n <your-namespace>
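Before starting the cluster, you can check that the bindings grant the expected permissions by impersonating the service account with kubectl's auth can-i subcommand (a sketch; substitute your actual namespace and names):

```shell
# Verify lease permissions for the service account
kubectl auth can-i patch leases.coordination.k8s.io \
  --as=system:serviceaccount:<your-namespace>:nifi-sa -n <your-namespace>
# Verify configmap permissions
kubectl auth can-i update configmaps \
  --as=system:serviceaccount:<your-namespace>:nifi-sa -n <your-namespace>
```

Each command prints yes when the corresponding role binding is in place.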

Deployment or StatefulSet Configuration

To enable Kubernetes Leases for leader election, set the environment variable NIFI_CLUSTER_LEADER_ELECTION_IMPLEMENTATION=KubernetesLeaderElectionManager. To enable the ConfigMap state provider, set NIFI_STATE_MANAGEMENT_PROVIDER_CLUSTER=kubernetes-provider. Resource prefixes are set via the environment variables NIFI_CLUSTER_LEADER_ELECTION_KUBERNETES_LEASE_PREFIX and NIFI_KUBERNETES_CONFIGMAP_NAME_PREFIX; for details on how Kubernetes resources are named, see the Resource Naming section below. An example Deployment:

apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    spec:
      serviceAccountName: nifi-sa
      containers:
      - name: nifi  # example container name
        env:
        ...
        - name: NIFI_CLUSTER_LEADER_ELECTION_IMPLEMENTATION
          value: 'KubernetesLeaderElectionManager'
        - name: NIFI_STATE_MANAGEMENT_PROVIDER_CLUSTER
          value: 'kubernetes-provider'
        - name: NIFI_CLUSTER_LEADER_ELECTION_KUBERNETES_LEASE_PREFIX
          value: 'cluster1-prefix'
        - name: NIFI_KUBERNETES_CONFIGMAP_NAME_PREFIX
          value: 'cluster1-prefix'
        ...

Cluster State Migration from ZooKeeper

If you have an existing cluster with Apache NiFi 1.x and rely on Cluster State in your flows, you may need to migrate state to the new state provider when switching to the Kubernetes ConfigMap State Provider.

To do this, set the nifi.state.management.provider.cluster.previous property in nifi.properties to the name of the previous state provider (for ZooKeeper, it's usually zk-provider). Also, set up the necessary connection properties (e.g., environment variables NIFI_ZK_CONNECT_STRING, NIFI_ZK_ROOT_NODE). For more details, refer to the admin guide.
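As a sketch, the relevant nifi.properties entries during migration might look like the following (the connect string and root node values are examples; zk-provider assumes the default provider id from state-management.xml):

```properties
# Temporary settings while migrating cluster state from ZooKeeper
nifi.state.management.provider.cluster=kubernetes-provider
nifi.state.management.provider.cluster.previous=zk-provider
nifi.zookeeper.connect.string=zk-0.zk-headless:2181
nifi.zookeeper.root.node=/nifi
```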

Once migration is complete, you should unset the nifi.state.management.provider.cluster.previous property and remove the environment variables necessary for the ZooKeeper connection.

Resource Naming

Apache NiFi creates leases with the following names for leader election:

  1. <prefix>-cluster-coordinator
  2. <prefix>-primary-node

Here, <prefix> is defined by the nifi.cluster.leader.election.kubernetes.lease.prefix property, which can also be set via the environment variable NIFI_CLUSTER_LEADER_ELECTION_KUBERNETES_LEASE_PREFIX in the Docker container. The separator "-" is added only if the prefix is not empty. By default, the prefix is empty.
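The prefixing rule can be illustrated with a small shell sketch (prefixed_name is a hypothetical helper for illustration, not part of NiFi; it is pure string handling and needs no cluster):

```shell
# Build a resource name the way the naming rule describes:
# the "-" separator appears only when the prefix is non-empty.
prefixed_name() {
  local prefix="$1" base="$2"
  # ${prefix:+-} expands to "-" only if prefix is set and non-empty
  echo "${prefix}${prefix:+-}${base}"
}

prefixed_name "cluster1-prefix" "cluster-coordinator"  # cluster1-prefix-cluster-coordinator
prefixed_name "" "primary-node"                        # primary-node
```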

For storing each component's state, a cluster state ConfigMap named <prefix>-nifi-component-<UUID of component> is created when state is first written and removed when the component is deleted. If state is cleared, the ConfigMap contents are cleared as well.

Here, <prefix> is defined by the ConfigMap Name Prefix property in the state-management.xml file, and can also be set via the environment variable NIFI_KUBERNETES_CONFIGMAP_NAME_PREFIX in the Docker container. The separator "-" is added only if the prefix is not empty; by default, the prefix is empty. Only ConfigMaps matching this name format are modified or deleted by the state provider.
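To inspect the component state ConfigMaps a running cluster has created, a plain listing can be filtered by the naming pattern (a sketch; the grep pattern assumes the naming format described above):

```shell
# List cluster state ConfigMaps created by the Kubernetes ConfigMap State Provider
kubectl get configmaps -n <your-namespace> | grep 'nifi-component-'
```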