Using SASL GSSAPI with librdkafka in a cross‐realm scenario with Windows SSPI and MIT Kerberos
Emanuele Sabellico edited this page Aug 25, 2025
In this tutorial we'll see how to set up SASL/GSSAPI in a cross-realm scenario with Windows Active Directory and MIT Kerberos. On Windows, librdkafka uses SSPI to automatically authenticate with the current user's credentials. If cross-realm trust is set up, Windows users can authenticate directly as Kafka principals.
Everything will be set up on a single Windows Server instance, using WSL2 to run Apache Kafka 4.0 on Ubuntu.
- Go to Server Manager
- Select "Add Roles and Features"
- Check "Active Directory Domain Services"
- Install it
- After installation: "Promote this server to a domain controller"
- Add a new forest with name `testwindomain.com` (NetBIOS name: `TESTWINDOMAIN`)
- Finish the configuration and restart
- To log in now you have to use `TESTWINDOMAIN\<admin_user>`
- Install the .NET SDK and the Git SDK
winget.exe install Microsoft.DotNet.SDK.8
winget.exe install Git.SDK
- Enable the WSL feature with this PowerShell command
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux, VirtualMachinePlatform
- Install the Ubuntu distribution
wsl --install Ubuntu
- Install Linux dependencies
apt update && apt install -y krb5* openjdk-21-jdk
- Get the WSL IP address in PowerShell
wsl hostname -I
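`wsl hostname -I` may print more than one address; the first field is the one to use for the DNS records below. A small sketch of extracting it in a POSIX shell, using a made-up example output (yours will differ):

```shell
# Made-up example output of `wsl hostname -I`.
out='172.27.112.5 172.17.0.1'
# Keep only the first whitespace-separated field.
ip=${out%% *}
echo "$ip"   # prints 172.27.112.5
```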
- Go to Windows DNS Manager and add a new Forward Lookup Zone (Primary) with name `testkafkadomain.com`
- Add two hosts: `kdc` and `kafka`, both using the WSL IP address.
- Edit `/etc/krb5.conf`: change the default realm to `TESTKAFKADOMAIN.COM` and add this realm configuration:
TESTKAFKADOMAIN.COM = {
kdc = kdc.testkafkadomain.com
admin_server = kdc.testkafkadomain.com
default_domain = testkafkadomain.com
}
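Put together, the edited `/etc/krb5.conf` might look roughly like this (a minimal sketch; your file will contain further defaults, and the `[domain_realm]` mapping is an assumption that matches the hostnames used below):

```ini
[libdefaults]
    default_realm = TESTKAFKADOMAIN.COM

[realms]
    TESTKAFKADOMAIN.COM = {
        kdc = kdc.testkafkadomain.com
        admin_server = kdc.testkafkadomain.com
        default_domain = testkafkadomain.com
    }

[domain_realm]
    .testkafkadomain.com = TESTKAFKADOMAIN.COM
    testkafkadomain.com = TESTKAFKADOMAIN.COM
```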
- Create the Kerberos DB
kdb5_util create -s
- Create an ACL at `/etc/krb5kdc/kadm5.acl` with contents `*/admin@TESTKAFKADOMAIN.COM *`
- Restart the servers:
systemctl restart krb5-admin-server
systemctl restart krb5-kdc
- Create the principal for Kafka brokers and add a keytab for it
mkdir /etc/security/keytabs
kadmin.local -q "addprinc -randkey kafka/kafka.testkafkadomain.com"
kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.testkafkadomain.com.keytab kafka/kafka.testkafkadomain.com"
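For reference, a Kerberos principal has the form `primary/instance@REALM`. The broker principal created above can be decomposed with plain shell parameter expansion (illustrative only):

```shell
# Decompose a Kerberos principal of the form primary[/instance]@REALM.
principal='kafka/kafka.testkafkadomain.com@TESTKAFKADOMAIN.COM'
realm=${principal##*@}        # text after the last '@'
rest=${principal%@*}          # primary[/instance]
primary=${rest%%/*}           # text before the first '/'
case "$rest" in
  */*) instance=${rest#*/} ;; # text after the first '/'
  *)   instance='' ;;         # no instance component
esac
echo "primary=$primary instance=$instance realm=$realm"
# prints primary=kafka instance=kafka.testkafkadomain.com realm=TESTKAFKADOMAIN.COM
```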
- Download and extract the binaries
wget -O kafka_2.13-4.0.0.tgz https://dlcdn.apache.org/kafka/4.0.0/kafka_2.13-4.0.0.tgz && \
tar -xvf kafka_2.13-4.0.0.tgz && cd kafka_2.13-4.0.0
- Edit `config/server.properties`
- Add `SASL_PLAINTEXT://kafka.testkafkadomain.com:9094` to `advertised.listeners` and the corresponding value to `listeners`.
- Add these new properties:
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*)s/(.*)@(.*)/$1_$2/,RULE:[2:$1@$0](.*)s/(.*)@(.*)/$1_$2/
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
super.users=User:kafka_TESTKAFKADOMAIN.COM
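The effect of the two `sasl.kerberos.principal.to.local.rules` above can be sketched with `sed` (illustrative only; Kafka applies its own rule engine, not sed):

```shell
# Strip the optional /instance and replace '@' with '_',
# mimicking what the two Kafka rules do.
to_local() {
  echo "$1" | sed -E 's#^([^/@]+)(/[^@]*)?@(.*)$#\1_\3#'
}
to_local 'kafka_producer@TESTWINDOMAIN.COM'                      # prints kafka_producer_TESTWINDOMAIN.COM
to_local 'kafka/kafka.testkafkadomain.com@TESTKAFKADOMAIN.COM'   # prints kafka_TESTKAFKADOMAIN.COM
```

The second result matches the `super.users` entry, which is why the broker's own principal ends up as a super user.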
- The rules are respectively for `primary@REALM` and `primary/instance@REALM`: they strip the instance if present and replace `@` with `_`, as it's not a valid character in Kafka principals. This keeps users coming from separate realms as distinct Kafka principals. The super user above is an example of the result of this transformation.
- Change `log.dirs` to `<home_directory>/kafka_2.13-4.0.0/log-dirs`
- Change `inter.broker.listener.name` to `SASL_PLAINTEXT`
- Change the `CONTROLLER` security protocol map to `SASL_PLAINTEXT` in `listener.security.protocol.map`
- Set `advertised.listeners` to `PLAINTEXT://localhost:9092,SASL_PLAINTEXT://kafka.testkafkadomain.com:9094,CONTROLLER://kafka.testkafkadomain.com:9093`
- Edit `kafka_server_jaas.conf` inside `kafka_2.13-4.0.0`, using this configuration:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka.testkafkadomain.com.keytab"
principal="kafka/kafka.testkafkadomain.com@TESTKAFKADOMAIN.COM";
};
- Still inside `kafka_2.13-4.0.0`, create the logs directory
mkdir log-dirs
- Create a cluster ID
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
- Format the storage with the generated ID
bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties
- Tell Kafka to use the JAAS file
export KAFKA_OPTS="-Djava.security.auth.login.config=$PWD/kafka_server_jaas.conf"
- Finally start Kafka
bin/kafka-server-start.sh config/server.properties
- Create a `command.properties` file with these contents:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
useKeyTab=true \
storeKey=true \
keyTab="/etc/security/keytabs/kafka.testkafkadomain.com.keytab" \
principal="kafka/kafka.testkafkadomain.com@TESTKAFKADOMAIN.COM";
- Allow the Windows producer to produce to the Kafka topic `test1` (create the topic first if it doesn't already exist)
bin/kafka-acls.sh --bootstrap-server kafka.testkafkadomain.com:9094 --command-config command.properties --add --allow-principal "User:kafka_producer_TESTWINDOMAIN.COM" --topic test1 --resource-pattern-type LITERAL --operation Write
This is the key part: we must add a shared principal with the same password on both sides, so that a Windows realm principal can obtain a cross-realm TGT for MIT Kerberos.
Replace the example password.
- On MIT Kerberos add the cross-realm `krbtgt` principal for the trust from the `TESTWINDOMAIN.COM` realm to the `TESTKAFKADOMAIN.COM` one
kadmin.local -q "addprinc -pw example krbtgt/TESTKAFKADOMAIN.COM@TESTWINDOMAIN.COM"
- On Windows add the corresponding trust with the same password
netdom trust TESTKAFKADOMAIN.COM /Domain:TESTWINDOMAIN.COM /add /realm /passwordt:example
- On Windows add a KDC for `TESTKAFKADOMAIN.COM`
ksetup /addkdc TESTKAFKADOMAIN.COM kdc.testkafkadomain.com
- On Windows add the mapping for hostnames like `.testkafkadomain.com` to the `TESTKAFKADOMAIN.COM` realm
ksetup /addhosttorealmmap .testkafkadomain.com TESTKAFKADOMAIN.COM
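The host-to-realm mapping just configured works by hostname suffix: anything under `.testkafkadomain.com` is treated as a service of the `TESTKAFKADOMAIN.COM` realm. A shell sketch of that lookup logic (illustrative only; SSPI performs this internally on Windows):

```shell
# Map a service hostname to a Kerberos realm by suffix,
# like `ksetup /addhosttorealmmap` does for SSPI.
realm_for_host() {
  case "$1" in
    *.testkafkadomain.com) echo 'TESTKAFKADOMAIN.COM' ;;
    *)                     echo 'TESTWINDOMAIN.COM' ;; # default: the machine's own realm
  esac
}
realm_for_host kafka.testkafkadomain.com   # prints TESTKAFKADOMAIN.COM
```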
This is the final part. We now start a .NET producer on Windows using the logged-in user, authenticating to Kafka through MIT Kerberos.
- Create a new Active Directory user `TESTWINDOMAIN\kafka_producer`
New-ADUser -Name "kafka_producer" -SamAccountName "kafka_producer" -UserPrincipalName "kafka_producer@testwindomain.com" -AccountPassword (ConvertTo-SecureString "Example@!" -AsPlainText -Force) -Enabled $true
- Add it to the `Administrators` group to allow local login (only for this example):
Add-ADGroupMember -Identity "Administrators" -Members "kafka_producer"
- Start a new shell as user `TESTWINDOMAIN\kafka_producer`
runas /user:TESTWINDOMAIN\kafka_producer cmd
- Clone `confluent-kafka-dotnet`
cd %USERPROFILE% && C:\git-sdk-64\cmd\git.exe clone https://github.com/confluentinc/confluent-kafka-dotnet.git
- Go to the `Producer` example
cd %USERPROFILE%\confluent-kafka-dotnet\examples\Producer
- Edit `Program.cs` and change the config to:
var config = new ProducerConfig { BootstrapServers = brokerList, SecurityProtocol = SecurityProtocol.SaslPlaintext, SaslMechanism = SaslMechanism.Gssapi };
- Start the Producer; you can now produce some messages
dotnet run kafka.testkafkadomain.com:9094 test1