provider-kafka is a Crossplane provider that is used to manage Kafka resources.
1. Create a provider secret containing a JSON config like the following (see the expected schema here):

   ```json
   {
     "brokers": [
       "kafka-dev-controller-0.kafka-dev-controller-headless.kafka-cluster.svc.cluster.local:9092",
       "kafka-dev-controller-1.kafka-dev-controller-headless.kafka-cluster.svc.cluster.local:9092",
       "kafka-dev-controller-2.kafka-dev-controller-headless.kafka-cluster.svc.cluster.local:9092"
     ],
     "sasl": {
       "mechanism": "PLAIN",
       "username": "user1",
       "password": "<your-password>"
     }
   }
   ```
2. Create a Kubernetes secret containing the above config:

   ```shell
   kubectl -n crossplane-system create secret generic kafka-creds --from-file=credentials=kc.json
   ```
3. Create a `ProviderConfig`, see this as an example.
4. Create a managed resource, see this for an example creating a Kafka topic.
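   Since the example link above may not be clickable here, a minimal sketch of a Kafka topic managed resource follows; the `apiVersion`, kind, and field names are assumptions based on common Crossplane provider conventions, so check the provider's CRDs for the authoritative schema:

   ```yaml
   # Hypothetical example - verify field names against the provider's Topic CRD
   apiVersion: topic.kafka.crossplane.io/v1alpha1
   kind: Topic
   metadata:
     name: example-topic
   spec:
     forProvider:
       replicationFactor: 1
       partitions: 3
     providerConfigRef:
       name: default   # references the ProviderConfig created earlier
   ```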
Usually the only command you may need to run is:

```shell
make review
```
For more detailed development instructions, continue reading below.
The following instructions will set up a development environment with a locally running Kafka installation (SASL/PLAIN enabled). To change the configuration of your instance further, please see the available Helm parameters here.
Steps 1-5 can be done with:

```shell
make test
```
1. (Optional) Create a local kind cluster, unless you want to develop against an existing Kubernetes cluster. Or simply run `make kind-setup`, or `make unit-tests.init` for steps 1-2.
2. Run `make kind-kafka-setup`, or manually install the Kafka Helm chart:

   ```shell
   helm repo add bitnami https://charts.bitnami.com/bitnami
   helm repo update bitnami
   helm upgrade --install kafka-dev -n kafka-cluster bitnami/kafka \
     --create-namespace \
     --version 32.4.3 \
     --set image.repository=bitnamilegacy/kafka \
     --set auth.clientProtocol=sasl \
     --set deleteTopicEnable=true \
     --set authorizerClassName="kafka.security.authorizer.AclAuthorizer" \
     --set controller.replicaCount=1 \
     --wait
   ```

   The username is `user1`; obtain the password with:

   ```shell
   export KAFKA_PASSWORD=$(kubectl -n kafka-cluster get secret kafka-dev-user-passwords -oyaml | yq '.data.client-passwords | @base64d')
   ```
   Create the Kubernetes secret to be used by the `ProviderConfig` with:

   ```shell
   cat <<EOF > /tmp/creds.json
   {
     "brokers": [
       "kafka-dev-controller-headless.kafka-cluster.svc:9092"
     ],
     "sasl": {
       "mechanism": "PLAIN",
       "username": "user1",
       "password": "${KAFKA_PASSWORD}"
     }
   }
   EOF
   kubectl -n kafka-cluster create secret generic kafka-creds \
     --from-file=credentials=/tmp/creds.json
   ```
3. Install kubefwd.
4. Run `kubefwd` for the `kafka-cluster` namespace, which will make the internal Kubernetes services locally accessible:

   ```shell
   sudo kubefwd svc -n kafka-cluster -c ~/.kube/config
   ```
5. To run tests, use the `KAFKA_PASSWORD` environment variable from step 2.
6. (Optional) Install the Kafka CLI.
7. Create a config file for the client with:

   ```shell
   cat <<EOF > ~/.kcl/config.toml
   seed_brokers = ["kafka-dev-controller-0.kafka-dev-controller-headless.kafka-cluster.svc.cluster.local:9092","kafka-dev-controller-1.kafka-dev-controller-headless.kafka-cluster.svc.cluster.local:9092","kafka-dev-controller-2.kafka-dev-controller-headless.kafka-cluster.svc.cluster.local:9092"]
   timeout_ms = 10000

   [sasl]
   method = "plain"
   user = "user1"
   pass = "${KAFKA_PASSWORD}"
   EOF
   ```
8. Verify that the CLI can talk to the Kafka cluster:

   ```shell
   export KCL_CONFIG_DIR=~/.kcl
   kcl metadata --all
   ```
9. (Optional) Alternatively, deploy the Redpanda console with:

   ```shell
   kubectl create -f - <<EOF
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: rp-console
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: rp-console
     template:
       metadata:
         labels:
           app: rp-console
       spec:
         containers:
           - name: rp-console
             image: docker.redpanda.com/redpandadata/console:latest
             ports:
               - containerPort: 8001
             env:
               - name: KAFKA_TLS_ENABLED
                 value: "false"
               - name: KAFKA_SASL_ENABLED
                 value: "true"
               - name: KAFKA_SASL_USERNAME
                 value: user1
               - name: KAFKA_SASL_PASSWORD
                 value: ${KAFKA_PASSWORD}
               - name: KAFKA_BROKERS
                 value: kafka-dev-controller-headless.kafka-cluster.svc:9092
   EOF
   ```
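As an aside, the `yq '.data.client-passwords | @base64d'` expression in step 2 simply base64-decodes the secret value, which Kubernetes stores encoded. The same decoding can be done with plain `base64 -d`; a quick illustration with a demo value (not a real password):

```shell
# Kubernetes secret data is base64-encoded; decode a demo value manually.
echo "cGFzc3dvcmQxMjM=" | base64 -d
# prints: password123
```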
Run against a Kubernetes cluster:

```shell
# Install CRD and run provider locally
make dev

# Create a ProviderConfig pointing to the local Kafka cluster
kubectl apply -f - <<EOF
apiVersion: kafka.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    secretRef:
      key: credentials
      name: kafka-creds
      namespace: kafka-cluster
    source: Secret
EOF
```

Build package:

```shell
make build
```

Build image:

```shell
make image
```

Push image:

```shell
make push
```

Build binary:

```shell
make build
```
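A note on the `<<EOF` heredocs used above (the credentials JSON, the kcl config, the ProviderConfig): the delimiter is deliberately left unquoted so that the shell expands variables such as `${KAFKA_PASSWORD}` before the text is written or piped onward; quoting the delimiter would pass the text through verbatim. A quick illustration with a demo value:

```shell
export KAFKA_PASSWORD=demo-pass   # demo value, not a real password

# Unquoted delimiter: the shell expands ${KAFKA_PASSWORD}
cat <<EOF
expanded: ${KAFKA_PASSWORD}
EOF

# Quoted delimiter: the text is passed through verbatim
cat <<'EOF'
literal: ${KAFKA_PASSWORD}
EOF
```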