
Commit c62baee

feat: add docs for kafka connector

Signed-off-by: Bence Csati <bence.csati@axoflow.com>

1 parent 32d3a7a

File tree

2 files changed: +134 −0 lines changed


README.md

Lines changed: 27 additions & 0 deletions

@@ -8,6 +8,7 @@ You can find guides per connector:

- [Azure connector](./connectors/azure/README.md#quickstart)
- [AWS connector](./connectors/aws/README.md#quickstart)
- [Kafka connector](./connectors/kafka/README.md#quickstart)

## Environment Variables
@@ -46,6 +47,32 @@

| `AWS_ACCESS_KEY_ID` | No | - | AWS access key ID for direct authentication |
| `AWS_SECRET_ACCESS_KEY` | No | - | AWS secret access key for direct authentication |

### Kafka Provider

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `KAFKA_BROKERS` | No | `localhost:9092` | Kafka broker addresses (e.g., `broker1:9092,broker2:9092`) |
| `KAFKA_PROTOCOL_VERSION` | No | `2.1.0` | Kafka protocol version |
| `KAFKA_RESOLVE_CANONICAL_BOOTSTRAP_SERVERS_ONLY` | No | `false` | Resolve broker addresses, then reverse-lookup the resulting IPs during startup |
| `KAFKA_LOGS_TOPIC` | No | `otlp_logs` | Topic to consume logs from |
| `KAFKA_LOGS_ENCODING` | No | `otlp_json` | Encoding of the logs (`otlp_proto`, `otlp_json`, `raw`, `text`, `json`, `azure_resource_logs`) |
| `KAFKA_GROUP_ID` | No | `axocloudconnector` | Consumer group ID |
| `KAFKA_CLIENT_ID` | No | `axocloudconnector` | Kafka client ID |
| `KAFKA_INITIAL_OFFSET` | No | `latest` | Initial offset (`latest` or `earliest`) |
| `KAFKA_GROUP_INSTANCE_ID` | No | - | Unique identifier for static consumer group membership |
| `KAFKA_SESSION_TIMEOUT` | No | `10s` | Timeout for detecting client failures |
| `KAFKA_HEARTBEAT_INTERVAL` | No | `3s` | Expected time between heartbeats |
| `KAFKA_GROUP_REBALANCE_STRATEGY` | No | `range` | Partition assignment strategy (`range`, `roundrobin`, `sticky`) |
| `KAFKA_MIN_FETCH_SIZE` | No | `1` | Minimum number of message bytes to fetch |
| `KAFKA_DEFAULT_FETCH_SIZE` | No | `1048576` | Default number of message bytes to fetch (1 MB) |
| `KAFKA_MAX_FETCH_SIZE` | No | `0` | Maximum number of message bytes to fetch (`0` = unlimited) |
| `KAFKA_MAX_FETCH_WAIT` | No | `250ms` | Maximum time to wait for `KAFKA_MIN_FETCH_SIZE` bytes to accumulate |
| `KAFKA_TLS_INSECURE` | No | `false` | Use an insecure (plaintext) connection |
| `KAFKA_TLS_CA_FILE` | No | - | Path to the CA certificate file |
| `KAFKA_TLS_CERT_FILE` | No | - | Path to the client certificate file |
| `KAFKA_TLS_KEY_FILE` | No | - | Path to the client key file |
| `KAFKA_TLS_INSECURE_SKIP_VERIFY` | No | `false` | Skip TLS certificate verification |
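For a quick local run, the Kafka-specific variables can be exported like so. The broker list and topic here are placeholder values, and the device-ID derivation mirrors the quickstart scripts in the connector README:

```shell
# Placeholder broker list and topic; adjust for your cluster.
export KAFKA_BROKERS="broker1:9092,broker2:9092"
export KAFKA_LOGS_TOPIC="otlp_logs"
export KAFKA_INITIAL_OFFSET="earliest"  # replay existing records on first run

# Derive a short device ID from a random UUID (its first 8-char segment).
UUID_FULL=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
export AXOCLOUDCONNECTOR_DEVICE_ID=$(echo "$UUID_FULL" | cut -d'-' -f1)
```

Unset variables simply fall back to the defaults in the table above.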
## Usage

### Local Development with Docker

connectors/kafka/README.md

Lines changed: 107 additions & 0 deletions
@@ -0,0 +1,107 @@
# Kafka Receiver Connector

This directory contains the Axoflow Kafka receiver connector, which collects logs from Kafka topics.
## Quickstart

Create a topic for your logs, for example:

```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: kafka-client
  namespace: kafka
spec:
  containers:
    - name: kafka-client
      image: docker.io/apache/kafka:4.1.1
      command: ["sleep", "infinity"]
EOF
kubectl wait --for=condition=ready pod/kafka-client -n kafka --timeout=60s

kubectl exec -n kafka kafka-client -- /opt/kafka/bin/kafka-topics.sh \
  --create \
  --topic otlp_logs \
  --partitions 3 \
  --replication-factor 1 \
  --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
  --if-not-exists
```
Make sure the required environment variables are set before running the connector.

### Using plaintext connection

```bash
UUID_FULL=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid 2>/dev/null || python3 -c "import uuid; print(uuid.uuid4())")
AXOCLOUDCONNECTOR_DEVICE_ID=$(echo "$UUID_FULL" | cut -d'-' -f1)

docker run \
  --rm \
  -v "${STORAGE_DIRECTORY}":"${STORAGE_DIRECTORY}" \
  -e KAFKA_BROKERS="${KAFKA_BROKERS}" \
  -e KAFKA_LOGS_TOPIC="${KAFKA_LOGS_TOPIC}" \
  -e AXOROUTER_ENDPOINT="${AXOROUTER_ENDPOINT}" \
  -e STORAGE_DIRECTORY="${STORAGE_DIRECTORY}" \
  -e AXOCLOUDCONNECTOR_DEVICE_ID="${AXOCLOUDCONNECTOR_DEVICE_ID}" \
  ghcr.io/axoflow/axocloudconnectors:latest
```
51+
52+
### Using TLS
53+
54+
```bash
55+
UUID_FULL=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid 2>/dev/null || python3 -c "import uuid; print(uuid.uuid4())")
56+
AXOCLOUDCONNECTOR_DEVICE_ID=$(echo "$UUID_FULL" | cut -d'-' -f1)
57+
58+
docker run \
59+
--rm \
60+
-v "${STORAGE_DIRECTORY}":"${STORAGE_DIRECTORY}" \
61+
-v "${KAFKA_CERTS_PATH}:/certs:ro" \
62+
-e KAFKA_BROKERS="${KAFKA_BROKERS}" \
63+
-e KAFKA_LOGS_TOPIC="${KAFKA_LOGS_TOPIC}" \
64+
-e KAFKA_TLS_CA_FILE="/certs/ca.pem" \
65+
-e KAFKA_TLS_CERT_FILE="/certs/cert.pem" \
66+
-e KAFKA_TLS_KEY_FILE="/certs/key.pem" \
67+
-e AXOROUTER_ENDPOINT="${AXOROUTER_ENDPOINT}" \
68+
-e STORAGE_DIRECTORY="${STORAGE_DIRECTORY}" \
69+
-e AXOCLOUDCONNECTOR_DEVICE_ID="${AXOCLOUDCONNECTOR_DEVICE_ID}" \
70+
ghcr.io/axoflow/axocloudconnectors:latest
71+
```
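If you only need certificates for local testing, a throwaway CA and client pair can be generated with `openssl`. This is a sketch, not production PKI advice; the file names match the `/certs` mount above, and `KAFKA_CERTS_PATH` would then point at the `certs` directory:

```shell
mkdir -p certs

# Throwaway self-signed CA, valid for one day.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout certs/ca-key.pem -out certs/ca.pem -subj "/CN=test-ca"

# Client key and CSR, then sign the CSR with the CA.
openssl req -newkey rsa:2048 -nodes \
  -keyout certs/key.pem -out certs/client.csr -subj "/CN=kafka-client"
openssl x509 -req -in certs/client.csr -days 1 \
  -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
  -out certs/cert.pem
```

Your broker must also be configured to trust this CA for the mTLS handshake to succeed.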
## Deploy with Helm chart

1. Set the required environment variables.

### Example deployment with Axorouter in the cluster

```bash
make minikube-cluster
make docker-build
make minikube-load-image

kubectl create namespace cloudconnectors
kubectl create secret generic kafka-credentials \
  --from-literal=brokers="<YOUR-KAFKA-BROKERS>" \
  --from-literal=logs-topic="<YOUR-KAFKA-LOGS-TOPIC>" \
  --namespace cloudconnectors \
  --dry-run=client -o yaml | kubectl apply -f -

UUID_FULL=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid 2>/dev/null || python3 -c "import uuid; print(uuid.uuid4())")
AXOCLOUDCONNECTOR_DEVICE_ID=$(echo "$UUID_FULL" | cut -d'-' -f1)

helm upgrade --install --wait --namespace cloudconnectors cloudconnectors ./charts/cloudconnectors \
  --set image.repository="axocloudconnectors" \
  --set image.tag="dev" \
  --set 'env[0].name=AXOROUTER_ENDPOINT' \
  --set 'env[0].value=axorouter.axoflow-local.svc.cluster.local:4317' \
  --set 'env[1].name=AXOCLOUDCONNECTOR_DEVICE_ID' \
  --set "env[1].value=${AXOCLOUDCONNECTOR_DEVICE_ID}" \
  --set 'env[2].name=KAFKA_BROKERS' \
  --set 'env[2].valueFrom.secretKeyRef.name=kafka-credentials' \
  --set 'env[2].valueFrom.secretKeyRef.key=brokers' \
  --set 'env[3].name=KAFKA_LOGS_TOPIC' \
  --set 'env[3].valueFrom.secretKeyRef.name=kafka-credentials' \
  --set 'env[3].valueFrom.secretKeyRef.key=logs-topic'
```
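The `secretKeyRef` `--set` flags above end up in the container's `env` section; assuming the chart passes `env` through to the container spec verbatim, the rendered manifest should look roughly like this sketch:

```yaml
env:
  - name: KAFKA_BROKERS
    valueFrom:
      secretKeyRef:
        name: kafka-credentials
        key: brokers
  - name: KAFKA_LOGS_TOPIC
    valueFrom:
      secretKeyRef:
        name: kafka-credentials
        key: logs-topic
```

Sourcing the broker list and topic from a Secret keeps cluster-specific values out of the chart invocation history.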
