`content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md` (44 additions, 31 deletions)
## Run a baseline test with Kafka
After installing Apache Kafka 4.1.0 on your Azure Cobalt 100 Arm64 virtual machine, you can perform a baseline test to verify that Kafka runs correctly and that messages can be produced and consumed end-to-end.
Kafka 4.1.0 introduces KRaft mode (Kafka Raft Metadata mode), which integrates the control and data planes, eliminating the need for ZooKeeper.
This simplifies deployment, reduces latency, and provides a unified, self-managed Kafka cluster architecture.
To run this baseline test, open four terminal sessions:
**Terminal 1:** Start the Kafka broker.

**Terminal 2:** Create a topic.

**Terminal 3:** Send messages as the producer.

**Terminal 4:** Read messages as the consumer.
Each terminal has a specific role, helping you verify that Kafka works end-to-end on your Arm64 VM.
## Configure and format KRaft
KRaft (Kafka Raft) mode replaces ZooKeeper by managing metadata directly within the Kafka broker. This change improves scalability, reduces external dependencies, and speeds up controller failover in distributed clusters.
Before you start Kafka in KRaft mode, you need to configure the broker and initialize the storage directory. You only need to do this once for each broker.
## Edit the configuration file to update KRaft properties
Use an editor to open the Kafka configuration file at `/opt/kafka/config/server.properties`. Use `sudo` so that you can save the file.

Ensure the following configuration entries are present for a single-node KRaft setup:

```java
process.roles=controller,broker
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
advertised.listeners=PLAINTEXT://localhost:9092
log.dirs=/tmp/kraft-combined-logs
```

This configuration file sets up a single Kafka server to act as both a controller (managing cluster metadata) and a broker (handling data), running in KRaft mode. It defines the node's unique ID and specifies the local host as the sole participant in the controller quorum.
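Before formatting the storage directory, it can help to confirm that every required key actually made it into the file. The helper below is a quick sketch, not a Kafka tool; the install path in the usage comment is the one assumed by this guide.

```shell
# Hypothetical helper (not part of Kafka): confirm the single-node KRaft
# keys are all present in a server.properties file before formatting.
check_kraft_config() {
  conf="$1"
  missing=0
  for key in process.roles node.id controller.quorum.voters \
             listeners advertised.listeners log.dirs; do
    grep -q "^${key}=" "$conf" || { echo "MISSING: $key"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "All KRaft keys present in $conf"
}

# Example (adjust the path if your install differs):
# check_kraft_config /opt/kafka/config/server.properties
```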
## Format the storage directory
Format the metadata storage directory using the `kafka-storage.sh` tool. This initializes KRaft’s internal Raft logs with a unique cluster ID.

```console
bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/server.properties
```

You should see output similar to:

```output
Formatting metadata directory /tmp/kraft-combined-logs with metadata.version 4.1-IV1.
```

This confirms that the Kafka storage directory has been successfully formatted and that the broker is ready to start in KRaft mode.
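The cluster ID that `kafka-storage.sh random-uuid` generates is a 22-character, URL-safe base64 encoding of 16 random bytes. As an illustration of that shape only (use the Kafka tool for real clusters), an ID of the same form can be produced with standard utilities:

```shell
# Sketch: build a 22-char base64url ID from 16 random bytes, mimicking
# the shape of `kafka-storage.sh random-uuid` output (padding dropped).
cluster_id=$(head -c16 /dev/urandom | base64 | tr '+/' '-_' | cut -c1-22)
echo "$cluster_id"
```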
## Perform the baseline test
With Kafka 4.1.0 installed and configured in KRaft mode, you’re now ready to run a baseline test to verify that the Kafka broker starts correctly, topics can be created, and message flow works as expected.
Start the Kafka broker in Terminal 1 and keep that terminal open and running. The broker process must stay active for all subsequent commands.
## Terminal 2 - create a topic
Open a new terminal window. Create a topic named `test-topic-kafka`, which acts as a logical channel where producers send and consumers receive messages:
Kafka is now running, and you’ve successfully created and verified a topic.
Next, you’ll use Terminal 3 to produce messages and Terminal 4 to consume messages, completing the baseline functional test on your Arm64 environment.
## Terminal 3 - console producer (write message)
In this step, you’ll start the Kafka Producer, which publishes messages to the topic `test-topic-kafka`. The producer acts as the data source, sending messages to the Kafka broker.
After running the command, you'll see an empty prompt. This means the producer is ready to send data. Type the following message and press Enter:
```output
hello from azure arm vm
```
Each line you type is sent as a message to the Kafka topic and stored on disk by the broker.
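Conceptually, the broker appends each message to a partition log and assigns it a sequential offset, and a consumer reading "from the beginning" replays that log in order. The toy sketch below illustrates only that append-only model; it looks nothing like Kafka's real storage format.

```shell
# Toy append-only "topic" log: produce appends a line and reports its
# offset; consuming from the beginning replays every message in order.
LOG=/tmp/toy-topic.log
: > "$LOG"                       # start with an empty log

produce() {
  offset=$(wc -l < "$LOG")       # next offset = messages stored so far
  printf '%s\n' "$1" >> "$LOG"
  echo "offset=$offset"
}

consume_from_beginning() {
  nl -v0 -w1 -s': ' "$LOG"       # print each message with its offset
}

produce "hello from azure arm vm"
consume_from_beginning
```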
## Terminal 4 - console consumer (read message)
Next, open another terminal and start the Kafka Consumer, which subscribes to the same topic (test-topic-kafka) and reads messages from the beginning of the log:
If Kafka is working correctly, you should immediately see your message `hello from azure arm vm` displayed.

You've now completed a full end-to-end Kafka validation test on your Azure Cobalt 100 Arm64 VM, verifying producer, broker, and consumer communication.
Now you can proceed to benchmarking Kafka's performance on the Azure Cobalt 100 Arm virtual machine.
`content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md` (3 additions, 3 deletions)
Apache Kafka includes official performance testing utilities that allow you to measure throughput, latency, and end-to-end efficiency of your messaging system. These tools, `kafka-producer-perf-test.sh` and `kafka-consumer-perf-test.sh`, are bundled with Kafka’s standard installation and are designed for realistic performance evaluation of producers and consumers.
## Steps for Kafka benchmarking
Before running the benchmarks, make sure your Kafka broker is already active in a separate terminal (as configured in the previous section).
Now open two new terminal sessions: one for running the producer benchmark, and the other for the consumer benchmark.
### Terminal 1 - Producer benchmark
The Producer Performance Test measures how quickly Kafka can publish messages to a topic and reports key performance metrics such as throughput, average latency, and percentile latencies.
| Metric | Description |
|---|---|
|**Max latency**| The longest single message send time recorded. |
|**50th / 95th / 99th percentiles**| Distribution of message send times. For example, 95% of messages completed under 1,184 ms in the sample output. |
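The producer perf test packs these metrics into a single comma-separated summary line. The sketch below pulls out two headline numbers; the exact line format shown in the comment is an assumption based on typical output, not a guaranteed contract.

```shell
# Sketch: extract records/sec and avg latency from a producer perf-test
# summary line of the assumed form:
#   "N records sent, R records/sec (M MB/sec), A ms avg latency, ..."
parse_perf_summary() {
  printf '%s\n' "$1" | awk -F', ' '{
    split($2, thr, " "); print "records/sec: " thr[1]
    split($3, lat, " "); print "avg latency ms: " lat[1]
  }'
}

parse_perf_summary "100000 records sent, 21551.7 records/sec (20.55 MB/sec), 1134.9 ms avg latency, 1184.0 ms max latency"
```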
### Terminal 2 - Consumer benchmark
The Consumer Performance Test measures how efficiently Kafka can read and process messages from a topic. It reports metrics such as total messages consumed, data throughput, and fetch rates, helping validate overall consumer-side performance on your Azure Cobalt 100 (Arm64) VM.
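As a back-of-the-envelope cross-check on the reported data throughput, MB/sec is just total bytes moved divided by elapsed time. A sketch with made-up input values:

```shell
# Sketch: expected consumer throughput in MiB/sec given message count,
# message size (bytes), and elapsed seconds. All inputs are examples.
throughput_mb() {
  awk -v n="$1" -v size="$2" -v secs="$3" \
      'BEGIN { printf "%.2f\n", (n * size) / (1024 * 1024) / secs }'
}

throughput_mb 1000000 1024 30   # 1M messages of 1 KiB consumed in 30 s
```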