Commit dc10c85

Merge pull request #2559 from jasonrandrews/review
Review Kafka Learning Path
2 parents 4fb156f + 9b31097 commit dc10c85

File tree

2 files changed: +47 −34 lines changed

content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md

Lines changed: 44 additions & 31 deletions
@@ -6,10 +6,11 @@ weight: 5
layout: learningpathall
---

## Run a baseline test with Kafka

After installing Apache Kafka 4.1.0 on your Azure Cobalt 100 Arm64 virtual machine, you can perform a baseline test to verify that Kafka runs correctly and that messages can be produced and consumed end-to-end.

Kafka 4.1.0 runs in KRaft mode (Kafka Raft metadata mode), which integrates the control and data planes, eliminating the need for ZooKeeper. This simplifies deployment, reduces latency, and provides a unified, self-managed Kafka cluster architecture.

To run this baseline test, open four terminal sessions:
@@ -20,47 +21,48 @@ To run this baseline test, open four terminal sessions:
- **Terminal 4:** Read messages as the consumer.

Each terminal has a specific role, helping you verify that Kafka works end-to-end on your Arm64 VM.

## Configure and format KRaft

KRaft (Kafka Raft) mode replaces ZooKeeper by managing metadata directly within the Kafka broker. This change improves scalability, reduces external dependencies, and speeds up controller failover in distributed clusters.

Before you start Kafka in KRaft mode, you need to configure the broker and initialize the storage directory. You only need to do this once for each broker.

## Edit the configuration file to update KRaft properties

Use an editor to open the Kafka configuration file at `/opt/kafka/config/server.properties`. Use `sudo` so that you can save the file.

Ensure the following configuration entries are present for a single-node KRaft setup:

```java
process.roles=controller,broker
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
advertised.listeners=PLAINTEXT://localhost:9092
log.dirs=/tmp/kraft-combined-logs
```

This configuration file sets up a single Kafka server to act as both a controller (managing cluster metadata) and a broker (handling data), running in KRaft mode. It defines the node's unique ID and specifies the local host as the sole participant in the controller quorum.
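To see how the voter entry is structured, here is a small Python sketch (illustrative only, not Kafka code) that parses the `id@host:port` format used by `controller.quorum.voters`:

```python
def parse_quorum_voters(voters: str):
    """Parse a controller.quorum.voters string such as
    "1@localhost:9093,2@host2:9093" into (node_id, host, port) tuples."""
    parsed = []
    for entry in voters.split(","):
        node_id, _, endpoint = entry.partition("@")
        host, _, port = endpoint.rpartition(":")
        parsed.append((int(node_id), host, int(port)))
    return parsed

# A single-node quorum has exactly one voter: node 1 on localhost:9093
print(parse_quorum_voters("1@localhost:9093"))
```

In a multi-node cluster you would list one `id@host:port` entry per controller, and each broker's `node.id` must match its entry in the voter list.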
## Format the storage directory

Format the metadata storage directory using the `kafka-storage.sh` tool. This initializes KRaft's internal Raft logs with a unique cluster ID.

```console
bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/server.properties
```

You should see output similar to:

```output
Formatting metadata directory /tmp/kraft-combined-logs with metadata.version 4.1-IV1.
```

This confirms that the Kafka storage directory has been successfully formatted and that the broker is ready to start in KRaft mode.
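The `random-uuid` subcommand prints a short, URL-safe base64 encoding of a random UUID. As an illustration of the same shape of identifier (an assumption about the format, not Kafka's actual implementation), you can generate one in Python:

```python
import base64
import uuid

def random_cluster_id() -> str:
    # Encode 16 random UUID bytes as URL-safe base64 and drop the
    # trailing "==" padding, yielding a compact 22-character ID.
    raw = uuid.uuid4().bytes
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

print(random_cluster_id())
```

Whatever ID you pass with `-t` is written into the formatted metadata directory, and all brokers in a cluster must be formatted with the same cluster ID.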
## Perform the baseline test

With Kafka 4.1.0 installed and configured in KRaft mode, you're now ready to run a baseline test to verify that the Kafka broker starts correctly, topics can be created, and message flow works as expected.

You'll use multiple terminals for this test:

@@ -70,28 +72,33 @@ Terminal 3: send messages (Producer)
Terminal 4: read messages (Consumer)

## Terminal 1 - start Kafka broker

Start the Kafka broker (the main server process responsible for managing topics and handling messages) in KRaft mode:

```console
cd /opt/kafka
bin/kafka-server-start.sh config/server.properties
```

Keep this terminal open and running. The broker process must stay active for all subsequent commands.
## Terminal 2 - create a topic

Open a new terminal window. Create a topic named `test-topic-kafka`, which acts as a logical channel where producers send and consumers receive messages:

```console
cd /opt/kafka
bin/kafka-topics.sh --create --topic test-topic-kafka --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
```

You should see output similar to:

```output
Created topic test-topic-kafka.
```
## Verify topic creation

List available topics to confirm that your new topic was created successfully. Run the following command:

@@ -110,39 +117,45 @@ If you see `test-topic-kafka` in the list, your topic was created and is ready f

```console
bin/kafka-topics.sh --list --bootstrap-server localhost:9092
```

You should see output similar to:

```output
__consumer_offsets
test-topic-kafka
```

Kafka is now running, and you've successfully created and verified a topic.
Next, you'll use Terminal 3 to produce messages and Terminal 4 to consume messages, completing the baseline functional test on your Arm64 environment.
## Terminal 3 - console producer (write message)

In this step, you'll start the Kafka producer, which publishes messages to the topic `test-topic-kafka`. The producer acts as the data source, sending messages to the Kafka broker.

```console
cd /opt/kafka
bin/kafka-console-producer.sh --topic test-topic-kafka --bootstrap-server localhost:9092
```

After running the command, you'll see an empty prompt. This means the producer is ready to send data. Type the following message and press Enter:

```output
hello from azure arm vm
```

Each line you type is sent as a message to the Kafka topic and stored on disk by the broker.
## Terminal 4 - console consumer (read message)

Next, open another terminal and start the Kafka consumer, which subscribes to the same topic (`test-topic-kafka`) and reads messages from the beginning of the log:

```console
cd /opt/kafka
bin/kafka-console-consumer.sh --topic test-topic-kafka --from-beginning --bootstrap-server localhost:9092
```

If Kafka is working correctly, you should immediately see your message `hello from azure arm vm` displayed.

You've now completed a full end-to-end Kafka validation test on your Azure Cobalt 100 Arm64 VM, verifying producer, broker, and consumer communication.
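Conceptually, the producer appends each line to the topic's log, and a consumer started with `--from-beginning` replays that log from offset 0. A minimal in-memory sketch of this append/replay behavior (a toy model for intuition, not Kafka code):

```python
class TopicLog:
    """Toy model of a single-partition Kafka topic log."""

    def __init__(self):
        self.messages = []

    def produce(self, message: str) -> int:
        # Append the message and return its offset, as a broker does.
        self.messages.append(message)
        return len(self.messages) - 1

    def consume_from_beginning(self):
        # Replay every stored message starting at offset 0.
        return list(self.messages)

log = TopicLog()
offset = log.produce("hello from azure arm vm")
print(offset)                        # the first message lands at offset 0
print(log.consume_from_beginning())  # ['hello from azure arm vm']
```

A real broker persists these offsets on disk, which is why the consumer can read the message even though it was started after the producer sent it.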

Now you can proceed to benchmarking Kafka's performance on the Azure Cobalt 100 Arm virtual machine.

content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md

Lines changed: 3 additions & 3 deletions
@@ -10,12 +10,12 @@ layout: learningpathall

Apache Kafka includes official performance testing utilities that allow you to measure throughput, latency, and end-to-end efficiency of your messaging system. These tools, `kafka-producer-perf-test.sh` and `kafka-consumer-perf-test.sh`, are bundled with Kafka's standard installation and are designed for realistic performance evaluation of producers and consumers.

## Steps for Kafka benchmarking

Before running the benchmarks, make sure your Kafka broker is already active in a separate terminal (as configured in the previous section).
Now open two new terminal sessions: one for running the producer benchmark, and the other for the consumer benchmark.

### Terminal 1 - Producer benchmark

The producer performance test measures how quickly Kafka can publish messages to a topic and reports key performance metrics such as throughput, average latency, and percentile latencies.

@@ -44,7 +44,7 @@ You should see output similar to:

| **Max latency** | The longest single message send time recorded. |
| **50th / 95th / 99th percentiles** | Distribution of message send times. For example, 95% of messages completed under 1,184 ms in the sample output. |
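The percentile figures in the table summarize the distribution of individual send latencies. As a sketch of how such percentiles can be computed from raw latency samples (using the nearest-rank method, an illustrative choice that may differ from the tool's exact calculation):

```python
import math

def percentile(samples, pct):
    # Nearest-rank percentile: sort the samples and pick the value at
    # rank ceil(pct/100 * n), using 1-based indexing.
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical per-message send latencies in milliseconds
latencies_ms = [12, 25, 30, 40, 45, 50, 60, 75, 900, 1184]
print(percentile(latencies_ms, 50))  # median latency
print(percentile(latencies_ms, 95))  # 95% of sends complete within this time
```

Percentiles matter more than the average here: a handful of slow sends (like the 900 ms and 1184 ms outliers above) can leave the mean looking healthy while tail latency is poor.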

47-
### Terminal B - Consumer benchmark
47+
### Terminal 2 - Consumer benchmark
4848

4949
The Consumer Performance Test measures how efficiently Kafka can read and process messages from a topic. It reports metrics such as total messages consumed, data throughput, and fetch rates, helping validate overall consumer-side performance on your Azure Cobalt 100 (Arm64) VM.
5050
