
Commit d0db28e

Merge pull request #2538 from pareenaverma/content_review
Kafka on Azure LP Tech review
2 parents ebd09aa + 24b5390 commit d0db28e

File tree: 4 files changed, +106 −83 lines changed


content/learning-paths/servers-and-cloud-computing/kafka-azure/_index.md

Lines changed: 3 additions & 5 deletions
@@ -7,19 +7,19 @@ cascade:
 
 minutes_to_complete: 30
 
-who_is_this_for: This Learning Path is designed for software developers looking to migrate their Kafka workloads from x86_64 to Arm-based platforms, specifically on the Microsoft Azure Cobalt 100 processors.
+who_is_this_for: This is an advanced topic designed for software developers looking to migrate their Kafka workloads from x86_64 to Arm-based platforms, specifically on the Microsoft Azure Cobalt 100 processors.
 
 learning_objectives:
 - Provision an Azure Arm64 virtual machine using Azure console, with Ubuntu Pro 24.04 LTS as the base image.
 - Deploy Kafka on the Ubuntu virtual machine.
-- Perform Kafka baseline testing and benchmarking on both x86_64 and Arm64 virtual machines.
+- Perform Kafka baseline testing and benchmarking on Arm64 virtual machines.
 
 prerequisites:
 - A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6).
 - Basic understanding of Linux command line.
 - Familiarity with the [Apache Kafka architecture](https://kafka.apache.org/) and deployment practices on Arm64 platforms.
 
-author: Jason Andrews
+author: Pareena Verma
 
 ### Tags
 skilllevels: Advanced
@@ -31,8 +31,6 @@ armips:
 
 tools_software_languages:
 - Kafka
-- kafka-producer-perf-test.sh
-- kafka-consumer-perf-test.sh
 
 operatingsystems:
 - Linux

content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md

Lines changed: 49 additions & 22 deletions
@@ -8,26 +8,30 @@ layout: learningpathall
 
 ## Run a Baseline test with Kafka
 
-After installing Kafka on your Arm64 virtual machine, you can perform a simple baseline test to validate that Kafka runs correctly and produces the expected output.
+After installing Apache Kafka 4.1.0 on your Azure Cobalt 100 Arm64 virtual machine, you can perform a baseline test to verify that Kafka runs correctly and that messages can be produced and consumed end-to-end.
+Kafka 4.1.0 introduces KRaft mode (Kafka Raft Metadata mode), which integrates the control and data planes, eliminating the need for ZooKeeper.
+This simplifies deployment, reduces latency, and provides a unified, self-managed Kafka cluster architecture.
 
-Kafka 4.1.0 uses **KRaft**, which integrates the control and data planes, eliminating the need for a separate ZooKeeper instance.
-
-We need 4 terminals to complete this test. The first will start the Kafka server, the second will create a topic, and the final two will send and receive messages, respectively.
+To perform this baseline test, you will use four terminal sessions:
+Terminal 1: Start the Kafka broker (in KRaft mode).
+Terminal 2: Create a topic.
+Terminal 3: Send messages (Producer).
+Terminal 4: Read messages (Consumer).
 
 ### Initial Setup: Configure & Format KRaft
-**KRaft** is Kafka's new metadata protocol that integrates the responsibilities of ZooKeeper directly into Kafka, simplifying deployment and improving scalability by making the brokers self-managing.
-
-First, you must configure your `server.properties` file for KRaft and format the storage directory. These steps are done only once.
+KRaft (Kafka Raft) replaces ZooKeeper by embedding metadata management directly into the Kafka broker.
+This improves scalability, reduces external dependencies, and speeds up controller failover in distributed clusters.
+Before starting Kafka in KRaft mode, configure and initialize the storage directory. These steps are required only once per broker.
 
-**1. Edit the Configuration File**: Open your `server.properties` file.
+1. Edit the Configuration File
+Open the Kafka configuration file in an editor:
 
 ```console
-nano /opt/kafka/config/server.properties
+vi /opt/kafka/config/server.properties
 ```
 
-**2. Add/Modify KRaft Properties:** Ensure the following lines are present and correctly configured for a single-node setup.
-
-This configuration file sets up a single Kafka server to act as both a **controller** (managing cluster metadata) and a broker (handling data), running in **KRaft** mode. It defines the node's unique ID and specifies the local host as the sole participant in the **controller** quorum.
+2. Add or Modify KRaft Properties
+Ensure the following configuration entries are present for a single-node KRaft setup:
 
 ```java
 process.roles=controller,broker
@@ -37,28 +41,41 @@ listeners=PLAINTEXT://:9092,CONTROLLER://:9093
 advertised.listeners=PLAINTEXT://localhost:9092
 log.dirs=/tmp/kraft-combined-logs
 ```
-**3. Format the Storage Directory:** Use the `kafka-storage.sh` tool to format the metadata directory.
+This configuration file sets up a single Kafka server to act as both a controller (managing cluster metadata) and a broker (handling data), running in KRaft mode. It defines the node's unique ID and specifies the local host as the sole participant in the controller quorum.
+
+3. Format the Storage Directory
+Format the metadata storage directory using the kafka-storage.sh tool. This initializes KRaft’s internal Raft logs with a unique cluster ID.
 
 ```console
 bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/server.properties
 ```
-You should see an output similar to:
+You should see output similar to:
 
 ```output
 Formatting metadata directory /tmp/kraft-combined-logs with metadata.version 4.1-IV1.
 ```
+This confirms that the Kafka storage directory has been successfully formatted and that the broker is ready to start in KRaft mode.
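As a side note on the format step above: the metadata directory it creates can be inspected directly. This is a minimal sketch, not part of the original Learning Path, assuming the default `log.dirs=/tmp/kraft-combined-logs` from the configuration shown in this diff:

```shell
#!/bin/sh
# Sketch: inspect the freshly formatted KRaft metadata directory.
# Assumes log.dirs=/tmp/kraft-combined-logs as configured above.
LOG_DIR="/tmp/kraft-combined-logs"

if [ -f "$LOG_DIR/meta.properties" ]; then
    # meta.properties records the cluster.id and node.id written by kafka-storage.sh
    echo "Storage formatted; metadata:"
    cat "$LOG_DIR/meta.properties"
else
    echo "No meta.properties in $LOG_DIR; re-run the format step."
fi
```

If the file is missing, the format command did not complete; repeat step 3 before starting the broker.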
 
-Now, Perform the Baseline Test
+## Perform the Baseline Test
+With Kafka 4.1.0 installed and configured in KRaft mode, you’re now ready to run a baseline test to verify that the Kafka broker starts correctly, topics can be created, and message flow works as expected.
+
+You’ll use multiple terminals for this test:
+Terminal 1: Start the Kafka broker.
+Terminal 2: Create and verify a topic.
+Terminal 3: Send messages (Producer).
+Terminal 4: Read messages (Consumer).
 
 ### Terminal 1 – Start Kafka Broker
-This command starts the Kafka broker (the main server that sends and receives messages) in KRaft mode. Keep this terminal open.
+Start the Kafka broker (the main server process responsible for managing topics and handling messages) in KRaft mode:
 
 ```console
 cd /opt/kafka
 bin/kafka-server-start.sh config/server.properties
 ```
+Keep this terminal open and running. The broker process must stay active for all subsequent commands.
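Before moving to the next terminal, the broker's listener can optionally be probed from a second shell. A hedged sketch using the stock `kafka-broker-api-versions.sh` tool that ships with Kafka, with paths matching the `/opt/kafka` install used here:

```shell
#!/bin/sh
# Sketch: check that the broker is accepting connections on its PLAINTEXT
# listener. Assumes the /opt/kafka install path used in this Learning Path.
BOOTSTRAP="localhost:9092"
KAFKA_DIR="/opt/kafka"

if "$KAFKA_DIR/bin/kafka-broker-api-versions.sh" \
        --bootstrap-server "$BOOTSTRAP" >/dev/null 2>&1; then
    echo "Broker is accepting connections on $BOOTSTRAP"
else
    echo "Broker not reachable on $BOOTSTRAP; check the Terminal 1 logs"
fi
```
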
+
 
 ### Terminal 2 – Create a Topic
-This command creates a new Kafka topic named `test-topic-kafka` (like a channel where messages will be stored and shared) with 1 partition and 1 copy (replica).
+Open a new terminal window. Create a topic named test-topic-kafka, which acts as a logical channel where producers send and consumers receive messages:
 
 ```console
 cd /opt/kafka
@@ -70,7 +87,8 @@ You should see output similar to:
 Created topic test-topic-kafka.
 ```
 
-- **Verify topic**
+**Verify Topic Creation**
+List available topics to confirm that your new topic was created successfully:
 
 ```console
 bin/kafka-topics.sh --list --bootstrap-server localhost:9092
@@ -81,24 +99,33 @@ You should see output similar to:
 __consumer_offsets
 test-topic-kafka
 ```
+Kafka is now running, and you’ve successfully created and verified a topic.
+Next, you’ll use Terminal 3 to produce messages and Terminal 4 to consume messages, completing the baseline functional test on your Arm64 environment.
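As an optional follow-up to the topic listing above, `kafka-topics.sh --describe` reports partition count, leader, and replica placement. A sketch assuming the same topic name and bootstrap address as the steps in this diff:

```shell
#!/bin/sh
# Sketch: describe the topic to see its partitions, leader, and replicas.
TOPIC="test-topic-kafka"
BOOTSTRAP="localhost:9092"
KAFKA_DIR="/opt/kafka"

"$KAFKA_DIR/bin/kafka-topics.sh" --describe \
    --topic "$TOPIC" \
    --bootstrap-server "$BOOTSTRAP" 2>/dev/null \
    || echo "Describe failed; is the broker from Terminal 1 still running?"
```

For the single-node setup in this Learning Path you would expect one partition with a replication factor of 1.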
 
 ### Terminal 3 – Console Producer (Write Message)
-This command starts the **Kafka Producer**, which lets you type and send messages into the `test-topic-kafka` topic. For example, when you type `hello from azure vm`, this message will be delivered to any Kafka consumer subscribed to that topic.
+In this step, you’ll start the Kafka Producer, which publishes messages to the topic test-topic-kafka. The producer acts as the data source, sending messages to the Kafka broker.
 
 ```console
 cd /opt/kafka
 bin/kafka-console-producer.sh --topic test-topic-kafka --bootstrap-server localhost:9092
 ```
-You should see an empty prompt where you can start typing. Type `hello from azure arm vm` and press **Enter**.
+After running the command, you’ll see an empty prompt. This means the producer is ready to send data.
+Type the following message and press Enter:
+
+```output
+hello from azure arm vm
+```
+Each line you type is sent as a message to the Kafka topic and stored on disk by the broker.
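The interactive producer step above can also be driven non-interactively, which is handy when scripting the baseline. A sketch that pipes one message into the same console producer (same topic and bootstrap address as above):

```shell
#!/bin/sh
# Sketch: publish one message non-interactively by piping stdin into the
# console producer, instead of typing at the interactive prompt.
TOPIC="test-topic-kafka"
BOOTSTRAP="localhost:9092"
KAFKA_DIR="/opt/kafka"

echo "hello from azure arm vm" | \
    "$KAFKA_DIR/bin/kafka-console-producer.sh" \
        --topic "$TOPIC" --bootstrap-server "$BOOTSTRAP" 2>/dev/null \
    || echo "Produce failed; is the broker from Terminal 1 still running?"
```
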
 
 ### Terminal 4 – Console Consumer (Read Message)
-This command starts the **Kafka Consumer**, which listens to the `test-topic-kafka` topic and displays all messages from the beginning.
+Next, open another terminal and start the Kafka Consumer, which subscribes to the same topic (test-topic-kafka) and reads messages from the beginning of the log.
 
 ```console
 cd /opt/kafka
 bin/kafka-console-consumer.sh --topic test-topic-kafka --from-beginning --bootstrap-server localhost:9092
 ```
+If Kafka is working correctly, you should immediately see your message `hello from azure arm vm` displayed:
 
-You should see your message `hello from azure arm vm` displayed in this terminal, confirming that the producer's message was successfully received.
+You’ve now completed a full end-to-end Kafka validation test on your Azure Cobalt 100 Arm64 VM, verifying producer, broker, and consumer communication.
 
 Now you can proceed to benchmarking Kafka’s performance on the Azure Cobalt 100 Arm virtual machine.
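The four-terminal flow in this file can also be condensed into one scripted smoke test. A hedged sketch, assuming the broker from Terminal 1 is already running; it uses the consumer's `--timeout-ms` flag so the command exits on its own instead of blocking:

```shell
#!/bin/sh
# Sketch: scripted end-to-end smoke test (produce one message, consume it back).
# Assumes the broker from Terminal 1 is running on localhost:9092.
TOPIC="test-topic-kafka"
BOOTSTRAP="localhost:9092"
KAFKA_DIR="/opt/kafka"
MSG="smoke-test-$$"   # unique payload per run

# Produce a single message.
echo "$MSG" | "$KAFKA_DIR/bin/kafka-console-producer.sh" \
    --topic "$TOPIC" --bootstrap-server "$BOOTSTRAP" 2>/dev/null

# Consume from the beginning, giving up after a bounded wait.
RECEIVED=$("$KAFKA_DIR/bin/kafka-console-consumer.sh" \
    --topic "$TOPIC" --from-beginning \
    --bootstrap-server "$BOOTSTRAP" \
    --timeout-ms 10000 2>/dev/null)

case "$RECEIVED" in
    *"$MSG"*) echo "Smoke test passed: message round-tripped" ;;
    *)        echo "Smoke test failed: $MSG not consumed" ;;
esac
```
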
