`content/learning-paths/servers-and-cloud-computing/kafka-azure/_index.md` (3 additions, 5 deletions)
@@ -7,19 +7,19 @@ cascade:
minutes_to_complete: 30
- who_is_this_for: This Learning Path is designed for software developers looking to migrate their Kafka workloads from x86_64 to Arm-based platforms, specifically on the Microsoft Azure Cobalt 100 processors.
+ who_is_this_for: This is an advanced topic designed for software developers looking to migrate their Kafka workloads from x86_64 to Arm-based platforms, specifically on the Microsoft Azure Cobalt 100 processors.
learning_objectives:
- Provision an Azure Arm64 virtual machine using Azure console, with Ubuntu Pro 24.04 LTS as the base image.
- Deploy Kafka on the Ubuntu virtual machine.
- - Perform Kafka baseline testing and benchmarking on both x86_64 and Arm64 virtual machines.
+ - Perform Kafka baseline testing and benchmarking on Arm64 virtual machines.
prerequisites:
- A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6).
- Basic understanding of Linux command line.
- Familiarity with the [Apache Kafka architecture](https://kafka.apache.org/) and deployment practices on Arm64 platforms.
`content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md` (49 additions, 22 deletions)
@@ -8,26 +8,30 @@ layout: learningpathall
## Run a Baseline test with Kafka
- After installing Kafka on your Arm64 virtual machine, you can perform a simple baseline test to validate that Kafka runs correctly and produces the expected output.
+ After installing Apache Kafka 4.1.0 on your Azure Cobalt 100 Arm64 virtual machine, you can perform a baseline test to verify that Kafka runs correctly and that messages can be produced and consumed end-to-end.
+ Kafka 4.1.0 introduces KRaft mode (Kafka Raft Metadata mode), which integrates the control and data planes, eliminating the need for ZooKeeper.
+ This simplifies deployment, reduces latency, and provides a unified, self-managed Kafka cluster architecture.
- Kafka 4.1.0 uses **KRaft**, which integrates the control and data planes, eliminating the need for a separate ZooKeeper instance.
- We need 4 terminals to complete this test. The first will start the Kafka server, the second will create a topic, and the final two will send and receive messages, respectively.
+ To perform this baseline test, you will use four terminal sessions:
+ Terminal 1: Start the Kafka broker (in KRaft mode).
+ Terminal 2: Create a topic.
+ Terminal 3: Send messages (Producer).
+ Terminal 4: Read messages (Consumer).
### Initial Setup: Configure & Format KRaft
- **KRaft** is Kafka's new metadata protocol that integrates the responsibilities of ZooKeeper directly into Kafka, simplifying deployment and improving scalability by making the brokers self-managing.
- First, you must configure your `server.properties` file for KRaft and format the storage directory. These steps are done only once.
+ KRaft (Kafka Raft) replaces ZooKeeper by embedding metadata management directly into the Kafka broker.
+ This improves scalability, reduces external dependencies, and speeds up controller failover in distributed clusters.
+ Before starting Kafka in KRaft mode, configure and initialize the storage directory. These steps are required only once per broker.
- **1. Edit the Configuration File**: Open your `server.properties` file.
+ 1. Edit the Configuration File
+ Open the Kafka configuration file in an editor:
```console
- nano /opt/kafka/config/server.properties
+ vi /opt/kafka/config/server.properties
```
- **2. Add/Modify KRaft Properties:** Ensure the following lines are present and correctly configured for a single-node setup.
- This configuration file sets up a single Kafka server to act as both a **controller** (managing cluster metadata) and a broker (handling data), running in **KRaft** mode. It defines the node's unique ID and specifies the local host as the sole participant in the **controller** quorum.
+ 2. Add or Modify KRaft Properties
+ Ensure the following configuration entries are present for a single-node KRaft setup:
- **3. Format the Storage Directory:** Use the `kafka-storage.sh` tool to format the metadata directory.
+ This configuration file sets up a single Kafka server to act as both a controller (managing cluster metadata) and a broker (handling data), running in KRaft mode. It defines the node's unique ID and specifies the local host as the sole participant in the controller quorum.
+ 3. Format the Storage Directory
+ Format the metadata storage directory using the `kafka-storage.sh` tool. This initializes KRaft's internal Raft logs with a unique cluster ID.
```console
bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/server.properties
```
- You should see an output similar to:
+ You should see output similar to:
```output
Formatting metadata directory /tmp/kraft-combined-logs with metadata.version 4.1-IV1.
```
+ This confirms that the Kafka storage directory has been successfully formatted and that the broker is ready to start in KRaft mode.
- Now, Perform the Baseline Test
+ ## Perform the Baseline Test
+ With Kafka 4.1.0 installed and configured in KRaft mode, you’re now ready to run a baseline test to verify that the Kafka broker starts correctly, topics can be created, and message flow works as expected.
+ You’ll use multiple terminals for this test:
+ Terminal 1: Start the Kafka broker.
+ Terminal 2: Create and verify a topic.
+ Terminal 3: Send messages (Producer).
+ Terminal 4: Read messages (Consumer).
### Terminal 1 – Start Kafka Broker
- This command starts the Kafka broker (the main server that sends and receives messages) in KRaft mode. Keep this terminal open.
+ Start the Kafka broker (the main server process responsible for managing topics and handling messages) in KRaft mode:
+ Keep this terminal open and running. The broker process must stay active for all subsequent commands.
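The start command itself is elided as unchanged context in this diff. With Kafka installed under `/opt/kafka`, it is typically:

```console
cd /opt/kafka
bin/kafka-server-start.sh config/server.properties
```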
### Terminal 2 – Create a Topic
- This command creates a new Kafka topic named `test-topic-kafka` (like a channel where messages will be stored and shared) with 1 partition and 1 copy (replica).
+ Open a new terminal window. Create a topic named `test-topic-kafka`, which acts as a logical channel where producers send and consumers receive messages:
```console
cd /opt/kafka
@@ -70,7 +87,8 @@ You should see output similar to:
Created topic test-topic-kafka.
```
- **Verify topic**
+ **Verify Topic Creation**
+ List available topics to confirm that your new topic was created successfully:
@@ -81,24 +99,33 @@ You should see output similar to:
__consumer_offsets
test-topic-kafka
```
+ Kafka is now running, and you’ve successfully created and verified a topic.
+ Next, you’ll use Terminal 3 to produce messages and Terminal 4 to consume messages, completing the baseline functional test on your Arm64 environment.
### Terminal 3 – Console Producer (Write Message)
- This command starts the **Kafka Producer**, which lets you type and send messages into the `test-topic-kafka` topic. For example, when you type `hello from azure vm`, this message will be delivered to any Kafka consumer subscribed to that topic.
+ In this step, you’ll start the Kafka Producer, which publishes messages to the topic `test-topic-kafka`. The producer acts as the data source, sending messages to the Kafka broker.
- You should see an empty prompt where you can start typing. Type `hello from azure arm vm` and press **Enter**.
+ After running the command, you’ll see an empty prompt. This means the producer is ready to send data.
+ Type the following message and press Enter:
```output
hello from azure arm vm
```
+ Each line you type is sent as a message to the Kafka topic and stored on disk by the broker.
### Terminal 4 – Console Consumer (Read Message)
- This command starts the **Kafka Consumer**, which listens to the `test-topic-kafka` topic and displays all messages from the beginning.
+ Next, open another terminal and start the Kafka Consumer, which subscribes to the same topic (`test-topic-kafka`) and reads messages from the beginning of the log.
+ If Kafka is working correctly, you should immediately see your message `hello from azure arm vm` displayed:
- You should see your message `hello from azure arm vm` displayed in this terminal, confirming that the producer's message was successfully received.
+ You’ve now completed a full end-to-end Kafka validation test on your Azure Cobalt 100 Arm64 VM, verifying producer, broker, and consumer communication.
Now you can proceed to benchmarking Kafka’s performance on the Azure Cobalt 100 Arm virtual machine.