@@ -7,19 +7,19 @@ cascade:

minutes_to_complete: 30

who_is_this_for: This Learning Path is designed for software developers looking to migrate their Kafka workloads from x86_64 to Arm-based platforms, specifically on the Microsoft Azure Cobalt 100 processors.
who_is_this_for: This is an advanced topic designed for software developers looking to migrate their Kafka workloads from x86_64 to Arm-based platforms, specifically on the Microsoft Azure Cobalt 100 processors.

learning_objectives:
- Provision an Azure Arm64 virtual machine using Azure console, with Ubuntu Pro 24.04 LTS as the base image.
- Deploy Kafka on the Ubuntu virtual machine.
- Perform Kafka baseline testing and benchmarking on both x86_64 and Arm64 virtual machines.
- Perform Kafka baseline testing and benchmarking on Arm64 virtual machines.

prerequisites:
- A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6).
- Basic understanding of Linux command line.
- Familiarity with the [Apache Kafka architecture](https://kafka.apache.org/) and deployment practices on Arm64 platforms.

author: Jason Andrews
author: Pareena Verma

### Tags
skilllevels: Advanced
@@ -31,8 +31,6 @@ armips:

tools_software_languages:
- Kafka
- kafka-producer-perf-test.sh
- kafka-consumer-perf-test.sh

operatingsystems:
- Linux
@@ -8,26 +8,30 @@ layout: learningpathall

## Run a Baseline test with Kafka

After installing Kafka on your Arm64 virtual machine, you can perform a simple baseline test to validate that Kafka runs correctly and produces the expected output.
After installing Apache Kafka 4.1.0 on your Azure Cobalt 100 Arm64 virtual machine, you can perform a baseline test to verify that Kafka runs correctly and that messages can be produced and consumed end-to-end.
Kafka 4.1.0 runs in KRaft mode (Kafka Raft metadata mode), which integrates the control and data planes, eliminating the need for ZooKeeper.
This simplifies deployment, reduces latency, and provides a unified, self-managed Kafka cluster architecture.

Kafka 4.1.0 uses **KRaft**, which integrates the control and data planes, eliminating the need for a separate ZooKeeper instance.

We need 4 terminals to complete this test. The first will start the Kafka server, the second will create a topic, and the final two will send and receive messages, respectively.
To perform this baseline test, you will use four terminal sessions:
- Terminal 1: Start the Kafka broker (in KRaft mode).
- Terminal 2: Create a topic.
- Terminal 3: Send messages (Producer).
- Terminal 4: Read messages (Consumer).

### Initial Setup: Configure & Format KRaft
**KRaft** is Kafka's new metadata protocol that integrates the responsibilities of ZooKeeper directly into Kafka, simplifying deployment and improving scalability by making the brokers self-managing.

First, you must configure your `server.properties` file for KRaft and format the storage directory. These steps are done only once.
KRaft (Kafka Raft) replaces ZooKeeper by embedding metadata management directly into the Kafka broker.
This improves scalability, reduces external dependencies, and speeds up controller failover in distributed clusters.
Before starting Kafka in KRaft mode, configure and initialize the storage directory. These steps are required only once per broker.

**1. Edit the Configuration File**: Open your `server.properties` file.
1. Edit the Configuration File
Open the Kafka configuration file in an editor:

```console
nano /opt/kafka/config/server.properties
vi /opt/kafka/config/server.properties
```

**2. Add/Modify KRaft Properties:** Ensure the following lines are present and correctly configured for a single-node setup.

This configuration file sets up a single Kafka server to act as both a **controller** (managing cluster metadata) and a broker (handling data), running in **KRaft** mode. It defines the node's unique ID and specifies the local host as the sole participant in the **controller** quorum.
2. Add or Modify KRaft Properties
Ensure the following configuration entries are present for a single-node KRaft setup:

```properties
process.roles=controller,broker
@@ -37,28 +41,41 @@ listeners=PLAINTEXT://:9092,CONTROLLER://:9093
advertised.listeners=PLAINTEXT://localhost:9092
log.dirs=/tmp/kraft-combined-logs
```
**3. Format the Storage Directory:** Use the `kafka-storage.sh` tool to format the metadata directory.
This configuration file sets up a single Kafka server to act as both a controller (managing cluster metadata) and a broker (handling data), running in KRaft mode. It defines the node's unique ID and specifies the local host as the sole participant in the controller quorum.
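The single-node settings above can be sanity-checked programmatically before you format storage. The following is an illustrative Python sketch; the `check_kraft_config` helper and the embedded sample config are assumptions for demonstration, not part of Kafka's tooling:

```python
# Sanity-check the single-node KRaft settings shown above.
# Illustrative sketch only; not part of the official Kafka tooling.

def parse_properties(text: str) -> dict:
    """Parse simple key=value lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def check_kraft_config(text: str) -> list:
    """Return a list of problems found in a KRaft server.properties."""
    props = parse_properties(text)
    problems = []
    roles = set(props.get("process.roles", "").split(","))
    if not {"controller", "broker"} <= roles:
        problems.append("process.roles must include controller and broker")
    for key in ("listeners", "controller.quorum.voters", "log.dirs"):
        if key not in props:
            problems.append(f"missing required key: {key}")
    return problems

sample = """\
process.roles=controller,broker
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
advertised.listeners=PLAINTEXT://localhost:9092
log.dirs=/tmp/kraft-combined-logs
"""
print(check_kraft_config(sample))  # [] means the minimal checks pass
```

An empty list means the minimal checks pass; a real deployment has many more relevant settings than this sketch covers.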

3. Format the Storage Directory
Format the metadata storage directory using the `kafka-storage.sh` tool. This initializes KRaft’s internal Raft logs with a unique cluster ID.

```console
bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/server.properties
```
You should see an output similar to:
You should see output similar to:

```output
Formatting metadata directory /tmp/kraft-combined-logs with metadata.version 4.1-IV1.
```
This confirms that the Kafka storage directory has been successfully formatted and that the broker is ready to start in KRaft mode.
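The cluster ID passed with `-t` is a random 16-byte UUID rendered as a 22-character, URL-safe base64 string with the padding stripped. As an illustration of that format (the exact encoding is an assumption based on Kafka's `Uuid` type, not something you need to reproduce), an equivalent ID can be generated in Python:

```python
# Generate a Kafka-style cluster ID: 16 random bytes encoded as
# URL-safe base64 with the trailing '==' padding removed (22 chars).
# Illustrative sketch of what `kafka-storage.sh random-uuid` emits;
# the exact encoding is an assumption, not taken from the Kafka docs.
import base64
import uuid

def random_cluster_id() -> str:
    raw = uuid.uuid4().bytes                         # 16 random bytes
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

cluster_id = random_cluster_id()
print(cluster_id)        # e.g. 'MkU3OEVBNTcwNTJENDM2Qg'
print(len(cluster_id))   # 22
```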

Now, Perform the Baseline Test
## Perform the Baseline Test
With Kafka 4.1.0 installed and configured in KRaft mode, you’re now ready to run a baseline test to verify that the Kafka broker starts correctly, topics can be created, and message flow works as expected.

You’ll use multiple terminals for this test:
- Terminal 1: Start the Kafka broker.
- Terminal 2: Create and verify a topic.
- Terminal 3: Send messages (Producer).
- Terminal 4: Read messages (Consumer).

### Terminal 1 – Start Kafka Broker
This command starts the Kafka broker (the main server that sends and receives messages) in KRaft mode. Keep this terminal open.
Start the Kafka broker (the main server process responsible for managing topics and handling messages) in KRaft mode:

```console
cd /opt/kafka
bin/kafka-server-start.sh config/server.properties
```
Keep this terminal open and running. The broker process must stay active for all subsequent commands.
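Before moving on, you can confirm that the broker is accepting connections on its listener port. A minimal sketch, assuming the default `PLAINTEXT` listener on `localhost:9092` configured earlier:

```python
# Check whether a TCP listener is accepting connections.
# Minimal sketch assuming the PLAINTEXT listener on localhost:9092.
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if is_port_open("localhost", 9092):
        print("Kafka broker is accepting connections on port 9092")
    else:
        print("Port 9092 not reachable; check the broker terminal for errors")
```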

### Terminal 2 – Create a Topic
This command creates a new Kafka topic named `test-topic-kafka` (like a channel where messages will be stored and shared) with 1 partition and 1 copy (replica).
Open a new terminal window. Create a topic named `test-topic-kafka`, which acts as a logical channel where producers send and consumers receive messages:

```console
cd /opt/kafka
@@ -70,7 +87,8 @@ You should see output similar to:
Created topic test-topic-kafka.
```

- **Verify topic**
**Verify Topic Creation**
List available topics to confirm that your new topic was created successfully:

```console
bin/kafka-topics.sh --list --bootstrap-server localhost:9092
@@ -81,24 +99,33 @@ You should see output similar to:
__consumer_offsets
test-topic-kafka
```
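If you are scripting this check, the `--list` output is one topic name per line, so confirming the topic exists is a simple membership test. An illustrative sketch (the `topic_exists` helper and sample output are assumptions for demonstration):

```python
# Check that a topic name appears in `kafka-topics.sh --list` output.
# Illustrative parsing sketch; the sample output below is assumed.
def topic_exists(list_output: str, topic: str) -> bool:
    """The --list output is one topic name per line."""
    return topic in {line.strip() for line in list_output.splitlines()}

sample = """__consumer_offsets
test-topic-kafka
"""
print(topic_exists(sample, "test-topic-kafka"))  # True
```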
Kafka is now running, and you’ve successfully created and verified a topic.
Next, you’ll use Terminal 3 to produce messages and Terminal 4 to consume messages, completing the baseline functional test on your Arm64 environment.

### Terminal 3 – Console Producer (Write Message)
This command starts the **Kafka Producer**, which lets you type and send messages into the `test-topic-kafka` topic. For example, when you type `hello from azure vm`, this message will be delivered to any Kafka consumer subscribed to that topic.
In this step, you’ll start the Kafka Producer, which publishes messages to the topic `test-topic-kafka`. The producer acts as the data source, sending messages to the Kafka broker.

```console
cd /opt/kafka
bin/kafka-console-producer.sh --topic test-topic-kafka --bootstrap-server localhost:9092
```
You should see an empty prompt where you can start typing. Type `hello from azure arm vm` and press **Enter**.
After running the command, you’ll see an empty prompt. This means the producer is ready to send data.
Type the following message and press Enter:

```console
hello from azure arm vm
```
Each line you type is sent as a message to the Kafka topic and stored on disk by the broker.

### Terminal 4 – Console Consumer (Read Message)
This command starts the **Kafka Consumer**, which listens to the `test-topic-kafka` topic and displays all messages from the beginning.
Next, open another terminal and start the Kafka Consumer, which subscribes to the same topic (`test-topic-kafka`) and reads messages from the beginning of the log.

```console
cd /opt/kafka
bin/kafka-console-consumer.sh --topic test-topic-kafka --from-beginning --bootstrap-server localhost:9092
```
If Kafka is working correctly, you should immediately see your message `hello from azure arm vm` displayed in this terminal.

You should see your message `hello from azure arm vm` displayed in this terminal, confirming that the producer's message was successfully received.
You’ve now completed a full end-to-end Kafka validation test on your Azure Cobalt 100 Arm64 VM, verifying producer, broker, and consumer communication.
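Conceptually, the `--from-beginning` flag you used means the consumer starts reading at offset 0 of the partition's append-only log rather than at the latest offset. A toy in-memory model of that behavior (purely illustrative; real Kafka persists the log on the broker's disk):

```python
# Toy model of a Kafka partition log to illustrate --from-beginning.
# Purely illustrative; real Kafka stores the log on disk in the broker.

class PartitionLog:
    def __init__(self):
        self.records = []          # append-only message log

    def produce(self, value: str) -> int:
        """Append a record and return its offset."""
        self.records.append(value)
        return len(self.records) - 1

    def consume(self, from_beginning: bool):
        """Yield records from offset 0, or nothing if starting at the end."""
        start = 0 if from_beginning else len(self.records)
        for offset in range(start, len(self.records)):
            yield offset, self.records[offset]

log = PartitionLog()
log.produce("hello from azure arm vm")
print(list(log.consume(from_beginning=True)))
# [(0, 'hello from azure arm vm')]
```

Without `--from-beginning`, a new consumer group would start at the end of the log and only see messages produced after it connected.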

Now you can proceed to benchmarking Kafka’s performance on the Azure Cobalt 100 Arm virtual machine.