diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/_index.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/_index.md index 99685a606..0952b8dda 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/_index.md @@ -1,23 +1,19 @@ --- -title: Deploy Kafka on the Microsoft Azure Cobalt 100 processors - -draft: true -cascade: - draft: true +title: Deploy Apache Kafka on Arm-based Microsoft Azure Cobalt 100 virtual machines minutes_to_complete: 30 -who_is_this_for: This is an advanced topic designed for software developers looking to migrate their Kafka workloads from x86_64 to Arm-based platforms, specifically on the Microsoft Azure Cobalt 100 processors. +who_is_this_for: This is an advanced topic for developers looking to migrate their Apache Kafka workloads from x86_64 to Arm-based platforms, specifically on Microsoft Azure Cobalt 100 (arm64) virtual machines. learning_objectives: - - Provision an Azure Arm64 virtual machine using Azure console, with Ubuntu Pro 24.04 LTS as the base image. - - Deploy Kafka on the Ubuntu virtual machine. - - Perform Kafka baseline testing and benchmarking on Arm64 virtual machines. + - Provision an Azure Arm64 virtual machine using the Azure console, with Ubuntu Pro 24.04 LTS as the base image + - Deploy Kafka on an Ubuntu virtual machine + - Perform Kafka baseline testing and benchmarking on Arm64 virtual machines prerequisites: - - A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6). - - Basic understanding of Linux command line. - - Familiarity with the [Apache Kafka architecture](https://kafka.apache.org/) and deployment practices on Arm64 platforms.
+ - A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6) + - A basic understanding of the Linux command line + - Familiarity with the [Apache Kafka architecture](https://kafka.apache.org/) and deployment practices on Arm64 platforms author: Pareena Verma diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/background.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/background.md index 48990a4d0..74ffcdbd1 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/background.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/background.md @@ -8,7 +8,7 @@ layout: "learningpathall" ## Cobalt 100 Arm-based processor -Azure’s Cobalt 100 is built on Microsoft's first-generation, in-house Arm-based processor: the Cobalt 100. Designed entirely by Microsoft and based on Arm’s Neoverse N2 architecture, this 64-bit CPU delivers improved performance and energy efficiency across a broad spectrum of cloud-native, scale-out Linux workloads. These include web and application servers, data analytics, open-source databases, caching systems, and more. Running at 3.4 GHz, the Cobalt 100 processor allocates a dedicated physical core for each vCPU, ensuring consistent and predictable performance. +Azure Cobalt 100 is Microsoft's first-generation, in-house Arm-based processor. Designed entirely by Microsoft and based on Arm’s Neoverse N2 architecture, this 64-bit CPU delivers improved performance and energy efficiency across a broad spectrum of cloud-native, scale-out Linux workloads. These include web and application servers, data analytics, open-source databases, caching systems, and more. Running at 3.4 GHz, the Cobalt 100 processor allocates a dedicated physical core for each virtual CPU (vCPU), ensuring consistent and predictable performance.
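As a quick sanity check (a hypothetical addition, not part of the original page), once you connect to a VM you can confirm that it is an Arm64 machine before deploying Kafka; on a Cobalt 100 instance, `uname -m` reports `aarch64`:

```shell
#!/bin/sh
# Report the machine architecture; Azure Cobalt 100 VMs report "aarch64".
arch="$(uname -m)"
echo "Architecture: $arch"
case "$arch" in
  aarch64|arm64) echo "This is an Arm64 machine." ;;
  *) echo "This is not an Arm64 machine." ;;
esac
```

On an x86_64 machine the same command prints `x86_64`, which is a quick way to confirm you are on the intended instance type before installing architecture-specific software.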
To learn more about Cobalt 100, refer to the blog [Announcing the preview of new Azure virtual machine based on the Azure Cobalt 100 processor](https://techcommunity.microsoft.com/blog/azurecompute/announcing-the-preview-of-new-azure-vms-based-on-the-azure-cobalt-100-processor/4146353). @@ -17,4 +17,4 @@ Apache Kafka is a high-performance, open-source distributed event streaming plat It allows you to publish, subscribe to, store, and process streams of records in a fault-tolerant and scalable manner. Kafka stores data in topics, which are partitioned and replicated across a cluster to ensure durability and high availability. -Kafka is widely used for messaging, log aggregation, event sourcing, real-time analytics, and integrating large-scale data systems. Learn more from the [Apache Kafka official website](https://kafka.apache.org/) and its [official documentation](https://kafka.apache.org/documentation). +Kafka is widely used for messaging, log aggregation, event sourcing, real-time analytics, and integrating large-scale data systems. Learn more from the [Apache Kafka official website](https://kafka.apache.org/) and the [Apache Kafka documentation](https://kafka.apache.org/documentation). diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md index e53e41266..8fd3d3cff 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md @@ -1,5 +1,5 @@ --- -title: Baseline Testing +title: Run baseline testing with Kafka on Azure Arm VM weight: 5 ### FIXED, DO NOT MODIFY @@ -12,60 +12,64 @@ After installing Apache Kafka 4.1.0 on your Azure Cobalt 100 Arm64 virtual machi Kafka 4.1.0 introduces KRaft mode (Kafka Raft Metadata mode), which integrates the control and data planes, eliminating the need for ZooKeeper. 
This simplifies deployment, reduces latency, and provides a unified, self-managed Kafka cluster architecture. -To perform this baseline test, you will use four terminal sessions: -Terminal 1: Start the Kafka broker (in KRaft mode). -Terminal 2: Create a topic. -Terminal 3: Send messages (Producer). -Terminal 4: Read messages (Consumer). +To run this baseline test, open four terminal sessions: -### Initial Setup: Configure & Format KRaft -KRaft (Kafka Raft) replaces ZooKeeper by embedding metadata management directly into the Kafka broker. -This improves scalability, reduces external dependencies, and speeds up controller failover in distributed clusters. -Before starting Kafka in KRaft mode, configure and initialize the storage directory. These steps are required only once per broker. +- **Terminal 1:** Start the Kafka broker in KRaft mode. +- **Terminal 2:** Create a topic. +- **Terminal 3:** Send messages as the producer. +- **Terminal 4:** Read messages as the consumer. -1. Edit the Configuration File -Open the Kafka configuration file in an editor: +Each terminal has a specific role, helping you verify that Kafka works end-to-end on your Arm64 VM. +## Configure and format KRaft -```console -vi /opt/kafka/config/server.properties -``` +KRaft (Kafka Raft) mode replaces ZooKeeper by managing metadata directly within the Kafka broker. This change improves scalability, reduces external dependencies, and speeds up controller failover in distributed clusters. -2. Add or Modify KRaft Properties -Ensure the following configuration entries are present for a single-node KRaft setup: +Before you start Kafka in KRaft mode, you need to configure the broker and initialize the storage directory. You only need to do this once for each broker. 
-```java -process.roles=controller,broker -node.id=1 -controller.quorum.voters=1@localhost:9093 -listeners=PLAINTEXT://:9092,CONTROLLER://:9093 -advertised.listeners=PLAINTEXT://localhost:9092 -log.dirs=/tmp/kraft-combined-logs -``` -This configuration file sets up a single Kafka server to act as both a controller (managing cluster metadata) and a broker (handling data), running in KRaft mode. It defines the node's unique ID and specifies the local host as the sole participant in the controller quorum. -3. Format the Storage Directory -Format the metadata storage directory using the kafka-storage.sh tool. This initializes KRaft’s internal Raft logs with a unique cluster ID. + ## Edit the configuration file + Open the Kafka configuration file in an editor: -```console -bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/server.properties -``` -You should see output similar to: + ```console + vi /opt/kafka/config/server.properties + ``` -```output -Formatting metadata directory /tmp/kraft-combined-logs with metadata.version 4.1-IV1. -``` -This confirms that the Kafka storage directory has been successfully formatted and that the broker is ready to start in KRaft mode. +## Add or modify KRaft properties + Ensure the following configuration entries are present for a single-node KRaft setup: + + ```java + process.roles=controller,broker + node.id=1 + controller.quorum.voters=1@localhost:9093 + listeners=PLAINTEXT://:9092,CONTROLLER://:9093 + advertised.listeners=PLAINTEXT://localhost:9092 + log.dirs=/tmp/kraft-combined-logs + ``` + This configuration file sets up a single Kafka server to act as both a controller (managing cluster metadata) and a broker (handling data), running in KRaft mode. It defines the node's unique ID and specifies the local host as the sole participant in the controller quorum. + +## Format the storage directory + Format the metadata storage directory using the kafka-storage.sh tool. 
This initializes KRaft’s internal Raft logs with a unique cluster ID. + + ```console + bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/server.properties + ``` + You should see output similar to: + + ```output + Formatting metadata directory /tmp/kraft-combined-logs with metadata.version 4.1-IV1. + ``` + This confirms that the Kafka storage directory has been successfully formatted and that the broker is ready to start in KRaft mode. -## Perform the Baseline Test +## Perform the baseline test With Kafka 4.1.0 installed and configured in KRaft mode, you’re now ready to run a baseline test to verify that the Kafka broker starts correctly, topics can be created, and message flow works as expected. You’ll use multiple terminals for this test: -Terminal 1: Start the Kafka broker. -Terminal 2: Create and verify a topic. -Terminal 3: Send messages (Producer). -Terminal 4: Read messages (Consumer). +- **Terminal 1:** Start the Kafka broker. +- **Terminal 2:** Create and verify a topic. +- **Terminal 3:** Send messages (producer). +- **Terminal 4:** Read messages (consumer). -### Terminal 1 – Start Kafka Broker +## Terminal 1 - start Kafka broker Start the Kafka broker (the main server process responsible for managing topics and handling messages) in KRaft mode: ```console @@ -74,7 +78,7 @@ bin/kafka-server-start.sh config/server.properties ``` Keep this terminal open and running. The broker process must stay active for all subsequent commands. -### Terminal 2 – Create a Topic +## Terminal 2 - create a topic Open a new terminal window. Create a topic named test-topic-kafka, which acts as a logical channel where producers send and consumers receive messages: ```console @@ -87,8 +91,21 @@ You should see output similar to: Created topic test-topic-kafka. ``` -**Verify Topic Creation** -List available topics to confirm that your new topic was created successfully: +## Verify topic creation +List available topics to confirm that your new topic was created successfully.
Run the following command: ```console bin/kafka-topics.sh --list --bootstrap-server localhost:9092 @@ -102,7 +119,7 @@ test-topic-kafka Kafka is now running, and you’ve successfully created and verified a topic. Next, you’ll use Terminal 3 to produce messages and Terminal 4 to consume messages, completing the baseline functional test on your Arm64 environment. -### Terminal 3 – Console Producer (Write Message) +## Terminal 3 - console producer (write message) In this step, you’ll start the Kafka Producer, which publishes messages to the topic test-topic-kafka. The producer acts as the data source, sending messages to the Kafka broker. ```console @@ -117,8 +134,8 @@ hello from azure arm vm ``` Each line you type is sent as a message to the Kafka topic and stored on disk by the broker. -### Terminal 4 – Console Consumer (Read Message) -Next, open another terminal and start the Kafka Consumer, which subscribes to the same topic (test-topic-kafka) and reads messages from the beginning of the log.
+## Terminal 4 - console consumer (read message) +Next, open another terminal and start the Kafka Consumer, which subscribes to the same topic (test-topic-kafka) and reads messages from the beginning of the log: ```console cd /opt/kafka diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md index dde683c1b..cc4d62813 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md @@ -1,5 +1,5 @@ --- -title: Benchmarking with Official Kafka Tools +title: Benchmark with official Kafka tools weight: 6 ### FIXED, DO NOT MODIFY @@ -13,7 +13,7 @@ Apache Kafka includes official performance testing utilities that allow you to m ## Steps for Kafka Benchmarking Before running the benchmarks, make sure your Kafka broker is already active in a separate terminal (as configured in the previous section). -Now open two new terminal sessions — one for running the producer benchmark and another for the consumer benchmark. +Now open two new terminal sessions: one for the producer benchmark and one for the consumer benchmark. ### Terminal A - Producer Benchmark @@ -107,9 +107,6 @@ The producer sustained a throughput of ~257,500 records/sec (~24.5 MB/sec) with The 95th percentile latency (1168 ms) and 99th percentile (1220 ms) show predictable network and I/O performance. Kafka maintained consistent throughput, even under full-speed production, with no message loss or broker errors reported. -### Benchmark Comparison Insights -When analyzing performance on Azure Cobalt 100 Arm64 virtual machines: - **Producer efficiency**: The producer reached ~23–25 MB/sec throughput with average latencies below 900 ms, demonstrating stable delivery rates for high-volume workloads.
- **Consumer scalability**: The consumer maintained ~262K messages/sec throughput with near-linear scaling of fetch performance — exceeding 1.85M messages/sec internally. - **Performance stability**: Both producer and consumer benchmarks showed low jitter and consistent latency distribution across iterations, confirming Kafka’s predictable behavior on Arm-based VMs. +### Benchmark comparison insights +When analyzing performance on Azure Cobalt 100 Arm64 virtual machines, you’ll notice that Kafka delivers stable and predictable results for both producers and consumers. The producer consistently achieves throughput between 23 MB/sec and 25 MB/sec, with average latencies below 900 ms. This means you can rely on efficient message delivery, even when handling high-volume workloads. On the consumer side, throughput remains strong at around 262,000 messages per second, and fetch performance scales nearly linearly, often exceeding 1.85 million messages per second internally. Throughout multiple benchmark runs, both producer and consumer tests demonstrate low jitter and consistent latency distribution, confirming that Kafka maintains reliable performance on Arm-based virtual machines. 
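As a side note, the reported producer numbers are internally consistent. Assuming the benchmark used 100-byte records (an assumption here; the actual value is whatever `--record-size` was set to in the producer benchmark command), ~257,500 records/sec works out to roughly 24.5 MB/sec:

```shell
#!/bin/sh
# Cross-check: convert records/sec to MB/sec for an assumed 100-byte record size.
# 257,500 records/sec * 100 bytes / (1024 * 1024) ≈ 24.56, matching the reported ~24.5 MB/sec.
records_per_sec=257500
record_size_bytes=100
awk -v r="$records_per_sec" -v s="$record_size_bytes" \
    'BEGIN { printf "Throughput: %.2f MB/sec\n", (r * s) / (1024 * 1024) }'
```

The same arithmetic is useful when you change `--record-size` in your own runs: records/sec alone is not comparable across record sizes, so converting both runs to MB/sec keeps the comparison fair.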
diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/create-instance.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/create-instance.md index 9571395aa..f9cfe7acd 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/create-instance.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/create-instance.md @@ -1,5 +1,5 @@ --- -title: Create an Arm based cloud virtual machine using Microsoft Cobalt 100 CPU +title: Create an Arm-based cloud virtual machine using Microsoft Cobalt 100 CPU weight: 3 ### FIXED, DO NOT MODIFY @@ -8,43 +8,44 @@ layout: learningpathall ## Introduction -There are several ways to create an Arm-based Cobalt 100 virtual machine : the Microsoft Azure console, the Azure CLI tool, or using your choice of IaC (Infrastructure as Code). This guide will use the Azure console to create a virtual machine with Arm-based Cobalt 100 Processor. +You can create an Arm-based Cobalt 100 virtual machine in several ways: using the Microsoft Azure portal, the Azure CLI, or Infrastructure as Code (IaC) tools. This Learning Path uses the Azure portal to walk you through creating a virtual machine with an Arm-based Cobalt 100 processor from the general-purpose D-Series v6 family. -This learning path focuses on the general-purpose virtual machine of the D series. Please read the guide on [Dpsv6 size series](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/general-purpose/dpsv6-series) offered by Microsoft Azure. +These VMs are designed for a wide range of workloads and offer Arm-based performance with the Cobalt 100 CPU. To learn more about the Dpsv6 size series, see the official [Dpsv6 size series guide](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/general-purpose/dpsv6-series) from Microsoft Azure.
-If you have never used the Microsoft Cloud Platform before, please review the microsoft [guide to Create a Linux virtual machine in the Azure portal](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu). +If you have never used the Microsoft Cloud Platform before, see the Microsoft guide on how to [Create a Linux virtual machine in the Azure portal](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu). -#### Create an Arm-based Azure Virtual Machine +## Create an Arm-based Azure virtual machine -Creating a virtual machine based on Azure Cobalt 100 is no different from creating any other virtual machine in Azure. To create an Azure virtual machine, launch the Azure portal and navigate to "Virtual Machines". -1. Select "Create", and click on "Virtual Machine" from the drop-down list. -2. Inside the "Basic" tab, fill in the Instance details such as "Virtual machine name" and "Region". -3. Choose the image for your virtual machine (for example, Ubuntu Pro 24.04 LTS) and select “Arm64” as the VM architecture. -4. In the “Size” field, click on “See all sizes” and select the D-Series v6 family of virtual machines. Select “D4ps_v6” from the list. +Creating a virtual machine based on Azure Cobalt 100 is no different from creating any other virtual machine in Azure. To create an Azure virtual machine, launch the Azure portal and navigate to **Virtual Machines**. -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance.png "Figure 1: Select the D-Series v6 family of virtual machines") +- Select **Create**, and click on **Virtual Machine** from the drop-down list. +- Inside the **Basic** tab, fill in the Instance details such as **Virtual machine name** and **Region**. +- Choose the image for your virtual machine (for example, Ubuntu Pro 24.04 LTS) and select **Arm64** as the VM architecture. 
+- In the **Size** field, click on **See all sizes** and select the D-Series v6 family of virtual machines. Select **D4ps_v6** from the list. -5. Select "SSH public key" as an Authentication type. Azure will automatically generate an SSH key pair for you and allow you to store it for future use. It is a fast, simple, and secure way to connect to your virtual machine. -6. Fill in the Administrator username for your VM. -7. Select "Generate new key pair", and select "RSA SSH Format" as the SSH Key Type. RSA could offer better security with keys longer than 3072 bits. Give a Key pair name to your SSH key. -8. In the "Inbound port rules", select HTTP (80) and SSH (22) as the inbound ports. +![Azure portal showing the selection of the D-Series v6 family of virtual machines, with D4ps_v6 highlighted as the chosen size. The interface displays a list of available VM sizes, including CPU, memory, and pricing details. The wider environment is the Azure portal's virtual machine creation workflow, with a clean and organized layout. The tone is neutral and informative, focused on guiding users through the selection process. Visible text includes D-Series v6, D4ps_v6, CPU, memory, and price columns. alt-text#center](images/instance.png "Selecting the D-Series v6 family of virtual machines") -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance1.png "Figure 2: Allow inbound port rules") +- Select **SSH public key** as an authentication type. Azure automatically generates an SSH key pair for you and allows you to store it for future use. It is a fast, simple, and secure way to connect to your virtual machine. +- Fill in the administrator username for your VM. +- Select **Generate new key pair**, and select **RSA SSH Format** as the SSH key type. RSA can offer better security with keys longer than 3072 bits. Give a key pair name to your SSH key. 
+- In the **Inbound port rules**, select **HTTP (80)** and **SSH (22)** as the inbound ports. -9. Click on the "Review + Create" tab and review the configuration for your virtual machine. It should look like the following: +![Azure portal interface displaying the Inbound port rules configuration step for an Azure Cobalt 100 Arm64 virtual machine (D4ps_v6). The main focus is on selecting HTTP port 80 and SSH port 22 as allowed inbound ports. The wider environment is the Azure portal's virtual machine creation workflow, with a clean and organized layout. Visible text includes Inbound port rules, HTTP 80, SSH 22, and options to add or remove ports. The tone is neutral and instructional, guiding users through network security settings for the VM. alt-text#center](images/instance1.png "Allow inbound port rules") -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/ubuntu-pro.png "Figure 3: Review and Create an Azure Cobalt 100 Arm64 VM") +- Click on the **Review + Create** tab and review the configuration for your virtual machine. It should look like the following: -10. Finally, when you are confident about your selection, click on the "Create" button, and click on the "Download Private key and Create Resources" button. +![Azure portal interface displaying the Review and Create step for an Azure Cobalt 100 Arm64 virtual machine. The primary subject is the summary panel showing selected configuration details, including Ubuntu Pro 24.04 LTS as the operating system, D4ps_v6 as the VM size, Arm64 architecture, and SSH public key authentication. The wider environment is the Azure portal's virtual machine creation workflow, with a clean and organized layout. Visible text includes Review and Create, Ubuntu Pro 24.04 LTS, D4ps_v6, Arm64, SSH public key, and configuration summary fields. The tone is neutral and informative, guiding users through the final review before VM deployment. 
alt-text#center](images/ubuntu-pro.png "Review and create an Azure Cobalt 100 Arm64 VM") -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance4.png "Figure 4: Download Private key and Create Resources") +- Finally, when you are confident about your selection, select **Create**, and then select **Download private key and create resources**. -11. Your virtual machine should be ready and running within no time. You can SSH into the virtual machine using the private key, along with the Public IP details. +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance4.png "Download private key and create resources") -![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/final-vm.png "Figure 5: VM deployment confirmation in Azure portal") +Your virtual machine is now ready and running. To connect, use SSH with your private key and the VM's public IP address. + +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/final-vm.png "VM deployment confirmation in Azure portal") {{% notice Note %}} -To learn more about Arm-based virtual machine in Azure, refer to “Getting Started with Microsoft Azure” in [Get started with Arm-based cloud instances](/learning-paths/servers-and-cloud-computing/csp/azure). +To learn more about Arm-based virtual machines in Azure, refer to *Getting Started with Microsoft Azure* in [Get started with Arm-based cloud instances](/learning-paths/servers-and-cloud-computing/csp/azure).
{{% /notice %}} diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/deploy.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/deploy.md index b8b7a0627..0b1020f12 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/deploy.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/deploy.md @@ -10,7 +10,7 @@ layout: learningpathall This section guides you through installing the latest version of Apache Kafka on an Ubuntu Pro 24.04 (Arm64) virtual machine running on Azure Cobalt 100. Kafka is a high-throughput, distributed event streaming platform used for real-time data pipelines and messaging applications. -### Install Java +## Install Java Apache Kafka runs on the Java Virtual Machine (JVM), so Java must be installed before setting up Kafka. Use the following commands to update your package index and install the default JDK: ```console @@ -19,7 +19,7 @@ sudo apt install -y default-jdk ``` This installs the Java Development Kit (JDK), which includes the JVM, compiler, and standard libraries required for running Kafka services. -### Download and Install Kafka +## Download and install Kafka Use the following commands to download and install Apache Kafka 4.1.0 in the /opt directory, extract the archive, and set appropriate permissions for your user. This prepares your system to run Kafka without requiring elevated privileges later. @@ -35,7 +35,7 @@ Kafka [3.5.0 release announcement](https://kafka.apache.org/blog#apache_kafka_35 The [Arm Ecosystem Dashboard](https://developer.arm.com/ecosystem-dashboard/) recommends Apache Kafka version 3.5.0 as the minimum recommended on Arm platforms. {{% /notice %}} -### Check installed Kafka version +## Check installed Kafka version After extraction, verify that Kafka was installed successfully by checking the version:
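One common way to check the version (assuming Kafka was extracted to `/opt/kafka` as in the earlier step) is the `--version` flag supported by the bundled CLI scripts, for example:

```console
cd /opt/kafka
bin/kafka-topics.sh --version
```

The output should report the installed Kafka version, which is 4.1.0 for this Learning Path.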