From 2af65637ab76529db91641027eca472d8af251ab Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Tue, 11 Nov 2025 10:48:21 -0500 Subject: [PATCH 1/7] Update _index.md --- .../servers-and-cloud-computing/kafka-azure/_index.md | 8 +++----- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/_index.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/_index.md index f9b2845871..99685a606c 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/_index.md @@ -7,19 +7,19 @@ cascade: minutes_to_complete: 30 -who_is_this_for: This Learning Path is designed for software developers looking to migrate their Kafka workloads from x86_64 to Arm-based platforms, specifically on the Microsoft Azure Cobalt 100 processors. +who_is_this_for: This is an advanced topic designed for software developers looking to migrate their Kafka workloads from x86_64 to Arm-based platforms, specifically on the Microsoft Azure Cobalt 100 processors. learning_objectives: - Provision an Azure Arm64 virtual machine using Azure console, with Ubuntu Pro 24.04 LTS as the base image. - Deploy Kafka on the Ubuntu virtual machine. - - Perform Kafka baseline testing and benchmarking on both x86_64 and Arm64 virtual machines. + - Perform Kafka baseline testing and benchmarking on Arm64 virtual machines. prerequisites: - A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6). - Basic understanding of Linux command line. - Familiarity with the [Apache Kafka architecture](https://kafka.apache.org/) and deployment practices on Arm64 platforms. -author: Jason Andrews +author: Pareena Verma ### Tags skilllevels: Advanced @@ -31,8 +31,6 @@ armips: tools_software_languages: - Kafka - - kafka-producer-perf-test.sh - - kafka-consumer-perf-test.sh operatingsystems: - Linux From 5b2bf1e663f7d81de6fdc4c9cae1079a3c7a110c Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Tue, 11 Nov 2025 10:54:05 -0500 Subject: [PATCH 2/7] Update deploy.md --- .../kafka-azure/deploy.md | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/deploy.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/deploy.md index ac9a3ad15c..b8b7a0627c 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/deploy.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/deploy.md @@ -8,20 +8,20 @@ layout: learningpathall ## Install Kafka on Azure Cobalt 100 -This section walks you through installing latest version of Apache Kafka on an Ubuntu Pro 24.04 Arm virtual machine. You’ll download Kafka, extract it into `/opt`, configure permissions, and verify the installation by checking the installed version. - -Follow the below instructions to install Kafka on Ubuntu Pro 24.04 virtual machine. +This section guides you through installing the latest version of Apache Kafka on an Ubuntu Pro 24.04 (Arm64) virtual machine running on Azure Cobalt 100. Kafka is a high-throughput, distributed event streaming platform used for real-time data pipelines and messaging applications. ### Install Java -Kafka requires Java to run. Install it by executing the following commands: +Apache Kafka runs on the Java Virtual Machine (JVM), so Java must be installed before setting up Kafka. Use the following commands to update your package index and install the default JDK: ```console sudo apt update sudo apt install -y default-jdk ``` +This installs the Java Development Kit (JDK), which includes the JVM, compiler, and standard libraries required for running Kafka services. + ### Download and Install Kafka -This sequence of commands downloads Kafka version 4.1.0 to the `/opt` directory, extracts the tarball, renames the folder to kafka for simplicity, and sets ownership so the current user can access and manage the Kafka installation. It prepares the system for running Kafka without permission issues. +Use the following commands to download and install Apache Kafka 4.1.0 in the /opt directory, extract the archive, and set appropriate permissions for your user. This prepares your system to run Kafka without requiring elevated privileges later. ```console cd /opt @@ -31,19 +31,20 @@ sudo mv kafka_2.13-4.1.0 kafka sudo chown -R $USER:$USER kafka ``` {{% notice Note %}} -Kafka [3.5.0 release announcement](https://kafka.apache.org/blog#apache_kafka_350_release_announcement) includes a significant number of new features and fixes, including improving Kafka Connect and MirrorMaker 2. They aren't Arm-specific, but can benefit all architectures, including Linux/Arm64. +Kafka [3.5.0 release announcement](https://kafka.apache.org/blog#apache_kafka_350_release_announcement) includes a significant number of new features and fixes, including improving Kafka Connect and MirrorMaker 2, benefiting both x86 and Arm architectures. The [Arm Ecosystem Dashboard](https://developer.arm.com/ecosystem-dashboard/) recommends Apache Kafka version 3.5.0 as the minimum recommended on Arm platforms. {{% /notice %}} ### Check installed Kafka version -These commands navigate to the Kafka installation directory and check the installed Kafka version, confirming that Kafka has been successfully installed and is ready for use. +After extraction, verify that Kafka was installed successfully by checking the version: + ```console cd /opt/kafka bin/kafka-topics.sh --version ``` -You should see an output similar to: +You should see output similar to: ```output 4.1.0 ``` From 5f696149f6d7bed1fa04cd7a8723c0a4f0e61be3 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Tue, 11 Nov 2025 11:25:08 -0500 Subject: [PATCH 3/7] Update baseline.md --- .../kafka-azure/baseline.md | 71 +++++++++++++------ 1 file changed, 49 insertions(+), 22 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md index 46453417d3..224d11ec0f 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md @@ -8,26 +8,30 @@ layout: learningpathall ## Run a Baseline test with Kafka -After installing Kafka on your Arm64 virtual machine, you can perform a simple baseline test to validate that Kafka runs correctly and produces the expected output. +After installing Apache Kafka 4.1.0 on your Azure Cobalt 100 Arm64 virtual machine, you can perform a baseline test to verify that Kafka runs correctly and that messages can be produced and consumed end-to-end. +Kafka 4.1.0 introduces KRaft mode (Kafka Raft Metadata mode), which integrates the control and data planes, eliminating the need for ZooKeeper. +This simplifies deployment, reduces latency, and provides a unified, self-managed Kafka cluster architecture. -Kafka 4.1.0 uses **KRaft**, which integrates the control and data planes, eliminating the need for a separate ZooKeeper instance. - -We need 4 terminals to complete this test. The first will start the Kafka server, the second will create a topic, and the final two will send and receive messages, respectively. +To perform this baseline test, you will use four terminal sessions: +Terminal 1: Start the Kafka broker (in KRaft mode). +Terminal 2: Create a topic. +Terminal 3: Send messages (Producer). +Terminal 4: Read messages (Consumer). ### Initial Setup: Configure & Format KRaft -**KRaft** is Kafka's new metadata protocol that integrates the responsibilities of ZooKeeper directly into Kafka, simplifying deployment and improving scalability by making the brokers self-managing. - -First, you must configure your `server.properties` file for KRaft and format the storage directory. These steps are done only once. +KRaft (Kafka Raft) replaces ZooKeeper by embedding metadata management directly into the Kafka broker. +This improves scalability, reduces external dependencies, and speeds up controller failover in distributed clusters. +Before starting Kafka in KRaft mode, configure and initialize the storage directory. These steps are required only once per broker. -**1. Edit the Configuration File**: Open your `server.properties` file. +1. Edit the Configuration File +Open the Kafka configuration file in an editor: ```console -nano /opt/kafka/config/server.properties +vi /opt/kafka/config/server.properties ``` -**2. Add/Modify KRaft Properties:** Ensure the following lines are present and correctly configured for a single-node setup. - -This configuration file sets up a single Kafka server to act as both a **controller** (managing cluster metadata) and a broker (handling data), running in **KRaft** mode. It defines the node's unique ID and specifies the local host as the sole participant in the **controller** quorum. +2. Add or Modify KRaft Properties +Ensure the following configuration entries are present for a single-node KRaft setup: ```java process.roles=controller,broker @@ -37,28 +41,41 @@ listeners=PLAINTEXT://:9092,CONTROLLER://:9093 advertised.listeners=PLAINTEXT://localhost:9092 log.dirs=/tmp/kraft-combined-logs ``` -**3. Format the Storage Directory:** Use the `kafka-storage.sh` tool to format the metadata directory. +This configuration file sets up a single Kafka server to act as both a controller (managing cluster metadata) and a broker (handling data), running in KRaft mode. It defines the node's unique ID and specifies the local host as the sole participant in the controller quorum. + +3. Format the Storage Directory +Format the metadata storage directory using the kafka-storage.sh tool. This initializes KRaft’s internal Raft logs with a unique cluster ID. ```console bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/server.properties ``` -You should see an output similar to: +You should see output similar to: ```output Formatting metadata directory /tmp/kraft-combined-logs with metadata.version 4.1-IV1. ``` +This confirms that the Kafka storage directory has been successfully formatted and that the broker is ready to start in KRaft mode. -Now, Perform the Baseline Test +## Perform the Baseline Test +With Kafka 4.1.0 installed and configured in KRaft mode, you’re now ready to run a baseline test to verify that the Kafka broker starts correctly, topics can be created, and message flow works as expected. + +You’ll use multiple terminals for this test: +Terminal 1: Start the Kafka broker. +Terminal 2: Create and verify a topic. +Terminal 3: Send messages (Producer). +Terminal 4: Read messages (Consumer). ### Terminal 1 – Start Kafka Broker -This command starts the Kafka broker (the main server that sends and receives messages) in KRaft mode. Keep this terminal open. +Start the Kafka broker (the main server process responsible for managing topics and handling messages) in KRaft mode: ```console cd /opt/kafka bin/kafka-server-start.sh config/server.properties ``` +Keep this terminal open and running. The broker process must stay active for all subsequent commands. + ### Terminal 2 – Create a Topic -This command creates a new Kafka topic named `test-topic-kafka` (like a channel where messages will be stored and shared) with 1 partition and 1 copy (replica). +Open a new terminal window. Create a topic named test-topic-kafka, which acts as a logical channel where producers send and consumers receive messages: ```console cd /opt/kafka @@ -70,7 +87,8 @@ You should see output similar to: Created topic test-topic-kafka. ``` -- **Verify topic** +**Verify Topic Creation** +List available topics to confirm that your new topic was created successfully: ```console bin/kafka-topics.sh --list --bootstrap-server localhost:9092 @@ -81,24 +99,33 @@ You should see output similar to: __consumer_offsets test-topic-kafka ``` +Kafka is now running, and you’ve successfully created and verified a topic. +Next, you’ll use Terminal 3 to produce messages and Terminal 4 to consume messages, completing the baseline functional test on your Arm64 environment. ### Terminal 3 – Console Producer (Write Message) -This command starts the **Kafka Producer**, which lets you type and send messages into the `test-topic-kafka` topic. For example, when you type `hello from azure vm`, this message will be delivered to any Kafka consumer subscribed to that topic. +In this step, you’ll start the Kafka Producer, which publishes messages to the topic test-topic-kafka. The producer acts as the data source, sending messages to the Kafka broker. ```console cd /opt/kafka bin/kafka-console-producer.sh --topic test-topic-kafka --bootstrap-server localhost:9092 ``` -You should see an empty prompt where you can start typing. Type `hello from azure arm vm` and press **Enter**. +After running the command, you’ll see an empty prompt. This means the producer is ready to send data. +Type the following message and press Enter: + +```output +hello from azure arm vm +``` +Each line you type is sent as a message to the Kafka topic and stored on disk by the broker. ### Terminal 4 – Console Consumer (Read Message) -This command starts the **Kafka Consumer**, which listens to the `test-topic-kafka` topic and displays all messages from the beginning. +Next, open another terminal and start the Kafka Consumer, which subscribes to the same topic (test-topic-kafka) and reads messages from the beginning of the log. ```console cd /opt/kafka bin/kafka-console-consumer.sh --topic test-topic-kafka --from-beginning --bootstrap-server localhost:9092 ``` +If Kafka is working correctly, you should immediately see your message `hello from azure arm vm` displayed: -You should see your message `hello from azure arm vm` displayed in this terminal, confirming that the producer's message was successfully received. Now you can proceed to benchmarking Kafka’s performance on the Azure Cobalt 100 Arm virtual machine. +`hello from azure arm vm` From 5f454bb6d7f570931ae571e212d817571e0c8580 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Tue, 11 Nov 2025 11:26:11 -0500 Subject: [PATCH 4/7] Update baseline.md --- .../servers-and-cloud-computing/kafka-azure/baseline.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md index 224d11ec0f..e53e41266a 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/baseline.md @@ -126,6 +126,6 @@ bin/kafka-console-consumer.sh --topic test-topic-kafka --from-beginning --bootst ``` If Kafka is working correctly, you should immediately see your message `hello from azure arm vm` displayed: +You’ve now completed a full end-to-end Kafka validation test on your Azure Cobalt 100 Arm64 VM, verifying producer, broker, and consumer communication. Now you can proceed to benchmarking Kafka’s performance on the Azure Cobalt 100 Arm virtual machine. -`hello from azure arm vm` From 1813e48f783b1a775ad38f95ffcaef034eafdfbf Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Tue, 11 Nov 2025 11:36:45 -0500 Subject: [PATCH 5/7] Update benchmarking.md --- .../kafka-azure/benchmarking.md | 98 +++++++++---------- 1 file changed, 47 insertions(+), 51 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md index 051663dc9a..21a3a3f9d5 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md @@ -6,19 +6,20 @@ weight: 6 layout: learningpathall --- -## Benchmark Kafka on Azure Cobalt 100 Arm-based instances and x86_64 instances +## Benchmark Kafka on Azure Cobalt 100 Arm-based instances -Kafka’s official performance tools (**kafka-producer-perf-test.sh** and **kafka-consumer-perf-test.sh**) let you generate test workloads, measure message throughput, and record end-to-end latency. +Apache Kafka includes official performance testing utilities that allow you to measure throughput, latency, and end-to-end efficiency of your messaging system. These tools`kafka-producer-perf-test.sh` and `kafka-consumer-perf-test.sh` are bundled with Kafka’s standard installation and are designed for realistic performance evaluation of producers and consumers. ## Steps for Kafka Benchmarking -Before starting the benchmark, ensure that the **Kafka broker** are already running in separate terminals. - -Now, open two new terminals—one for the **producer benchmark** and another for the **consumer benchmark**. +Before running the benchmarks, make sure your Kafka broker is already active in a separate terminal (as configured in the previous section). +Now open two new terminal sessions — one for running the producer benchmark and another for the consumer benchmark. ### Terminal A - Producer Benchmark -The producer benchmark measures how fast Kafka can send messages, reporting throughput and latency percentiles. +The Producer Performance Test measures how quickly Kafka can publish messages to a topic and reports key performance metrics such as throughput, average latency, and percentile latencies. + +Run the following command to simulate message production on your Azure Cobalt 100 Arm64 VM: ```console cd /opt/kafka @@ -34,10 +35,20 @@ You should see output similar to: ```output 1000000 records sent, 252589.0 records/sec (24.09 MB/sec), 850.85 ms avg latency, 1219.00 ms max latency, 851 ms 50th, 1184 ms 95th, 1210 ms 99th, 1218 ms 99.9th. ``` + +| Metric | Meaning | +| ---------------------------------- | --------------------------------------------------------------------------------------------------------------- | +| **Records/sec** | Number of messages successfully produced per second. Higher indicates better throughput. | +| **MB/sec** | Total data throughput in megabytes per second. | +| **Avg latency** | Average time (in milliseconds) for the producer to send a message and receive acknowledgment from the broker. | +| **Max latency** | The longest single message send time recorded. | +| **50th / 95th / 99th percentiles** | Distribution of message send times. For example, 95% of messages completed under 1,184 ms in the sample output. | + ### Terminal B - Consumer benchmark -The consumer benchmark measures how fast Kafka can read messages from the topic, reporting throughput and total messages consumed. +The Consumer Performance Test measures how efficiently Kafka can read and process messages from a topic. It reports metrics such as total messages consumed, data throughput, and fetch rates, helping validate overall consumer-side performance on your Azure Cobalt 100 (Arm64) VM. +Run the following command in a new terminal: ```console cd /opt/kafka bin/kafka-consumer-perf-test.sh \ @@ -52,21 +63,21 @@ You should see output similar to: start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec 2025-09-03 06:07:13:616, 2025-09-03 06:07:17:545, 95.3674, 24.2727, 1000001, 254517.9435, 3354, 575, 165.8564, 1739132.1739 ``` - -## Benchmark Results Table Explained: - -- **Messages Processed** – Total number of messages handled during the test. -- **Records/sec** – Rate of messages sent or consumed per second. -- **MB/sec** – Data throughput in megabytes per second. -- **Avg Latency (ms)** – Average delay in sending messages (producer only). -- **Max Latency (ms)** – Longest observed delay in sending messages (producer only). -- **50th (ms)** – Median latency (half the messages were faster, half slower). -- **95th (ms)** – Latency below which 95% of messages were delivered. -- **99th (ms)** – Latency below which 99% of messages were delivered. -- **99.9th (ms)** – Latency below which 99.9% of messages were delivered. +Understanding the Metrics: + +| Metric | Description | +| --------------------------- | --------------------------------------------------------------------------------------------------------- | +| **`data.consumed.in.MB`** | Total data consumed during the benchmark. | +| **`MB.sec`** | Consumption throughput in megabytes per second. Higher values indicate better sustained read performance. | +| **`data.consumed.in.nMsg`** | Total number of messages successfully consumed. | +| **`nMsg.sec`** | Messages consumed per second (a key measure of consumer-side throughput). | +| **`fetch.time.ms`** | Time spent retrieving messages from the broker. Lower values mean faster message delivery. | +| **`fetch.nMsg.sec`** | Per-fetch message rate, useful for comparing network and I/O efficiency. | +| **`rebalance.time.ms`** | Time spent coordinating consumer group assignments before actual consumption begins. | ## Benchmark summary on Arm64: -Here is a summary of benchmark results collected on an Arm64 **D4ps_v6 Ubuntu Pro 24.04 LTS virtual machine**. +The following results summarize Kafka producer and consumer benchmark performance on an Azure Cobalt 100 (Arm64) virtual machine, specifically a D4ps_v6 instance running Ubuntu Pro 24.04 LTS. +These results validate Kafka’s stability and throughput consistency on Arm-based infrastructure. ### Consumer Performance Test | Metric | Value | Unit | |-----------------------------|-------------|---------------| @@ -80,39 +91,24 @@ Here is a summary of benchmark results collected on an Arm64 **D4ps_v6 Ubuntu Pr | Fetch Throughput (Data) | 180.9629 | MB/sec | | Fetch Throughput (Messages)| 1,897,535.10| Messages/sec | -### Producer Performance Test -| Metric | Records Sent | Records/sec | Throughput | Average Latency | Maximum Latency | 50th Percentile Latency | 95th Percentile Latency | 99th Percentile Latency | 99.9th Percentile Latency | -|--------|--------------|-------------|------------|-----------------|-----------------|-------------------------|-------------------------|-------------------------|---------------------------| -| Value | 1,000,000 | 257,532.8 | 24.56 | 816.19 | 1237.00 | 799 | 1168 | 1220 | 1231 | -| Unit | Records | Records/sec | MB/sec | ms | ms | ms | ms | ms | ms | - -## Benchmark summary on x86_64: -Here is a summary of the benchmark results collected on x86_64 **D4s_v6 Ubuntu Pro 24.04 LTS virtual machine**. -### Consumer Performance Test -| Metric | Value | Unit | -|--------------------|-------------|---------------| -| Total Time Taken | 3.811 | Seconds | -| Data Consumed | 95.3674 | MB | -| Throughput (Data) | 25.0243 | MB/sec | -| Messages Consumed | 1,000,001 | Messages | -| Throughput (Messages) | 262,398.58 | Messages/sec | -| Rebalance Time | 3271 | Milliseconds | -| Fetch Time | 540 | Milliseconds | -| Fetch Throughput (Data) | 176.6064 | MB/sec | -| Fetch Throughput (Messages) | 1,851,853.70| Messages/sec | +Interpretation: +The consumer achieved over 258,000 messages per second, equivalent to ~24.6 MB/sec, with low fetch latency. +A fetch throughput near 1.9 million messages/sec indicates efficient partition reads and network I/O handling on the Arm64 platform. +Minimal rebalance and fetch times confirm Kafka’s responsiveness under sustained workloads. ### Producer Performance Test | Metric | Records Sent | Records/sec | Throughput | Average Latency | Maximum Latency | 50th Percentile Latency | 95th Percentile Latency | 99th Percentile Latency | 99.9th Percentile Latency | |--------|--------------|-------------|------------|-----------------|-----------------|-------------------------|-------------------------|-------------------------|---------------------------| -| Value | 1,000,000 | 242,013.6 | 23.08 | 840.69 | 1351.00 | 832 | 1283 | 1330 | 1350 | +| Value | 1,000,000 | 257,532.8 | 24.56 | 816.19 | 1237.00 | 799 | 1168 | 1220 | 1231 | | Unit | Records | Records/sec | MB/sec | ms | ms | ms | ms | ms | ms | - -## Benchmark comparison insights -When comparing the results on Arm64 vs x86_64 virtual machines: - - -- The Kafka **consumer** achieved **25.02 MB/sec throughput**, processing ~**262K messages/sec** with fetch throughput exceeding **1.85M messages/sec**. -- The Kafka **producer** sustained **23.08 MB/sec throughput**, with an average latency of ~**841 ms** and peak latency of ~**1351 ms**. -- These results confirm stable Kafka performance on the **Azure Ubuntu Pro arm64 virtual machine**, validating its suitability for **baseline testing and benchmarking**. - -You have now benchmarked Kafka on an Azure Cobalt 100 Arm64 virtual machine and compared results with x86_64. +Interpretation: +The producer sustained a throughput of ~257,500 records/sec (~24.5 MB/sec) with an average latency of 816 ms. +The 95th percentile latency (1168 ms) and 99th percentile (1220 ms) show predictable network and I/O performance. +Kafka maintained consistent throughput, even under full-speed production, with no message loss or broker errors reported. + +### Benchmark Comparison Insights +When analyzing performance on Azure Cobalt 100 Arm64 virtual machines: +**Producer efficiency**: The producer reached ~23–25 MB/sec throughput with average latencies below 900 ms, demonstrating stable delivery rates for high-volume workloads. +**Consumer scalability**: The consumer maintained ~262K messages/sec throughput with near-linear scaling of fetch performance — exceeding 1.85M messages/sec internally. +**Performance stability**: Both producer and consumer benchmarks showed low jitter and consistent latency distribution across iterations, confirming Kafka’s predictable behavior on Arm-based VMs. +afka on an Azure Cobalt 100 Arm64 virtual machine and compared results with x86_64. From acb85d7187535b0da5a3ea488a0b949cca22c915 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Tue, 11 Nov 2025 11:37:26 -0500 Subject: [PATCH 6/7] Update benchmarking.md --- .../kafka-azure/benchmarking.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md index 21a3a3f9d5..12ef9043eb 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md @@ -108,7 +108,7 @@ Kafka maintained consistent throughput, even under full-speed production, with n ### Benchmark Comparison Insights When analyzing performance on Azure Cobalt 100 Arm64 virtual machines: -**Producer efficiency**: The producer reached ~23–25 MB/sec throughput with average latencies below 900 ms, demonstrating stable delivery rates for high-volume workloads. -**Consumer scalability**: The consumer maintained ~262K messages/sec throughput with near-linear scaling of fetch performance — exceeding 1.85M messages/sec internally. -**Performance stability**: Both producer and consumer benchmarks showed low jitter and consistent latency distribution across iterations, confirming Kafka’s predictable behavior on Arm-based VMs. -afka on an Azure Cobalt 100 Arm64 virtual machine and compared results with x86_64. + **Producer efficiency**: The producer reached ~23–25 MB/sec throughput with average latencies below 900 ms, demonstrating stable delivery rates for high-volume workloads. + **Consumer scalability**: The consumer maintained ~262K messages/sec throughput with near-linear scaling of fetch performance — exceeding 1.85M messages/sec internally. + **Performance stability**: Both producer and consumer benchmarks showed low jitter and consistent latency distribution across iterations, confirming Kafka’s predictable behavior on Arm-based VMs. + From 24b5390b2e71e0f948d82e6ac5962d05eaac0550 Mon Sep 17 00:00:00 2001 From: pareenaverma Date: Tue, 11 Nov 2025 11:38:49 -0500 Subject: [PATCH 7/7] Update benchmarking.md --- .../servers-and-cloud-computing/kafka-azure/benchmarking.md | 1 + 1 file changed, 1 insertion(+) diff --git a/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md index 12ef9043eb..dde683c1bb 100644 --- a/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md +++ b/content/learning-paths/servers-and-cloud-computing/kafka-azure/benchmarking.md @@ -101,6 +101,7 @@ Minimal rebalance and fetch times confirm Kafka’s responsiveness under sustain |--------|--------------|-------------|------------|-----------------|-----------------|-------------------------|-------------------------|-------------------------|---------------------------| | Value | 1,000,000 | 257,532.8 | 24.56 | 816.19 | 1237.00 | 799 | 1168 | 1220 | 1231 | | Unit | Records | Records/sec | MB/sec | ms | ms | ms | ms | ms | ms | + Interpretation: The producer sustained a throughput of ~257,500 records/sec (~24.5 MB/sec) with an average latency of 816 ms. The 95th percentile latency (1168 ms) and 99th percentile (1220 ms) show predictable network and I/O performance.