diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/_index.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/_index.md
index 3b6bf27f6..41cd6a239 100644
--- a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/_index.md
@@ -1,5 +1,5 @@
 ---
-title: Add Arm nodes to your Azure Kubernetes Services cluster using a multi-architecture nginx container image
+title: Build hybrid AKS clusters with Arm nodes and nginx
 
 draft: true
 cascade:
@@ -7,14 +7,14 @@ cascade:
 
 minutes_to_complete: 60
 
-who_is_this_for: This Learning Path is for developers who want to compare the performance of x64 and arm64 deployments by running nginx on a hybrid Azure Kubernetes Service (AKS) cluster using nginx's multi-architecture container image. Once you've seen how easy it is to add arm64 nodes to an existing cluster, you'll be ready to explore arm64-based nodes for other workloads in your environment.
-
+who_is_this_for: This Learning Path is for developers who want to understand nginx performance on x64 and arm64 deployments by running a hybrid Azure Kubernetes Service (AKS) cluster.
 
 learning_objectives:
-  - Create a hybrid AKS cluster with x64 and arm64 nodes.
-  - Deploy nginx's multi-architecture container image, pods, and services to the AKS cluster.
-  - Smoke test nginx from each architecture in the cluster to verify proper installation.
-  - Performance test against each architecture in the cluster to better understand performance.
+  - Create a hybrid AKS cluster with x64 and arm64 nodes
+  - Deploy nginx's multi-architecture container image, pods, and services to the AKS cluster
+  - Smoke test nginx from each architecture in the cluster to verify proper installation
+  - Test the performance of each architecture in the cluster
+  - Apply the same process to other Kubernetes workloads
 
 prerequisites:
@@ -35,7 +35,6 @@ armips:
 
 operatingsystems:
     - Linux
-    - macOS
 
 tools_software_languages:
     - nginx
diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/apply-configuration.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/apply-configuration.md
index 22b15d9af..5561d9d82 100644
--- a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/apply-configuration.md
+++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/apply-configuration.md
@@ -6,15 +6,15 @@ weight: 70
 layout: learningpathall
 ---
 
-## Apply configuration updates
+## Install btop monitoring tool on nginx pods
 
-Now that you have all your nginx deployments running across Intel and ARM architectures, you can monitor performance across each architecture using wrk to generate load and btop to monitor system performance.
+Now that you have all your nginx deployments running across Intel and Arm architectures, you can monitor performance across each architecture using wrk to generate load and btop to monitor system performance.
 
 {{% notice Note %}}
-This tutorial uses wrk to generate load, which is readily available on apt and brew package managers. [wrk2](https://github.com/giltene/wrk2) is a modern fork of wrk with additional features. wrk was chosen for this tutorial due to its ease of install, but if you prefer to install and use wrk2 (or other http load generators) for your testing, feel free to do so.
+This tutorial uses [wrk](https://github.com/wg/wrk) to generate load, which is readily available on apt and brew package managers. [wrk2](https://github.com/giltene/wrk2) is a modern fork of wrk with additional features. wrk was chosen for this tutorial due to its ease of installation, but if you prefer to install and use wrk2 (or other HTTP load generators) for your testing, feel free to do so.
 {{% /notice %}}
 
-### Apply performance configuration
+### Install btop and apply optimized configuration
 
 The `nginx_util.sh` script includes a `put config` command that will:
@@ -22,7 +22,7 @@ The `nginx_util.sh` script includes a `put config` command that will:
 - Install btop monitoring tool on all pods for system monitoring
 - Restart pods with the new configuration
 
-1. Run the following command to apply the configuration updates:
+Run the following command to apply the configuration updates:
 
 ```bash
 ./nginx_util.sh put btop
 ```
@@ -41,9 +41,9 @@ Installing btop on nginx-intel-deployment-6f5bff9667-zdrqc...
 ✅ btop installed on all pods!
 ```
 
-### Verify configuration updates
+### Check pod restart status
 
-2. Check that all pods have restarted with the new configuration:
+Check that all pods have restarted with the new configuration:
 
 ```bash
 kubectl get pods -n nginx
@@ -52,57 +52,72 @@ kubectl get pods -n nginx
 ```
 
 You should see all pods with recent restart times.
 
 {{% notice Note %}}
-Because pods are ephemeral, btop will need to be reinstalled if the pods are deleted or restarted. If you get an error saying btop is not found, simply rerun the `./nginx_util.sh put btop` command to reinstall it.
+Because pods are ephemeral, btop will need to be reinstalled if the pods are deleted or restarted. If you get an error saying btop is not found, rerun the `./nginx_util.sh put btop` command to reinstall it.
 {{% /notice %}}
 
-### Monitor pod performance
+### Set up real-time performance monitoring
 
-You can now login to any pod and use btop to monitor system performance. There are many variables which may affect an individual workload's performance, btop (like top), is a great first step in understanding those variables.
+You can now log in to any pod and use btop to monitor system performance. There are many variables that can affect an individual workload's performance, and btop (like top) is a great first step in understanding those variables.
 
 {{% notice Note %}}
-When performing load generation tests from your laptop, local system and network settings may interfere with proper load generation between your machine and the remote cluster services. To mitigate these issues, its suggested to install the nginx_util.sh (or whichever tool you wish to use) on a [remote Azure instance](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/azure/) in the same region and zone as your K8s cluster (us-west-2 if you follow these tutorial instructions exactly) for best results. If you aren't seeing at least 70K+ requests/s to either K8s service endpoint, switching to a better located/tuned system is advised.
+When performing load generation tests from your laptop, local system and network settings may interfere with proper load generation between your machine and the remote cluster services. To mitigate these issues, it's suggested to install the `nginx_util.sh` script on a [remote Azure instance](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/azure/) in the same region and zone as your K8s cluster for best results. If you aren't seeing at least 70K requests/s to either K8s service endpoint, switching to a better-located system is advised.
 {{% /notice %}}
 
-Bringing up two btop terminals, one for each pod, is a convenient way to view performance in realtime. To bring up btop on both Arm and Intel pods:
+Running two btop terminals, one for each pod, is a convenient way to view performance in real time.
 
-1. Open a new terminal window or tab.
-2. Within the terminal, run the `login arm` command from the nginx utility script to enter the pod:
+To bring up btop on both Arm and Intel pods:
+
+1. Open two new terminal windows
+2. In one terminal, run `login arm` from the nginx utility script to enter the pod
+3. In the second terminal, run `login intel` from the nginx utility script to enter the pod
+4. Once inside each pod, run btop to see real-time system monitoring
+
+The commands are shown below.
+
+For the Arm terminal:
 
 ```bash
-# Login to AMD pod (replace with intel or arm as needed)
 ./nginx_util.sh login arm
 ```
 
-3. Once inside the pod, run btop to see real-time system monitoring:
+For the Intel terminal:
+
+```bash
+./nginx_util.sh login intel
+```
+
+In both terminals, run:
 
 ```bash
 btop --utf-force
 ```
 
-4. Repeat, from Step 1, but this time, using the `login intel` command.
-You should now see something similar to below, that is, one terminal for each Arm and Intel, running btop:
+You should now see something similar to the image below, with one terminal for each Arm and Intel pod running btop:
 
 ![Project Overview](images/btop_idle.png)
 
-To visualize performance with btop against the Arm and Intel pods via the load balancer service endpoints, you can use the nginx_util.sh wrapper to generate the load two both simultaneoulsy:
+To visualize performance with btop against the Arm and Intel pods via the load balancer service endpoints, you can use the `nginx_util.sh` wrapper to generate load to both simultaneously:
 
 ```bash
 ./nginx_util.sh wrk both
 ```
 
-This runs wrk with predefined setting (1 thread, 50 simultaneous connections) to generate load to the K8s architecture-specific endpoints. While it runs (for a default of 30s), you can observe some performance characteristics from the btop outputs:
+This runs wrk with predefined settings (1 thread, 50 simultaneous connections) to generate load to the K8s architecture-specific endpoints.
+
+While it runs (for a default of 30s), you can observe some performance characteristics from the btop outputs:
 
 ![Project Overview](images/under_load.png)
 
-Of particular interest is memory and CPU resource usage per pod. For Intel, figure 1 shows memory usage for the process, with figure 2 showing total cpu usage. Figures 3 and 4 show us the same metrics, but for Arm.
+Of particular interest is memory and CPU resource usage per pod. For Intel, red marker 1 shows memory usage for the process, and red marker 2 shows total CPU usage.
+
+Red markers 3 and 4 show the same metrics for Arm.
 
 ![Project Overview](images/mem_and_cpu.png)
 
-In addition to the visual metrics, the script also returns runtime results including requests per second, and latencies:
+In addition to the visual metrics, the script also returns runtime results including requests per second and latencies:
 
 ```output
-azureuser@gcohen-locust-1:/tmp/1127$ ./nginx_util.sh wrk both
 Running wrk against both architectures in parallel...
 
 Intel: wrk -t1 -c50 -d30 http://172.193.227.195/
@@ -134,9 +149,9 @@ Transfer/sec: 26.24MB
 Both tests completed
 ```
 
-### Experimenting with wrk
+### Customize load testing parameters
 
-The nginx_util.sh script shows the results of the load generation, as well as the command lines used to generate them.
+The `nginx_util.sh` script shows the results of the load generation, as well as the command lines used to generate them.
 
 ```output
 ...
@@ -146,21 +161,23 @@ ARM: wrk -t1 -c50 -d30 http://20.252.73.72/
 ```
 
-Feel free to experiment increasing/decreasing client threads, connections, and durations to better understand the performance characteristics under different scenarios.
+Feel free to experiment by increasing and decreasing client threads, connections, and durations to better understand the performance characteristics under different scenarios.
 
-For example, to generate load using 500 connections across 4 threads to the Arm service for five minutes (300s), you could use the following commandline:
+For example, to generate load using 500 connections across 4 threads to the Arm service for 5 minutes (300s), you can use the following command:
 
 ```bash
 wrk -t4 -c500 -d300 http://20.252.73.72/
 ```
 
-As mentioned earlier, unless your local system is tuned to handle load generation, you may find better traffic generation results by running on a VM. If aren't seeing at least 70K+ requests/s to either K8s service endpoint when running `wrk`, switching to a better located/tuned system is advised.
-
 ## Next Steps
 
-You learned in this learning path how to run a sample nginx workload on a dual-architecture (Arm and Intel) Azure Kubernetes Service. Once setup, you learned how to generate load with the wrk utility, and monitor runtime metrics with btop. If you wish to continue experimenting with this learning path, some ideas you may wish to explore include:
+You have learned how to run a sample nginx workload on a dual-architecture (Arm and Intel) Azure Kubernetes Service cluster.
+
+You learned how to generate load with the wrk utility and monitor runtime metrics with btop.
+
+Here are some ideas for further exploration:
 
 * What do the performance curves look like between the two architectures as a function of load?
 * How do larger instance types scale versus smaller ones?
 
-Most importantly, you now possess the knowledge needed to begin experimenting with your own workloads on Arm-based AKS nodes to identify performance and efficiency opportunities unique to your own environments.
+You now have the knowledge to experiment with your own workloads on Arm-based AKS nodes to identify performance and efficiency opportunities unique to your own environments.
diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/create-test-utility.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/create-test-utility.md
index 2bb4e063d..f3cb97605 100644
--- a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/create-test-utility.md
+++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/create-test-utility.md
@@ -8,7 +8,7 @@ layout: learningpathall
 
 ## Test utility script
 
-You'll create a utility script to test and manage your nginx services across all architectures. This script will be used throughout the tutorial to test services, apply configurations, and access pods.
+In this section, you'll create a utility script to test and manage your nginx services across both architectures. The script will be used throughout the Learning Path to test services, apply configurations, and access pods.
 
 ### Script functionality
 
@@ -18,18 +18,19 @@ The `nginx_util.sh` script provides three main functions:
 - **`put btop`** - Install btop monitoring tool on all pods
 - **`login intel|arm`** - Interactive bash access to architecture-specific pods
 
-The script conveniently bundles test and logging commands into a single place, making it easy to test, troubleshoot, and view services. You'll use it throughout the tutorial to test services, apply configurations, and access pods across all architectures.
+The script conveniently bundles test and logging commands into a single place, making it easy to test, troubleshoot, and view services.
 
-
-### Create the utility script
+### Download the utility script
 
 {{% notice Note %}}
-The following utility `nginx_util.sh` is provided for convenience.
+The following utility `nginx_util.sh` is provided for your convenience. It's a wrapper for kubectl and other commands, utilizing [curl](https://curl.se/). Make sure you have curl installed before running.
+
+You can click on the link below to review the code before downloading.
 {{% /notice %}}
 
-Copy and paste the following command into a terminal to download and create the `nginx_util.sh` script:
+Copy and paste the following commands into a terminal to download and create the `nginx_util.sh` script:
 
 ```bash
 curl -o nginx_util.sh https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/nginx_util.sh
@@ -42,9 +43,10 @@ In the folder you ran the curl command, you should now see the `nginx_util.sh` s
 ./nginx_util.sh
 ```
 
-The output should include usage instructions:
+The output shows the usage instructions:
+
 ```output
 Invalid first argument. Use 'curl', 'wrk', 'put', or 'login'.
 ```
 
-With it working, you're now ready to deploy nginx to the Intel nodes in the cluster.
\ No newline at end of file
+You're now ready to deploy nginx to the Intel nodes in the cluster.
\ No newline at end of file
diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-arm.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-arm.md
index d3c05a8d6..db0c57b57 100644
--- a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-arm.md
+++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-arm.md
@@ -1,20 +1,20 @@
 ---
-title: Deploy nginx ARM to the cluster
+title: Deploy nginx on Arm
 
 weight: 50
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
 
-## Add ARM deployment and service
+## Add the Arm deployment and service
 
-In this section, you'll add nginx on ARM nodes to your existing cluster, completing your multi-architecture Intel/ARM environment for comprehensive performance comparison.
+In this section, you'll add nginx on Arm to your existing cluster, completing your multi-architecture Intel/Arm environment for comprehensive performance comparison.
 
 When applied, the **arm_nginx.yaml** file creates the following K8s objects:
 
-  - **Deployment** (`nginx-arm-deployment`) - Pulls the multi-architecture nginx image from DockerHub, launches a pod on the ARM node, and mounts the shared ConfigMap as `/etc/nginx/nginx.conf`
+  - **Deployment** (`nginx-arm-deployment`) - Pulls the multi-architecture nginx image from DockerHub, launches a pod on the Arm node, and mounts the shared ConfigMap as `/etc/nginx/nginx.conf`
   - **Service** (`nginx-arm-svc`) - Load balancer targeting pods with both `app: nginx-multiarch` and `arch: arm` labels
 
-Copy and paste the following commands into a terminal to download and apply the ARM deployment and service:
+Copy and paste the following commands into a terminal to download and apply the Arm deployment and service:
 
 ```bash
 curl -o arm_nginx.yaml https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/arm_nginx.yaml
@@ -30,9 +30,9 @@ service/nginx-arm-svc created
 
 ### Examining the deployment configuration
 
-Taking a closer look at the `arm_nginx.yaml` deployment file, you'll see settings optimized for ARM architecture:
+Taking a closer look at the `arm_nginx.yaml` deployment file, you'll see settings optimized for the Arm architecture:
 
-* The `nodeSelector` `kubernetes.io/arch: arm64`. This ensures that the deployment only runs on ARM nodes, utilizing the arm64 version of the nginx container image.
+The `nodeSelector` value of `kubernetes.io/arch: arm64` ensures that the deployment only runs on Arm nodes, utilizing the `arm64` version of the nginx container image.
 
 ```yaml
 spec:
@@ -40,7 +40,7 @@ Taking a closer look at the `arm_nginx.yaml` deployment file, you'll see setting
   nodeSelector:
     kubernetes.io/arch: arm64
 ```
 
-* The service selector uses both `app: nginx-multiarch` and `arch: arm` labels to target only ARM pods. This dual-label approach allows for both architecture-specific and multi-architecture service routing.
+The service selector uses both `app: nginx-multiarch` and `arch: arm` labels to target only Arm pods. This dual-label approach allows for both architecture-specific and multi-architecture service routing.
 
 ```yaml
   selector:
@@ -50,7 +50,7 @@ Taking a closer look at the `arm_nginx.yaml` deployment file, you'll see setting
 
 ### Verify the deployment
 
-1. Get the status of nodes, pods and services by running:
+Get the status of nodes, pods, and services by running:
 
 ```bash
 kubectl get nodes,pods,svc -nnginx
@@ -78,16 +78,18 @@ You can also verify the shared ConfigMap is available:
 kubectl get configmap -nnginx
 ```
 
+The output is similar to:
+
 ```output
 NAME           DATA   AGE
 nginx-config   1      10m
 ```
 
-When the pods show `Running` and the service shows a valid `External IP`, you're ready to test the nginx ARM service.
+When the pods show `Running` and the service shows a valid `External IP`, you're ready to test the nginx Arm service.
 
-### Test the nginx web service on ARM
+### Test the nginx web service on Arm
 
-2. Run the following to make an HTTP request to the ARM nginx service using the script you created earlier:
+Run the following command to make an HTTP request to the Arm nginx service using the script you created earlier:
 
 ```bash
 ./nginx_util.sh curl arm
@@ -107,7 +109,7 @@ Response:
 Served by: nginx-arm-deployment-5bf8df95db-wznff
 ```
 
-If you see output similar to above, you have successfully added ARM nodes to your cluster running nginx.
+If you see similar output, you have successfully added Arm nodes to your cluster running nginx.
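+
+As an optional extra check (a suggestion that isn't part of the utility script; it relies only on the `nginx-arm-deployment` name defined above), you can confirm the pod really landed on an arm64 node by printing the machine architecture from inside the deployment's pod:
+
+```bash
+kubectl exec -nnginx deploy/nginx-arm-deployment -- uname -m
+```
+
+If the pod is scheduled on the Arm node pool, the command prints `aarch64`.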
 
 ### Compare both architectures
 
diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-intel.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-intel.md
index 36807d0f1..f13d707d0 100644
--- a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-intel.md
+++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-intel.md
@@ -1,5 +1,5 @@
 ---
-title: Deploy nginx Intel to the cluster
+title: Deploy nginx on Intel x86
 
 weight: 30
 
 ### FIXED, DO NOT MODIFY
@@ -8,20 +8,20 @@ layout: learningpathall
 
 ## Deployment and service
 
-In this section, you'll add a new namespace, deployment, and service for nginx on Intel. The end result will be a K8s cluster running nginx accessible via the Internet through a load balancer.
+In this section, you'll add a new namespace, deployment, and service for nginx on Intel x86. The result will be a K8s cluster running nginx accessible via the Internet through a load balancer.
 
 To better understand the individual components, the configuration is split into three files:
 
-1. **namespace.yaml** - Creates a new namespace called `nginx`, which contains all your K8s nginx objects
+`namespace.yaml` - Creates a new namespace called `nginx`, which contains all your K8s nginx objects
 
-2. **nginx-configmap.yaml** - Creates a shared ConfigMap (`nginx-config`) containing performance-optimized nginx configuration used by both Intel and ARM deployments
+`nginx-configmap.yaml` - Creates a shared ConfigMap (`nginx-config`) containing performance-optimized nginx configuration used by both Intel and Arm deployments
 
-3. **intel_nginx.yaml** - Creates the following K8s objects:
+`intel_nginx.yaml` - Creates the following K8s objects:
   - **Deployment** (`nginx-intel-deployment`) - Pulls a multi-architecture [nginx image](https://hub.docker.com/_/nginx) from DockerHub, launches a pod on the Intel node, and mounts the shared ConfigMap as `/etc/nginx/nginx.conf`
   - **Service** (`nginx-intel-svc`) - Load balancer targeting pods with both `app: nginx-multiarch` and `arch: intel` labels
 
-The following commands download, create, and apply the namespace, ConfigMap, and Intel nginx deployment and service configuration:
+Run the following commands to download, create, and apply the namespace, ConfigMap, and Intel nginx deployment and service configuration:
 
 ```bash
 curl -o namespace.yaml https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/namespace.yaml
@@ -44,10 +44,15 @@ deployment.apps/nginx-intel-deployment created
 service/nginx-intel-svc created
 ```
 
-### Examining the deployment configuration
-Taking a closer look at the `intel_nginx.yaml` deployment file, you'll see some settings that ensure the deployment runs as we expect on the Intel node:
+### Examine the deployment configuration
 
-* The `nodeSelector` `kubernetes.io/arch: amd64`. This ensures that the deployment only runs on x86_64 nodes, utilizing the amd64 version of the nginx container image.
+Take a closer look at the `intel_nginx.yaml` deployment file and you'll see some settings that ensure the deployment runs on the Intel x86 node.
+
+{{% notice Note %}}
+The `amd64` architecture label represents x86_64 nodes, which can be either AMD or Intel processors. In this tutorial, we're using Intel x64 nodes.
+{{% /notice %}}
+
+The `nodeSelector` value is set to `kubernetes.io/arch: amd64`. This ensures that the deployment only runs on x86_64 nodes, utilizing the amd64 version of the nginx container image.
 
 ```yaml
 spec:
@@ -55,18 +60,14 @@ Taking a closer look at the `intel_nginx.yaml` deployment file, you'll see some
   nodeSelector:
     kubernetes.io/arch: amd64
 ```
 
-{{% notice Note %}}
-The `amd64` architecture label represents x86_64 nodes, which can be either AMD or Intel processors. In this tutorial, we're using Intel x64 nodes.
-{{% /notice %}}
-
-* The A `sessionAffinity` tag, which removes sticky connections to the target pods. This removes persistent connections to the same pod on each request.
+A `sessionAffinity` value of `None` removes sticky connections to the target pods, so requests are not persistently routed to the same pod.
 
 ```yaml
 spec:
   sessionAffinity: None
 ```
 
-* The service selector uses both `app: nginx-multiarch` and `arch: intel` labels to target only Intel pods. This dual-label approach allows for both architecture-specific and multi-architecture service routing.
+The service selector uses both `app: nginx-multiarch` and `arch: intel` labels to target only Intel pods. This dual-label approach allows for both architecture-specific and multi-architecture service routing.
 
 ```yaml
   selector:
@@ -74,13 +75,14 @@ spec:
     arch: intel
 ```
 
-* Since the final goal is running nginx on multiple architectures, the deployment uses the standard nginx image from DockerHub. This image supports multiple architectures, including amd64 (Intel), arm64 (ARM), and others.
+Because the final goal is to run nginx on multiple architectures, the deployment uses the standard nginx image from DockerHub. This image supports multiple architectures, including amd64 (Intel) and arm64 (Arm).
 
 ```yaml
     containers:
     - image: nginx:latest
       name: nginx
 ```
+
 {{% notice Note %}}
 Optionally, you can set the `default Namespace` to `nginx` to simplify future commands by removing the need to specify the `-nnginx` flag each time:
 ```bash
@@ -88,10 +90,11 @@ kubectl config set-context --current --namespace=nginx
 ```
 {{% /notice %}}
 
-### Verify the deployment has completed
-You've deployed the objects, so now it's time to verify everything is running as expected.
+### Verify the deployment is complete
+
+It's time to verify everything is running as expected.
 
-1. Confirm the nodes, pods, and services are running:
+Confirm the nodes, pods, and services are running:
 
 ```bash
 kubectl get nodes,pods,svc -nnginx
@@ -126,7 +129,7 @@ With the pods in a `Ready` state and the service showing a valid `External IP`,
 
 ### Test the Intel service
 
-4. Run the following to make an HTTP request to the Intel nginx service:
+Run the following to make an HTTP request to the Intel nginx service:
 
 ```bash
 ./nginx_util.sh curl intel
@@ -146,6 +149,6 @@ Response:
 Served by: nginx-intel-deployment-758584d5c6-2nhnx
 ```
 
-If you see output similar to above, you've successfully configured your AKS cluster with an Intel node, running an nginx deployment and service with the nginx multi-architecture container image.
+If you see similar output, you've successfully configured your AKS cluster with an Intel node, running an nginx deployment and service with the nginx multi-architecture container image.
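+
+If you want to see the image's multi-architecture support for yourself, and you have a recent Docker CLI installed locally (an assumption; Docker isn't otherwise used in this Learning Path), you can list the platforms published for the image:
+
+```bash
+docker manifest inspect nginx:latest | grep architecture
+```
+
+The output includes one `architecture` entry per supported platform, such as `amd64` and `arm64`.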
diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-multiarch.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-multiarch.md
index 36a874d81..2f6b2695b 100644
--- a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-multiarch.md
+++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-multiarch.md
@@ -1,20 +1,20 @@
 ---
-title: Deploy nginx multiarch service to the cluster
+title: Deploy an nginx multiarch service
 
 weight: 60
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
 
-## Add multiarch service
+## Add a multi-architecture service to your cluster
 
-You now have nginx running on Intel and ARM nodes with architecture-specific services. In this section, you'll create a multiarch service that can route to any available nginx pod regardless of architecture, providing load balancing across all architectures.
+You now have nginx running on Intel and Arm nodes with architecture-specific services. In this section, you'll create a multi-architecture service that can route to any available nginx pod regardless of architecture, providing load balancing across both architectures.
 
 ### Create the multiarch service
 
-The multiarch service targets all pods with the `app: nginx-multiarch` label (all nginx deployments share this label). It uses `sessionAffinity: None` to ensure requests are distributed across all available pods without stickiness, and can route to Intel or ARM pods based on availability and load balancing algorithms.
+The multiarch service targets all pods with the `app: nginx-multiarch` label (all nginx deployments share this label). It uses `sessionAffinity: None` to ensure requests are distributed across all available pods without stickiness, and can route to Intel or Arm pods based on availability and load balancing algorithms.
 
-1. Run the following command to download and apply the multiarch service:
+Run the following commands to download and apply the multiarch service:
 
 ```bash
 curl -sO https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/main/multiarch_nginx.yaml
@@ -27,7 +27,7 @@ You see the following response:
 service/nginx-multiarch-svc created
 ```
 
-2. Get the status of all services by running:
+Next, get the status of all services by running:
 
 ```bash
 kubectl get svc -nnginx
@@ -42,7 +42,7 @@ nginx-intel-svc       LoadBalancer   10.0.226.250   20.80.128.191   80:30080/TCP
 nginx-multiarch-svc   LoadBalancer   10.0.40.169    20.99.208.140   80:30083/TCP   38s
 ```
 
-3. Check which pods the multiarch service can route to:
+Check which pods the multiarch service can route to:
 
 ```bash
 kubectl get endpoints nginx-multiarch-svc -nnginx
@@ -55,9 +55,11 @@ NAME                  ENDPOINTS                       AGE
 nginx-multiarch-svc   10.244.0.21:80,10.244.1.1:80   47s
 ```
 
+You are ready to test the multiarch service.
+
 ### Test the nginx multiarch service
 
-4. Run the following to make HTTP requests to the multiarch nginx service:
+Run the following to make HTTP requests to the multiarch nginx service:
 
 ```bash
 ./nginx_util.sh curl multiarch
@@ -77,7 +79,7 @@ Response:
 Served by: nginx-arm-deployment-5bf8df95db-wznff
 ```
 
-5. Run the command multiple times to see load balancing across architectures:
+Run the command multiple times to see load balancing across architectures:
 
 ```bash
 ./nginx_util.sh curl multiarch
@@ -85,9 +87,9 @@ Served by: nginx-arm-deployment-5bf8df95db-wznff
 ./nginx_util.sh curl multiarch
 ```
 
-The responses will show requests being served by different architecture deployments (intel or arm), demonstrating that the multiarch service distributes load across all available pods.
+The responses will show requests being served by different architecture deployments (Intel or Arm), demonstrating that the multiarch service distributes the load across the available pods.
 
-### Compare architecture-specific vs multiarch routing
+### Compare architecture-specific versus multiarch routing
 
 Now you can compare the behavior:
 
diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/spin_up_aks_cluster.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/spin_up_aks_cluster.md
index 2754e359a..bab71b30f 100644
--- a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/spin_up_aks_cluster.md
+++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/spin_up_aks_cluster.md
@@ -18,9 +18,10 @@ Many developers begin their journey with Arm on K8s by adding Arm nodes to an ex
 2. Leveraging the multi-architectural container image of your existing x64 workload expedites the migration to Arm with minimal deployment modifications.
 3. With both x64 and Arm workloads running in the same cluster, comparing performance across them is simplified.
 
-Even if you don't already have an existing AKS cluster, you're covered, as this learning path will walk you through bringing up an initial AKS environment and nginx workload on x64. From there, you'll add Arm-based nodes running the same exact workload. You'll see how to smoke test it (simple tests to verify functionality), and then performance test it (slightly more involved) to better understand the performance characteristics of each architecture.
+This Learning Path explains how to create an initial AKS environment and install nginx on x64. From there, you'll add Arm-based nodes running the exact same workload. You'll see how to run simple tests to verify functionality, and then run performance tests to better understand the performance characteristics of each architecture.
+
+### Log in to Azure using the Azure CLI
 
-### Login to Azure via azure-cli
 To begin, login to your Azure account using the Azure CLI:
 
 ```bash
 az login
 ```
 
 ### Create the cluster and resource
+
 Once logged in, create the resource group and AKS cluster with two node pools: one with Intel-based nodes (Standard_D2s_v6), and one with Arm-based (Standard_D2ps_v6) nodes.
 
 {{% notice Note %}}
-This tutorial uses the `westus2` region, which supports both Intel and Arm VM sizes. You can choose a different region if you prefer, but ensure it supports both VMs and AKS.
+This tutorial uses the `westus2` region, which supports both Intel and Arm VM sizes. You can choose a different region if you prefer, but ensure it supports both VM types and AKS.
 {{% /notice %}}
 
+Set the environment variables as shown below and run the `az aks` commands on your command line.
 
 ```bash
 # Set environment variables
@@ -107,16 +110,15 @@ A message similar to the following should be displayed:
 
 ```output
 Kubernetes control plane is running at https://nginx-on-a-nginx-on-arm-rg-dd0bfb-eenbox6p.hcp.westus2.azmk8s.io:443
-...
 ```
 
-With the cluster running, verify the node pools are ready (and you're ready to continue to the next chapter), with the following command:
+With the cluster running, verify the node pools are ready with the following command:
 
 ```bash
 kubectl get nodes -o wide
 ```
 
-You should see output similar to this:
+You should see output similar to:
 
 ```output
 NAME                            STATUS   ROLES    AGE    VERSION
 aks-intel-39600573-vmss000002   Ready    <none>   6h8m   v1.32.7
@@ -125,4 +127,4 @@
 ```
 
-With all nodes showing `Ready` status, you're ready to continue to the next chapter.
+With all nodes showing `Ready` status, you're ready to continue to the next section.
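+
+As an optional final check (using only the `kubernetes.io/arch` label that the deployments in this Learning Path use for node selection), you can print each node's architecture as an extra column:
+
+```bash
+kubectl get nodes -L kubernetes.io/arch
+```
+
+The Intel pool reports `amd64` and the Arm pool reports `arm64`.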