diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/_index.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/_index.md new file mode 100644 index 000000000..5d940e23d --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/_index.md @@ -0,0 +1,63 @@ +--- +title: Add Arm nodes to your Azure Kubernetes Services cluster using a multi-architecture nginx container image + +minutes_to_complete: 60 + +who_is_this_for: This Learning Path is for developers who want to compare the performance of x64 and arm64 deployments by running nginx on a hybrid Azure Kubernetes Service (AKS) cluster using nginx's multi-architecture container image. Once you've seen how easy it is to add arm64 nodes to an existing cluster, you'll be ready to explore arm64-based nodes for other workloads in your environment. + + +learning_objectives: + - Create a hybrid AKS cluster with x64 and arm64 nodes. + - Deploy nginx's multi-architecture container image, pods, and services to the AKS cluster. + - Smoke test nginx from each architecture in the cluster to verify proper installation. + - Performance test against each architecture in the cluster to better understand performance. + + +prerequisites: + - An [Azure account](https://azure.microsoft.com/en-us/free/). + - A local machine with [jq](https://jqlang.org/download/), [curl](https://curl.se/download.html), [wrk](https://github.com/wg/wrk), [Azure CLI](/install-guides/azure-cli/) and [kubectl](/install-guides/kubectl/) installed. 
+ +author: + - Geremy Cohen + +### Tags +skilllevels: Introductory + +subjects: Containers and Virtualization +cloud_service_providers: Microsoft Azure + +armips: + - Neoverse + +operatingsystems: + - Linux + - macOS + +tools_software_languages: + - nginx + - Web Server + +further_reading: + - resource: + title: nginx - High Performance Load Balancer, Web Server, & Reverse Proxy + link: https://nginx.org/ + type: documentation + - resource: + title: nginx Docker Hub + link: https://hub.docker.com/_/nginx + type: documentation + - resource: + title: Azure Kubernetes Service (AKS) documentation + link: https://docs.microsoft.com/en-us/azure/aks/ + type: documentation + - resource: + title: Learn how to tune Nginx + link: https://learn.arm.com/learning-paths/servers-and-cloud-computing/nginx_tune/ + type: documentation + +### FIXED, DO NOT MODIFY +# ================================================================================ +weight: 1 # _index.md always has weight of 1 to order correctly +layout: "learningpathall" # All files under learning paths have this same wrapper +learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content. +--- diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/apply-configuration.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/apply-configuration.md new file mode 100644 index 000000000..22b15d9af --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/apply-configuration.md @@ -0,0 +1,166 @@ +--- +title: Monitor performance with wrk and btop +weight: 70 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Apply configuration updates + +Now that you have all your nginx deployments running across Intel and ARM architectures, you can monitor performance across each architecture using wrk to generate load and btop to monitor system performance. 
+
{{% notice Note %}}
This tutorial uses wrk to generate load, which is readily available from the apt and brew package managers. [wrk2](https://github.com/giltene/wrk2) is a modern fork of wrk with additional features. wrk was chosen for this tutorial because it is easy to install, but if you prefer to install and use wrk2 (or another HTTP load generator) for your testing, feel free to do so.
{{% /notice %}}

### Apply performance configuration

The `nginx_util.sh` script includes a `put btop` command that will:

- Apply a performance-optimized nginx configuration to all pods
- Install the btop monitoring tool on all pods for system monitoring
- Restart pods with the new configuration

1. Run the following command to apply the configuration updates:

```bash
./nginx_util.sh put btop
```

You will see output similar to the following:

```output
Installing btop on all nginx pods...
Installing btop on nginx-amd-deployment-56b547bb47-vgbjj...
✓ btop installed on nginx-amd-deployment-56b547bb47-vgbjj
Installing btop on nginx-arm-deployment-66cb547ddc9-fgmsd...
✓ btop installed on nginx-arm-deployment-66cb547ddc9-fgmsd
Installing btop on nginx-intel-deployment-6f5bff9667-zdrqc...
✓ btop installed on nginx-intel-deployment-6f5bff9667-zdrqc
✅ btop installed on all pods!
```

### Verify configuration updates

2. Check that all pods have restarted with the new configuration:

```bash
kubectl get pods -n nginx
```

You should see all pods with recent restart times.

{{% notice Note %}}
Because pods are ephemeral, btop will need to be reinstalled if the pods are deleted or restarted. If you get an error saying btop is not found, simply rerun the `./nginx_util.sh put btop` command to reinstall it.
{{% /notice %}}


### Monitor pod performance

You can now log in to any pod and use btop to monitor system performance.
Many variables can affect an individual workload's performance; btop, like top, is a great first step toward understanding them.

{{% notice Note %}}
When performing load generation tests from your laptop, local system and network settings may interfere with proper load generation between your machine and the remote cluster services. To mitigate these issues, it's suggested to run nginx_util.sh (or whichever tool you wish to use) on a [remote Azure instance](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/azure/) in the same region and zone as your K8s cluster (westus2 if you follow these tutorial instructions exactly) for best results. If you aren't seeing at least 70K requests/s to either K8s service endpoint, switching to a better-located or better-tuned system is advised.
{{% /notice %}}

Bringing up two btop terminals, one for each pod, is a convenient way to view performance in real time. To bring up btop on both the Arm and Intel pods:

1. Open a new terminal window or tab.
2. Within the terminal, run the `login arm` command from the nginx utility script to enter the pod:

```bash
# Log in to the Arm pod (replace arm with intel as needed)
./nginx_util.sh login arm
```

3. Once inside the pod, run btop to see real-time system monitoring:

```bash
btop --utf-force
```
4. Repeat from Step 1, this time using the `login intel` command.

You should now see something similar to the following: one terminal each for Arm and Intel, running btop:

![Project Overview](images/btop_idle.png)

To visualize performance with btop against the Arm and Intel pods via the load balancer service endpoints, you can use the nginx_util.sh wrapper to generate load to both simultaneously:

```bash
./nginx_util.sh wrk both
```

This runs wrk with predefined settings (1 thread, 50 simultaneous connections) to generate load against the K8s architecture-specific endpoints.
While it runs (for a default of 30s), you can observe some performance characteristics from the btop outputs:

![Project Overview](images/under_load.png)

Of particular interest is memory and CPU resource usage per pod. For Intel, figure 1 shows memory usage for the process, with figure 2 showing total CPU usage. Figures 3 and 4 show the same metrics for Arm.

![Project Overview](images/mem_and_cpu.png)

In addition to the visual metrics, the script also returns runtime results, including requests per second and latencies:

```output
azureuser@gcohen-locust-1:/tmp/1127$ ./nginx_util.sh wrk both
Running wrk against both architectures in parallel...

Intel: wrk -t1 -c50 -d30 http://172.193.227.195/
ARM: wrk -t1 -c50 -d30 http://20.252.73.72/

========================================

INTEL RESULTS:
Running 30s test @ http://172.193.227.195/
 1 threads and 50 connections
 Thread Stats Avg Stdev Max +/- Stdev
 Latency 752.40us 1.03ms 28.95ms 94.01%
 Req/Sec 84.49k 12.14k 103.08k 73.75%
 2528743 requests in 30.10s, 766.88MB read
Requests/sec: 84010.86
Transfer/sec: 25.48MB

ARM RESULTS:
Running 30s test @ http://20.252.73.72/
 1 threads and 50 connections
 Thread Stats Avg Stdev Max +/- Stdev
 Latency 621.56us 565.90us 19.75ms 95.43%
 Req/Sec 87.54k 10.22k 107.96k 82.39%
 2620567 requests in 30.10s, 789.72MB read
Requests/sec: 87062.21
Transfer/sec: 26.24MB

========================================
Both tests completed
```

### Experimenting with wrk

The nginx_util.sh script shows the results of the load generation, as well as the command lines used to generate them.

```output
...
Intel: wrk -t1 -c50 -d30 http://172.193.227.195/
ARM: wrk -t1 -c50 -d30 http://20.252.73.72/
...
```


Feel free to experiment with increasing or decreasing client threads, connections, and durations to better understand the performance characteristics under different scenarios.
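If you later want to script your own parallel runs without the wrapper, the same effect can be achieved with plain shell job control. The sketch below is illustrative only: the IP addresses are the example service endpoints shown earlier and will differ in your cluster.

```bash
# Illustrative sketch: run wrk against both services at once using shell job control.
# Substitute the EXTERNAL-IP values of your own nginx-intel-svc and nginx-arm-svc.
INTEL_IP=172.193.227.195
ARM_IP=20.252.73.72

wrk -t1 -c50 -d30 "http://${INTEL_IP}/" > intel_results.txt &
wrk -t1 -c50 -d30 "http://${ARM_IP}/" > arm_results.txt &
wait   # block until both background load tests finish

# Print the headline throughput from each run, prefixed with its filename
grep -H 'Requests/sec' intel_results.txt arm_results.txt
```

Running both tests as background jobs and then calling `wait` ensures the two architectures are measured under load at the same time, rather than back to back.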
+

For example, to generate load using 500 connections across 4 threads to the Arm service for five minutes (300s), you could use the following command line:

```bash
wrk -t4 -c500 -d300 http://20.252.73.72/
```

As mentioned earlier, unless your local system is tuned to handle load generation, you may find better traffic generation results by running on a VM. If you aren't seeing at least 70K requests/s to either K8s service endpoint when running `wrk`, switching to a better-located or better-tuned system is advised.

## Next Steps

In this Learning Path, you learned how to run a sample nginx workload on a dual-architecture (Arm and Intel) Azure Kubernetes Service cluster. Once set up, you learned how to generate load with the wrk utility and monitor runtime metrics with btop. If you wish to continue experimenting with this Learning Path, some ideas you may wish to explore include:

* What do the performance curves look like between the two architectures as a function of load?
* How do larger instance types scale versus smaller ones?

Most importantly, you now possess the knowledge needed to begin experimenting with your own workloads on Arm-based AKS nodes to identify performance and efficiency opportunities unique to your own environments. diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/create-test-utility.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/create-test-utility.md new file mode 100644 index 000000000..2bb4e063d --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/create-test-utility.md @@ -0,0 +1,50 @@ +--- +title: Create the test utility +weight: 20 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- +

## Test utility script

You'll create a utility script to test and manage your nginx services across all architectures. This script will be used throughout the tutorial to test services, apply configurations, and access pods.
+
### Script functionality

The `nginx_util.sh` script provides four main functions:

- **`curl intel|arm|multiarch`** - Test nginx services and show which pod served the request
- **`wrk intel|arm|both`** - Generate load against the services using wrk
- **`put btop`** - Install the btop monitoring tool on all pods
- **`login intel|arm`** - Interactive bash access to architecture-specific pods

The script conveniently bundles test and logging commands into a single place, making it easy to test, troubleshoot, and inspect your services. You'll use it throughout the tutorial to test services, apply configurations, and access pods across all architectures.


### Create the utility script

{{% notice Note %}}
The following utility `nginx_util.sh` is provided for convenience.

It's a wrapper for kubectl and other commands, utilizing [curl](https://curl.se/). Make sure you have curl installed before running.
{{% /notice %}}

Copy and paste the following command into a terminal to download and create the `nginx_util.sh` script:

```bash
curl -o nginx_util.sh https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/nginx_util.sh
chmod +x nginx_util.sh
```

In the folder you ran the curl command, you should now see the `nginx_util.sh` script. Test it by running:

```bash
./nginx_util.sh
```

The output should include usage instructions:
```output
Invalid first argument. Use 'curl', 'wrk', 'put', or 'login'.
```

With it working, you're now ready to deploy nginx to the Intel nodes in the cluster.
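If you're curious what a wrapper like this does internally, the core of its `curl` mode is simply resolving a service's external IP with kubectl and then curling it. The sketch below is a hypothetical, simplified equivalent (the service names assume the `nginx-intel-svc` and `nginx-arm-svc` services you'll create in the next sections); the real script adds error handling and more commands.

```bash
#!/bin/bash
# Simplified, illustrative version of "nginx_util.sh curl <arch>".
# Assumes the nginx-<arch>-svc LoadBalancer services exist in the nginx namespace.
ARCH="${1:-intel}"
SVC="nginx-${ARCH}-svc"

# Extract the LoadBalancer's external IP from the service status
IP=$(kubectl get svc "$SVC" -n nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

echo "Using service endpoint $IP for curl on $ARCH service"
curl -s "http://${IP}/"
```

The `jsonpath` output option avoids a jq dependency for this one lookup, though the downloaded script may do it differently.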
\ No newline at end of file diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-arm.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-arm.md new file mode 100644 index 000000000..d3c05a8d6 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-arm.md @@ -0,0 +1,121 @@ +--- +title: Deploy nginx ARM to the cluster +weight: 50 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Add ARM deployment and service + +In this section, you'll add nginx on ARM nodes to your existing cluster, completing your multi-architecture Intel/ARM environment for comprehensive performance comparison. + +When applied, the **arm_nginx.yaml** file creates the following K8s objects: + - **Deployment** (`nginx-arm-deployment`) - Pulls the multi-architecture nginx image from DockerHub, launches a pod on the ARM node, and mounts the shared ConfigMap as `/etc/nginx/nginx.conf` + - **Service** (`nginx-arm-svc`) - Load balancer targeting pods with both `app: nginx-multiarch` and `arch: arm` labels + +Copy and paste the following commands into a terminal to download and apply the ARM deployment and service: + +```bash +curl -o arm_nginx.yaml https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/arm_nginx.yaml +kubectl apply -f arm_nginx.yaml +``` + +You will see output similar to: + +```output +deployment.apps/nginx-arm-deployment created +service/nginx-arm-svc created +``` + +### Examining the deployment configuration + +Taking a closer look at the `arm_nginx.yaml` deployment file, you'll see settings optimized for ARM architecture: + +* The `nodeSelector` `kubernetes.io/arch: arm64`. This ensures that the deployment only runs on ARM nodes, utilizing the arm64 version of the nginx container image. 
+ +```yaml + spec: + nodeSelector: + kubernetes.io/arch: arm64 +``` + +* The service selector uses both `app: nginx-multiarch` and `arch: arm` labels to target only ARM pods. This dual-label approach allows for both architecture-specific and multi-architecture service routing. + +```yaml + selector: + app: nginx-multiarch + arch: arm +``` + +### Verify the deployment + +1. Get the status of nodes, pods and services by running: + +```bash +kubectl get nodes,pods,svc -nnginx +``` + +Your output should be similar to the following, showing two nodes, two pods, and two services: + +```output +NAME STATUS ROLES AGE VERSION +node/aks-arm-56500727-vmss000000 Ready 59m v1.32.7 +node/aks-intel-31372303-vmss000000 Ready 63m v1.32.7 + +NAME READY STATUS RESTARTS AGE +pod/nginx-arm-deployment-5bf8df95db-wznff 1/1 Running 0 36s +pod/nginx-intel-deployment-78bb8885fd-mw24f 1/1 Running 0 9m21s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/nginx-arm-svc LoadBalancer 10.0.241.154 48.192.64.197 80:30082/TCP 36s +service/nginx-intel-svc LoadBalancer 10.0.226.250 20.80.128.191 80:30080/TCP 9m22s +``` + +You can also verify the shared ConfigMap is available: + +```bash +kubectl get configmap -nnginx +``` + +```output +NAME DATA AGE +nginx-config 1 10m +``` + +When the pods show `Running` and the service shows a valid `External IP`, you're ready to test the nginx ARM service. + +### Test the nginx web service on ARM + +2. 
Run the following to make an HTTP request to the ARM nginx service using the script you created earlier: + +```bash +./nginx_util.sh curl arm +``` + +You get back the HTTP response, as well as information about which pod served it: + +```output +Using service endpoint 48.192.64.197 for curl on arm service +Response: +{ + "message": "nginx response", + "timestamp": "2025-10-24T22:04:59+00:00", + "server": "nginx-arm-deployment-5bf8df95db-wznff", + "request_uri": "/" +} +Served by: nginx-arm-deployment-5bf8df95db-wznff +``` + +If you see output similar to above, you have successfully added ARM nodes to your cluster running nginx. + +### Compare both architectures + +Now you can test both architectures and compare their responses: + +```bash +./nginx_util.sh curl intel +./nginx_util.sh curl arm +``` + +Each command will route to its respective architecture-specific service, allowing you to compare performance and verify that your multi-architecture cluster is working correctly. diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-intel.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-intel.md new file mode 100644 index 000000000..36807d0f1 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-intel.md @@ -0,0 +1,151 @@ +--- +title: Deploy nginx Intel to the cluster +weight: 30 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Deployment and service + +In this section, you'll add a new namespace, deployment, and service for nginx on Intel. The end result will be a K8s cluster running nginx accessible via the Internet through a load balancer. + +To better understand the individual components, the configuration is split into three files: + +1. **namespace.yaml** - Creates a new namespace called `nginx`, which contains all your K8s nginx objects + +2. 
**nginx-configmap.yaml** - Creates a shared ConfigMap (`nginx-config`) containing performance-optimized nginx configuration used by both Intel and ARM deployments + +3. **intel_nginx.yaml** - Creates the following K8s objects: + - **Deployment** (`nginx-intel-deployment`) - Pulls a multi-architecture [nginx image](https://hub.docker.com/_/nginx) from DockerHub, launches a pod on the Intel node, and mounts the shared ConfigMap as `/etc/nginx/nginx.conf` + - **Service** (`nginx-intel-svc`) - Load balancer targeting pods with both `app: nginx-multiarch` and `arch: intel` labels + + +The following commands download, create, and apply the namespace, ConfigMap, and Intel nginx deployment and service configuration: + +```bash +curl -o namespace.yaml https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/namespace.yaml +kubectl apply -f namespace.yaml + +curl -o nginx-configmap.yaml https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/nginx-configmap.yaml +kubectl apply -f nginx-configmap.yaml + +curl -o intel_nginx.yaml https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/intel_nginx.yaml +kubectl apply -f intel_nginx.yaml + +``` + +You will see output similar to: + +```output +namespace/nginx created +configmap/nginx-config created +deployment.apps/nginx-intel-deployment created +service/nginx-intel-svc created +``` + +### Examining the deployment configuration +Taking a closer look at the `intel_nginx.yaml` deployment file, you'll see some settings that ensure the deployment runs as we expect on the Intel node: + +* The `nodeSelector` `kubernetes.io/arch: amd64`. This ensures that the deployment only runs on x86_64 nodes, utilizing the amd64 version of the nginx container image. + +```yaml + spec: + nodeSelector: + kubernetes.io/arch: amd64 +``` + +{{% notice Note %}} +The `amd64` architecture label represents x86_64 nodes, which can be either AMD or Intel processors. 
In this tutorial, we're using Intel x64 nodes.
{{% /notice %}}

* A `sessionAffinity: None` setting, which disables sticky sessions so consecutive requests are not pinned to the same pod.

```yaml
spec:
  sessionAffinity: None
```

* The service selector uses both `app: nginx-multiarch` and `arch: intel` labels to target only Intel pods. This dual-label approach allows for both architecture-specific and multi-architecture service routing.

```yaml
  selector:
    app: nginx-multiarch
    arch: intel
```

* Since the final goal is running nginx on multiple architectures, the deployment uses the standard nginx image from DockerHub. This image supports multiple architectures, including amd64 (Intel), arm64 (ARM), and others.

```yaml
      containers:
      - image: nginx:latest
        name: nginx
```
{{% notice Note %}}
Optionally, you can set the default namespace to `nginx` to simplify future commands by removing the need to specify the `-nnginx` flag each time:
```bash
kubectl config set-context --current --namespace=nginx
```
{{% /notice %}}

### Verify the deployment has completed
You've deployed the objects, so now it's time to verify everything is running as expected.

1. 
Confirm the nodes, pods, and services are running:

```bash
kubectl get nodes,pods,svc -nnginx
```

Your output should be similar to the following, showing two nodes, one pod, and one service:

```output
NAME STATUS ROLES AGE VERSION
node/aks-arm-56500727-vmss000000 Ready 50m v1.32.7
node/aks-intel-31372303-vmss000000 Ready 55m v1.32.7

NAME READY STATUS RESTARTS AGE
pod/nginx-intel-deployment-78bb8885fd-mw24f 1/1 Running 0 38s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-intel-svc LoadBalancer 10.0.226.250 20.80.128.191 80:30080/TCP 39s
```

You can also verify the ConfigMap was created:

```bash
kubectl get configmap -nnginx
```

```output
NAME DATA AGE
nginx-config 1 51s
```

With the pods in a `Ready` state and the service showing a valid `External IP`, you're now ready to test the nginx Intel service.

### Test the Intel service

2. Run the following to make an HTTP request to the Intel nginx service:

```bash
./nginx_util.sh curl intel
```

You get back the HTTP response, as well as information about which pod served it:

```output
Using service endpoint 20.3.71.69 for curl on intel service
Response:
{
 "message": "nginx response",
 "timestamp": "2025-10-24T16:49:29+00:00",
 "server": "nginx-intel-deployment-758584d5c6-2nhnx",
 "request_uri": "/"
}
Served by: nginx-intel-deployment-758584d5c6-2nhnx
```

If you see output similar to the above, you've successfully configured your AKS cluster with an Intel node running an nginx deployment and service using the nginx multi-architecture container image.
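As an optional side check (this assumes you have the Docker CLI on your local machine; it doesn't touch the cluster), you can confirm that the `nginx:latest` image really is published for multiple architectures by inspecting its manifest list:

```bash
# List the os/architecture pairs published for the nginx image.
# Requires a local Docker CLI with registry access; does not touch the cluster.
docker manifest inspect nginx:latest \
  | grep -E '"(architecture|os)"'
```

You should see `amd64` and `arm64` among the listed architectures, which is what lets the same deployment YAML run on both node pools.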
+ + diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-multiarch.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-multiarch.md new file mode 100644 index 000000000..36a874d81 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-multiarch.md @@ -0,0 +1,98 @@ +--- +title: Deploy nginx multiarch service to the cluster +weight: 60 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Add multiarch service + +You now have nginx running on Intel and ARM nodes with architecture-specific services. In this section, you'll create a multiarch service that can route to any available nginx pod regardless of architecture, providing load balancing across all architectures. + +### Create the multiarch service + +The multiarch service targets all pods with the `app: nginx-multiarch` label (all nginx deployments share this label). It uses `sessionAffinity: None` to ensure requests are distributed across all available pods without stickiness, and can route to Intel or ARM pods based on availability and load balancing algorithms. + +1. Run the following command to download and apply the multiarch service: + +```bash +curl -sO https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/main/multiarch_nginx.yaml +kubectl apply -f multiarch_nginx.yaml +``` + +You see the following response: + +```output +service/nginx-multiarch-svc created +``` + +2. Get the status of all services by running: + +```bash +kubectl get svc -nnginx +``` + +Your output should be similar to the following, showing three services: + +```output +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +nginx-arm-svc LoadBalancer 10.0.241.154 48.192.64.197 80:30082/TCP 7m52s +nginx-intel-svc LoadBalancer 10.0.226.250 20.80.128.191 80:30080/TCP 16m +nginx-multiarch-svc LoadBalancer 10.0.40.169 20.99.208.140 80:30083/TCP 38s +``` + +3. 
Check which pods the multiarch service can route to: + +```bash +kubectl get endpoints nginx-multiarch-svc -nnginx +``` + +You should see both architecture pods listed as endpoints: + +```output +NAME ENDPOINTS AGE +nginx-multiarch-svc 10.244.0.21:80,10.244.1.1:80 47s +``` + +### Test the nginx multiarch service + +4. Run the following to make HTTP requests to the multiarch nginx service: + +```bash +./nginx_util.sh curl multiarch +``` + +You get back the HTTP response from one of the available pods: + +```output +Using service endpoint 20.99.208.140 for curl on multiarch service +Response: +{ + "message": "nginx response", + "timestamp": "2025-10-24T22:12:23+00:00", + "server": "nginx-arm-deployment-5bf8df95db-wznff", + "request_uri": "/" +} +Served by: nginx-arm-deployment-5bf8df95db-wznff +``` + +5. Run the command multiple times to see load balancing across architectures: + +```bash +./nginx_util.sh curl multiarch +./nginx_util.sh curl multiarch +./nginx_util.sh curl multiarch +``` + +The responses will show requests being served by different architecture deployments (intel or arm), demonstrating that the multiarch service distributes load across all available pods. + +### Compare architecture-specific vs multiarch routing + +Now you can compare the behavior: + +- **Architecture-specific**: `./nginx_util.sh curl intel` always routes to Intel pods +- **Architecture-specific**: `./nginx_util.sh curl arm` always routes to ARM pods +- **Multiarch**: `./nginx_util.sh curl multiarch` routes to any available pod + +This multiarch service provides high availability and load distribution across your entire multi-architecture cluster. 
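To see the distribution rather than one response at a time, you can loop the multiarch check and tally which deployment served each request. This is a quick illustrative sketch built on the utility script and standard text tools:

```bash
# Send 20 requests through the multiarch service and count responses per pod.
# Each response includes a "Served by: <pod-name>" line we can tally.
for i in $(seq 1 20); do
  ./nginx_util.sh curl multiarch
done | grep '^Served by:' | sort | uniq -c
```

A roughly even split between the intel and arm deployment names indicates the service is balancing across both architectures.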
diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/images/btop_idle.png b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/images/btop_idle.png new file mode 100644 index 000000000..cc5d83874 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/images/btop_idle.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/images/mem_and_cpu.png b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/images/mem_and_cpu.png new file mode 100644 index 000000000..d3091fc1f Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/images/mem_and_cpu.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/images/under_load.png b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/images/under_load.png new file mode 100644 index 000000000..66e7bb658 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/images/under_load.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/spin_up_aks_cluster.md b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/spin_up_aks_cluster.md new file mode 100644 index 000000000..2754e359a --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/spin_up_aks_cluster.md @@ -0,0 +1,128 @@ +--- +title: Create the AKS Cluster +weight: 10 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Project Overview + +Arm CPUs are widely used in web server workloads on Kubernetes (k8s). In this Learning Path, you'll learn how to deploy [nginx](https://nginx.org/) on Arm-based CPUs within a heterogeneous (x64 and arm64) K8s cluster on Azure's AKS. 
+ +### Benefits of the multi-architecture approach + +Many developers begin their journey with Arm on K8s by adding Arm nodes to an existing x64-based cluster. This has many advantages: + +1. Since you are already familiar with K8s on x64, you can leverage that knowledge to quickly get the core components up and running. +2. Leveraging the multi-architectural container image of your existing x64 workload expedites the migration to Arm with minimal deployment modifications. +3. With both x64 and Arm workloads running in the same cluster, comparing performance across them is simplified. + +Even if you don't already have an existing AKS cluster, you're covered, as this learning path will walk you through bringing up an initial AKS environment and nginx workload on x64. From there, you'll add Arm-based nodes running the same exact workload. You'll see how to smoke test it (simple tests to verify functionality), and then performance test it (slightly more involved) to better understand the performance characteristics of each architecture. + +### Login to Azure via azure-cli +To begin, login to your Azure account using the Azure CLI: + +```bash +az login +``` + +### Create the cluster and resource +Once logged in, create the resource group and AKS cluster with two node pools: one with Intel-based nodes (Standard_D2s_v6), and one with Arm-based (Standard_D2ps_v6) nodes. + +{{% notice Note %}} +This tutorial uses the `westus2` region, which supports both Intel and Arm VM sizes. You can choose a different region if you prefer, but ensure it supports both VMs and AKS. 
+{{% /notice %}} + + +```bash +# Set environment variables +export RESOURCE_GROUP=nginx-on-arm-rg +export LOCATION=westus2 +export CLUSTER_NAME=nginx-on-arm + +# Create resource group +az group create --name $RESOURCE_GROUP --location $LOCATION + +# Create AKS cluster with Intel node pool in zone 2 +az aks create \ + --resource-group $RESOURCE_GROUP \ + --name $CLUSTER_NAME \ + --location $LOCATION \ + --zones 2 \ + --node-count 1 \ + --node-vm-size Standard_D2s_v6 \ + --nodepool-name intel \ + --generate-ssh-keys + +# Add ARM node pool in zone 2 +az aks nodepool add \ + --resource-group $RESOURCE_GROUP \ + --cluster-name $CLUSTER_NAME \ + --name arm \ + --zones 2 \ + --node-count 1 \ + --node-vm-size Standard_D2ps_v6 + +``` + +Each command returns JSON output. Verify that `"provisioningState": "Succeeded"` appears in each response. + +### Connect to the cluster + +Verify `kubectl` is available by running: + +```bash +kubectl version --client +``` + +The output should look similar to: + +```output +Client Version: v1.34.1 +Kustomize Version: v5.7.1 +``` + +If `kubectl` is installed the version information is printed. If you don't see the version information printed refer to the [Azure CLI](/install-guides/azure-cli) and [kubectl](/install-guides/kubectl/) install guides. + +Next, set up your newly-created K8s cluster credentials using the Azure CLI: + +```bash +az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME +``` + +You should see: + +```output +Merged "nginx-on-arm" as current context in /home/user/.kube/config +``` + +To verify you're connected to the cluster: + +```bash +kubectl cluster-info +``` + +A message similar to the following should be displayed: + +```output +Kubernetes control plane is running at https://nginx-on-a-nginx-on-arm-rg-dd0bfb-eenbox6p.hcp.westus2.azmk8s.io:443 +... 
+
```

With the cluster running, verify the node pools are ready with the following command:

```bash
kubectl get nodes -o wide
```

You should see output similar to this:

```output
NAME STATUS ROLES AGE VERSION
aks-arm-13087205-vmss000002 Ready 6h8m v1.32.7
aks-intel-39600573-vmss000002 Ready 6h8m v1.32.7
```


With all nodes showing `Ready` status, you're ready to continue to the next chapter. diff --git a/download_configmaps.sh b/download_configmaps.sh new file mode 100755 index 000000000..a92d353c0 --- /dev/null +++ b/download_configmaps.sh @@ -0,0 +1,11 @@ +#!/bin/bash +
+# Download nginx_arm configmap
+kubectl get configmap nginx-arm -n nginx -o yaml > nginx_arm_configmap.yaml
+
+# Download nginx_intel configmap
+kubectl get configmap nginx-intel -n nginx -o yaml > nginx_intel_configmap.yaml
+
+echo "Downloaded configmaps:"
+echo "- nginx_arm_configmap.yaml"
+echo "- nginx_intel_configmap.yaml"