
Commit e8bad70

Merge pull request #2516 from jasonrandrews/review
first review of nginx on AKS Learning Path
2 parents eb4de78 + eb2bc77 commit e8bad70

File tree

7 files changed: +126 -99 lines changed


content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/_index.md

Lines changed: 7 additions & 8 deletions
@@ -1,20 +1,20 @@
 ---
-title: Add Arm nodes to your Azure Kubernetes Services cluster using a multi-architecture nginx container image
+title: Build hybrid AKS clusters with Arm nodes and nginx

 draft: true
 cascade:
   draft: true

 minutes_to_complete: 60

-who_is_this_for: This Learning Path is for developers who want to compare the performance of x64 and arm64 deployments by running nginx on a hybrid Azure Kubernetes Service (AKS) cluster using nginx's multi-architecture container image. Once you've seen how easy it is to add arm64 nodes to an existing cluster, you'll be ready to explore arm64-based nodes for other workloads in your environment.
-
+who_is_this_for: This Learning Path is for developers who want to understand nginx performance on x64 and arm64 deployments by running a hybrid Azure Kubernetes Service (AKS) cluster.

 learning_objectives:
-- Create a hybrid AKS cluster with x64 and arm64 nodes.
-- Deploy nginx's multi-architecture container image, pods, and services to the AKS cluster.
-- Smoke test nginx from each architecture in the cluster to verify proper installation.
-- Performance test against each architecture in the cluster to better understand performance.
+- Create a hybrid AKS cluster with x64 and arm64 nodes
+- Deploy nginx's multi-architecture container image, pods, and services to the AKS cluster
+- Smoke test nginx from each architecture in the cluster to verify proper installation
+- Test the performance of each architecture in the cluster
+- Apply the same process to other kubernetes workloads


 prerequisites:
@@ -35,7 +35,6 @@ armips:

 operatingsystems:
 - Linux
-- macOS

 tools_software_languages:
 - nginx

content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/apply-configuration.md

Lines changed: 48 additions & 31 deletions
@@ -6,23 +6,23 @@ weight: 70
 layout: learningpathall
 ---

-## Apply configuration updates
+## Install btop monitoring tool on nginx pods

-Now that you have all your nginx deployments running across Intel and ARM architectures, you can monitor performance across each architecture using wrk to generate load and btop to monitor system performance.
+Now that you have all your nginx deployments running across Intel and Arm architectures, you can monitor performance across each architecture using wrk to generate load and btop to monitor system performance.

 {{% notice Note %}}
-This tutorial uses wrk to generate load, which is readily available on apt and brew package managers. [wrk2](https://github.com/giltene/wrk2) is a modern fork of wrk with additional features. wrk was chosen for this tutorial due to its ease of install, but if you prefer to install and use wrk2 (or other http load generators) for your testing, feel free to do so.
+This tutorial uses [wrk](https://github.com/wg/wrk) to generate load, which is readily available on apt and brew package managers. [wrk2](https://github.com/giltene/wrk2) is a modern fork of wrk with additional features. wrk was chosen for this tutorial due to its ease of installation, but if you prefer to install and use wrk2 (or other http load generators) for your testing, feel free to do so.
 {{% /notice %}}

-### Apply performance configuration
+### Install btop and apply optimized configuration

 The `nginx_util.sh` script includes a `put config` command that will:

 - Apply a performance-optimized nginx configuration to all pods
 - Install btop monitoring tool on all pods for system monitoring
 - Restart pods with the new configuration

-1. Run the following command to apply the configuration updates:
+Run the following command to apply the configuration updates:

 ```bash
 ./nginx_util.sh put btop
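
The diff does not show the internals of `nginx_util.sh`, but the `put btop` step it documents is essentially a loop over the pods in the namespace. A minimal sketch, assuming the `nginx` namespace used in this Learning Path and Debian-based nginx images (the real script may differ):

```bash
# Hypothetical sketch of a "put btop" style helper: install btop in every pod
# in the nginx namespace via kubectl exec. Not the actual nginx_util.sh code.
for pod in $(kubectl get pods -n nginx -o jsonpath='{.items[*].metadata.name}'); do
  echo "Installing btop on ${pod}..."
  kubectl exec -n nginx "${pod}" -- sh -c 'apt-get update -qq && apt-get install -y btop'
done
```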
@@ -41,9 +41,9 @@ Installing btop on nginx-intel-deployment-6f5bff9667-zdrqc...
 ✅ btop installed on all pods!
 ```

-### Verify configuration updates
+### Check pod restart status

-2. Check that all pods have restarted with the new configuration:
+Check that all pods have restarted with the new configuration:

 ```bash
 kubectl get pods -n nginx
@@ -52,57 +52,72 @@ kubectl get pods -n nginx
 You should see all pods with recent restart times.

 {{% notice Note %}}
-Because pods are ephemeral, btop will need to be reinstalled if the pods are deleted or restarted. If you get an error saying btop is not found, simply rerun the `./nginx_util.sh put btop` command to reinstall it.
+Because pods are ephemeral, btop will need to be reinstalled if the pods are deleted or restarted. If you get an error saying btop is not found, rerun the `./nginx_util.sh put btop` command to reinstall it.
 {{% /notice %}}


-### Monitor pod performance
+### Set up real-time performance monitoring

-You can now login to any pod and use btop to monitor system performance. There are many variables which may affect an individual workload's performance, btop (like top), is a great first step in understanding those variables.
+You can now log in to any pod and use btop to monitor system performance. There are many variables that can affect an individual workload's performance, and btop (like top) is a great first step in understanding those variables.

 {{% notice Note %}}
-When performing load generation tests from your laptop, local system and network settings may interfere with proper load generation between your machine and the remote cluster services. To mitigate these issues, its suggested to install the nginx_util.sh (or whichever tool you wish to use) on a [remote Azure instance](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/azure/) in the same region and zone as your K8s cluster (us-west-2 if you follow these tutorial instructions exactly) for best results. If you aren't seeing at least 70K+ requests/s to either K8s service endpoint, switching to a better located/tuned system is advised.
+When performing load generation tests from your laptop, local system and network settings may interfere with proper load generation between your machine and the remote cluster services. To mitigate these issues, it's suggested to install the `nginx_util.sh` script on a [remote Azure instance](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/azure/) in the same region and zone as your K8s cluster for best results. If you aren't seeing at least 70K+ requests/s to either K8s service endpoint, switching to a better located system is advised.
 {{% /notice %}}

-Bringing up two btop terminals, one for each pod, is a convenient way to view performance in realtime. To bring up btop on both Arm and Intel pods:
+Running two btop terminals, one for each pod, is a convenient way to view performance in real time.

-1. Open a new terminal window or tab.
-2. Within the terminal, run the `login arm` command from the nginx utility script to enter the pod:
+To bring up btop on both Arm and Intel pods:
+
+1. Open two new terminal windows
+2. In one terminal, run `login arm` from the nginx utility script to enter the pod
+3. In the second terminal, run `login intel` from the nginx utility script to enter the pod
+4. Once inside each pod, run btop to see real-time system monitoring
+
+The commands are shown below.
+
+For the Arm terminal:

 ```bash
-# Login to AMD pod (replace with intel or arm as needed)
 ./nginx_util.sh login arm
 ```

-3. Once inside the pod, run btop to see real-time system monitoring:
+For the Intel terminal:
+
+```bash
+./nginx_util.sh login intel
+```
+
+In both terminals run:

 ```bash
 btop --utf-force
 ```
-4. Repeat, from Step 1, but this time, using the `login intel` command.

-You should now see something similar to below, that is, one terminal for each Arm and Intel, running btop:
+You should now see something similar to the image below, with one terminal for each Arm and Intel pod running btop:

 ![Project Overview](images/btop_idle.png)

-To visualize performance with btop against the Arm and Intel pods via the load balancer service endpoints, you can use the nginx_util.sh wrapper to generate the load two both simultaneoulsy:
+To visualize performance with btop against the Arm and Intel pods via the load balancer service endpoints, you can use the `nginx_util.sh` wrapper to generate load to both simultaneously:

 ```bash
 ./nginx_util.sh wrk both
 ```

-This runs wrk with predefined setting (1 thread, 50 simultaneous connections) to generate load to the K8s architecture-specific endpoints. While it runs (for a default of 30s), you can observe some performance characteristics from the btop outputs:
+This runs wrk with predefined settings (1 thread, 50 simultaneous connections) to generate load to the K8s architecture-specific endpoints.
+
+While it runs (for a default of 30s), you can observe some performance characteristics from the btop outputs:

 ![Project Overview](images/under_load.png)

-Of particular interest is memory and CPU resource usage per pod. For Intel, figure 1 shows memory usage for the process, with figure 2 showing total cpu usage. Figures 3 and 4 show us the same metrics, but for Arm.
+Of particular interest is memory and CPU resource usage per pod. For Intel, red marker 1 shows memory usage for the process, and red marker 2 shows total CPU usage.
+
+Red markers 3 and 4 show the same metrics for Arm.

 ![Project Overview](images/mem_and_cpu.png)

-In addition to the visual metrics, the script also returns runtime results including requests per second, and latencies:
+In addition to the visual metrics, the script also returns runtime results including requests per second and latencies:

 ```output
-azureuser@gcohen-locust-1:/tmp/1127$ ./nginx_util.sh wrk both
 Running wrk against both architectures in parallel...

 Intel: wrk -t1 -c50 -d30 http://172.193.227.195/
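
The `wrk both` subcommand documented above is, in effect, two wrk runs launched in parallel against the two service endpoints. A minimal sketch using the command lines reported in the output (an illustration only; substitute the External IPs of your own services):

```bash
# Run wrk against both service endpoints in parallel and wait for both to finish.
# The IPs below are the example endpoints shown in the output above.
wrk -t1 -c50 -d30 http://172.193.227.195/ > intel_results.txt &
wrk -t1 -c50 -d30 http://20.252.73.72/ > arm_results.txt &
wait
cat intel_results.txt arm_results.txt
```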
@@ -134,9 +149,9 @@ Transfer/sec: 26.24MB
 Both tests completed
 ```

-### Experimenting with wrk
+### Customize load testing parameters

-The nginx_util.sh script shows the results of the load generation, as well as the command lines used to generate them.
+The `nginx_util.sh` script shows the results of the load generation, as well as the command lines used to generate them.

 ```output
 ...
@@ -146,21 +161,23 @@ ARM: wrk -t1 -c50 -d30 http://20.252.73.72/
 ```


-Feel free to experiment increasing/decreasing client threads, connections, and durations to better understand the performance characteristics under different scenarios.
+Feel free to experiment with by increasing and decreasing client threads, connections, and durations to better understand the performance characteristics under different scenarios.

-For example, to generate load using 500 connections across 4 threads to the Arm service for five minutes (300s), you could use the following commandline:
+For example, to generate load using 500 connections across 4 threads to the Arm service for 5 minutes (300s), you can use the following command:

 ```bash
 wrk -t4 -c500 -d300 http://20.252.73.72/
 ```

-As mentioned earlier, unless your local system is tuned to handle load generation, you may find better traffic generation results by running on a VM. If aren't seeing at least 70K+ requests/s to either K8s service endpoint when running `wrk`, switching to a better located/tuned system is advised.
-
 ## Next Steps

-You learned in this learning path how to run a sample nginx workload on a dual-architecture (Arm and Intel) Azure Kubernetes Service. Once setup, you learned how to generate load with the wrk utility, and monitor runtime metrics with btop. If you wish to continue experimenting with this learning path, some ideas you may wish to explore include:
+You have learned how to run a sample nginx workload on a dual-architecture (Arm and Intel) Azure Kubernetes Service.
+
+You learned how to generate load with the wrk utility and monitor runtime metrics with btop.
+
+Here are some ideas for further exploration:

 * What do the performance curves look like between the two architectures as a function of load?
 * How do larger instance types scale versus smaller ones?

-Most importantly, you now possess the knowledge needed to begin experimenting with your own workloads on Arm-based AKS nodes to identify performance and efficiency opportunities unique to your own environments.
+You now have the knowledge to experiment with your own workloads on Arm-based AKS nodes to identify performance and efficiency opportunities unique to your own environments.
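
For the first exploration idea above (performance as a function of load), a simple sweep over connection counts is one way to collect data points. A sketch, assuming the same example endpoints used earlier in this page:

```bash
# Hypothetical load sweep: increase connection counts and record results for
# each architecture. Replace the URLs with your own service External IPs.
for conns in 50 100 200 400 800; do
  echo "== ${conns} connections =="
  echo "-- Intel --"; wrk -t4 -c"${conns}" -d60 http://172.193.227.195/
  echo "-- Arm --";   wrk -t4 -c"${conns}" -d60 http://20.252.73.72/
done
```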

content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/create-test-utility.md

Lines changed: 10 additions & 8 deletions
@@ -8,7 +8,7 @@ layout: learningpathall

 ## Test utility script

-You'll create a utility script to test and manage your nginx services across all architectures. This script will be used throughout the tutorial to test services, apply configurations, and access pods.
+In this section, you'll create a utility script to test and manage your nginx services across both architectures. The script will be used throughout the Learning Path to test services, apply configurations, and access pods.

 ### Script functionality

@@ -18,18 +18,19 @@ The `nginx_util.sh` script provides three main functions:
 - **`put btop`** - Install btop monitoring tool on all pods
 - **`login intel|arm`** - Interactive bash access to architecture-specific pods

-The script conveniently bundles test and logging commands into a single place, making it easy to test, troubleshoot, and view services. You'll use it throughout the tutorial to test services, apply configurations, and access pods across all architectures.
+The script conveniently bundles test and logging commands into a single place, making it easy to test, troubleshoot, and view services.

-
-### Create the utility script
+### Download the utility script

 {{% notice Note %}}
-The following utility `nginx_util.sh` is provided for convenience.
+The following utility `nginx_util.sh` is provided for your convenience.

 It's a wrapper for kubectl and other commands, utilizing [curl](https://curl.se/). Make sure you have curl installed before running.
+
+You can click on the link below to review the code before downloading.
 {{% /notice %}}

-Copy and paste the following command into a terminal to download and create the `nginx_util.sh` script:
+Copy and paste the following commands into a terminal to download and create the `nginx_util.sh` script:

 ```bash
 curl -o nginx_util.sh https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/nginx_util.sh
@@ -42,9 +43,10 @@ In the folder you ran the curl command, you should now see the `nginx_util.sh` s
 ./nginx_util.sh
 ```

-The output should include usage instructions:
+The output presents the usage instructions:
+
 ```output
 Invalid first argument. Use 'curl', 'wrk', 'put', or 'login'.
 ```

-With it working, you're now ready to deploy nginx to the Intel nodes in the cluster.
+You're now ready to deploy nginx to the Intel nodes in the cluster.
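
Based on the subcommands documented in this page and elsewhere in the Learning Path, typical invocations of the utility look like the following (exact flags and behavior depend on the downloaded script):

```bash
./nginx_util.sh curl arm     # one-off HTTP request to the Arm service
./nginx_util.sh wrk both     # load test both services with wrk
./nginx_util.sh put btop     # install btop on all pods
./nginx_util.sh login intel  # open an interactive shell in the Intel pod
```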

content/learning-paths/servers-and-cloud-computing/multiarch_nginx_on_aks/deploy-arm.md

Lines changed: 15 additions & 13 deletions
@@ -1,20 +1,20 @@
 ---
-title: Deploy nginx ARM to the cluster
+title: Deploy nginx on Arm
 weight: 50

 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---

-## Add ARM deployment and service
+## Add the Arm deployment and service

-In this section, you'll add nginx on ARM nodes to your existing cluster, completing your multi-architecture Intel/ARM environment for comprehensive performance comparison.
+In this section, you'll add nginx on Arm to your existing cluster, completing your multi-architecture Intel/Arm environment for comprehensive performance comparison.

 When applied, the **arm_nginx.yaml** file creates the following K8s objects:
-- **Deployment** (`nginx-arm-deployment`) - Pulls the multi-architecture nginx image from DockerHub, launches a pod on the ARM node, and mounts the shared ConfigMap as `/etc/nginx/nginx.conf`
+- **Deployment** (`nginx-arm-deployment`) - Pulls the multi-architecture nginx image from DockerHub, launches a pod on the Arm node, and mounts the shared ConfigMap as `/etc/nginx/nginx.conf`
 - **Service** (`nginx-arm-svc`) - Load balancer targeting pods with both `app: nginx-multiarch` and `arch: arm` labels

-Copy and paste the following commands into a terminal to download and apply the ARM deployment and service:
+Copy and paste the following commands into a terminal to download and apply the Arm deployment and service:

 ```bash
 curl -o arm_nginx.yaml https://raw.githubusercontent.com/geremyCohen/nginxOnAKS/refs/heads/main/arm_nginx.yaml
@@ -30,17 +30,17 @@ service/nginx-arm-svc created

 ### Examining the deployment configuration

-Taking a closer look at the `arm_nginx.yaml` deployment file, you'll see settings optimized for ARM architecture:
+Taking a closer look at the `arm_nginx.yaml` deployment file, you'll see settings optimized for the Arm architecture:

-* The `nodeSelector` `kubernetes.io/arch: arm64`. This ensures that the deployment only runs on ARM nodes, utilizing the arm64 version of the nginx container image.
+The `nodeSelector` value of `kubernetes.io/arch: arm64` ensures that the deployment only runs on Arm nodes, utilizing the `arm64` version of the nginx container image.

 ```yaml
 spec:
   nodeSelector:
     kubernetes.io/arch: arm64
 ```

-* The service selector uses both `app: nginx-multiarch` and `arch: arm` labels to target only ARM pods. This dual-label approach allows for both architecture-specific and multi-architecture service routing.
+The service selector uses both `app: nginx-multiarch` and `arch: arm` labels to target only Arm pods. This dual-label approach allows for both architecture-specific and multi-architecture service routing.

 ```yaml
 selector:
@@ -50,7 +50,7 @@ Taking a closer look at the `arm_nginx.yaml` deployment file, you'll see setting

 ### Verify the deployment

-1. Get the status of nodes, pods and services by running:
+Get the status of nodes, pods and services by running:

 ```bash
 kubectl get nodes,pods,svc -nnginx
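
Beyond the combined status command above, two quick checks confirm that the scheduler honored the nodeSelector and labels described earlier (this assumes the standard `kubernetes.io/arch` node label and the `arch: arm` pod label used in this Learning Path):

```bash
# Show each node's CPU architecture as an extra column.
kubectl get nodes -L kubernetes.io/arch

# Show the Arm pods and the nodes they were scheduled on.
kubectl get pods -n nginx -o wide -l arch=arm
```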
@@ -78,16 +78,18 @@ You can also verify the shared ConfigMap is available:
 kubectl get configmap -nnginx
 ```

+The output is similar to:
+
 ```output
 NAME DATA AGE
 nginx-config 1 10m
 ```

-When the pods show `Running` and the service shows a valid `External IP`, you're ready to test the nginx ARM service.
+When the pods show `Running` and the service shows a valid `External IP`, you're ready to test the nginx Arm service.

-### Test the nginx web service on ARM
+### Test the nginx web service on Arm

-2. Run the following to make an HTTP request to the ARM nginx service using the script you created earlier:
+Run the following command to make an HTTP request to the Arm nginx service using the script you created earlier:

 ```bash
 ./nginx_util.sh curl arm
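
A quick way to compare both architectures side by side, as the following section suggests, is to hit each service in turn and compare the `Served by` line in the responses. This assumes the script accepts `intel` for `curl` the same way it does for `login`:

```bash
# Request each architecture-specific service and compare the responding pods.
./nginx_util.sh curl arm
./nginx_util.sh curl intel
```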
@@ -107,7 +109,7 @@ Response:
 Served by: nginx-arm-deployment-5bf8df95db-wznff
 ```

-If you see output similar to above, you have successfully added ARM nodes to your cluster running nginx.
+If you see similar output, you have successfully added Arm nodes to your cluster running nginx.

 ### Compare both architectures
