File: content/learning-paths/servers-and-cloud-computing/gke-multi-arch-axion/_index.md (11 additions, 11 deletions)

---
title: From x86 to Arm on GKE - Build, Deploy, and Migrate with Google Axion
draft: true
cascade:
  draft: true

minutes_to_complete: 90
who_is_this_for: This is an advanced topic for cloud, platform, and site reliability engineers operating Kubernetes on Google Cloud who need a prescriptive path to build multi-architecture images and migrate services from x86 to Arm using Google Axion processors.
learning_objectives:
  - Prepare Dockerfiles for multi-architecture builds by adding arm64 support
  - Create a dual-architecture GKE Standard cluster with two node pools, amd64 and arm64
  - Build and publish multi-architecture images to Artifact Registry using Docker Buildx, without using QEMU to emulate Arm instructions
  - Deploy a Kubernetes application to amd64 first, then migrate to arm64 using Kustomize overlays and progressive rollout
  - Optionally automate builds and rollouts with Cloud Build and Skaffold

prerequisites:
  - A [Google Cloud account](https://console.cloud.google.com/) with billing enabled
  - A local Linux or macOS machine, or Cloud Shell access, with Docker, the Kubernetes CLI (kubectl), the Google Cloud CLI (gcloud), and Git installed
  - Basic familiarity with Docker, Kubernetes, and gcloud
File: content/learning-paths/servers-and-cloud-computing/gke-multi-arch-axion/cloud-build.md (27 additions, 16 deletions)

---
title: Automate builds and rollout with Cloud Build and Skaffold
weight: 6

### FIXED, DO NOT MODIFY
layout: learningpathall
---

Google [**Cloud Build**](https://cloud.google.com/build/docs/set-up) is a managed CI/CD service that runs your containerized build and deploy steps in isolated runners.

In this section, you'll automate the flow you performed manually: build multi-arch images, deploy to GKE on amd64, then migrate to arm64, and print the app's external IP.

## What does this pipeline do?

The pipeline performs the following steps:

- Authenticates Docker to your Artifact Registry
- Builds and pushes amd64 and arm64 images with Docker Buildx, with QEMU enabled in the runner
- Connects to your GKE cluster
- Applies the amd64 Kustomize overlay, verifies the pods, then applies the arm64 overlay and verifies the pods again
- Prints the frontend-external LoadBalancer IP at the end

{{% notice Tip %}}
Run this from the microservices-demo repository root in Cloud Shell, and make sure you have completed the previous steps.
{{% /notice %}}

## Grant IAM permissions to the Cloud Build service account

Cloud Build runs as a per-project service account: `<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com`. Grant it the minimal roles needed to build, push, log, and interact with GKE.

Grant the required roles:

```bash
# Uses env vars set earlier: PROJECT_ID, REGION, CLUSTER_NAME, GAR
```
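The commands in this block are elided in the diff. As a hedged sketch, role bindings matching the description ("build, push, log, and interact with GKE") might look like the following; the role list and the default values are assumptions, not the learning path's exact commands:

```bash
# Sketch only: role list and defaults are assumptions; verify against
# your pipeline's actual needs before applying.
PROJECT_ID="${PROJECT_ID:-my-project}"            # hypothetical default
PROJECT_NUMBER="${PROJECT_NUMBER:-123456789012}"  # hypothetical default
SA="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com"

# Push images, deploy to GKE, and write build logs.
ROLES=(
  roles/artifactregistry.writer
  roles/container.developer
  roles/logging.logWriter
)

# Print each binding command so it can be reviewed first; replace
# 'echo' with a direct call once the output looks right.
for role in "${ROLES[@]}"; do
  echo gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
    --member="${SA}" --role="${role}"
done
```

`roles/container.developer` is what lets the build steps fetch cluster credentials and apply manifests; narrow or widen the list to match your pipeline.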

Create a `skaffold.yaml` file for Cloud Build. This lets Cloud Build handle image builds and uses Skaffold only to apply the Kustomize overlays.

Create the configuration:

```yaml
# From the repo root (microservices-demo)
```
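The generated file's contents are elided in this diff. A minimal sketch of a Skaffold config that leaves builds to Cloud Build and only applies Kustomize overlays (the schema version and overlay path are assumptions) could look like:

```yaml
# Hypothetical sketch, not the learning path's exact file.
apiVersion: skaffold/v4beta6
kind: Config
metadata:
  name: microservices-demo
# No 'build' section: Cloud Build produces and pushes the images.
manifests:
  kustomize:
    paths:
      - kustomize/overlays/amd64   # assumed overlay path
deploy:
  kubectl: {}
```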

## Create a YAML file for Cloud Build

This pipeline installs Docker with Buildx in the runner, enables QEMU, builds two services as examples (extend as desired), connects to your cluster, deploys to amd64, verifies, migrates to arm64, verifies, and prints the external IP. 

Run the following commands to create the `cloudbuild.yaml` file:

```yaml
cat > cloudbuild.yaml <<'YAML'
```
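The pipeline body is elided here. As a hedged illustration of the steps described above (the builder images are standard Cloud Build builders, but the step ordering, substitutions, and paths are assumptions):

```yaml
# Hypothetical sketch of cloudbuild.yaml; adjust substitutions and paths.
steps:
  # Enable QEMU binfmt handlers so the runner can build arm64 layers
  - name: gcr.io/cloud-builders/docker
    args: [run, --privileged, --rm, tonistiigi/binfmt, --install, all]
  # Create a buildx builder that supports multi-platform builds
  - name: gcr.io/cloud-builders/docker
    args: [buildx, create, --use]
  # Build and push one service for both architectures (repeat per service)
  - name: gcr.io/cloud-builders/docker
    args:
      [buildx, build, --platform, 'linux/amd64,linux/arm64',
       -t, '${_GAR}/frontend:v1', --push, src/frontend]
  # Apply the amd64 overlay against the cluster
  - name: gcr.io/cloud-builders/kubectl
    args: [apply, -k, kustomize/overlays/amd64]
    env:
      - CLOUDSDK_COMPUTE_REGION=${_REGION}
      - CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER}
substitutions:
  _GAR: us-docker.pkg.dev/my-project/my-repo   # assumed
  _REGION: us-central1                         # assumed
  _CLUSTER: gke-multiarch                      # assumed
```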

In production, add one build step per microservice (or a loop) and enable caching.
File: content/learning-paths/servers-and-cloud-computing/gke-multi-arch-axion/gke-build-push.md (38 additions, 26 deletions)

---
title: Provision a dual-architecture GKE cluster and publish images
weight: 4

### FIXED, DO NOT MODIFY
layout: learningpathall
---

You are ready to create a GKE cluster with two node pools (amd64 and arm64), then build and push multi-arch images natively on those node pools.

Each architecture uses its own BuildKit pod, and no QEMU emulation is required.
12
12
13
-
GKE uses **VPC-native (IP aliasing)** and requires **two secondary ranges** on the chosen subnet: one for **Pods** and one for **Services**.
14
-
-**Default VPC:** Skip this step. GKE will create the secondary ranges automatically.
15
-
-**Custom VPC/subnet:** Set variables and add/verify secondary ranges:
13
+
## Networking configuration
14
+
15
+
GKE uses VPC-native (IP aliasing) and requires two secondary ranges on the chosen subnet: one for Pods and one for Services.
16
+
17
+
For the default VPC, GKE creates the secondary ranges automatically.
18
+
19
+
Run the commands below in your terminal, adjusting the environment variables as needed for your account:

```bash
# Set/confirm network variables (adjust to your environment)
```
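The variable definitions are elided in the diff. A hedged sketch of what they might look like (variable names, range names, and CIDRs are assumptions) is:

```bash
# Sketch (assumed names): define network vars and, for a custom subnet,
# attach secondary ranges for Pods and Services.
NETWORK="${NETWORK:-}"            # leave empty on the default VPC
SUBNET="${SUBNET:-}"
PODS_RANGE_NAME="${PODS_RANGE_NAME:-pods}"
SVC_RANGE_NAME="${SVC_RANGE_NAME:-services}"
PODS_CIDR="10.8.0.0/14"           # assumed example CIDR
SVC_CIDR="10.4.0.0/20"            # assumed example CIDR

if [ -n "$NETWORK" ] && [ -n "$SUBNET" ]; then
  # Custom VPC: add the secondary ranges to the subnet
  gcloud compute networks subnets update "$SUBNET" \
    --region "$REGION" \
    --add-secondary-ranges "${PODS_RANGE_NAME}=${PODS_CIDR},${SVC_RANGE_NAME}=${SVC_CIDR}"
else
  echo "Default VPC detected (NETWORK/SUBNET unset); GKE will manage ranges."
fi
```

Because `NETWORK` and `SUBNET` default to empty, the branch is a no-op on the default VPC, which matches the guard described below.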

This approach prevents users on the default VPC from accidentally setting NETWORK/SUBNET variables and passing incorrect flags later.

## Create the GKE cluster

Create a GKE Standard cluster with VPC-native (IP aliasing) enabled and no default node pool. You'll add the amd64 and arm64 pools in the next step.

The command below works for both default and custom VPCs. If the NETWORK, SUBNET, and secondary range variables are unset, GKE uses the default VPC and manages the ranges automatically.

```bash
# Cluster vars (reuses earlier PROJECT_ID/REGION/ZONE)
```
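The cluster-creation command is elided in this diff. A hedged sketch that assembles the flags conditionally (the cluster name and the default-pool cleanup are assumptions; one common pattern for "no default node pool" is deleting it right after creation):

```bash
# Sketch: build the create command, print it for review, then note the
# follow-up that removes the default pool. Names are assumptions.
CLUSTER_NAME="${CLUSTER_NAME:-gke-multiarch}"   # assumed name
REGION="${REGION:-us-central1}"                 # assumed region

create_args=(container clusters create "$CLUSTER_NAME"
             --region "$REGION" --enable-ip-alias --num-nodes 1)
if [ -n "${NETWORK:-}" ] && [ -n "${SUBNET:-}" ]; then
  # Custom VPC only; omitted on the default VPC
  create_args+=(--network "$NETWORK" --subnetwork "$SUBNET")
fi

echo gcloud "${create_args[@]}"
# After creation, drop the default pool so only the explicit
# amd64/arm64 pools remain:
echo gcloud container node-pools delete default-pool \
  --cluster "$CLUSTER_NAME" --region "$REGION" --quiet
```

The commands are printed rather than executed so you can review them; run the printed `gcloud` invocations once they match your environment.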

Now create an x86 (amd64) pool and an Arm (arm64) pool. Use machine types available in your region. The commands below use `c4-standard-*` for x86 and `c4a-standard-*` for Axion:

You should see nodes for both architectures. In zonal clusters (or when a pool has `--num-nodes=1` in a single zone), expect one amd64 and one arm64 node. In regional clusters, `--num-nodes` is per zone, so with three zones you'll see three amd64 and three arm64 nodes.

## Create the Buildx builder on GKE

Now run a BuildKit pod on an amd64 node and another on an arm64 node. Buildx routes each platform's build to the matching pod. These are native builds with no QEMU emulation.
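The builder commands themselves are elided in this excerpt. A hedged sketch of registering one BuildKit pod per architecture with the Buildx `kubernetes` driver (the builder name is an assumption):

```bash
# Sketch: one BuildKit pod per arch, pinned via nodeSelector, using the
# buildx "kubernetes" driver. Print for review; drop 'echo' to execute.
BUILDER="${BUILDER:-gke-native}"   # assumed builder name
for arch in amd64 arm64; do
  append=""
  [ "$arch" = "arm64" ] && append="--append"
  echo docker buildx create $append --name "$BUILDER" \
    --driver kubernetes --platform "linux/$arch" \
    --driver-opt "nodeselector=kubernetes.io/arch=$arch" --bootstrap
done
echo docker buildx use "$BUILDER"
```

The `nodeselector` driver option is what pins each BuildKit pod to the matching node pool, so each platform's layers are built natively.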
File: content/learning-paths/servers-and-cloud-computing/gke-multi-arch-axion/gke-deploy.md (30 additions, 20 deletions)

---
title: Prepare manifests and deploy on GKE
weight: 5

### FIXED, DO NOT MODIFY
layout: learningpathall
---

You'll now configure the application manifests to use your Artifact Registry images and create Kustomize overlays for different CPU architectures. This allows you to deploy the same application to both x86 and Arm node pools.

## Prepare deployment manifests

Replace the sample image references with your Artifact Registry path and tag, then create Kustomize overlays to select nodes by architecture.

### Point base manifests at your images

Replace the image references with your own:

```bash
# Replace the sample repo path with your GAR (from earlier: ${GAR})
```
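The substitution commands are elided in the diff. A hedged sketch of the kind of rewrite described (the `GAR` default, the `sed` pattern, and the `:v1` tag convention are assumptions), demonstrated on a scratch file so the result can be inspected:

```bash
# Sketch: rewrite a sample image reference to your Artifact Registry
# path and tag it :v1. GAR default is hypothetical.
GAR="${GAR:-us-docker.pkg.dev/my-project/my-repo}"

# Demo on a scratch manifest line rather than the real files:
tmp="$(mktemp)"
printf 'image: gcr.io/google-samples/microservices-demo/frontend:v0.10.0\n' > "$tmp"

# Keep the last path segment (the service name), swap the repo and tag.
sed -i.bak -E "s|image: .*/([^/:]+)(:[^ ]*)?$|image: ${GAR}/\1:v1|" "$tmp"
cat "$tmp"
```

Applied across the repository's manifests (for example with a `for f in ...` loop), this points every Deployment at your registry; keep the `.bak` backups until you've verified the result.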

You now have updated manifests that reference your container images and Kustomize overlays that target specific CPU architectures.

## Deploy to the x86 (amd64) pool

Render the amd64 Kustomize overlay (which adds `nodeSelector: kubernetes.io/arch=amd64`) and apply it to the cluster.

Run from the repository root after updating base manifests and setting your kube-context to this cluster:
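The overlay itself isn't shown in this excerpt. A hedged sketch of an amd64 overlay that adds the nodeSelector described above (the path and patch style are assumptions):

```yaml
# kustomize/overlays/amd64/kustomization.yaml (hypothetical path)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - patch: |-
      - op: add
        path: /spec/template/spec/nodeSelector
        value:
          kubernetes.io/arch: amd64
    target:
      kind: Deployment
```

An arm64 overlay would be identical apart from the `kubernetes.io/arch: arm64` value, which is what makes the later migration a one-line difference.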