---
draft: true
cascade:
draft: true
title: Migrate x86 workloads to Arm on Google Kubernetes Engine with Axion processors

minutes_to_complete: 90

who_is_this_for: This is an advanced topic for cloud, platform, and site reliability engineers who operate Kubernetes on Google Cloud and need to build multi-architecture images and migrate services from x86 to Arm using Google Axion processors.

learning_objectives:
- Prepare Dockerfiles for multi-architecture builds by adding arm64 support
- Create a dual-architecture GKE standard cluster with amd64 and arm64 node pools
- Build and publish multi-architecture images to Artifact Registry using Docker Buildx
- Deploy a Kubernetes application on amd64, then migrate to arm64 using Kustomize overlays
- Automate builds and rollouts with Cloud Build and Skaffold

prerequisites:
- A [Google Cloud account](https://console.cloud.google.com/) with billing enabled
- A local Linux or macOS computer with Docker, Kubernetes CLI (kubectl), Google Cloud CLI (gcloud), and Git installed, or access to Google Cloud Shell
- Basic familiarity with Docker, Kubernetes, and gcloud

author:
tools_software_languages:

further_reading:
- resource:
title: Google Kubernetes Engine documentation
link: https://cloud.google.com/kubernetes-engine/docs
type: documentation
- resource:
title: Create standard clusters and node pools with Arm nodes
link: https://cloud.google.com/kubernetes-engine/docs/how-to/create-arm-clusters-nodes
type: documentation

---

---
# User change
title: "Explore the benefits of migrating microservices to Arm on GKE"

weight: 2

# Do not modify these elements
layout: "learningpathall"
---

## Overview

This Learning Path shows you how to migrate a microservices application from x86 to Arm on Google Kubernetes Engine (GKE) using multi-architecture container images. You'll work with Google's Online Boutique, a sample application built with multiple programming languages. The migration requires no code changes, making it a straightforward example of moving to Arm-based Google Axion processors.


## Why use Google Axion processors for GKE?

Google Axion processors bring modern Arm-based compute to GKE. You get strong price-performance and energy efficiency for cloud-native, scale-out services. With multi-architecture images and mixed node pools, you can migrate services from x86 to Arm gradually, with no major code changes.

## What is Google Axion?

[Google Axion](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu) is Google Cloud's Arm-based CPU family, built on Arm Neoverse and designed for general-purpose, cloud-native services and CPU-based AI. You can deploy it for workloads such as web apps and web servers, containerized microservices, open-source databases, in-memory caches, data analytics, media processing, and CPU-based AI inference and data processing. On GKE, you can use Axion through the C4A and N4A VM families, which pair with Google's Titanium offloads to free CPU cycles for application work.

## Why migrate to Arm on GKE?
There are three clear benefits to consider when migrating to Arm on GKE:

- Price-performance: you can run more workload per unit of cost, which is particularly valuable for scale-out services that need to handle increasing traffic efficiently.
- Energy efficiency: you reduce power usage for always-on microservices, lowering both operational costs and environmental impact.
- Compatibility: you can migrate containerized applications with build and deploy changes only—no code rewrites are required, making the transition straightforward.

## Learn about the Online Boutique sample application

[Online Boutique](https://github.com/GoogleCloudPlatform/microservices-demo) is a polyglot microservices storefront, complete with shopping cart, checkout, catalog, ads, and recommendations. It's implemented in Go, Java, Python, .NET, and Node.js, with ready-to-use Dockerfiles and Kubernetes manifests. It's a realistic example for demonstrating an x86 to Arm migration with minimal code changes.
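
If you want to browse the services before you begin, you can clone the repository now; the Learning Path also walks you through getting the code as part of the setup, so treat this as an optional preview:

```bash
# Clone the Online Boutique sample application and move into the repository root.
git clone https://github.com/GoogleCloudPlatform/microservices-demo.git
cd microservices-demo
```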

## A pragmatic path to multi-architecture on GKE

This Learning Path demonstrates a practical migration approach using Docker Buildx with a Kubernetes driver. Your builds run natively inside BuildKit pods on GKE node pools—no QEMU emulation needed. You'll add an Arm node pool alongside your existing x86 nodes, then use node selectors and affinity rules to control where services run. This lets you migrate safely, one service at a time.
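
To make this concrete, here is a minimal sketch of a Buildx builder that uses the Kubernetes driver, with one BuildKit node per architecture pinned by a node selector. The builder name and namespace are placeholders, not the exact values used later in this Learning Path:

```bash
# Sketch only: builder name and namespace are placeholders.
# One BuildKit node scheduled onto amd64 nodes...
docker buildx create --name gke-builder --driver kubernetes \
  --driver-opt namespace=buildkit \
  --driver-opt nodeselector=kubernetes.io/arch=amd64 \
  --platform linux/amd64

# ...and a second BuildKit node appended to the same builder, scheduled onto arm64 (Axion) nodes.
docker buildx create --append --name gke-builder --driver kubernetes \
  --driver-opt namespace=buildkit \
  --driver-opt nodeselector=kubernetes.io/arch=arm64 \
  --platform linux/arm64

# A single build can then produce and push both architectures natively, for example:
# docker buildx build --builder gke-builder --platform linux/amd64,linux/arm64 -t <image> --push .
```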

## How this Learning Path demonstrates migration

You'll migrate the Online Boutique application from x86 to Arm step by step. You'll build multi-architecture container images and use mixed node pools, so you can test each service on Arm before you fully commit to the migration.

The migration process involves these steps:

- Open Google Cloud Shell and set up the environment variables (a sketch of these variables follows this list).
- Enable required APIs, create an Artifact Registry repository, and authenticate Docker.
- Create a GKE Standard cluster with an amd64 node pool and add an arm64 (Axion-based C4A) node pool.
- Create a Buildx (Kubernetes driver) builder that targets both pools, then build and push multi-architecture images (amd64 and arm64) natively using BuildKit pods.
- Deploy to amd64 first (Kustomize overlay), validate, then migrate to arm64 (overlay) and verify.
- Automate builds and rollouts with Cloud Build and Skaffold.
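
As a point of reference, the later sections reuse a small set of shell variables. The values below are placeholders and the Artifact Registry repository name is an assumption; set them to match your own project when you reach the setup step:

```bash
# Placeholder values: replace with your own project, region, and zone.
export PROJECT_ID="my-gcp-project"
export REGION="us-central1"
export ZONE="us-central1-a"

# Artifact Registry Docker path used to tag and push images (repository name is an assumption).
export GAR="${REGION}-docker.pkg.dev/${PROJECT_ID}/boutique-repo"
```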
---
title: Automate builds and rollout with Cloud Build and Skaffold
weight: 7

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Automate the deployment with Cloud Build

Google [Cloud Build](https://cloud.google.com/build/docs/set-up) is a managed CI/CD service that runs your containerized build and deploy steps in isolated runners.

In this section, you automate the flow you performed manually: build multi-arch images, deploy to GKE on amd64, then migrate to arm64, and print the app's external IP.

## What does this pipeline do?

The pipeline performs the following steps:
- Connects to your GKE cluster
- Applies the amd64 Kustomize overlay, verifies pods, then applies the arm64 overlay and verifies pods again
- Prints the frontend-external LoadBalancer IP at the end

{{% notice Tip %}}
Run the commands in this section from the `microservices-demo` repository root in Cloud Shell. Make sure you've completed the previous steps.
{{% /notice %}}

## Grant IAM permission to the Cloud Build service account
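
As a hedged illustration of what this step grants, the default Cloud Build service account needs permission to deploy to your GKE cluster; one common form of the binding uses the `roles/container.developer` role. The exact role and commands used by this Learning Path are shown in the step itself:

```bash
# Sketch only: the role below is one common choice for GKE deploy access.
PROJECT_NUMBER=$(gcloud projects describe "${PROJECT_ID}" --format='value(projectNumber)')

# Allow the default Cloud Build service account to deploy workloads to GKE.
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role="roles/container.developer"
```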

This pipeline installs Docker with Buildx in the runner, enables QEMU, builds two services as examples (extend as desired), connects to your cluster, deploys to amd64, verifies, migrates to arm64, verifies, and prints the external IP. 

Run the commands to create the `cloudbuild.yaml` file:

```yaml
cat > cloudbuild.yaml <<'YAML'
Expand Down Expand Up @@ -263,3 +264,33 @@ Open http://<EXTERNAL-IP> in your browser.
```
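
With `cloudbuild.yaml` in place, you submit the pipeline from the repository root. A minimal sketch of the submission, assuming the file name used above:

```bash
# Run the pipeline on Cloud Build using the configuration created above.
gcloud builds submit --config cloudbuild.yaml .
```

When the build completes, the final step prints the frontend's external IP.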

Open the URL to load the storefront and confirm the full build, deploy, and migrate flow is automated.

## What you've accomplished

Congratulations! You've successfully automated the entire build, deploy, and migration workflow using Cloud Build and Skaffold. Your multi-architecture application runs natively on Arm-powered GKE nodes, and you can deploy updates automatically with a single command.

You've learned how to:

- Update Dockerfiles to support native builds on both amd64 and arm64 architectures
- Create a dual-architecture GKE cluster with separate node pools for each platform
- Build multi-architecture container images using Docker Buildx with native BuildKit pods
- Deploy applications to amd64 nodes, then migrate them to arm64 nodes using Kustomize overlays
- Automate the entire workflow with Cloud Build and Skaffold for continuous deployment

## What's next

Now that you have a working multi-architecture deployment pipeline, you can explore these next steps:

- Optimize for Arm performance: profile your services on arm64 nodes to identify optimization opportunities. Arm Neoverse processors offer different performance characteristics than x86, so you might discover new ways to improve throughput or reduce latency.

- Expand your migration: add build steps for the remaining Online Boutique services. You can extend the `cloudbuild.yaml` file to build all services, not just the two examples provided.

- Implement progressive rollouts: use Skaffold profiles and Cloud Build triggers to set up canary deployments or blue-green deployments across architectures. This lets you test changes on a subset of traffic before rolling out to all users.

- Monitor architecture-specific metrics: set up monitoring dashboards in Cloud Monitoring to compare performance, resource usage, and cost between amd64 and arm64 deployments. This data helps you make informed decisions about your migration strategy.

- Explore cost optimization: review your GKE cluster costs and consider rightsizing your node pools. Arm-based C4A instances often provide better price-performance for cloud-native workloads, so you might reduce costs while maintaining or improving performance.




---
title: Build and deploy multi-architecture images on GKE
weight: 5

### FIXED, DO NOT MODIFY
layout: learningpathall
---

## Create a cluster and build the multi-arch images

You are ready to create a GKE cluster with two node pools (amd64 and arm64), then build and push multi-arch images natively on those node pools.

Each architecture uses its own BuildKit pod, and no QEMU emulation is required.

Create a GKE Standard cluster with VPC-native (IP aliasing) enabled and no default node pool. You'll add amd64 and arm64 pools in the next step.

The command below works for both default and custom VPCs. If the NETWORK, SUBNET, and secondary range variables are unset, GKE uses the default VPC and manages ranges automatically:

```bash
# Cluster vars (reuses earlier PROJECT_ID/REGION/ZONE)
```
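
The full cluster and node pool commands are shown in this section. As a hedged sketch of the Arm side, adding an Axion-based pool looks roughly like the following; the cluster variable, pool name, machine type, and node count here are illustrative assumptions:

```bash
# Sketch only: CLUSTER_NAME, pool name, machine type, and node count are illustrative.
gcloud container node-pools create arm64-pool \
  --cluster "${CLUSTER_NAME}" \
  --location "${ZONE}" \
  --machine-type c4a-standard-4 \
  --num-nodes 2
# GKE labels these nodes kubernetes.io/arch=arm64 automatically, which the overlays use later.
```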
---
title: Prepare manifests and deploy on GKE
weight: 6

### FIXED, DO NOT MODIFY
layout: learningpathall
---
## Prepare deployment manifests

You'll now configure the application manifests to use your Artifact Registry images and create Kustomize overlays for different CPU architectures. This lets you deploy the same application to both x86 and Arm node pools: you replace the sample image references with your Artifact Registry path and tag, then create overlays that select nodes by architecture.

## Update base manifests to use your images

Replace the sample image references in the base manifests with your Artifact Registry path and tag, then confirm that the base manifests now reference your registry:

```bash
grep -r "${GAR}" kustomize/base/ || true
```

You've updated your deployment manifests to reference your own Artifact Registry images. This ensures your application uses the multi-architecture containers you built for Arm and x86.

## Create node-selector overlays

Create node-selector overlays for targeting specific architectures.

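The commands in this section write a small patch file for each overlay, for example `kustomize/overlays/arm64/node-selector.yaml`. As a rough illustration of what such a patch contains (this sketch is illustrative, not the exact file used by the Learning Path), it pins pods to Arm nodes using the standard architecture label:

```yaml
# Illustrative only: the shape of an arm64 node-selector patch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend              # the overlay applies a patch like this to each Deployment
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64   # schedule pods only on Arm (Axion) nodes
```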

You now have Kustomize overlays that pin workloads to a specific CPU architecture. Together with the updated image references, this lets you deploy the same application to either node pool.

## Deploy to the x86 (amd64) pool
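
The exact commands are part of this step; in outline, you apply the amd64 overlay and then check which nodes the pods were scheduled on. A hedged sketch, with the overlay path assumed by symmetry with the arm64 overlay:

```bash
# Apply the amd64 overlay (path assumed from the overlay layout above).
kubectl apply -k kustomize/overlays/amd64

# Show each pod together with the node it was scheduled on.
kubectl get pods -o wide
```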


Pods should be scheduled on nodes labeled `kubernetes.io/arch=amd64`.

You’ve deployed your application to the x86 node pool and verified pod placement. This confirms your manifests and overlays work as expected before migrating to Arm.

## Migrate to the Arm (arm64) pool

Apply the arm64 overlay to move workloads:
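A hedged sketch of this step and its verification, assuming the overlay path created earlier:

```bash
# Re-apply the manifests with the arm64 overlay so pods move to the Arm node pool.
kubectl apply -k kustomize/overlays/arm64

# Confirm the pods are rescheduled onto arm64 nodes.
kubectl get pods -o wide
```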

You should see pods now running on nodes where `kubernetes.io/arch=arm64`.

You’ve migrated your workloads to the Arm node pool. Pods now run on Arm-based nodes, demonstrating a successful architecture transition.

## Verify external access

Get the LoadBalancer IP and open the storefront:
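
The storefront is exposed by the `frontend-external` LoadBalancer service; a minimal sketch of looking up its address:

```bash
# Show the external IP assigned to the storefront's LoadBalancer service.
kubectl get service frontend-external
```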
Copy the EXTERNAL-IP value and open it in a new browser tab:

```
http://<EXTERNAL-IP>
```

The microservices storefront loads, confirming that your application is accessible and functional on the arm64 node pool. You’re now running a production-ready microservices storefront on Arm-powered GKE infrastructure.
