Description
Despite the OpenShift documentation containing instructions for creating s390x agent nodes and adding them to a node pool, there does not seem to be a way to create an s390x NodePool. s390x Agents cannot be added to any other NodePool because the architectures do not match. I also tried all of the steps below after building the latest version of the hypershift CLI tool.
At cluster creation time:
$ hcp create cluster agent --name=jbsandbox2 --pull-secret=jbsandbox2.dockerconfigjson --agent-namespace=clusters-jbsandbox2 --base-domain=my.test.domain.com --api-server-address=api.jbsandbox2.my.test.domain.com --etcd-storage-class=fpstoc2-scale --ssh-key ssh.pub --namespace clusters-jbsandbox --control-plane-availability-policy HighlyAvailable --release-image=quay.io/openshift-release-dev/ocp-release:4.17.17-s390x --node-pool-replicas 1 --render --arch s390x
{"level":"error","ts":"2025-07-29T09:26:14-04:00","msg":"Failed to create cluster","error":"specified arch \"s390x\" is not supported","stacktrace":"github.com/openshift/hypershift/cmd/cluster/agent.NewCreateCommand.func1\n\t/home/lozcoc/hcp/hypershift/cmd/cluster/agent/create.go:160\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/lozcoc/hcp/hypershift/vendor/github.com/spf13/cobra/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/lozcoc/hcp/hypershift/vendor/github.com/spf13/cobra/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/lozcoc/hcp/hypershift/vendor/github.com/spf13/cobra/command.go:1041\ngithub.com/spf13/cobra.(*Command).ExecuteContext\n\t/home/lozcoc/hcp/hypershift/vendor/github.com/spf13/cobra/command.go:1034\nmain.main\n\t/home/lozcoc/hcp/hypershift/main.go:78\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:283"}
Error: specified arch "s390x" is not supported
specified arch "s390x" is not supported
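The stack trace points at cmd/cluster/agent/create.go, so this first failure looks like a purely client-side check that runs before anything is rendered. I have not traced the exact code; the sketch below is a hypothetical illustration of that behavior, and the list contents and names are my assumptions, not an excerpt from the hypershift source:

```go
// Hypothetical sketch of the client-side check implied by the error above; the
// function and variable names are illustrative, not taken from the hypershift source.
package main

import "fmt"

// Assumed allow list for `hcp create cluster agent --arch`; whatever the real list
// contains, the error output shows it does not include s390x.
var supportedArches = []string{"amd64", "arm64"}

func validateArch(arch string) error {
	for _, a := range supportedArches {
		if a == arch {
			return nil
		}
	}
	return fmt.Errorf("specified arch %q is not supported", arch)
}

func main() {
	if err := validateArch("s390x"); err != nil {
		fmt.Println(err) // specified arch "s390x" is not supported
	}
}
```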
Manually creating a NodePool:
$ hcp create nodepool agent --name jbsandbox2-workers --cluster-name=jbsandbox2 --node-count=2 --arch=s390x
{"level":"error","ts":"2025-07-29T09:43:00-04:00","msg":"Failed to create nodepool","error":"NodePool.hypershift.openshift.io \"jbsandbox2-workers\" is invalid: [spec.arch: Unsupported value: \"s390x\": supported values: \"arm64\", \"amd64\", \"ppc64le\", <nil>: Invalid value: \"null\": some validation rules were not checked because the object was invalid; correct the existing errors to complete validation]","stacktrace":"github.com/openshift/hypershift/product-cli/cmd/nodepool/agent.NewCreateCommand.(*CreateNodePoolOptions).CreateRunFunc.func1\n\t/remote-source/app/cmd/nodepool/core/create.go:42\ngithub.com/spf13/cobra.(*Command).execute\n\t/remote-source/app/vendor/github.com/spf13/cobra/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/remote-source/app/vendor/github.com/spf13/cobra/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/remote-source/app/vendor/github.com/spf13/cobra/command.go:1041\ngithub.com/spf13/cobra.(*Command).ExecuteContext\n\t/remote-source/app/vendor/github.com/spf13/cobra/command.go:1034\nmain.main\n\t/remote-source/app/product-cli/main.go:59\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:272"}
Error: NodePool.hypershift.openshift.io "jbsandbox2-workers" is invalid: [spec.arch: Unsupported value: "s390x": supported values: "arm64", "amd64", "ppc64le", <nil>: Invalid value: "null": some validation rules were not checked because the object was invalid; correct the existing errors to complete validation]
NodePool.hypershift.openshift.io "jbsandbox2-workers" is invalid: [spec.arch: Unsupported value: "s390x": supported values: "arm64", "amd64", "ppc64le", <nil>: Invalid value: "null": some validation rules were not checked because the object was invalid; correct the existing errors to complete validation]
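This second error comes from the API server and lists the enum baked into the NodePool CRD. In a kubebuilder-based project such as hypershift, that enum would normally come from a validation marker on the API type, roughly along these lines (an illustrative sketch, not a verified excerpt of the hypershift API package); presumably adding s390x there and regenerating the CRD is what this request boils down to:

```go
// Illustrative sketch of a kubebuilder-style enum marker that would produce the
// "supported values: arm64, amd64, ppc64le" validation seen above. Field layout
// and comments are assumptions, not copied from the hypershift API package.
package v1beta1

type NodePoolSpec struct {
	// Arch is the processor architecture of the NodePool's nodes.
	// Adding s390x to this enum (and regenerating the CRD) would be needed
	// for the API server to accept spec.arch: s390x.
	// +kubebuilder:validation:Enum=arm64;amd64;ppc64le
	// +optional
	Arch string `json:"arch,omitempty"`

	// ... other fields omitted ...
}
```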
Editing an existing NodePool:
$ hcp create nodepool agent --cluster-name=jbsandbox2 --name=jbsandbox2-workers --node-count=2
NodePool jbsandbox2-workers created
$ oc patch nodepool jbsandbox2-workers -n clusters --type=json -p='[{"op": "replace", "path": "/spec/arch", "value":"s390x"}]'
The NodePool "jbsandbox2-workers" is invalid:
* spec.arch: Unsupported value: "s390x": supported values: "arm64", "amd64", "ppc64le"
* <nil>: Invalid value: "null": some validation rules were not checked because the object was invalid; correct the existing errors to complete validation
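To rule out the CLI entirely, the same patch can be driven straight through the API with client-go; it is rejected with the identical enum error, so the restriction is enforced by the CRD schema on the management cluster rather than by hcp. A minimal sketch under that assumption (names and namespace reused from above, kubeconfig handling is illustrative):

```go
// Minimal sketch: apply the same JSON patch via the dynamic client to show the
// rejection comes from the NodePool CRD schema on the API server, not from hcp.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at the management cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	dc, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodepools := schema.GroupVersionResource{
		Group:    "hypershift.openshift.io",
		Version:  "v1beta1",
		Resource: "nodepools",
	}

	// Same patch as the oc command above.
	patch := []byte(`[{"op": "replace", "path": "/spec/arch", "value": "s390x"}]`)
	_, err = dc.Resource(nodepools).Namespace("clusters").Patch(
		context.TODO(), "jbsandbox2-workers", types.JSONPatchType, patch, metav1.PatchOptions{})
	if err != nil {
		// Expected: spec.arch: Unsupported value: "s390x": supported values: "arm64", "amd64", "ppc64le"
		fmt.Println("patch rejected:", err)
	}
}
```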
Version Information
$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.17.36 True False 5d22h Cluster version is 4.17.36
$ hcp version
Client Version: openshift/hypershift: ab02c94ed951d38fbc454e580e838705b52fc393. Latest supported OCP: 4.18.0
Server Version: ab02c94ed951d38fbc454e580e838705b52fc393
Server Supports OCP Versions: 4.18, 4.17, 4.16, 4.15, 4.14
The hosted cluster is version 4.17.17 running on s390x architecture.
Additional Information
I have created and installed two agent nodes on z/VM 7.4 using the PXE images from my InfraEnv. Both Agents show:
status:
  conditions:
  - lastTransitionTime: "2025-07-29T12:37:59Z"
    message: The Spec has been successfully applied
    reason: SyncOK
    status: "True"
    type: SpecSynced
  - lastTransitionTime: "2025-07-25T15:49:13Z"
    message: The agent's connection to the installation service is unimpaired
    reason: AgentIsConnected
    status: "True"
    type: Connected
  - lastTransitionTime: "2025-07-29T12:16:15Z"
    message: The agent is ready to begin the installation
    reason: AgentIsReady
    status: "True"
    type: RequirementsMet
  - lastTransitionTime: "2025-07-29T12:16:12Z"
    message: The agent's validations are passing
    reason: ValidationsPassing
    status: "True"
    type: Validated
  - lastTransitionTime: "2025-07-24T20:59:18Z"
    message: The installation has not yet started
    reason: InstallationNotStarted
    status: "False"
    type: Installed
  - lastTransitionTime: "2025-07-29T12:15:34Z"
    message: The agent is not bound to any cluster deployment
    reason: Unbound
    status: "False"
    type: Bound
  debugInfo:
    eventsURL: https://assisted-service-multicluster-engine.apps.jbsandbox1.my.test.domain.com/api/assisted-install/v2/events?api_key=eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbmZyYV9lbnZfaWQiOiIwNjZmZDM5My1iYmIwLTQ5NDQtODg2Ni04ZGVkM2UyMjI0MTAifQ.MSAiPdS44PlkoM3uRCrBVAAm2IILLcI1dDYBHhfptZ5J8UntGbIEl_fWOKnf9cwe5X7b9A_AYP9dxlyMvYYu9Q&host_id=210757dd-9237-7953-45db-c20cdb7b09c1
    state: known-unbound
    stateInfo: Host is ready to be bound to a cluster