= Installing a cluster on GCP into a shared VPC
:FeatureName: Installing a cluster on GCP into a shared VPC
toc::[]
In {product-title} version {product-version}, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP). In this installation method, the cluster is configured to use a VPC from a different GCP project. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IP addresses from that network. For more information about shared VPC, see link:https://cloud.google.com/vpc/docs/shared-vpc[Shared VPC overview in the GCP documentation].
The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the `install-config.yaml` file before you install the cluster.

.Prerequisites
* You reviewed details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes.
* You read the documentation on xref:../../installing/installing-preparing.adoc#installing-preparing[selecting a cluster installation method and preparing it for users].
* If you use a firewall, you xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured it to allow the sites] that your cluster requires access to.
* If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the `kube-system` namespace, you can xref:../../installing/installing_gcp/manually-creating-iam-gcp.adoc#manually-creating-iam-gcp[manually create and maintain IAM credentials].
* You have a GCP host project that contains a shared VPC network.
* You xref:../../installing/installing_gcp/installing-gcp-account.adoc#installing-gcp-account[configured a GCP project] to host the cluster. This project, known as the service project, must be attached to the host project. For more information, see link:https://cloud.google.com/vpc/docs/provisioning-shared-vpc#create-shared[Attaching service projects in the GCP documentation]. A sample `gcloud` command for attaching a service project is shown after this list.
* You have a GCP service account that has the xref:../../installing/installing_gcp/installing-gcp-account.adoc#installation-gcp-permissions_installing-gcp-account[required GCP permissions] in the host project.
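
The following `gcloud` command is a minimal example of attaching a service project to a host project. The project IDs are placeholders that you must replace with your own values, and the account that runs the command typically requires Shared VPC Admin permissions on the host project:

[source,terminal]
----
$ gcloud compute shared-vpc associated-projects add <service_project_id> \
    --host-project <host_project_id>
----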

.Additional resources
* See xref:../../web_console/web-console.adoc#web-console[Accessing the web console] for more details about accessing and understanding the {product-title} web console.
* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service.

.Next steps
* xref:../../post_installation_configuration/cluster-tasks.adoc#available_cluster_customizations[Customize your cluster].
* If necessary, you can xref:../../support/remote_health_monitoring/opting-out-of-remote-health-reporting.adoc#opting-out-remote-health-reporting_opting-out-remote-health-reporting[opt out of remote health reporting].

= Sample customized install-config.yaml file for shared VPC installation
Several configuration parameters are required to install {product-title} on GCP using a shared VPC. The following sample `install-config.yaml` file demonstrates these parameters.
[IMPORTANT]
====
This sample YAML file is provided for reference only. You must modify this file with the correct values for your environment and cluster.
====
[source,yaml]
----
apiVersion: v1
baseDomain: example.com
credentialsMode: Passthrough <1>
metadata:
  name: cluster_name
platform:
  gcp:
    computeSubnet: shared-vpc-subnet-1 <2>
    controlPlaneSubnet: shared-vpc-subnet-2 <3>
    createFirewallRules: Disabled <4>
    network: shared-vpc <5>
    networkProjectID: host-project-name <6>
    publicDNSZone:
      id: public-dns-zone <7>
      project: host-project-name <8>
    projectID: service-project-name <9>
    region: us-central1
    defaultMachinePlatform:
      tags: <10>
      - global-tag1
controlPlane:
  name: master
  platform:
    gcp:
      tags: <10>
      - control-plane-tag1
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
  replicas: 3
compute:
- name: worker
  platform:
    gcp:
      tags: <10>
      - compute-tag1
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
  replicas: 3
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA... <11>
----
<1> `credentialsMode` must be set to `Passthrough` to allow the cluster to use the provided GCP service account after cluster creation. See the "Prerequisites" section for the required GCP permissions that your service account must have.
<2> The name of the subnet in the shared VPC for compute machines to use.
<3> The name of the subnet in the shared VPC for control plane machines to use.
<4> Optional. If you set `createFirewallRules` to `Disabled`, you can create and manage firewall rules manually through the use of network tags. By default, the cluster automatically creates and manages the firewall rules that are required for cluster communication. Your service account must have `roles/compute.networkAdmin` and `roles/compute.securityAdmin` privileges in the host project to perform these tasks automatically. If your service account does not have the `roles/dns.admin` privilege in the host project, it must have the `dns.networks.bindPrivateDNSZone` permission.
<5> The name of the shared VPC.
<6> The name of the host project where the shared VPC exists.
<7> Optional. The name of a public DNS zone in the host project. If you set this value, your service account must have the `roles/dns.admin` privilege in the host project. The public DNS zone domain must match the `baseDomain` parameter. If you do not set this value, the installation program uses the public DNS zone in the service project.
<8> Optional. The name of the host project that contains the public DNS zone. This value is required if you specify a public DNS zone that exists in another project.
<9> The name of the GCP project where you want to install the cluster.
<10> Optional. If you want to manually create and manage your GCP firewall rules, you can set `platform.gcp.createFirewallRules` to `Disabled` and then specify one or more network tags. You can set tags on the compute machines, the control plane machines, or all machines.
<11> You can optionally provide the `sshKey` value that you use to access the machines in your cluster.

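If your service account lacks any of the roles described above, you can grant them in the host project by using the `gcloud` CLI. The following command is a minimal example; the host project name, service account email, and role are placeholders that you must replace with your own values:

[source,terminal]
----
$ gcloud projects add-iam-policy-binding <host_project_name> \
    --member "serviceAccount:<service_account_email>" \
    --role "roles/compute.networkAdmin"
----
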
= Optional: Adding Ingress DNS records for shared VPC installations
If the public DNS zone exists in a host project outside the project where you installed your cluster, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard `*.apps.{baseDomain}.` or specific records. You can use A, CNAME, and other records per your requirements.
.Prerequisites
* You completed the installation of {product-title} on GCP into a shared VPC.
* Your public DNS zone exists in a host project separate from the service project that contains your cluster.
.Procedure
. Verify that the Ingress router has created a load balancer and populated the `EXTERNAL-IP` field by running the following command:
+
[source,terminal]
----
$ oc -n openshift-ingress get service router-default
----
. Record the external IP address of the router by running the following command:
+
[source,terminal]
----
$ oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'
----
. Add a record to your GCP public zone with the router's external IP address and the name `*.apps.<cluster_name>.<cluster_domain>`. You can use the `gcloud` command line utility or the GCP web console.
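+
For example, the following `gcloud` command is a minimal sketch of creating a wildcard A record in the public zone of the host project. The zone name, project name, and IP address are placeholders that you must replace with your own values:
+
[source,terminal]
----
$ gcloud dns record-sets create '*.apps.<cluster_name>.<cluster_domain>.' \
    --project <host_project_name> --zone <public_dns_zone_name> \
    --type A --ttl 300 --rrdatas <router_external_ip>
----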
. To add manual records instead of a wildcard record, create entries for each of the cluster's current routes. You can gather these routes by running the following command:
+
[source,terminal]
----
$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
----
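+
If you prefer individual records, you can run a command similar to the wildcard example for each returned hostname. The hostname shown here is only an illustration; substitute the hosts reported by the previous command and your own zone, project, and IP address values:
+
[source,terminal]
----
$ gcloud dns record-sets create 'console-openshift-console.apps.<cluster_name>.<cluster_domain>.' \
    --project <host_project_name> --zone <public_dns_zone_name> \
    --type A --ttl 300 --rrdatas <router_external_ip>
----
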

= Manually creating the installation configuration file

Installing the cluster requires that you manually generate the installation configuration file.

ifdef::ash-default,ash-network[]
When installing {product-title} on Microsoft Azure Stack Hub, you must manually create your installation configuration file.
endif::ash-default,ash-network[]
ifdef::gcp-shared[]
You must manually create your installation configuration file when installing {product-title} on GCP into a shared VPC using installer-provisioned infrastructure.
endif::gcp-shared[]

For some platform types, you can alternatively run `./openshift-install create install-config --dir <installation_directory>` to generate an `install-config.yaml` file. You can provide details about your cluster configuration at the prompts.
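
For example, on a platform type that supports it:

[source,terminal]
----
$ ./openshift-install create install-config --dir <installation_directory>
----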