diff --git a/_topic_maps/_topic_map_rosa.yml b/_topic_maps/_topic_map_rosa.yml index b7de8b68e331..679762b627c3 100644 --- a/_topic_maps/_topic_map_rosa.yml +++ b/_topic_maps/_topic_map_rosa.yml @@ -270,6 +270,8 @@ Topics: File: rosa-hcp-creating-a-cluster-quickly-terraform - Name: Creating ROSA with HCP clusters using a custom AWS KMS encryption key File: rosa-hcp-creating-cluster-with-aws-kms-key +- Name: Configuring a shared virtual private cloud for ROSA with HCP clusters + File: rosa-hcp-shared-vpc-config - Name: Creating a private cluster on ROSA with HCP File: rosa-hcp-aws-private-creating-cluster - Name: Creating ROSA with HCP clusters with egress zero @@ -299,7 +301,7 @@ Topics: File: rosa-sts-interactive-mode-reference - Name: Creating an AWS PrivateLink cluster on ROSA File: rosa-aws-privatelink-creating-cluster -- Name: Configuring a shared virtual private cloud for ROSA clusters +- Name: Configuring a shared virtual private cloud for ROSA (classic architecture) clusters File: rosa-shared-vpc-config - Name: Accessing a ROSA cluster File: rosa-sts-accessing-cluster diff --git a/_topic_maps/_topic_map_rosa_hcp.yml b/_topic_maps/_topic_map_rosa_hcp.yml index 7d4edf504642..423d447c7b31 100644 --- a/_topic_maps/_topic_map_rosa_hcp.yml +++ b/_topic_maps/_topic_map_rosa_hcp.yml @@ -201,6 +201,8 @@ Topics: File: rosa-hcp-creating-a-cluster-quickly-terraform - Name: Creating ROSA with HCP clusters using a custom AWS KMS encryption key File: rosa-hcp-creating-cluster-with-aws-kms-key +- Name: Configuring a shared virtual private cloud for ROSA with HCP clusters + File: rosa-hcp-shared-vpc-config - Name: Creating a private cluster on ROSA with HCP File: rosa-hcp-aws-private-creating-cluster - Name: Creating ROSA with HCP clusters with egress zero diff --git a/modules/rosa-deleting-account-wide-iam-roles-and-policies.adoc b/modules/rosa-deleting-account-wide-iam-roles-and-policies.adoc index 2e1c5f36d139..013a941207f8 100644 --- 
a/modules/rosa-deleting-account-wide-iam-roles-and-policies.adoc +++ b/modules/rosa-deleting-account-wide-iam-roles-and-policies.adoc @@ -71,6 +71,7 @@ ManagedOpenShift-Worker-Role Worker arn:aws:iam:: ---- endif::sts[] ifdef::hcp[] ++ [source,terminal] ---- I: Fetching account roles @@ -80,7 +81,9 @@ ManagedOpenShift-HCP-ROSA-Support-Role Support arn:aws:iam:::role/ManagedOpenShift-HCP-ROSA-Worker-Role 4.19 Yes ---- endif::hcp[] -.. Delete the account-wide roles: ++ +.. Delete the account-wide roles by running one of the following commands: +*** For clusters without a shared Virtual Private Cloud (VPC): + [source,terminal] ---- @@ -88,6 +91,14 @@ $ rosa delete account-roles --prefix <prefix> --mode auto <1> ---- <1> You must include the `--prefix` argument. Replace `<prefix>` with the prefix of the account-wide roles to delete. If you did not specify a custom prefix when you created the account-wide roles, specify the default prefix, `ManagedOpenShift`. + +*** For clusters with a shared VPC: ++ +[source,terminal] +---- +$ rosa delete account-roles --prefix <prefix> --delete-hosted-shared-vpc-policies --mode auto <1> +---- +<1> You must include the `--prefix` argument. Replace `<prefix>` with the prefix of the account-wide roles to delete. If you did not specify a custom prefix when you created the account-wide roles, specify the default prefix, `ManagedOpenShift`. ++ [IMPORTANT] ==== The account-wide IAM roles might be used by other ROSA clusters in the same AWS account. Only remove the roles if they are not required by other clusters.
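The hunk above adds a shared-VPC variant of `rosa delete account-roles`, and the IMPORTANT note warns that other clusters in the same AWS account may still use roles with the same prefix. A minimal local sketch of that pre-delete check, using illustrative sample role names rather than real `rosa list account-roles` output:

```shell
#!/usr/bin/env bash
# Sample role names, shaped like the `rosa list account-roles` output above.
# These values are illustrative only, not from a real account.
PREFIX="ManagedOpenShift"
ROLES="ManagedOpenShift-HCP-ROSA-Installer-Role
ManagedOpenShift-HCP-ROSA-Support-Role
ManagedOpenShift-HCP-ROSA-Worker-Role
OtherTeam-HCP-ROSA-Installer-Role"

# Keep only the roles whose names start with the prefix to be deleted.
printf '%s\n' "$ROLES" | grep "^${PREFIX}-"
```

Only the first three names match the `ManagedOpenShift-` prefix, so a delete with that prefix leaves the `OtherTeam-*` role untouched.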
diff --git a/modules/rosa-hcp-aws-private-security-groups.adoc b/modules/rosa-hcp-aws-private-security-groups.adoc index 7e8ca36c9497..902a44362ccb 100644 --- a/modules/rosa-hcp-aws-private-security-groups.adoc +++ b/modules/rosa-hcp-aws-private-security-groups.adoc @@ -6,7 +6,13 @@ :_mod-docs-content-type: PROCEDURE = Adding additional AWS security groups to the AWS PrivateLink endpoint -With {hcp-title} clusters, the AWS PrivateLink endpoint exposed in the customer's VPC has a security group that limits access to requests that originate from within the cluster's Machine CIDR range. In order to grant access to the cluster's API to any entities outside of the VPC, through VPC peering, transit gateways, or other network connectivity, you must create and attach another security group to the PrivateLink endpoint to grant the necessary access. +ifdef::openshift-rosa[] +With {hcp-title} clusters, the AWS PrivateLink endpoint exposed in the customer's Virtual Private Cloud (VPC) has a security group that limits access to requests that originate from within the cluster's Machine CIDR range. You must create and attach another security group to the PrivateLink endpoint to grant API access to entities outside of the VPC through VPC peering, transit gateways, or other network connectivity. +endif::openshift-rosa[] + +ifdef::openshift-rosa-hcp[] +With {hcp-title} clusters, the AWS PrivateLink endpoint exposed in the host's Virtual Private Cloud (VPC) has a security group that limits access to requests that originate from within the cluster's Machine CIDR range. You must create and attach another security group to the PrivateLink endpoint to grant API access to entities outside of the VPC through VPC peering, transit gateways, or other network connectivity. 
+endif::openshift-rosa-hcp[] [IMPORTANT] ==== @@ -27,7 +33,7 @@ Adding additional AWS security groups to the AWS PrivateLink endpoint is only su $ export CLUSTER_NAME=<cluster_name> ---- + -You can verify that the variable has been set by running the following command: +Verify that the variable exists by running the following command: + [source,terminal] ---- @@ -75,4 +81,4 @@ $ aws ec2 authorize-security-group-ingress --group-id $SG_ID --ip-permissions Fr $ aws ec2 modify-vpc-endpoint --vpc-endpoint-id $VPCE_ID --add-security-group-ids $SG_ID ---- -You now can access the API of your {hcp-title} private cluster from the specified CIDR block. +You can now access the API of your {hcp-title} private cluster from the specified CIDR block. diff --git a/modules/rosa-hcp-deleting-cluster.adoc b/modules/rosa-hcp-deleting-cluster.adoc index 45bcd742b36f..27b6cb9c815e 100644 --- a/modules/rosa-hcp-deleting-cluster.adoc +++ b/modules/rosa-hcp-deleting-cluster.adoc @@ -113,12 +113,20 @@ $ rosa delete cluster --cluster=<cluster_name> --watch You must wait for cluster deletion to complete before you remove the Operator roles and the OIDC provider. ==== -. Delete the cluster-specific Operator IAM roles by running the following command: +. Delete the cluster-specific Operator IAM roles by running one of the following commands: +** For clusters without a shared Virtual Private Cloud (VPC): + [source,terminal] ---- $ rosa delete operator-roles --prefix <operator_role_prefix> ---- ++ +** For clusters with a shared VPC: ++ +[source,terminal] +---- +$ rosa delete operator-roles --prefix <operator_role_prefix> --delete-hosted-shared-vpc-policies +---- .
Delete the OIDC provider by running the following command: + diff --git a/modules/rosa-hcp-sharing-vpc-cluster-creation.adoc b/modules/rosa-hcp-sharing-vpc-cluster-creation.adoc new file mode 100644 index 000000000000..580933e749a7 --- /dev/null +++ b/modules/rosa-hcp-sharing-vpc-cluster-creation.adoc @@ -0,0 +1,29 @@ +// Module included in the following assemblies: +// +// * networking/rosa-hcp-shared-vpc-config.adoc +:_mod-docs-content-type: PROCEDURE +[id="rosa-hcp-sharing-vpc-cluster-creation_{context}"] += Step Four - Cluster Creator: Creating your cluster in a shared VPC +To create a cluster in a shared VPC, complete the following steps. + +[NOTE] +==== +Installing a cluster in a shared VPC is supported only for OpenShift 4.17.9 and later. +==== + +image::372_OpenShift_on_AWS_persona_worflows_0923_4.png[] +.Prerequisites + +* You have the hosted zone IDs from the *VPC Owner*. +* You have the AWS region from the *VPC Owner*. +* You have the subnet IDs from the *VPC Owner*. +* You have the `Route 53 role` ARN from the *VPC Owner*. +* You have the `VPC endpoint role` ARN from the *VPC Owner*. 
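The prerequisites listed above are all hand-off values from the *VPC Owner*; a missing one otherwise surfaces only when `rosa create cluster` fails. A small pre-flight sketch that checks each value is non-empty before running the procedure (every variable name and sample value here is hypothetical):

```shell
#!/usr/bin/env bash
# Hypothetical hand-off values from the VPC Owner; substitute your own.
AWS_REGION="us-east-1"
SUBNET_IDS="subnet-0123456789abcdef0,subnet-0fedcba9876543210"
INGRESS_HOSTED_ZONE_ID="Z0EXAMPLEINGRESS"
INTERNAL_HOSTED_ZONE_ID="Z0EXAMPLEINTERNAL"
ROUTE53_ROLE_ARN="arn:aws:iam::111122223333:role/my-route53-role"
VPCE_ROLE_ARN="arn:aws:iam::111122223333:role/my-vpce-role"

# Report each required value; ${!v} is bash indirect expansion.
for v in AWS_REGION SUBNET_IDS INGRESS_HOSTED_ZONE_ID \
         INTERNAL_HOSTED_ZONE_ID ROUTE53_ROLE_ARN VPCE_ROLE_ARN; do
  if [ -n "${!v}" ]; then
    echo "$v: set"
  else
    echo "missing: $v" >&2
  fi
done
```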
+ +.Procedure +* In a terminal, enter the following command to create a cluster in the shared VPC: ++ +[source,terminal] +---- +$ rosa create cluster --cluster-name <cluster_name> --sts --operator-roles-prefix <operator_role_prefix> --oidc-config-id <oidc_config_id> --region us-east-1 --subnet-ids <subnet_ids> --hcp-internal-communication-hosted-zone-id <internal_hosted_zone_id> --ingress-private-hosted-zone-id <ingress_hosted_zone_id> --route53-role-arn <route53_role_arn> --vpc-endpoint-role-arn <vpc_endpoint_role_arn> --base-domain <dns_domain> --additional-allowed-principals <vpc_endpoint_role_arn> --hosted-cp +---- \ No newline at end of file diff --git a/modules/rosa-hcp-sharing-vpc-creation-and-sharing.adoc b/modules/rosa-hcp-sharing-vpc-creation-and-sharing.adoc new file mode 100644 index 000000000000..a06f056badf3 --- /dev/null +++ b/modules/rosa-hcp-sharing-vpc-creation-and-sharing.adoc @@ -0,0 +1,362 @@ +// Module included in the following assemblies: +// +// * networking/rosa-hcp-shared-vpc-config.adoc + +:_mod-docs-content-type: PROCEDURE +[id="rosa-hcp-sharing-vpc-creation-and-sharing_{context}"] += Step One - VPC Owner: Configuring a VPC to share within your AWS organization + +You can share subnets within a VPC with another AWS account in your AWS organization. + +image::522-shared-vpc-step-1.png[] +.Procedure + +. Create or modify a VPC to your specifications in the link:https://us-east-1.console.aws.amazon.com/vpc/[VPC section of the AWS console]. Make sure you have selected the correct region. ++ +. Create the `Route 53 role`. ++ +[NOTE] +==== +You must create the `Route 53 role` in the same account where you plan to create the Amazon Route 53 hosted zones (which are created in Step 3). For example, if you want to create the hosted zones in the centrally-managed VPC account, you must create the `Route 53 role` in the *VPC Owner* account. If you want to create the hosted zones in the workload account, you must create the `Route 53 role` in the *Cluster Creator* account. +==== ++ +..
Create a custom policy file that allows the necessary shared VPC permissions, using the name `Route53Policy`: ++ +[source,terminal] +---- +$ cat <<EOF > /tmp/route53-policy.json +{ + "Version" : "2012-10-17", + "Statement" : [ + { + "Sid" : "ReadPermissions", + "Effect" : "Allow", + "Action" : [ + "elasticloadbalancing:DescribeLoadBalancers", + "route53:GetHostedZone", + "route53:ListResourceRecordSets", + "route53:ListHostedZones", + "tag:GetResources" + ], + "Resource" : "*" + }, + { + "Sid" : "ChangeResourceRecordSetsRestrictedRecordNames", + "Effect" : "Allow", + "Action" : [ + "route53:ChangeResourceRecordSets" + ], + "Resource" : [ + "*" + ], + "Condition" : { + "ForAllValues:StringLike" : { + "route53:ChangeResourceRecordSetsNormalizedRecordNames" : [ + "*.hypershift.local", + "*.openshiftapps.com", + "*.devshift.org", + "*.openshiftusgov.com", + "*.devshiftusgov.com" + ] + } + } + }, + { + "Sid" : "ChangeTagsForResourceNoCondition", + "Effect" : "Allow", + "Action" : [ + "route53:ChangeTagsForResource" + ], + "Resource" : "*" + } + ] +} +EOF +---- ++ +.. Create the policy in AWS: ++ +[source,terminal] +---- +$ aws iam create-policy \ + --policy-name Route53Policy \ + --policy-document file:///tmp/route53-policy.json +---- ++ +.. Create a custom trust policy file that grants permission to assume roles: ++ +[source,terminal] +---- +$ cat <<EOF > /tmp/route53-role.json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "arn:aws:iam::<account_id>:root" <1> + }, + "Action": "sts:AssumeRole" + } + ] +} +EOF +---- ++ +-- +<1> During the initial creation of the principal, you must create a root user placeholder by using the *VPC Owner's* AWS account ID as `arn:aws:iam::{Account}:root`. This is only a temporary placeholder, and the principal is reduced in scope after the *Cluster Creator* creates the necessary cluster roles. +-- ++ +..
Create the IAM role: ++ +[source,terminal] +---- +$ aws iam create-role --role-name <role_name> \ <1> + --assume-role-policy-document file:///tmp/route53-role.json +---- ++ +-- +<1> Replace __<role_name>__ with the name of the role you want to create. +-- ++ +.. Attach the custom `Route53Policy` permissions policy: ++ +[source, terminal] +---- +$ aws iam attach-role-policy --role-name <role_name> --policy-arn \ <1> + arn:aws:iam::<account_id>:policy/Route53Policy <2> +---- ++ +-- +<1> Replace __<role_name>__ with the name of the role you created. +<2> Replace __<account_id>__ with the *VPC Owner's* AWS account ID. +-- ++ +. Create the `VPC endpoint role`. +.. Create a custom policy file that allows the necessary shared VPC permissions, using the name `VPCEPolicy`: ++ +[source,terminal] +---- +$ cat <<EOF > /tmp/vpce.json +{ + "Version" : "2012-10-17", + "Statement" : [ + { + "Sid" : "ReadPermissions", + "Effect" : "Allow", + "Action" : [ + "ec2:DescribeVpcEndpoints", + "ec2:DescribeVpcs", + "ec2:DescribeSecurityGroups" + ], + "Resource" : "*" + }, + { + "Sid" : "CreateSecurityGroups", + "Effect" : "Allow", + "Action" : [ + "ec2:CreateSecurityGroup" + ], + "Resource" : [ + "arn:aws:ec2:*:*:security-group*/*" + ], + "Condition" : { + "StringEquals" : { + "aws:RequestTag/red-hat-managed" : "true" + } + } + }, + { + "Sid" : "DeleteSecurityGroup", + "Effect" : "Allow", + "Action" : [ + "ec2:DeleteSecurityGroup" + ], + "Resource" : [ + "arn:aws:ec2:*:*:security-group*/*" + ], + "Condition" : { + "StringEquals" : { + "aws:ResourceTag/red-hat-managed" : "true" + } + } + }, + { + "Sid" : "SecurityGroupIngressEgress", + "Effect" : "Allow", + "Action" : [ + "ec2:AuthorizeSecurityGroupIngress", + "ec2:AuthorizeSecurityGroupEgress", + "ec2:RevokeSecurityGroupIngress", + "ec2:RevokeSecurityGroupEgress" + ], + "Resource" : [ + "arn:aws:ec2:*:*:security-group*/*" + ], + "Condition" : { + "StringEquals" : { + "aws:ResourceTag/red-hat-managed" : "true" + } + } + }, + { + "Sid" : "CreateSecurityGroupsVPCNoCondition", + "Effect" : "Allow", + "Action" : [
"ec2:CreateSecurityGroup" + ], + "Resource" : [ + "arn:aws:ec2:*:*:vpc/*" + ] + }, + { + "Sid" : "VPCEndpointWithCondition", + "Effect" : "Allow", + "Action" : [ + "ec2:CreateVpcEndpoint" + ], + "Resource" : [ + "arn:aws:ec2:*:*:vpc-endpoint/*" + ], + "Condition" : { + "StringEquals" : { + "aws:RequestTag/red-hat-managed" : "true" + } + } + }, + { + "Sid" : "VPCEndpointResourceTagCondition", + "Effect" : "Allow", + "Action" : [ + "ec2:CreateVpcEndpoint" + ], + "Resource" : [ + "arn:aws:ec2:*:*:security-group*/*" + ], + "Condition" : { + "StringEquals" : { + "aws:ResourceTag/red-hat-managed" : "true" + } + } + }, + { + "Sid" : "VPCEndpointNoCondition", + "Effect" : "Allow", + "Action" : [ + "ec2:CreateVpcEndpoint" + ], + "Resource" : [ + "arn:aws:ec2:*:*:vpc/*", + "arn:aws:ec2:*:*:subnet/*", + "arn:aws:ec2:*:*:route-table/*" + ] + }, + { + "Sid" : "ManageVPCEndpointWithCondition", + "Effect" : "Allow", + "Action" : [ + "ec2:ModifyVpcEndpoint", + "ec2:DeleteVpcEndpoints" + ], + "Resource" : [ + "arn:aws:ec2:*:*:vpc-endpoint/*" + ], + "Condition" : { + "StringEquals" : { + "aws:ResourceTag/red-hat-managed" : "true" + } + } + }, + { + "Sid" : "ModifyVPCEndpointNoCondition", + "Effect" : "Allow", + "Action" : [ + "ec2:ModifyVpcEndpoint" + ], + "Resource" : [ + "arn:aws:ec2:*:*:subnet/*" + ] + }, + { + "Sid" : "CreateTagsRestrictedActions", + "Effect" : "Allow", + "Action" : [ + "ec2:CreateTags" + ], + "Resource" : [ + "arn:aws:ec2:*:*:vpc-endpoint/*", + "arn:aws:ec2:*:*:security-group/*" + ], + "Condition" : { + "StringEquals" : { + "ec2:CreateAction" : [ + "CreateVpcEndpoint", + "CreateSecurityGroup" + ] + } + } + } + ] +} +EOF +---- ++ +.. Create the policy in AWS: ++ +[source,terminal] +---- +$ aws iam create-policy \ + --policy-name VPCEPolicy \ + --policy-document file:///tmp/vpce.json +---- ++ +..
Create a custom trust policy file that grants permission to assume roles: ++ +[source,terminal] +---- +$ cat <<EOF > /tmp/vpce-role.json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "arn:aws:iam::<account_id>:root" <1> + }, + "Action": "sts:AssumeRole" + } + ] +} +EOF +---- ++ +-- +<1> During the initial creation of the principal, you must create a root user placeholder by using the *VPC Owner's* AWS account ID as `arn:aws:iam::{Account}:root`. This is only a temporary placeholder, and the principal is reduced in scope after the *Cluster Creator* creates the necessary cluster roles. +-- ++ +.. Create the IAM role: ++ +[source,terminal] +---- +$ aws iam create-role --role-name <role_name> \ <1> + --assume-role-policy-document file:///tmp/vpce-role.json +---- ++ +-- +<1> Replace __<role_name>__ with the name of the role you want to create. +-- ++ +.. Attach the custom `VPCEPolicy` permissions policy: ++ +[source, terminal] +---- +$ aws iam attach-role-policy --role-name <role_name> --policy-arn \ <1> + arn:aws:iam::<account_id>:policy/VPCEPolicy <2> +---- ++ +-- +<1> Replace __<role_name>__ with the name of the role you created. +<2> Replace __<account_id>__ with the *VPC Owner's* AWS account ID. +-- ++ +. Provide the `Route 53 role` ARN and the `VPC endpoint role` ARN to the *Cluster Creator* to continue configuration.
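At the hand-off point above, the two values the *Cluster Creator* needs are the role ARNs. IAM role ARNs always follow the fixed pattern `arn:aws:iam::<account_id>:role/<role_name>`, so the *VPC Owner* can compose them without extra API calls. A sketch, where the account ID and role names are placeholders for whatever you used in the steps above:

```shell
#!/usr/bin/env bash
# Hypothetical account ID and role names from the steps above.
VPC_OWNER_ACCOUNT_ID="111122223333"
ROUTE53_ROLE_NAME="my-route53-role"
VPCE_ROLE_NAME="my-vpce-role"

# IAM role ARNs follow a fixed format: arn:aws:iam::<account>:role/<name>.
ROUTE53_ROLE_ARN="arn:aws:iam::${VPC_OWNER_ACCOUNT_ID}:role/${ROUTE53_ROLE_NAME}"
VPCE_ROLE_ARN="arn:aws:iam::${VPC_OWNER_ACCOUNT_ID}:role/${VPCE_ROLE_NAME}"

echo "Route 53 role ARN:     ${ROUTE53_ROLE_ARN}"
echo "VPC endpoint role ARN: ${VPCE_ROLE_ARN}"
```

Alternatively, `aws iam get-role --role-name <role_name> --query Role.Arn --output text` returns the same value directly from AWS.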
diff --git a/modules/rosa-hcp-sharing-vpc-dns-and-roles.adoc b/modules/rosa-hcp-sharing-vpc-dns-and-roles.adoc new file mode 100644 index 000000000000..be7d190a1553 --- /dev/null +++ b/modules/rosa-hcp-sharing-vpc-dns-and-roles.adoc @@ -0,0 +1,104 @@ +// Module included in the following assemblies: +// +// * networking/rosa-hcp-shared-vpc-config.adoc +:_mod-docs-content-type: PROCEDURE +[id="rosa-hcp-sharing-vpc-dns-and-roles_{context}"] += Step Two - Cluster Creator: Reserving your DNS entries and creating cluster Operator roles + +After the *VPC Owner* creates a virtual private cloud (VPC), subnets, and an IAM role for sharing the VPC resources, reserve an `openshiftapps.com` DNS domain and create Operator roles to communicate back to the *VPC Owner*. + +[NOTE] +==== +For shared VPC clusters, you can choose to create the Operator roles after the cluster creation steps. The cluster will be in a `waiting` state until the Ingress Operator role ARN is added to the shared VPC role trusted relationships. +==== + +image::522-shared-vpc-step-2.png[] +.Prerequisites + +* You have the `Route 53 role` ARN for the IAM role from the *VPC Owner*. +* You have the `VPC endpoint role` ARN for the IAM role from the *VPC Owner*. + +.Procedure + +. Reserve an `openshiftapps.com` DNS domain with the following command: ++ +[source,terminal] +---- +$ rosa create dns-domain --hosted-cp +---- ++ +The command creates a reserved `openshiftapps.com` DNS domain. ++ +[source,terminal] +---- +I: DNS domain '14eo.p3.openshiftapps.com' has been created. +I: To view all DNS domains, run 'rosa list dns-domains' +---- +. Create an OIDC configuration. ++ +Review this article for more information on the link:https://access.redhat.com/articles/7031018[OIDC configuration process]. 
The following command produces the OIDC configuration ID that you need: ++ +[source,terminal] +---- +$ rosa create oidc-config +---- ++ +You receive confirmation that the command created an OIDC configuration: ++ +[source,terminal] +---- +I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <prefix> with a prefix of your choice: + rosa create operator-roles --prefix <prefix> --oidc-config-id 25tu67hq45rto1am3slpf5lq6jargg +---- + +. Create the account roles by entering the following command: ++ +[source,terminal] +---- +$ rosa create account-roles --oidc-config-id <oidc_config_id> \ <1> + --installer-role-arn <installer_role_arn> \ <2> + --route53-role-arn <route53_role_arn> \ <3> + --vpc-endpoint-role-arn <vpc_endpoint_role_arn> \ <4> + --prefix <prefix> <5> +---- ++ +-- +<1> Provide the OIDC configuration ID that you created in the previous step. +<2> Provide your installer ARN that was created as part of the `rosa create account-roles` process. +<3> Provide the ARN for the Route 53 role that the *VPC Owner* created. +<4> Provide the ARN for the VPC endpoint role that the *VPC Owner* created. +<5> Provide a prefix for the Operator roles. +-- + +. Create the Operator roles by entering the following command: ++ +[source,terminal] +---- +$ rosa create operator-roles --oidc-config-id <oidc_config_id> \ <1> + --installer-role-arn <installer_role_arn> \ <2> + --route53-role-arn <route53_role_arn> \ <3> + --vpc-endpoint-role-arn <vpc_endpoint_role_arn> \ <4> + --prefix <prefix> <5> +---- ++ +-- +<1> Provide the OIDC configuration ID that you created in the previous step. +<2> Provide your installer ARN that was created as part of the `rosa create account-roles` process. +<3> Provide the ARN for the Route 53 role that the *VPC Owner* created. +<4> Provide the ARN for the VPC endpoint role that the *VPC Owner* created. +<5> Provide a prefix for the Operator roles. +-- ++ +[NOTE] +==== +The Installer account role and the shared VPC roles must have a one-to-one relationship. If you want to create multiple shared VPC roles, you should create one set of account roles per shared VPC role. +==== + +.
After you create the Operator roles, share your cluster's full domain name, your _Ingress Operator Cloud Credentials_ role's ARN, your _Installer_ role's ARN, and your _Control plane Operator Cloud Credentials_ role's ARN with the *VPC Owner* to continue configuration. ++ +The shared information resembles these examples: ++ +* ``my-rosa-cluster.14eo.p3.openshiftapps.com`` +* ``arn:aws:iam::111122223333:role/ManagedOpenShift-Installer-Role`` +* ``arn:aws:iam::111122223333:role/my-rosa-cluster-openshift-ingress-operator-cloud-credentials`` +* ``arn:aws:iam::111122223333:role/my-rosa-cluster-control-plane-operator`` \ No newline at end of file diff --git a/modules/rosa-hcp-sharing-vpc-hosted-zones.adoc b/modules/rosa-hcp-sharing-vpc-hosted-zones.adoc new file mode 100644 index 000000000000..f6e48bf70ad0 --- /dev/null +++ b/modules/rosa-hcp-sharing-vpc-hosted-zones.adoc @@ -0,0 +1,59 @@ +// Module included in the following assemblies: +// +// * networking/rosa-hcp-shared-vpc-config.adoc +:_mod-docs-content-type: PROCEDURE +[id="rosa-hcp-sharing-vpc-hosted-zones_{context}"] += Step Three - VPC Owner: Updating the shared VPC role and creating hosted zones + +After the *Cluster Creator* provides the DNS domain and the IAM roles, create two hosted zones and update the trust policy on the IAM roles that were created for sharing the VPC. + +[NOTE] +==== +The hosted zones can be created in either the centrally-managed VPC account or in the workload account. +==== + +image::522-shared-vpc-step-3.png[] + +*{sp}The hosted zones can be created in either the centrally-managed VPC account or in the workload account in which the cluster is deployed. + +.Prerequisites + +* You have the full domain name from the *Cluster Creator*. +* You have the _Ingress Operator Cloud Credentials_ role's ARN from the *Cluster Creator*. +* You have the _Installer_ role's ARN from the *Cluster Creator*. +* You have the _Control plane Operator Cloud Credentials_ role's ARN from the *Cluster Creator*.
+ +include::snippets/rosa-long-cluster-name.adoc[] + +.Procedure + +. In the link:https://console.aws.amazon.com/ram/[Resource Access Manager of the AWS console], create a resource share that shares the previously created public and private subnets with the *Cluster Creator's* AWS account ID. + +. Update the VPC sharing IAM roles and add the _Installer_, _Ingress Operator Cloud Credentials_, and _Control plane Operator Cloud Credentials_ roles to the principal section of the trust policy. ++ +[source,json] +---- +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "Statement1", + "Effect": "Allow", + "Principal": { + "AWS": [ + "arn:aws:iam::<account_id>:role/<prefix>-ingress-operator-cloud-credentials", + "arn:aws:iam::<account_id>:role/<prefix>-hcp-Installer-Role", + "arn:aws:iam::<account_id>:role/<prefix>-control-plane-operator-cloud-credentials" + ] + }, + "Action": "sts:AssumeRole" + } + ] +} +---- +. Create a private hosted zone in the link:https://us-east-1.console.aws.amazon.com/route53/v2/[Route 53 section of the AWS console]. In the hosted zone configuration, the domain name is `rosa.<cluster_name>.<base_domain>`. The private hosted zone must be associated with the network owner's VPC. +. Create a local hosted zone in the link:https://us-east-1.console.aws.amazon.com/route53/v2/[Route 53 section of the AWS console]. In the hosted zone configuration, the domain name is `<cluster_name>.hypershift.local`. The local hosted zone must be associated with the network owner's VPC. +.
After the hosted zones are created and associated with the network owner's VPC, provide the following to the *Cluster Creator* to continue configuration: +* Hosted zone IDs +* AWS region +* Subnet IDs \ No newline at end of file diff --git a/rosa_hcp/rosa-hcp-shared-vpc-config.adoc b/rosa_hcp/rosa-hcp-shared-vpc-config.adoc new file mode 100644 index 000000000000..293dca501ec9 --- /dev/null +++ b/rosa_hcp/rosa-hcp-shared-vpc-config.adoc @@ -0,0 +1,42 @@ +:_mod-docs-content-type: ASSEMBLY +include::_attributes/attributes-openshift-dedicated.adoc[] +[id="rosa-hcp-shared-vpc-config"] += Configuring a shared VPC for ROSA with HCP clusters +:context: rosa-shared-vpc-config + +toc::[] + +You can create {hcp-title-first} clusters in shared, centrally-managed AWS virtual private clouds (VPCs). + +[NOTE] +==== +* This process requires *two separate* AWS accounts that belong to the same AWS organization. One account functions as the VPC-owning AWS account (*VPC Owner*), while the other functions as the cluster-creating AWS account (*Cluster Creator*). + +* Installing a cluster in a shared VPC is supported only for OpenShift 4.17.9 and later. +==== + +image::522-shared-vpc-overview.png[] + +*{sp}The hosted zones can be created in either the centrally-managed VPC account or in the workload account in which the cluster is deployed. + +.Prerequisites for the *VPC Owner* +* You have an AWS account with the proper permissions to create roles and share resources. +* You link:https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs[enabled resource sharing] from the management account for your organization. +* You have access to an AWS entrypoint such as the link:https://signin.aws.amazon.com[AWS console] or the link:https://aws.amazon.com/cli/[AWS command-line interface] (CLI).
+ +.Prerequisites for the *Cluster Creator* +* You installed the link:https://console.redhat.com/openshift/downloads#tool-rosa[ROSA CLI (`rosa`)] 1.2.49 or later. +* You created all of the required link:https://docs.openshift.com/rosa/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.html[ROSA account roles] for creating a cluster. +* The *Cluster Creator's* AWS account is separate from the *VPC Owner's* AWS account. + +include::modules/rosa-hcp-sharing-vpc-creation-and-sharing.adoc[leveloffset=+1] + +[role="_additional-resources"] +[id="additional-resources_hcp-shared-vpc_vpc-creation"] +[discrete] +=== Additional resources +* See the AWS documentation for information about link:https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html[sharing your AWS resources]. + +include::modules/rosa-hcp-sharing-vpc-dns-and-roles.adoc[leveloffset=+1] +include::modules/rosa-hcp-sharing-vpc-hosted-zones.adoc[leveloffset=+1] +include::modules/rosa-hcp-sharing-vpc-cluster-creation.adoc[leveloffset=+1] \ No newline at end of file diff --git a/rosa_hcp/rosa-hcp-sts-creating-a-cluster-ext-auth.adoc b/rosa_hcp/rosa-hcp-sts-creating-a-cluster-ext-auth.adoc index d8f6af0d06cf..802473d42a0b 100644 --- a/rosa_hcp/rosa-hcp-sts-creating-a-cluster-ext-auth.adoc +++ b/rosa_hcp/rosa-hcp-sts-creating-a-cluster-ext-auth.adoc @@ -13,8 +13,6 @@ You can create {rosa-title} clusters that use an external OpenID Connect (OIDC) Since it is not possible to upgrade or convert existing {rosa-classic-short} clusters to a {hcp} architecture, you must create a new cluster to use {rosa-short} functionality. You also cannot convert a cluster that was created to use external authentication providers to use the internal OAuth2 server. You must also create a new cluster. ==== -include::snippets/imp-rosa-hcp-no-shared-vpc-support.adoc[leveloffset=+0] - [NOTE] ==== {rosa-short} clusters only support {sts-first} authentication. 
diff --git a/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc b/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc index 843edf17469f..e2da5e33435b 100644 --- a/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc +++ b/rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc @@ -22,8 +22,6 @@ Create a {rosa-short} cluster quickly by using the default options and automatic Since it is not possible to upgrade or convert existing {rosa-classic-short} clusters to hosted control plane architecture, you must create a new cluster to use {rosa-short} functionality. ==== -include::snippets/imp-rosa-hcp-no-shared-vpc-support.adoc[leveloffset=+0] - [NOTE] ==== {rosa-short} clusters only support AWS IAM Security Token Service (STS) authentication. diff --git a/rosa_install_access_delete_clusters/rosa-shared-vpc-config.adoc b/rosa_install_access_delete_clusters/rosa-shared-vpc-config.adoc index 0b1b297a134f..5131c7906bad 100644 --- a/rosa_install_access_delete_clusters/rosa-shared-vpc-config.adoc +++ b/rosa_install_access_delete_clusters/rosa-shared-vpc-config.adoc @@ -9,12 +9,7 @@ You can create {product-title} ifdef::openshift-rosa[] (ROSA) endif::openshift-rosa[] -clusters in shared, centrally-managed AWS virtual private clouds (VPCs). - -[IMPORTANT] -==== -link:https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html[Sharing VPCs across multiple AWS accounts] is currently only supported for ROSA Classic clusters using STS for authentication. -==== +clusters in shared, centrally-managed AWS virtual private clouds (VPCs). [NOTE] ==== diff --git a/rosa_release_notes/rosa-release-notes.adoc b/rosa_release_notes/rosa-release-notes.adoc index c28504c3f03f..4eabe562ae6d 100644 --- a/rosa_release_notes/rosa-release-notes.adoc +++ b/rosa_release_notes/rosa-release-notes.adoc @@ -106,6 +106,14 @@ ifndef::openshift-rosa-hcp[] endif::openshift-rosa-hcp[] // These notes need to be duplicated until the ROSA with HCP split out is completed. 
+ +ifdef::openshift-rosa[] +* **Shared VPC for ROSA with HCP clusters.** You can create Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters in shared, centrally-managed AWS virtual private clouds (VPCs). For more information, see xref:../rosa_hcp/rosa-hcp-shared-vpc-config.adoc#rosa-hcp-shared-vpc-config[Configuring a shared VPC for ROSA with HCP clusters]. +endif::openshift-rosa[] +ifdef::openshift-rosa-hcp[] +* **Shared VPC for ROSA with HCP clusters.** You can create Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters in shared, centrally-managed AWS virtual private clouds (VPCs). For more information, see xref:../rosa_hcp/rosa-hcp-shared-vpc-config.adoc#rosa-hcp-shared-vpc-config[Configuring a shared VPC for ROSA with HCP clusters]. +endif::openshift-rosa-hcp[] + ifdef::openshift-rosa[] * **{rosa-classic-short} cluster node limit update.** {rosa-classic-short} clusters versions 4.14.14 and greater can now scale to 249 worker nodes. This is an increase from the previous limit of 180 nodes. // Removed as part of OSDOCS-13310, until figures are verified.