
Commit 79ff79f

Add doc for OKE and bug fixes (#172)
1 parent 95b9e1a commit 79ff79f

10 files changed (+497, -5 lines)

docs/src/SUMMARY.md

Lines changed: 3 additions & 0 deletions
@@ -34,6 +34,9 @@
 - [Using Antrea](./networking/antrea.md)
 - [Custom Networking](./networking/custom-networking.md)
 - [Private Cluster](./networking/private-cluster.md)
+- [Managed Clusters (OKE)](./managed/managedcluster.md)
+- [Boot volume expansion](./managed/boot-volume-expansion.md)
+- [Networking customizations](./managed/networking.md)
 - [Reference](./reference/reference.md)
 - [API Reference](./reference/api-reference.md)
 - [Glossary](./reference/glossary.md)

docs/src/gs/create-workload-cluster.md

Lines changed: 4 additions & 2 deletions
@@ -3,7 +3,9 @@
 ## Workload Cluster Templates
 
 Choose one of the available templates to create your workload clusters from the
-[latest released artifacts][latest-release]. Each workload cluster template can be
+[latest released artifacts][latest-release]. Please note that the templates provided
+are to be considered as references and can be customized further as per
+the [CAPOCI API Reference][api-reference]. Each workload cluster template can be
 further configured with the parameters below.
 
 ## Workload Cluster Parameters
@@ -194,6 +196,6 @@ By default, the [OCI Cloud Controller Manager (CCM)][oci-ccm] is not installed i
 [calico]: ../networking/calico.md
 [cni]: https://www.cni.dev/
 [oci-ccm]: https://github.com/oracle/oci-cloud-controller-manager
-[latest-release]: https://github.com/oracle/cluster-api-provider-oci/releases
+[latest-release]: https://github.com/oracle/cluster-api-provider-oci/releases/latest
 [install-oci-ccm]: ./install-oci-ccm.md
 [configure-authentication]: ./install-cluster-api.html#configure-authentication
docs/src/managed/boot-volume-expansion.md

Lines changed: 50 additions & 0 deletions
@@ -0,0 +1,50 @@
# Increase boot volume

The default boot volume size of worker nodes is 50 GB. The following steps need to be followed
to increase the boot volume size.

## Increase the boot volume size in spec

The following snippet shows how to increase the boot volume size of the instances.

```yaml
kind: OCIManagedMachinePool
spec:
  nodeSourceViaImage:
    bootVolumeSizeInGBs: 100
```

## Extend the root partition

In order to take advantage of the larger size, you need to [extend the partition for the boot volume][boot-volume-extension].
A custom cloud-init script can be used for this. The following cloud-init script extends the root volume.

```bash
#!/bin/bash

# DO NOT MODIFY
curl --fail -H "Authorization: Bearer Oracle" -L0 http://169.254.169.254/opc/v2/instance/metadata/oke_init_script | base64 --decode >/var/run/oke-init.sh

## run oke provisioning script
bash -x /var/run/oke-init.sh

### adjust block volume size
/usr/libexec/oci-growfs -y

touch /var/log/oke.done
```

Encode the file contents into a base64-encoded value as follows.

```bash
cat cloud-init.sh | base64 -w 0
```

Add the value to the following `OCIManagedMachinePool` spec.

```yaml
kind: OCIManagedMachinePool
spec:
  nodeMetadata:
    user_data: "<base64 encoded value from above>"
```
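
Putting the two pieces together, the larger boot volume and the encoded cloud-init script can be set on the same pool; a minimal sketch that simply combines the two snippets above (the encoded value is illustrative):

```yaml
kind: OCIManagedMachinePool
spec:
  nodeSourceViaImage:
    # Larger boot volume requested for each worker node.
    bootVolumeSizeInGBs: 100
  nodeMetadata:
    # base64 encoded cloud-init script that runs oci-growfs on first boot.
    user_data: "<base64 encoded value from above>"
```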
[boot-volume-extension]: https://docs.oracle.com/en-us/iaas/Content/Block/Tasks/extendingbootpartition.htm

docs/src/managed/managedcluster.md

Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@
# Managed Clusters (OKE)

- **Feature status:** Experimental
- **Feature gate:** OKE=true,MachinePool=true

Cluster API Provider for OCI (CAPOCI) experimentally supports managing OCI Container
Engine for Kubernetes (OKE) clusters. CAPOCI implements this with three
custom resources:

- `OCIManagedControlPlane`
- `OCIManagedCluster`
- `OCIManagedMachinePool`

## Workload Cluster Parameters

The following Oracle Cloud Infrastructure (OCI) configuration parameters are available
when creating a managed workload cluster on OCI using one of our predefined templates:

| Parameter | Default Value | Description |
|-----------|---------------|-------------|
| `OCI_COMPARTMENT_ID` | | The OCID of the compartment in which to create the required compute, storage and network resources. |
| `OCI_MANAGED_NODE_IMAGE_ID` | | The OCID of the image for the Kubernetes worker nodes. Please read the [doc][node-images-shapes] for more details. |
| `OCI_MANAGED_NODE_SHAPE` | VM.Standard.E4.Flex | The [shape][node-images-shapes] of the Kubernetes worker nodes. |
| `OCI_MANAGED_NODE_MACHINE_TYPE_OCPUS` | 1 | The number of OCPUs allocated to the worker node instance. |
| `OCI_SSH_KEY` | | The public SSH key to be added to the Kubernetes nodes. It can be used to log in to the node and troubleshoot failures. |

## Pre-Requisites

### Environment Variables

Managed clusters also require the following feature flags to be set as environment variables before
[installing CAPI and CAPOCI components using clusterctl][install-cluster-api].

```bash
export EXP_MACHINE_POOL=true
export EXP_OKE=true
```
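
For illustration, these flags are read by `clusterctl` when the providers are initialized; a minimal sketch, assuming CAPOCI is registered with `clusterctl` as the `oci` infrastructure provider:

```bash
# Enable the experimental features, then initialize CAPI and CAPOCI.
export EXP_MACHINE_POOL=true
export EXP_OKE=true
clusterctl init --infrastructure oci
```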

### OCI Security Policies

Please read the [doc][oke-policies] and add the necessary policies required for the user group.
If instance principal is being used as the authentication mechanism, please also add the
policies for the dynamic group. Please read the [doc][install-cluster-api] to learn more about
authentication mechanisms.

## Workload Cluster Templates

Choose one of the available templates to create your workload clusters from the
[latest released artifacts][latest-release]. The managed cluster templates are of the
form `cluster-template-managed-<flavour>.yaml`. The default managed template is
`cluster-template-managed.yaml`. Please note that the templates provided are to be considered
as references and can be customized further as per the [CAPOCI API Reference][api-reference].

## Supported Kubernetes versions

The [doc][supported-versions] lists the Kubernetes versions currently supported by OKE.

## Create a new OKE cluster

The following command will create an OKE cluster using the default template. The created node pool uses
[VCN native pod networking][vcn-native-pod-networking].

```bash
OCI_COMPARTMENT_ID=<compartment-id> \
OCI_MANAGED_NODE_IMAGE_ID=<ubuntu-custom-image-id> \
OCI_SSH_KEY=<ssh-key> \
KUBERNETES_VERSION=v1.24.1 \
NAMESPACE=default \
clusterctl generate cluster <cluster-name> \
--from cluster-template-managed.yaml | kubectl apply -f -
```

## Create a new private OKE cluster

The following command will create a private OKE cluster. In this template, the control plane endpoint subnet is a
private subnet and the API endpoint is accessible only within the subnet. The created node pool uses
[VCN native pod networking][vcn-native-pod-networking].

```bash
OCI_COMPARTMENT_ID=<compartment-id> \
OCI_MANAGED_NODE_IMAGE_ID=<ubuntu-custom-image-id> \
OCI_SSH_KEY=<ssh-key> \
KUBERNETES_VERSION=v1.24.1 \
NAMESPACE=default \
clusterctl generate cluster <cluster-name> \
--from cluster-template-managedprivate.yaml | kubectl apply -f -
```
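
After applying either template, reconciliation can be followed from the management cluster; a minimal sketch, assuming the resource names below match the CRDs installed by CAPOCI:

```bash
# Watch the cluster and the OKE-specific resources (names and namespace are illustrative).
kubectl get cluster,ocimanagedcluster,ocimanagedcontrolplane,ocimanagedmachinepool -n default
clusterctl describe cluster <cluster-name> -n default
```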

[node-images-shapes]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Reference/contengimagesshapes.htm
[oke-policies]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengpolicyconfig.htm
[install-cluster-api]: ../gs/install-cluster-api.md
[latest-release]: https://github.com/oracle/cluster-api-provider-oci/releases/latest
[api-reference]: ../reference/api-reference.md
[supported-versions]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengaboutk8sversions.htm#supportedk8sversions
[vcn-native-pod-networking]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengpodnetworking_topic-OCI_CNI_plugin.htm

docs/src/managed/networking.md

Lines changed: 44 additions & 0 deletions
@@ -0,0 +1,44 @@
# Networking customizations

## Use a pre-existing VCN

The following `OCIManagedCluster` snippet can be used to use a pre-existing VCN.

```yaml
kind: OCIManagedCluster
spec:
  compartmentId: "${OCI_COMPARTMENT_ID}"
  networkSpec:
    skipNetworkManagement: true
    vcn:
      id: "<vcn-id>"
      networkSecurityGroups:
        - id: "<control-plane-endpoint-nsg-id>"
          role: control-plane-endpoint
          name: control-plane-endpoint
        - id: "<worker-nsg-id>"
          role: worker
          name: worker
        - id: "<pod-nsg-id>"
          role: pod
          name: pod
      subnets:
        - id: "<control-plane-endpoint-subnet-id>"
          role: control-plane-endpoint
          name: control-plane-endpoint
          type: public
        - id: "<worker-subnet-id>"
          role: worker
          name: worker
        - id: "<pod-subnet-id>"
          role: pod
          name: pod
        - id: "<service-lb-subnet-id>"
          role: service-lb
          name: service-lb
          type: public
```

## Use flannel as CNI

Use the template `cluster-template-managed-flannel.yaml` as an example of using flannel as the CNI. The template
sets the correct parameters in the spec as well as creates the proper security rules in the Network Security Groups (NSGs).
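
For reference, the flannel flavour can be generated and applied in the same way as the default template shown earlier; the parameters below simply mirror those examples:

```bash
OCI_COMPARTMENT_ID=<compartment-id> \
OCI_MANAGED_NODE_IMAGE_ID=<ubuntu-custom-image-id> \
OCI_SSH_KEY=<ssh-key> \
KUBERNETES_VERSION=v1.24.1 \
NAMESPACE=default \
clusterctl generate cluster <cluster-name> \
--from cluster-template-managed-flannel.yaml | kubectl apply -f -
```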

exp/api/v1beta1/ocimanagedcluster_webhook.go

Lines changed: 1 addition & 1 deletion
@@ -593,7 +593,7 @@ func (c *OCIManagedCluster) GetLBServiceDefaultEgressRules() []infrastructurev1b
 	return []infrastructurev1beta1.EgressSecurityRuleForNSG{
 		{
 			EgressSecurityRule: infrastructurev1beta1.EgressSecurityRule{
-				Description: common.String("Pod to Kubernetes API endpoint communication (when using VCN-native pod networking)."),
+				Description: common.String("Load Balancer to Worker nodes node ports."),
 				Protocol:    common.String("6"),
 				TcpOptions: &infrastructurev1beta1.TcpOptions{
 					DestinationPortRange: &infrastructurev1beta1.PortRange{

exp/api/v1beta1/ocimanagedmachinepool_webhook_test.go

Lines changed: 17 additions & 0 deletions
@@ -44,6 +44,23 @@ func TestOCIManagedMachinePool_CreateDefault(t *testing.T) {
 				}))
 			},
 		},
+		{
+			name: "should not override cni type",
+			m: &OCIManagedMachinePool{
+				Spec: OCIManagedMachinePoolSpec{
+					NodePoolNodeConfig: &NodePoolNodeConfig{
+						NodePoolPodNetworkOptionDetails: &NodePoolPodNetworkOptionDetails{
+							CniType: FlannelCNI,
+						},
+					},
+				},
+			},
+			expect: func(g *gomega.WithT, c *OCIManagedMachinePool) {
+				g.Expect(c.Spec.NodePoolNodeConfig.NodePoolPodNetworkOptionDetails).To(Equal(&NodePoolPodNetworkOptionDetails{
+					CniType: FlannelCNI,
+				}))
+			},
+		},
 	}
 
 	for _, test := range tests {
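
To exercise just this defaulting webhook test locally, something along these lines should work (the package path follows the file shown above):

```bash
go test ./exp/api/v1beta1/... -run TestOCIManagedMachinePool_CreateDefault -v
```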
