
Commit f848e43

Merge branch 'rmarano/change-k8s-setup' into 'main'

Change K8s setup doc

See merge request weblogic-cloud/weblogic-kubernetes-operator!4704
(cherry picked from commit ff14f6e)

29fdf66 change K8s setup doc
bfdf174 incorporate edits from Marina

1 parent e90d08a


documentation/site/content/managing-operators/k8s-setup.md

Lines changed: 40 additions & 110 deletions
````diff
@@ -27,148 +27,78 @@ We have provided our hints and tips for several of these options in the following
 
 ### Set up Kubernetes on bare compute resources in a cloud
 
-Follow the basic steps from the [Terraform Kubernetes installer for Oracle Cloud Infrastructure](https://github.com/oracle/terraform-kubernetes-installer).
+Follow the basic steps from the [Terraform OKE Module Installer for Oracle Cloud Infrastructure](https://oracle-terraform-modules.github.io/terraform-oci-oke/).
 
 #### Prerequisites
 
-1. Download and install [Terraform](https://www.terraform.io/) (v0.10.3 or later).
-2. Download and install the [Terraform Provider for Oracle Cloud Infrastructure](https://github.com/terraform-providers/terraform-provider-oci) (v2.0.0 or later).
-3. Create a Terraform configuration file at `~/.terraformrc` that specifies the path to the Oracle Cloud Infrastructure provider:
+1. Download and install the [Terraform OKE Module Installer for Oracle Cloud Infrastructure](https://github.com/oracle-terraform-modules/terraform-oci-oke).
+1. Create a directory for the Terraform module:
 ```
-providers {
-  oci = "<path_to_provider_binary>/terraform-provider-oci"
-}
+$ mkdir terraformmodule
+$ cd terraformmodule
 ```
-4. Ensure that you have [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) installed if you plan to interact with the cluster locally.
+1. Ensure that you have [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) installed if you plan to interact with the cluster locally.
````
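Before starting, one quick way to confirm the prerequisite tooling is in place is to check that each CLI responds. This sketch assumes the Terraform CLI, `kubectl`, and the OCI CLI (used later by `oci ce cluster create-kubeconfig`) are all on your `PATH`:

```shell
# Report client versions; any "command not found" means a prerequisite is missing.
$ terraform version
$ kubectl version --client
$ oci --version
```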
````diff
 
 #### Quick start
 
+The quick start uses the sample provided in [Multi-region service mesh with Istio and OKE](https://github.com/oracle-terraform-modules/terraform-oci-oke/tree/main/examples/istio-mc).
+
 1. Do a `git clone` of the Terraform OKE module project:
 
 ```shell
-$ git clone https://github.com/oracle/terraform-kubernetes-installer.git
+$ git clone https://github.com/oracle-terraform-modules/terraform-oci-oke.git
 ```
-1. Initialize your project:
+1. Run the following commands:
 
 ```shell
-$ cd terraform-kubernetes-installer
+$ cd terraform-oci-oke/examples
+$ mkdir okewko
+$ cp -rf istio-mc okewko
+$ cd okewko
 ```
+
+1. Edit `c1.tf` and `c2.tf` to add:
+
 ```shell
-$ terraform init
+allow_bastion_cluster_access = true
+bastion_is_public = true
+control_plane_is_public = true
 ```
````
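The three settings added to `c1.tf` and `c2.tf` are inputs to each cluster definition, so they need to sit inside the existing configuration blocks rather than be appended to the end of the files (this assumes the istio-mc sample defines each cluster in its own `.tf` file, as the step above implies). A quick check that both files picked up the edit:

```shell
# Print every line that sets one of the three new options, with file
# name and line number, so a missed file is easy to spot.
$ grep -n -E 'allow_bastion_cluster_access|bastion_is_public|control_plane_is_public' c1.tf c2.tf
```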
````diff
 
-1. Copy the example `terraform.tfvars`:
-
 ```shell
-$ cp terraform.example.tfvars terraform.tfvars
+$ cp terraform.tfvars.example terraform.tfvars
 ```
 
-1. Edit the `terraform.tfvars` file to include values for your tenancy, user, and compartment. Optionally, edit the variables to change the `Shape` of the VMs for your Kubernetes master and workers, and your `etcd` cluster. For example:
-
-```properties
-#give a label to your cluster to help identify it if you have multiple
-label_prefix="weblogic-operator-1-"
-
-#identification/authorization info
-tenancy_ocid = "ocid1.tenancy...."
-compartment_ocid = "ocid1.compartment...."
-fingerprint = "..."
-private_key_path = "/Users/username/.oci/oci_api_key.pem"
-user_ocid = "ocid1.user..."
-
-#shapes for your VMs
-etcdShape = "VM.Standard1.2"
-k8sMasterShape = "VM.Standard1.8"
-k8sWorkerShape = "VM.Standard1.8"
-k8sMasterAd1Count = "1"
-k8sWorkerAd1Count = "2"
-
-#this ingress is set to wide-open for testing **not secure**
-etcd_ssh_ingress = "0.0.0.0/0"
-master_ssh_ingress = "0.0.0.0/0"
-worker_ssh_ingress = "0.0.0.0/0"
-master_https_ingress = "0.0.0.0/0"
-worker_nodeport_ingress = "0.0.0.0/0"
-
-#create iscsi volumes to store your etcd and /var/lib/docker info
-worker_iscsi_volume_create = true
-worker_iscsi_volume_size = 100
-etcd_iscsi_volume_create = true
-etcd_iscsi_volume_size = 50
-```
+1. In the `terraform.tfvars` file, update all values with the correct paths to the keys and IDs.
````
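The exact variable names come from the sample's `terraform.tfvars.example`, so the sketch below is only illustrative; the key names shown (`api_fingerprint`, `tenancy_id`, and so on) are assumptions and must be replaced by whatever the example file actually declares:

```shell
# A hypothetical, filled-in terraform.tfvars; every name and value here
# is a placeholder to show the shape of the file, not the real schema.
$ cat terraform.tfvars
api_fingerprint      = "aa:bb:cc:..."
api_private_key_path = "~/.oci/oci_api_key.pem"
tenancy_id           = "ocid1.tenancy.oc1..."
user_id              = "ocid1.user.oc1..."
compartment_id       = "ocid1.compartment.oc1..."
region               = "us-phoenix-1"
```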
````diff
 
-1. Test and apply your changes:
-
-```shell
-$ terraform plan
-```
-```shell
-$ terraform apply
-```
+1. Run the commands:
+
+```shell
+$ terraform init
+$ terraform plan
+$ terraform apply --auto-approve
+```
+
+This will create two OKE clusters.
````
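Note that `--auto-approve` skips the interactive confirmation, so review the `terraform plan` output before applying anywhere that matters. After the apply completes, the stack's outputs are a convenient place to find the identifiers used in the next steps (which outputs exist depends on the example):

```shell
# List all outputs the example exposes; cluster OCIDs, if published,
# can be fed to 'oci ce cluster create-kubeconfig' below.
$ terraform output
```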
````diff
 
-1. Test your cluster using the built-in script `scripts/cluster-check.sh`:
-
-```shell
-$ scripts/cluster-check.sh
-```
-1. Output the SSH private key for use later:
-```shell
-$ rm -f generated/instances_id_rsa && terraform output ssh_private_key > generated/instances_id_rsa && chmod 600 generated/instances_id_rsa
-```
+1. Log in to the OCI dashboard.
+
+   a. Go to Developer Services > OKE clusters.
+
+   b. Select the c1 cluster > Access Cluster.
+
+   c. Copy and paste this command to create the kubeconfig, for example:
+
+```shell
+$ oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1...... --file $HOME/.kube/config --region us-phoenix-1 --token-version 2.0.0 --kube-endpoint PUBLIC_ENDPOINT
+$ export KUBECONFIG=$HOME/.kube/config
+```
````
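Because the quick start creates two clusters, it can be easier to keep a separate kubeconfig file per cluster and switch by re-pointing `KUBECONFIG`; a sketch, with the cluster OCIDs as placeholders:

```shell
# One kubeconfig per cluster avoids mixing contexts between c1 and c2.
$ oci ce cluster create-kubeconfig --cluster-id <c1-cluster-ocid> \
      --file $HOME/.kube/config-c1 --region us-phoenix-1 \
      --token-version 2.0.0 --kube-endpoint PUBLIC_ENDPOINT
$ oci ce cluster create-kubeconfig --cluster-id <c2-cluster-ocid> \
      --file $HOME/.kube/config-c2 --region us-phoenix-1 \
      --token-version 2.0.0 --kube-endpoint PUBLIC_ENDPOINT
$ export KUBECONFIG=$HOME/.kube/config-c1   # work against c1
```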
````diff
 
-1. If you need shared storage between your Kubernetes worker nodes, enable and configure NFS:
-
-   In the current GA version, Oracle Container Engine for Kubernetes supports network block storage that can be shared across nodes with access permission RWOnce (meaning that only one node can write; the others can only read).
-   If you choose to place your domain on a persistent volume, you must use a shared file system to store the WebLogic domain configuration, which MUST be accessible from all the pods across the nodes.
-   Oracle recommends that you use the Oracle Cloud Infrastructure File Storage Service (or equivalent on other cloud providers).
-   Alternatively, you may install an NFS server on one node and share the file system across all the nodes.
-
-   {{% notice note %}} Currently, we recommend that you use NFS version 3.0 for running WebLogic Server on Oracle Container Engine for Kubernetes. During certification, we found that when using NFS 4.0, the servers in the WebLogic domain went into a failed state intermittently. Because multiple threads use NFS (default store, diagnostics store, Node Manager, logging, and domain_home), there are issues when accessing the file store. These issues are removed by changing the NFS to version 3.0.
-   {{% /notice %}}
-
-```shell
-$ terraform output worker_public_ips
-IP1,
-IP2
-$ terraform output worker_private_ips
-PRIVATE_IP1,
-PRIVATE_IP2
-```
-```shell
-$ ssh -i `pwd`/generated/instances_id_rsa opc@IP1
-worker-1$ sudo su -
-worker-1# yum install -y nfs-utils
-worker-1# mkdir /scratch
-worker-1# echo "/scratch PRIVATE_IP2(rw)" >> /etc/exports
-worker-1# systemctl restart nfs
-worker-1# exit
-worker-1$ exit
-```
-```shell
-# configure worker-2 to mount the share from worker-1
-$ ssh -i `pwd`/generated/instances_id_rsa opc@IP2
-worker-2$ sudo su -
-worker-2# yum install -y nfs-utils
-worker-2# mkdir /scratch
-worker-2# echo "PRIVATE_IP1:/scratch /scratch nfs nfsvers=3 0 0" >> /etc/fstab
-worker-2# mount /scratch
-worker-2# exit
-worker-2$ exit
-```
+1. Verify that the cluster is accessible:
+
+```shell
+$ kubectl get nodes
+```
````
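For the verification step, a healthy cluster lists each worker node as `Ready`; the output below is hypothetical, and the node names, counts, and versions will differ:

```shell
$ kubectl get nodes
NAME         STATUS   ROLES   AGE   VERSION
10.0.10.2    Ready    node    5m    v1.26.2
10.0.10.3    Ready    node    5m    v1.26.2
```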
````diff
 
 ### Install Kubernetes on your own compute resources
 
````