README.md (10 additions, 10 deletions)
@@ -52,15 +52,15 @@ The operator can configure services to expose WebLogic applications and features
* How will users authenticate?
* Is the network channel encrypted?
- While it is natural to expose web applications outside the cluster, exposing administrative features like the administration console and a T3 channel for WLST should be given more careful consideration. There are alternative options that should be weighed. For example, Kubernetes provides the ability to securely access a shell running in a container in a pod in the cluster. WLST could be executed from such an environment, meaning the T3 communications are entirely within the Kubernetes cluster and therefore more secure.
+ While it is natural to expose web applications outside the cluster, exposing administrative features like the Administration Console and a T3 channel for WLST should be given more careful consideration. There are alternative options that should be weighed. For example, Kubernetes provides the ability to securely access a shell running in a container in a pod in the cluster. WLST could be executed from such an environment, meaning the T3 communications are entirely within the Kubernetes cluster and therefore more secure.
Oracle recommends careful consideration before deciding to expose any administrative interfaces externally.
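The in-cluster alternative described above can be reached with `kubectl exec`. The sketch below only prints the command rather than running it, because both the pod name and the WLST location are hypothetical placeholders that depend on your domain and image:

```shell
# Hypothetical pod name -- operator-created Administration Server pods are
# labeled with weblogic.domainUID and weblogic.serverName, so look yours up.
ADMIN_POD="domain1-admin-server"
# Typical WLST location inside a WebLogic Server image; adjust for your image.
WLST_PATH="/u01/oracle/oracle_common/common/bin/wlst.sh"
# Print the command you would run to get a WLST shell inside the cluster:
echo "kubectl exec -it ${ADMIN_POD} -- ${WLST_PATH}"
```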
# Requirements
The Oracle WebLogic Server Kubernetes Operator has the following requirements:
- * Kubernetes 1.7.5+, 1.8.0+ or 1.9.0+ (check with `kubectl version`)
+ * Kubernetes 1.7.5+, 1.8.0+ (check with `kubectl version`). Note that Kubernetes 1.9.x is not supported yet.
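A quick way to test whether a version reported by `kubectl version` falls in the supported 1.7/1.8 range; a minimal sketch, assuming the server reports a GitVersion string of the form `v1.8.4` (the sample value below is illustrative):

```shell
# Parse major.minor out of a version string like the one kubectl reports.
version="v1.8.4"
major=$(echo "${version}" | cut -d. -f1 | tr -d v)
minor=$(echo "${version}" | cut -d. -f2)
if [ "${major}" -eq 1 ] && [ "${minor}" -ge 7 ] && [ "${minor}" -le 8 ]; then
  echo "Kubernetes ${version} is in the supported range"
else
  echo "Kubernetes ${version} is not supported by this release"
fi
```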
@@ -73,10 +73,10 @@ The following features are not certified or supported in the Technology Preview
* Whole Server Migration
* Consensus Leasing
- * Node Manager (although it is used internally for the liveness probe and to start WebLogic servers)
+ * Node Manager (although it is used internally for the liveness probe and to start WebLogic Server instances)
* Dynamic domains (the current certification only covers configured clusters, certification of dynamic clusters is planned at a future date)
* Multicast
- * If using a `hostPath` persistent volume, then it must have read/write/many permissions for all container/pods in the WebLogic deployment
+ * If using a `hostPath` persistent volume, then it must have read/write/many permissions for all container/pods in the WebLogic Server deployment
* Multitenancy
* Production redeployment
@@ -97,11 +97,11 @@ Documentation for APIs is provided here:
If you would rather watch the developers demonstrate the operator than read the documentation, then here are your videos:
* [Installing the operator](https://youtu.be/B5UmY2xAJnk) includes the installation and also shows using the operator's REST API.
- * [Creating a WebLogic domain with the operator](https://youtu.be/Ey7o8ldKv9Y) shows creation of two WebLogic domains including accessing the administration console and looking at the various resources created in Kubernetes - services, Ingresses, pods, load balancers, etc.
+ * [Creating a WebLogic domain with the operator](https://youtu.be/Ey7o8ldKv9Y) shows creation of two WebLogic domains including accessing the Administration Console and looking at the various resources created in Kubernetes - services, Ingresses, pods, load balancers, etc.
* [Deploying a web application, scaling a WebLogic cluster with the operator and verifying load balancing](https://youtu.be/hx4OPhNFNDM)
* [Using WLST against a domain running in Kubernetes](https://youtu.be/eY-KXEk8rI4) shows how to create a data source for an Oracle database that is also running in Kubernetes.
* [Scaling a WebLogic cluster with WLDF](https://youtu.be/Q8iZi2e9HvU)
- *watch this space, more to come!
+ * Watch this space, more to come!
Like what you see? Read on for all the nitty-gritty details...
@@ -157,10 +157,10 @@ Please refer to [Scaling a WebLogic cluster](site/scaling.md) for more informati
Please refer to [Shutting down a domain](site/shutdown-domain.md) for information about how to shut down a domain running in Kubernetes.
- ## Load balancing with the Traefik ingress controller
+ ## Load balancing with the Traefik Ingress controller
The initial Technology Preview release of the operator supports only the Traefik load balancer/Ingress controller. Support for other load balancers is planned in the future.
- Please refer to [Load balancing with the Traefik ingress controller](site/traefik.md) for information about current capabilities.
+ Please refer to [Load balancing with Traefik](site/traefik.md) for information about current capabilities.
[comment]: #(Exporting operator logs to ELK. The operator provides an option to export its log files to the ELK stack. Please refer to [ELK integration]site/elk.md for information about this capability.)
@@ -169,11 +169,11 @@ Please refer to [Load balancing with the Traefik ingress controller](site/traefi
To permanently remove a domain from a Kubernetes cluster, first shut down the domain using the instructions provided above in the section titled “Shutting down a domain”, then remove the persistent volume claim and the persistent volume using these commands:
```
- kubectl delete pvc PVC-NAME
+ kubectl delete pvc PVC-NAME -n NAMESPACE
kubectl delete pv PV-NAME
```
- Find the names of the persistent volume claim and the persistent volume in the domain custom resource YAML file, or if it is not available, check for the `domainUID` in the metadata on the persistent volumes.
+ Find the names of the persistent volume claim (represented above as `PVC-NAME`) and the persistent volume (represented as `PV-NAME`) in the domain custom resource YAML file, or if it is not available, check for the `domainUID` in the metadata on the persistent volumes. Replace `NAMESPACE` with the namespace that the operator is running in.
To permanently delete the actual domain configuration, delete the physical volume using the appropriate tools. For example, if the persistent volume used the `hostPath` provider, then delete the corresponding directory on the Kubernetes master.
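Because the persistent volume and claim are labeled with `weblogic.domainUID` (see the architecture notes), their names can be found with a label selector. A sketch that prints the lookup and delete commands rather than running them; `domain1` and `default` are placeholder values:

```shell
DOMAIN_UID="domain1"   # placeholder -- use your domain's UID
NAMESPACE="default"    # placeholder -- the namespace holding the claim
# Find the claim and the volume by their weblogic.domainUID label:
echo "kubectl get pvc -n ${NAMESPACE} -l weblogic.domainUID=${DOMAIN_UID}"
echo "kubectl get pv -l weblogic.domainUID=${DOMAIN_UID}"
# Then delete them using the names reported above:
echo "kubectl delete pvc PVC-NAME -n ${NAMESPACE}"
echo "kubectl delete pv PV-NAME"
```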
kubernetes/internal/domain-job-template.yaml (12 additions, 6 deletions)
@@ -60,10 +60,16 @@ data:
fail "The domain secret %SECRETS_MOUNT_PATH%/password was not found"
fi
- # Do not proceed if the domain already exists
+ # Check if the domain already exists
domainFolder=${SHARED_PATH}/domain/%DOMAIN_NAME%
if [ -d ${domainFolder} ]; then
- fail "The create domain job will not overwrite an existing domain. The domain folder ${domainFolder} already exists"
+ # check if user asked to replace existing data
+ if [ "%REPLACE_EXISTING_DOMAIN%" = "true" ]; then
+ echo "As requested, deleting all data in the persistent volume to make way for the new domain!"
+ rm -rf ${SHARED_PATH}/*
+ else
+ fail "The create domain job will not overwrite an existing domain unless you set the parameter 'replaceExistingDomain' to 'true'. The domain folder ${domainFolder} already exists"
+ fi
fi
# Create the base folders
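Outside the template, the added guard behaves like this self-contained sketch, using a temporary directory in place of the mounted volume; the plain shell variables stand in for the `%DOMAIN_NAME%` and `%REPLACE_EXISTING_DOMAIN%` substitutions:

```shell
# Simulate the create-domain job's guard against overwriting a domain.
SHARED_PATH=$(mktemp -d)         # stands in for the persistent volume mount
DOMAIN_NAME="base_domain"        # stands in for %DOMAIN_NAME%
REPLACE_EXISTING_DOMAIN="true"   # stands in for %REPLACE_EXISTING_DOMAIN%

domainFolder="${SHARED_PATH}/domain/${DOMAIN_NAME}"
mkdir -p "${domainFolder}"       # pretend a domain already exists

if [ -d "${domainFolder}" ]; then
  if [ "${REPLACE_EXISTING_DOMAIN}" = "true" ]; then
    echo "Deleting all data in the persistent volume to make way for the new domain"
    rm -rf "${SHARED_PATH:?}"/*
  else
    echo "Refusing to overwrite existing domain folder ${domainFolder}"
    exit 1
  fi
fi
```

With `REPLACE_EXISTING_DOMAIN` set to anything other than `true`, the sketch fails instead of deleting data, matching the template's `fail` branch.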
@@ -177,8 +183,8 @@ data:
sleep 15
echo "Finished waiting for the nodemanager to start"
site/architecture.md (5 additions, 5 deletions)
@@ -9,9 +9,9 @@ The operator is packaged in a Docker image `container-registry.oracle.com/middle
Scripts are provided to deploy the operator to a Kubernetes cluster. These scripts also provide options to install and configure a load balancer and ELK integration.
- The operator registers a Kubernetes custom resource definition called `domain.weblogic.oracle` (shortname `domain`, plural `domains`).
+ The operator registers a Kubernetes custom resource definition called `domain.weblogic.oracle` (shortname `domain`, plural `domains`).
- The diagram below shows the general layout of highlevel components, including optional components, in a Kubernetes cluster that is hosting WebLogic domains and the operator:
+ The diagram below shows the general layout of high-level components, including optional components, in a Kubernetes cluster that is hosting WebLogic domains and the operator:
@@ -32,11 +32,11 @@ This diagram shows the following details:
* A persistent volume is created using one of the available providers. The chosen provider must support “Read Write Many” access mode. A persistent volume claim is created to claim space in that persistent volume. Both the persistent volume and the persistent volume claim are labeled with `weblogic.domainUID` and these labels allow the operator to find the correct volume for a particular domain. There must be a different persistent volume for each domain. The shared state on the persistent volume includes the “domain” directory, the “applications” directory, a directory for storing logs and a directory for any file-based persistence stores.
- * A pod is created for the WebLogic Administration Server. This pod is labeled with `weblogic.domainUID`, `weblogic.serverName` and `weblogic.domainName`. One container runs in this pod. WebLogic Node Manager and Administration Server processes are run inside this container. The Node Manager process is used as an internal implementation detail for the liveness probe, for patching and to provide monitoring and control capabilities to the administration console. It is not intended to be used for other purposes, and it may be removed in some future release.
+ * A pod is created for the WebLogic Administration Server. This pod is labeled with `weblogic.domainUID`, `weblogic.serverName` and `weblogic.domainName`. One container runs in this pod. WebLogic Node Manager and Administration Server processes are run inside this container. The Node Manager process is used as an internal implementation detail for the liveness probe, for patching and to provide monitoring and control capabilities to the Administration Console. It is not intended to be used for other purposes, and it may be removed in some future release.
* A `ClusterIP` type service is created for the Administration Server pod. This service provides a stable, well-known network (DNS) name for the Administration Server. This name is derived from the `domainUID` and the Administration Server name, and it is known before starting up any pod. The Administration Server `ListenAddress` is set to this well-known name. `ClusterIP` type services are only visible inside the Kubernetes cluster. They are used to provide the well-known names that all of the servers in a domain use to communicate with each other. This service is labeled with `weblogic.domainUID` and `weblogic.domainName`.
* A `NodePort` type service is created for the Administration Server pod. This service provides HTTP access to the Administration Server to clients that are outside the Kubernetes cluster. This service is intended to be used to access the WebLogic Server Administration Console only. This service is labeled with `weblogic.domainUID` and `weblogic.domainName`.
* If requested when configuring the domain, a second `NodePort` type service is created for the Administration Server pod. This second service is used to expose a WebLogic channel for the T3 protocol. This service provides T3 access to the Administration Server to clients that are outside the Kubernetes cluster. This service is intended to be used for WLST connections to the Administration Server. This service is labeled with `weblogic.domainUID` and `weblogic.domainName`.
- * A pod is created for each WebLogic Managed Server. These pods are labeled with `weblogic.domainUID`, `weblogic.serverName` and `weblogic.domainName`. One container runs in each pod. WebLogic Node Manager and Managed Server processes are run inside each of these containers. The Node Manager process is used as an internal implementation detail for the liveness probe. It is not intended to be used for other purposes, and it may be removed in some future release.
+ * A pod is created for each WebLogic Managed Server. These pods are labeled with `weblogic.domainUID`, `weblogic.serverName` and `weblogic.domainName`. One container runs in each pod. WebLogic Node Manager and Managed Server processes are run inside each of these containers. The Node Manager process is used as an internal implementation detail for the liveness probe. It is not intended to be used for other purposes, and it may be removed in some future release.
* A `NodePort` type service is created for each Managed Server pod that contains a Managed Server that is not part of a WebLogic cluster. These services provide HTTP access to the Managed Servers to clients that are outside the Kubernetes cluster. These services are intended to be used to access applications running on the Managed Servers. These services are labeled with `weblogic.domainUID` and `weblogic.domainName`.
* An Ingress is created for each WebLogic cluster. This Ingress provides load balanced HTTP access to all Managed Servers in that WebLogic cluster. The operator updates the Ingress every time a Managed Server in the WebLogic cluster becomes “ready” or ceases to be able to service requests, such that the Ingress always points to just those Managed Servers that are able to handle user requests. The Ingress is labeled with `weblogic.domainUID`, `weblogic.clusterName` and `weblogic.domainName`. The Ingress is also annotated with a class which is used to match Ingresses to the correct instances of the load balancer. In the Technology Preview release, there is one instance of the load balancer running for each WebLogic cluster, and the load balancers are configured with the root URL path (“/”). More flexible load balancer configuration is planned for a future release.
* If the ELK integration was requested when configuring the operator, there will also be another pod that runs logstash in a container. This pod will publish the logs from all WebLogic Server instances in the domain into ElasticSearch. There is one logstash per domain, but only one ElasticSearch and one Kibana for the entire Kubernetes cluster.
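The well-known Administration Server service name mentioned in the list above is derived from the `domainUID` and the server name. The exact derivation is operator-internal; the sketch below assumes a simple hyphen join with lowercasing (Kubernetes service names must be lowercase), purely for illustration:

```shell
# Assumed naming sketch -- the operator's real derivation may differ.
domain_uid="domain1"
admin_server_name="AdminServer"
service_name=$(echo "${domain_uid}-${admin_server_name}" | tr '[:upper:]' '[:lower:]')
echo "${service_name}"   # prints domain1-adminserver
```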
@@ -59,7 +59,7 @@ The operator expects (and requires) that all state be stored outside of the Dock
It is worth providing some background on why this approach was adopted, in addition to the fact that this separation is consistent with other existing operators (for other products) and the Kubernetes “cattle, not pets” philosophy when it comes to containers.
- The external state approach allows the operator to treat the Docker images as essentially immutable, read-only, binary images. This means that the image needs to be pulled only once, and that many domains can share the same image. This helps to minimize the amount of bandwidth and storage needed for WebLogic Server Docker images.
+ The external state approach allows the operator to treat the Docker images as essentially immutable, read-only, binary images. This means that the image needs to be pulled only once, and that many domains can share the same image. This helps to minimize the amount of bandwidth and storage needed for WebLogic Server Docker images.
This approach also eliminates the need to manage any state created in a running container, because all of the state that needs to be preserved is written into either the persistent volume or a database back end. The containers and pods are completely throwaway and can be replaced with new containers and pods as necessary. This makes handling failures and rolling restarts much simpler because there is no need to preserve any state inside a running container.
0 commit comments