site/architecture.md: 5 additions & 5 deletions
@@ -9,9 +9,9 @@ The operator is packaged in a Docker image `container-registry.oracle.com/middle
Scripts are provided to deploy the operator to a Kubernetes cluster. These scripts also provide options to install and configure a load balancer and ELK integration.
-The operator registers a Kubernetes custom resource definition called `domain.weblogic.oracle` (shortname `domain`, plural `domains`).
+The operator registers a Kubernetes custom resource definition called `domain.weblogic.oracle` (shortname `domain`, plural `domains`).
-The diagram below shows the general layout of highlevel components, including optional components, in a Kubernetes cluster that is hosting WebLogic domains and the operator:
+The diagram below shows the general layout of high-level components, including optional components, in a Kubernetes cluster that is hosting WebLogic domains and the operator:
@@ -32,11 +32,11 @@ This diagram shows the following details:
* A persistent volume is created using one of the available providers. The chosen provider must support “Read Write Many” access mode. A persistent volume claim is created to claim space in that persistent volume. Both the persistent volume and the persistent volume claim are labeled with `weblogic.domainUID`, and these labels allow the operator to find the correct volume for a particular domain (see the label-selector sketch after this list). There must be a different persistent volume for each domain. The shared state on the persistent volume includes the “domain” directory, the “applications” directory, a directory for storing logs, and a directory for any file-based persistence stores.
-* A pod is created for the WebLogic Administration Server. This pod is labeled with `weblogic.domainUID`, `weblogic.serverName` and `weblogic.domainName`. One container runs in this pod. WebLogic Node Manager and Administration Server processes are run inside this container. The Node Manager process is used as an internal implementation detail for the liveness probe, for patching and to provide monitoring and control capabilities to the administration console. It is not intended to be used for other purposes, and it may be removed in some future release.
+* A pod is created for the WebLogic Administration Server. This pod is labeled with `weblogic.domainUID`, `weblogic.serverName` and `weblogic.domainName`. One container runs in this pod. WebLogic Node Manager and Administration Server processes are run inside this container. The Node Manager process is used as an internal implementation detail for the liveness probe, for patching and to provide monitoring and control capabilities to the Administration Console. It is not intended to be used for other purposes, and it may be removed in some future release.
* A `ClusterIP` type service is created for the Administration Server pod. This service provides a stable, well-known network (DNS) name for the Administration Server. This name is derived from the `domainUID` and the Administration Server name, and it is known before starting up any pod. The Administration Server `ListenAddress` is set to this well-known name. `ClusterIP` type services are only visible inside the Kubernetes cluster. They are used to provide the well-known names that all of the servers in a domain use to communicate with each other. This service is labeled with `weblogic.domainUID` and `weblogic.domainName`.
* A `NodePort` type service is created for the Administration Server pod. This service provides HTTP access to the Administration Server to clients that are outside the Kubernetes cluster. This service is intended to be used to access the WebLogic Server Administration Console only. This service is labeled with `weblogic.domainUID` and `weblogic.domainName`.
* If requested when configuring the domain, a second `NodePort` type service is created for the Administration Server pod. This second service is used to expose a WebLogic channel for the T3 protocol. This service provides T3 access to the Administration Server to clients that are outside the Kubernetes cluster. This service is intended to be used for WLST connections to the Administration Server. This service is labeled with `weblogic.domainUID` and `weblogic.domainName`.
-* A pod is created for each WebLogic Managed Server. These pods are labeled with `weblogic.domainUID`, `weblogic.serverName` and `weblogic.domainName`. One container runs in each pod. WebLogic Node Manager and Managed Server processes are run inside each of these containers. The Node Manager process is used as an internal implementation detail for the liveness probe. It is not intended to be used for other purposes, and it may be removed in some future release.
+* A pod is created for each WebLogic Managed Server. These pods are labeled with `weblogic.domainUID`, `weblogic.serverName` and `weblogic.domainName`. One container runs in each pod. WebLogic Node Manager and Managed Server processes are run inside each of these containers. The Node Manager process is used as an internal implementation detail for the liveness probe. It is not intended to be used for other purposes, and it may be removed in some future release.
* A `NodePort` type service is created for each Managed Server pod that contains a Managed Server that is not part of a WebLogic cluster. These services provide HTTP access to the Managed Servers to clients that are outside the Kubernetes cluster. These services are intended to be used to access applications running on the Managed Servers. These services are labeled with `weblogic.domainUID` and `weblogic.domainName`.
* An Ingress is created for each WebLogic cluster. This Ingress provides load balanced HTTP access to all Managed Servers in that WebLogic cluster. The operator updates the Ingress every time a Managed Server in the WebLogic cluster becomes “ready” or ceases to be able to service requests, such that the Ingress always points to just those Managed Servers that are able to handle user requests. The Ingress is labeled with `weblogic.domainUID`, `weblogic.clusterName` and `weblogic.domainName`. The Ingress is also annotated with a class which is used to match Ingresses to the correct instances of the load balancer. In the Technology Preview release, there is one instance of the load balancer running for each WebLogic cluster, and the load balancers are configured with the root URL path (“/”). More flexible load balancer configuration is planned for a future release.
* If the ELK integration was requested when configuring the operator, there will also be another pod that runs Logstash in a container. This pod will publish the logs from all WebLogic Server instances in the domain into Elasticsearch. There is one Logstash per domain, but only one Elasticsearch and one Kibana for the entire Kubernetes cluster.
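As a rough illustration of how those `weblogic.*` labels tie resources to a domain, here is a small Java sketch that builds the corresponding Kubernetes label selectors. It is a hypothetical helper written for this explanation, not the operator's actual code, and the server name `admin-server` is an invented example value.

```java
// Illustrative sketch only (not the operator's actual code): builds Kubernetes
// label selectors from the weblogic.* labels described above, so that all
// resources belonging to one domain, or to one server, can be listed together.
public final class DomainLabelSelectors {

  /** Selects every resource labeled as belonging to the given domain. */
  static String forDomain(String domainUID) {
    return "weblogic.domainUID=" + domainUID;
  }

  /** Narrows the selection to a single server's pod within that domain. */
  static String forServer(String domainUID, String serverName) {
    return forDomain(domainUID) + ",weblogic.serverName=" + serverName;
  }

  public static void main(String[] args) {
    // Equivalent to: kubectl get pods -l weblogic.domainUID=domain1
    System.out.println(forDomain("domain1"));
    // "admin-server" is a hypothetical server name used purely for illustration.
    System.out.println(forServer("domain1", "admin-server"));
  }
}
```

A selector like the first one is what a label-based query (for example, `kubectl get pods -l weblogic.domainUID=domain1`) would use to list every pod belonging to that domain.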
@@ -59,7 +59,7 @@ The operator expects (and requires) that all state be stored outside of the Dock
It is worth providing some background on why this approach was adopted, in addition to the fact that this separation is consistent with other existing operators (for other products) and the Kubernetes “cattle, not pets” philosophy when it comes to containers.
-The external state approach allows the operator to treat the Docker images as essentially immutable, read-only, binary images. This means that the image needs to be pulled only once, and that many domains can share the same image. This helps to minimize the amount of bandwidth and storage needed for WebLogic Server Docker images.
+The external state approach allows the operator to treat the Docker images as essentially immutable, read-only, binary images. This means that the image needs to be pulled only once, and that many domains can share the same image. This helps to minimize the amount of bandwidth and storage needed for WebLogic Server Docker images.
This approach also eliminates the need to manage any state created in a running container, because all of the state that needs to be preserved is written into either the persistent volume or a database back end. The containers and pods are completely throwaway and can be replaced with new containers and pods as necessary. This makes handling failures and rolling restarts much simpler because there is no need to preserve any state inside a running container.
site/design.md: 7 additions & 7 deletions
@@ -3,7 +3,7 @@
The Oracle WebLogic Server Kubernetes Operator (the “operator”) is designed to fulfill a similar role to that which a human operator would fill in a traditional data center deployment. It contains a set of useful built-in knowledge about how to perform various lifecycle operations on a domain correctly.
-Human operators are normally responsible for starting and stopping environments, initiating backups, performing scaling operations, performing manual tasks associated with disaster recovery and high availability needs and coordinating actions with other operators in other data centers. It is envisaged that the operator will have similar responsibilities in a Kubernetes environment. The initial “Technology Preview” version of the operator does not have the capability to take on all of those responsibilities, but enumerating them here gives insight into the background context for making various design choices.
+Human operators are normally responsible for starting and stopping environments, initiating backups, performing scaling operations, performing manual tasks associated with disaster recovery and high availability needs and coordinating actions with other operators in other data centers. It is envisaged that the operator will have similar responsibilities in a Kubernetes environment. The initial Technology Preview version of the operator does not have the capability to take on all of those responsibilities, but enumerating them here gives insight into the background context for making various design choices.
It is important to note the distinction between an *operator* and an *administrator*. A WebLogic Server administrator typically has different responsibilities centered around managing the detailed configuration of the WebLogic domains. The operator has only limited interest in the domain configuration, with its main concern being the high-level topology of the domain; e.g., how many clusters and servers, and information about network access points, such as channels.
@@ -13,14 +13,14 @@ Like a human operator, the operator is designed to be event-based. It waits for
The operator is designed with security in mind from the outset. Some examples of the specific security practices we follow are:
-* During the deployment of the operator, Kubernetes roles are defined and assigned to the operator. These roles are designed to give the operator the minimum amount of privileges that it requires to perform its tasks.
-* The code base is regularly scanned with security auditing tools and any issues that are identified are promptly resolved.
-* All HTTP communications – between the operator and an external client, between the operator and WebLogic Administration Servers, and so on – are configured to require SSL and TLS 1.2.
-* Unused code is pruned from the code base regularly.
+* During the deployment of the operator, Kubernetes roles are defined and assigned to the operator. These roles are designed to give the operator the minimum amount of privileges that it requires to perform its tasks.
+* The code base is regularly scanned with security auditing tools and any issues that are identified are promptly resolved.
+* All HTTP communications – between the operator and an external client, between the operator and WebLogic Administration Servers, and so on – are configured to require SSL and TLS 1.2.
+* Unused code is pruned from the code base regularly.
* Dependencies are kept as up-to-date as possible and are regularly reviewed for security vulnerabilities.
The operator is designed to avoid imposing any arbitrary restriction on how WebLogic Server may be configured or used in Kubernetes. Where there are restrictions, these are based on the availability of some specific feature in Kubernetes; for example, multicast support.
-The operator learns of WebLogic domains through instances of a domain Kubernetes resource. When the operator is installed, it creates a Kubernetes [Custom Resource Definition](https://kubernetes.io/docs/concepts/api-extension/custom-resources/). This custom resource definition defines the domain resource type. Once this type is defined, you can manage domain resources using `kubectl` just like any other resource type. For instance, `kubectl get domain` or `kubectl edit domain domain1`. The schema for domain resources is [here](../swagger/domain.json).
+The operator learns of WebLogic domains through instances of a domain Kubernetes resource. When the operator is installed, it creates a Kubernetes [Custom Resource Definition](https://kubernetes.io/docs/concepts/api-extension/custom-resources/). This custom resource definition defines the domain resource type. Once this type is defined, you can manage domain resources using `kubectl` just like any other resource type. For instance, `kubectl get domain` or `kubectl edit domain domain1`. The schema for domain resources is [here](../swagger/domain.json).
-The schema for the domain resource is designed to be as sparse as possible. It includes the connection details for the administration server, but all of the other content are operational details about which servers should be started, environment variables, and details about what should be exposed outside the Kubernetes cluster. This way, the WebLogic domain's configuration remains the normative configuration.
+The schema for the domain resource is designed to be as sparse as possible. It includes the connection details for the Administration Server, but all of the other content consists of operational details about which servers should be started, environment variables, and what should be exposed outside the Kubernetes cluster. This way, the WebLogic domain's configuration remains the normative configuration.
site/developer.md: 5 additions & 5 deletions
@@ -11,7 +11,7 @@ The following software are required to obtain and build the operator:
* Java Developer Kit (1.8u131 or later recommended, not 1.9)
* Docker 17.03.1.ce
-The operator is written primarily in Java and BASH shell scripts. The Java code uses features introduced in Java 1.8, for example closures, but does not use any Java 1.9 feature.
+The operator is written primarily in Java and BASH shell scripts. The Java code uses features introduced in Java 1.8 -- for example, closures -- but does not use any Java 1.9 feature.
Because the target runtime environment for the operator is Oracle Linux, no particular effort has been made to ensure the build or tests run on any other operating system. Please be aware that Oracle will not provide support, or accept pull requests to add support, for other operating systems.
-The operator is built using [Apache Maven](http://maven.apache.org). The build machine will also need to have Docker installed.
+The operator is built using [Apache Maven](http://maven.apache.org). The build machine will also need to have Docker installed.
To build the operator, issue the following command in the project directory:
@@ -73,7 +73,7 @@ To run the tests, uncomment the following `execution` element in the `pom.xml` f
-->
```
-These test assume that the RBAC definitions exist on the Kubernetes cluster. To create them, update the inputs file and run the operator installation script with the "generate only" option as shown below (see the [installation](installation.md) page for details about this script and the inputs):
+These tests assume that the RBAC definitions exist on the Kubernetes cluster. To create them, update the inputs file and run the operator installation script with the "generate only" option as shown below (see the [installation](installation.md) page for details about this script and the inputs):
@@ -152,7 +152,7 @@ This project has the following directory structure:
### Watch package
-The Watch API in the Kubernetes Java client provides a watch capability across a specific list of resources for a limited amount of time. As such it is not ideally suited our use case, where a continuous stream of watches was desired, with watch events generated in real-time. The watch-wrapper in this repository extends the default Watch API to provide a continuous stream of watch events until the stream is specifically closed. It also provides `resourceVersion` tracking to exclude events that have already been seen. The Watch-wrapper provides callbacks so events, as they occur, can trigger actions.
+The Watch API in the Kubernetes Java client provides a watch capability across a specific list of resources for a limited amount of time. As such it is not ideally suited for our use case, where a continuous stream of watches was desired, with watch events generated in realtime. The watch-wrapper in this repository extends the default Watch API to provide a continuous stream of watch events until the stream is specifically closed. It also provides `resourceVersion` tracking to exclude events that have already been seen. The watch-wrapper provides callbacks so events, as they occur, can trigger actions.
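The sketch below is a minimal conceptual illustration of that idea: re-open a time-limited watch in a loop, track the last `resourceVersion` seen so a redelivered event can be skipped, and pass each new event to a callback. All of the types here are simplified stand-ins defined inline for this sketch; they are not the repository's actual classes and do not use the Kubernetes Java client API.

```java
import java.util.function.Consumer;

final class WatchEvent {
  final String type;            // ADDED, MODIFIED, or DELETED
  final String resourceVersion; // used to resume the watch and to skip duplicates
  WatchEvent(String type, String resourceVersion) {
    this.type = type;
    this.resourceVersion = resourceVersion;
  }
}

/** One time-limited watch, as the underlying client would provide it. */
interface BoundedWatch extends AutoCloseable, Iterable<WatchEvent> {
}

/** Opens a new time-limited watch that resumes after the given resourceVersion. */
interface WatchSource {
  BoundedWatch open(String resumeAfterResourceVersion) throws Exception;
}

final class ContinuousWatcher {
  private volatile boolean closed;
  private String lastSeen = "";

  /** Delivers a continuous stream of events until close() is called. */
  void watch(WatchSource source, Consumer<WatchEvent> callback) throws Exception {
    while (!closed) {
      try (BoundedWatch bounded = source.open(lastSeen)) {
        for (WatchEvent event : bounded) {
          if (event.resourceVersion.equals(lastSeen)) {
            continue; // already delivered by a previous bounded watch
          }
          lastSeen = event.resourceVersion;
          callback.accept(event); // callbacks let events trigger actions as they occur
        }
      }
      // The bounded watch has expired; loop around and re-open it from lastSeen.
    }
  }

  void close() {
    closed = true;
  }
}
```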
## Asynchronous call model
@@ -170,7 +170,7 @@ The user-level thread pattern is implemented by the classes in the `oracle.kuber
* `Component`: Provider of SPIs that may be useful to the processing flow.
* `Container`: Represents the containing environment and is a `Component`.
-Each `Step` has a reference to the next `Step` in the processing flow; however `Steps` are not required to indicate that the next `Step` be invoked by the `Fiber` when the `Step` returns a `NextAction` to the `Fiber`. This leads to common use cases where `Fibers` invoke a series of `Steps` that are linked by the 'is-next' relationship, but just as commonly, use cases where the `Fiber` will invoke sets of `Steps` along a detour before returning to the normal flow.
+Each `Step` has a reference to the next `Step` in the processing flow; however, `Steps` are not required to indicate that the next `Step` be invoked by the `Fiber` when the `Step` returns a `NextAction` to the `Fiber`. This leads to common use cases where `Fibers` invoke a series of `Steps` that are linked by the 'is-next' relationship, but just as commonly, use cases where the `Fiber` will invoke sets of `Steps` along a detour before returning to the normal flow.
In this sample, the caller creates an `Engine`, `Fiber`, linked set of `Step` instances, and `Packet`. The `Fiber` is then started. The `Engine` would typically be a singleton, since it's backed by a `ScheduledExecutorService`. The `Packet` would also typically be pre-loaded with values that the `Steps` would use in their `apply()` methods.
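The stripped-down sketch below illustrates that flow: a `Fiber` drives a chain of `Step` instances that read and write a shared `Packet`, with each `Step` returning a `NextAction` naming what runs next. The type shapes and signatures here are simplified stand-ins invented for this sketch; the repository's real classes additionally support scheduling on an `Engine`, suspension while awaiting asynchronous results, and the detours described above.

```java
import java.util.HashMap;
import java.util.Map;

/** Shared state that flows through the Steps. */
final class Packet {
  final Map<String, Object> values = new HashMap<>();
}

/** Tells the Fiber what to do next; a null step ends the flow. */
final class NextAction {
  final Step next;
  NextAction(Step next) { this.next = next; }
}

abstract class Step {
  final Step isNext; // the default 'is-next' Step in the processing flow
  Step(Step isNext) { this.isNext = isNext; }
  abstract NextAction apply(Packet packet);
}

/** Runs Steps one at a time; a real Fiber would be scheduled by an Engine's
    executor and could suspend while waiting for asynchronous results. */
final class Fiber {
  void start(Step first, Packet packet) {
    Step current = first;
    while (current != null) {
      current = current.apply(packet).next;
    }
  }
}

final class Demo {
  public static void main(String[] args) {
    Step second = new Step(null) {
      NextAction apply(Packet p) {
        System.out.println("second step saw: " + p.values.get("greeting"));
        return new NextAction(isNext); // isNext is null, so the flow ends here
      }
    };
    Step first = new Step(second) {
      NextAction apply(Packet p) {
        p.values.put("greeting", "hello from the first step");
        return new NextAction(isNext); // continue to the 'is-next' Step
      }
    };
    Packet packet = new Packet();
    packet.values.put("greeting", "pre-loaded value"); // Packets are typically pre-loaded
    new Fiber().start(first, packet);
  }
}
```

Running `Demo` prints the value the first `Step` stored in the `Packet`, showing how state is passed between `Steps` without dedicating a thread to the whole flow.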