Commit 5729b11

Merge remote-tracking branch 'origin/develop' into istio-enablement
2 parents 37a7937 + 7ec5d1a commit 5729b11

61 files changed: 3,943 additions and 1,122 deletions


docs-source/content/_index.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
### Oracle WebLogic Server Kubernetes Operator

-Oracle is finding ways for organizations using WebLogic Server to run important workloads, to move those workloads into the cloud. By certifying on industry standards, such as Docker and Kubernetes, WebLogic now runs in a cloud neutral infrastructure. In addition, we've provided an open-source Oracle WebLogic Server Kubernetes Operator (the “operator”) which has several key features to assist you with deploying and managing WebLogic domains in a Kubernetes environment. You can:
+Oracle is finding ways for organizations using WebLogic Server to run important workloads, to move those workloads into the cloud. By certifying on industry standards, such as Docker and Kubernetes, WebLogic now runs in a cloud neutral infrastructure. In addition, we've provided an open source Oracle WebLogic Server Kubernetes Operator (the “operator”) which has several key features to assist you with deploying and managing WebLogic domains in a Kubernetes environment. You can:

* Create WebLogic domains in a Kubernetes persistent volume. This persistent volume can reside in an NFS file system or other Kubernetes volume types.
* Create a WebLogic domain in a Docker image.
docs-source/content/faq/coherence-requirements.md (new file)

Lines changed: 93 additions & 0 deletions

---
title: "Coherence Requirements"
date: 2019-08-12T12:41:38-04:00
draft: false
---

If you are running Coherence on Kubernetes, either inside a WebLogic domain
or standalone, then there are some additional requirements to ensure
that Coherence can form clusters.

Note that some Fusion Middleware products, such as SOA Suite, use Coherence,
so these requirements apply to them as well.

#### Unicast and well-known addresses

When the first Coherence process starts, it forms a cluster. The next
Coherence process to start (for example, in a different pod) uses UDP to try
to contact the senior member.

If you create a WebLogic domain that contains a Coherence cluster
using the samples provided in this project, then that cluster will
be configured correctly so that it is able to form;
you do not need to do any additional manual configuration.

If you are running Coherence standalone (outside a
WebLogic domain), you should configure Coherence to use unicast and
provide a "well-known address" (WKA) so that all members can find the
senior member. Most Kubernetes overlay network providers do not
support multicast.

This is done by specifying the Coherence well-known addresses in a system
property named `coherence.wka`, as shown in the following example:

```
-Dcoherence.wka=my-cluster-service
```

In this example, `my-cluster-service` should be the name of the Kubernetes
service that points to all of the members of that Coherence cluster.
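Such a service is typically a headless service that selects the Coherence pods. A minimal sketch follows; the `app: my-coherence-cluster` selector label and the port are assumptions that you must adapt to match your actual Coherence deployment:

```
apiVersion: v1
kind: Service
metadata:
  name: my-cluster-service      # referenced by -Dcoherence.wka=my-cluster-service
spec:
  clusterIP: None               # headless: DNS resolves to the member pod IPs
  selector:
    app: my-coherence-cluster   # hypothetical label carried by the Coherence pods
  ports:
    - name: coherence
      port: 7574                # Coherence default cluster port
      protocol: TCP
```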
Refer to the [Coherence operator documentation](https://oracle.github.io/coherence-operator/)
for more information about running Coherence in Kubernetes outside of
a WebLogic domain.

#### Operating system library requirements

For Coherence clusters to form correctly, the `conntrack` library
must be installed. Most Kubernetes distributions do this for you.
If you have issues with clusters not forming, check that
`conntrack` is installed using this command (or equivalent):

```
$ rpm -qa | grep conntrack
libnetfilter_conntrack-1.0.6-1.el7_3.x86_64
conntrack-tools-1.4.4-4.el7.x86_64
```

You should see output similar to that shown above. If you do not, then
install `conntrack` using your operating system tools.
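For example, on an RPM-based distribution the missing package could be installed with the system package manager (the package name matches the query output shown above; verify the correct name for your distribution):

```
# yum install -y conntrack-tools
```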
#### Firewall (iptables) requirements

Some Kubernetes distributions create `iptables` rules that block some
types of traffic that Coherence requires to form clusters. If you are
not able to form clusters, you can check for this issue using the
following command:

```
# iptables -t nat -v -L POST_public_allow -n
Chain POST_public_allow (1 references)
pkts bytes target prot opt in out source destination
164K 11M MASQUERADE all -- * !lo 0.0.0.0/0 0.0.0.0/0
0 0 MASQUERADE all -- * !lo 0.0.0.0/0 0.0.0.0/0
```

If you see output similar to the example above, that is, if there are any
entries in this chain, then you need to remove them. You can remove the
entries using this command:

```
# iptables -t nat -v -D POST_public_allow 1
```

Note that you will need to run that command once for each rule. So, in the
example above, you would need to run it twice.
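Because deleting rule 1 shifts the remaining rules up, repeating the delete until the chain is empty is equivalent to flushing the chain. Assuming you want to remove every rule in `POST_public_allow`, a single flush achieves the same result:

```
# iptables -t nat -F POST_public_allow
```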
After you are done, run the check command again and verify that the output
is now an empty list.

After making this change, restart your domain(s), and the Coherence cluster
should then form correctly.

docs-source/content/userguide/managing-domains/domain-lifecycle/restarting.md

Lines changed: 11 additions & 0 deletions
@@ -199,3 +199,14 @@ If you've created a new image that is not rolling compatible, and you've changed
the image, or the Kubernetes resources that register your domain with the operator. For example, your servers are caching information from an external database and you've modified the contents of the database.

In these cases, you must manually initiate a restart.
+
+* **Managed Coherence Servers safe shutdown**.
+
+  If the domain is configured to use a Coherence cluster, then you will need to increase the Kubernetes graceful termination timeout value.
+  When a server is shut down, Coherence needs time to recover partitions and rebalance the cluster before it is safe to shut down a second server.
+  Using the Kubernetes graceful termination feature, the operator automatically waits until the Coherence `HAStatus` MBean attribute
+  indicates that it is safe to shut down the server. However, after the graceful termination timeout expires, the pod is deleted regardless.
+  Therefore, it is important to set the domain YAML `timeoutSeconds` to a value large enough to prevent the server from shutting down before
+  Coherence is safe. Furthermore, if the operator is not able to access the Coherence MBean, then the server will not be shut down
+  until the domain `timeoutSeconds` expires. To minimize any possibility of cache data loss, increase the `timeoutSeconds`
+  value to a large number, for example, 15 minutes (900 seconds).
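A sketch of where this might be set in the domain resource follows; the `serverPod.shutdown.timeoutSeconds` field path is an assumption, so verify the exact field name and location against the domain resource schema for your operator version:

```
spec:
  serverPod:
    shutdown:
      timeoutSeconds: 900   # 15 minutes; assumed field path, tune for your cluster size
```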

docs-source/content/userguide/managing-domains/fmw-infra/_index.md

Lines changed: 7 additions & 0 deletions
@@ -15,6 +15,7 @@ pre = "<b> </b>"
* [Create a Kubernetes secret with the RCU credentials](#create-a-kubernetes-secret-with-the-rcu-credentials)
* [Creating an FMW Infrastructure domain](#creating-an-fmw-infrastructure-domain)
* [Patching the FMW Infrastructure image](#patching-the-fmw-infrastructure-image)
+* [Additional considerations for Coherence](#additional-considerations-for-coherence)

Starting with release 2.2.0, the operator supports FMW Infrastructure domains.

@@ -432,3 +433,9 @@ for more information.
An example of a non-ZDP compliant patch is one that includes a schema change
that can not be applied dynamically.
+
+#### Additional considerations for Coherence
+
+If you are running a domain which contains Coherence, please refer to
+[Coherence requirements]({{< relref "/faq/coherence-requirements.md" >}})
+for more information.

docs-source/content/userguide/managing-operators/using-the-operator/using-helm.md

Lines changed: 10 additions & 0 deletions
@@ -67,6 +67,16 @@ $ helm upgrade \
  kubernetes/charts/weblogic-operator
```

+Enable operator debugging on port 30999. Again, we use `--reuse-values` to change one value without affecting the others:
+```
+$ helm upgrade \
+  --reuse-values \
+  --set "remoteDebugNodePortEnabled=true" \
+  --wait \
+  weblogic-operator \
+  kubernetes/charts/weblogic-operator
+```
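After an upgrade such as the debugging example above, the release's effective values can be inspected with `helm get values` (using the `weblogic-operator` release name from the examples):

```
$ helm get values weblogic-operator
```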
### Operator Helm configuration values

This section describes the details of the operator Helm chart's available configuration values.

integration-tests/USECASES.MD

Lines changed: 11 additions & 8 deletions
@@ -100,6 +100,8 @@ Configuration Overrides Usecases
| JDBC Resource Override | Override JDBC connection pool properties; `initialCapacity`, `maxCapacity`, `test-connections-on-reserve`, `connection-harvest-max-count`, `inactive-connection-timeout-seconds`. Override the JDBC driver parameters like data source `URL`, `DB` `user` and `password` using kubernetes secret. The test verifies the overridden functionality datasource `URL`, `user`, `password` by getting the data source connection and running DDL statement it is connected to. |
| JMS Resource Override | Override UniformDistributedTopic Delivery Failure Parameters, `redelivery-limit` and `expiration-policy`. The JMX test client verifies the serverConfig MBean tree for the expected delivery failure parameters, `redelivery-limit` and `expiration-policy`. |
| WLDF Resource Override | Override `wldf-instrumentation-monitor` and `harvester` in a diagnostics module. The test client verifies the new instrumentation monitors/harvesters set by getting the WLDF resource from serverConfig tree with expected values. |
+| Configuration override with running domain | Override the administration server with Startup and Shutdown class by editing the configmap and recreating the domain CRD. The override is verified by JMX client connecting to the serverConfig MBean tree and the values are checked against the expected values. |
+| JDBC Resource Override with running domain | Override non dynamic JDBC connection pool properties; `ignore-in-use-connections`, `login-delay-Seconds`, `connection-cache-type`, `global-transactions-protocol` by editing the configmap and recreating the domain CRD. The test only verifies the expected values against the config tree |

| Session Migration | Use Case |
| --- | --- |

@@ -151,11 +153,12 @@ Configuration Overrides Usecases
| Init Container | Use Case |
| --- | --- |
-| Add initContainers to domain | Add a initContainers object to spec level and verify the init containers are created for weblogic server pods prior to starting it and runs to completion and then weblogic pod are started |
-| Add initContainers to adminServer | Add a initContainers object to adminServer level and verify the init container is created for administration server weblogic server pod prior to starting it and runs to completion and then weblogic pod is started |
-| Add initContainers to Clusters | Add a initContainers object to Clusters level and verify the init containers are created for weblogic server pods prior to starting the clusters and runs to completion and then weblogic pod are started |
-| Add initContainers to managedServers | Add a initContainers object to managed server level and verify the init container is created for managed server weblogic server pod prior to starting it and runs to completion and then weblogic pod is started |
-| Add bad initContainers to domain | Add a bad initContainers object to domain and verify the init container run fails and no weblogic pod is started |
-| Add multiple initContainers to domain | Add multiple initContainers object to domain level and verify all of the init container are run before weblogic server pod are started |
-| Add initContainers with different names at different level | Add a multiple initContainers object at domain level and server level and verify all of the init containers are run before weblogic server pods are started |
-| Add initContainers with same names at different level | Add a multiple initContainers object at domain level and server level and verify only the server level init containers are run before weblogic server pods are started |
+| Add initContainers to domain | Add a initContainers object to spec level and verify the init containers are created for Weblogic server pods prior to starting it and runs to completion and then Weblogic pod are started |
+| Add initContainers to adminServer | Add a initContainers object to adminServer level and verify the init container is created for administration server Weblogic server pod prior to starting it and runs to completion and then Weblogic pod is started |
+| Add initContainers to Clusters | Add a initContainers object to Clusters level and verify the init containers are created for Weblogic server pods prior to starting the clusters and runs to completion and then Weblogic pod are started |
+| Add initContainers to managedServers | Add a initContainers object to managed server level and verify the init container is created for managed server Weblogic server pod prior to starting it and runs to completion and then Weblogic pod is started |
+| Add bad initContainers to domain | Add a bad initContainers object to domain and verify the init container run fails and no Weblogic pod is started |
+| Add multiple initContainers to domain | Add multiple initContainers object to domain level and verify all of the init container are run before Weblogic server pod are started |
+| Add initContainers with different names at different level | Add a multiple initContainers object at domain level and server level and verify all of the init containers are run before Weblogic server pods are started |
+| Add initContainers with same names at different level | Add a multiple initContainers object at domain level and server level and verify only the server level init containers are run before Weblogic server pods are started |

integration-tests/pom.xml

Lines changed: 8 additions & 0 deletions
@@ -302,5 +302,13 @@
      <jrf_enabled>false</jrf_enabled>
    </properties>
  </profile>
+ <profile>
+   <id>full-integration-tests</id>
+   <properties>
+     <skipITs>false</skipITs>
+     <includes-failsafe>**/*.java</includes-failsafe>
+     <jrf_enabled>true</jrf_enabled>
+   </properties>
+ </profile>
</profiles>
</project>
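The new `full-integration-tests` profile can be selected on the command line with Maven's `-P` flag; the `verify` goal shown here is an assumption about how this project's failsafe-driven integration tests are normally invoked:

```
$ mvn verify -P full-integration-tests
```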

integration-tests/src/test/java/oracle/kubernetes/operator/BaseTest.java

Lines changed: 1 addition & 0 deletions
@@ -49,6 +49,7 @@ public class BaseTest {
  public static final String DOMAININIMAGE_WLST_YAML = "domaininimagewlst.yaml";
  public static final String DOMAININIMAGE_WDT_YAML = "domaininimagewdt.yaml";
  public static final String DOMAINONSHARINGPV_WLST_YAML = "domainonsharingpvwlst.yaml";
+ public static final String DOMAINONPV_LOGGINGEXPORTER_YAML = "loggingexpdomainonpvwlst.yaml";

  // property file used to configure constants for integration tests
  public static final String APP_PROPS_FILE = "OperatorIT.properties";
