
Commit 7499e09

Merge branch 'develop' into develop-bug2968457-bug29591809
2 parents f28f47c + f4f355b commit 7499e09

37 files changed: +2171 −152 lines changed

docs-source/content/userguide/managing-domains/fmw-infra/_index.md

Lines changed: 25 additions & 19 deletions
@@ -7,16 +7,16 @@ pre = "<b> </b>"
 
 Starting with release 2.2, the operator supports FMW Infrastructure domains.
 This means domains that are created with the FMW Infrastructure installer rather than the WebLogic
-installer. These domains contain the Java Required Files (JRF) feature and are
+Server installer. These domains contain the Java Required Files (JRF) feature and are
 the pre-requisite for "upper stack" products like Oracle SOA Suite, for example.
 These domains also require a database and the use of the Repository
 Creation Utility (RCU).
 
 This section provides details about the special considerations for running
 FMW Infrastructure domains with the operator. Other than those considerations
-listed here, FMW Infrastructure domains work in the same way as WebLogic domains.
+listed here, FMW Infrastructure domains work in the same way as WebLogic Server domains.
 That is, the remainder of the documentation in this site applies equally to FMW
-Infrastructure domains and WebLogic domains.
+Infrastructure domains and WebLogic Server domains.
 
 FMW Infrastructure domains are supported using both the "domain on a persistent volume"
 and the "domain in a Docker image" [models]({{< relref "/userguide/managing-domains/choosing-a-model/_index.md" >}}).
@@ -49,12 +49,16 @@ A [sample](https://github.com/oracle/docker-images/tree/master/OracleFMWInfrastr
 is provided in the Oracle GitHub account that demonstrates how to create a Docker image
 to run FMW Infrastructure.
 
+Please consult the [README](https://github.com/oracle/docker-images/blob/master/OracleFMWInfrastructure/dockerfiles/12.2.1.3/README.md) file associated with this sample for important prerequisite steps,
+such as building or pulling the Server JRE Docker image and downloading the Fusion Middleware
+Infrastructure installer binary.
+
 After cloning the repository and downloading the installer from Oracle Technology Network
 or e-delivery, you create your image by running the provided script:
 
 ```bash
 cd docker-images/OracleFMWInfrastructure/dockerfiles
-./buildDockerImage.sh -v 12.2.1.3 -g
+./buildDockerImage.sh -v 12.2.1.3 -s
 ```
 
 The image produced will be named `oracle/fmw-infrastructure:12.2.1.3`.
@@ -68,7 +72,7 @@ by running the provided script:
 
 ```bash
 cd docker-images/OracleFMWInfrastructure/samples/12213-patch-fmw-for-k8s
-./buildDockerImage.sh
+./build.sh
 ```
 
 This will produce an image named `oracle/fmw-infrastructure:12213-update-k8s`.
@@ -160,8 +164,10 @@ spec:
 ```
 
 Notice that you can pass in environment variables to set the SID, the name of the PDB, and
-so on. The documentation describes the other variables that are available. You should
-also create a service to make the database available within the Kubernetes cluster with
+so on. The documentation describes the other variables that are available. The `sys` password
+defaults to `Oradoc_db1`. Follow the instructions in the documentation to reset this password.
+
+You should also create a service to make the database available within the Kubernetes cluster with
 a well known name. Here is an example:
 
 ```yaml
@@ -171,7 +177,6 @@ metadata:
   name: oracle-db
   namespace: default
 spec:
-  clusterIP: 10.97.236.215
   ports:
   - name: tns
     port: 1521
@@ -185,7 +190,7 @@ spec:
 ```
 
 In the example above, the database would be visible in the cluster using the address
-`oracle-db.default.svc.cluster.local:1521/devpdc.k8s`.
+`oracle-db.default.svc.cluster.local:1521/devpdb.k8s`.
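The address in the hunk above follows the standard Kubernetes service DNS pattern, `<service>.<namespace>.svc.cluster.local`, so it can be composed mechanically from the Service definition. A minimal sketch, using the values from the example above:

```shell
# Compose the in-cluster address of the database, following the
# <service>.<namespace>.svc.cluster.local:<port>/<db-service> pattern.
SERVICE="oracle-db"        # metadata.name of the Service
NAMESPACE="default"        # namespace the Service lives in
PORT="1521"                # TNS listener port
DB_SERVICE="devpdb.k8s"    # PDB service name registered with the listener

echo "${SERVICE}.${NAMESPACE}.svc.cluster.local:${PORT}/${DB_SERVICE}"
```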
 
 When you run the database in the Kubernetes cluster, you will probably want to also
 run RCU from a pod inside your network, though this
@@ -249,13 +254,13 @@ image that you built earlier as a "service" pod to run RCU. To do this, start u
 pod using that image as follows:
 
 ```bash
-kubectl run rcu -ti --image oracle/fmw-infrastructure:12.2.1.3 -- sleep 100000
+kubectl run rcu --generator=run-pod/v1 --image oracle/fmw-infrastructure:12213-update-k8s -- sleep infinity
 
 ```
 
 This will create a Kubernetes deployment called `rcu` containing a pod running a container
-created from the `oracle/fmw-infrastructure:12.2.1.3` image which will just run
-`sleep 100000`, which essentially creates a pod that we can "exec" into and use to run whatever
+created from the `oracle/fmw-infrastructure:12213-update-k8s` image which will just run
+`sleep infinity`, which essentially creates a pod that we can "exec" into and use to run whatever
 commands we need to run.
 
 To get inside this container and run commands, use this command:
@@ -267,11 +272,11 @@ kubectl exec -ti rcu /bin/bash
 When you are finished with this pod, you can remove it with this command:
 
 ```bash
-kubectl delete deploy rcu
+kubectl delete pod rcu
 ```
 
 {{% notice note %}}
-You can use the same approach to get a temporary "service" pod to run other utilities
+You can use the same approach to get a temporary pod to run other utilities
 like WLST.
 {{% /notice %}}
 
@@ -287,7 +292,7 @@ for the regular schema users:
   -silent \
   -createRepository \
   -databaseType ORACLE \
-  -connectString oracle-db:1521/devpdb.k8s \
+  -connectString oracle-db.default:1521/devpdb.k8s \
   -dbUser sys \
   -dbRole sysdba \
   -useSamePasswordForAllSchemaUsers true \
@@ -305,7 +310,7 @@ for the regular schema users:
 You need to make sure that you maintain the association between the database schemas and the
 matching domain just like you did in a non-Kubernetes environment. There is no specific
 functionality provided to help with this. We recommend that you consider making the RCU
-prefix the same as your `domainUID` to help maintain this association.
+prefix (value of `schemaPrefix` argument) the same as your `domainUID` to help maintain this association.
 
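The recommendation above can be made mechanical by deriving the RCU prefix from the `domainUID`. A small sketch — the `domainUID` value is hypothetical, and the upper-casing and length cap reflect common RCU prefix conventions rather than anything stated here:

```shell
# Derive an RCU schema prefix from the domainUID so the two stay associated.
DOMAIN_UID="jrfdomain"   # hypothetical domainUID
# RCU prefixes are conventionally upper case and kept short (capped at 12 here).
SCHEMA_PREFIX="$(printf '%s' "${DOMAIN_UID}" | tr '[:lower:]' '[:upper:]' | cut -c1-12)"

printf '%s\n' "-schemaPrefix ${SCHEMA_PREFIX}"
```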

 ##### Dropping schemas

@@ -316,7 +321,7 @@ If you want to drop the schema, you can use a command like this:
   -silent \
   -dropRepository \
   -databaseType ORACLE \
-  -connectString oracle-db:1521/devpdb.k8s \
+  -connectString oracle-db.default:1521/devpdb.k8s \
   -dbUser sys \
   -dbRole sysdba \
   -selectDependentsForComponents true \
@@ -340,8 +345,9 @@ When you create your domain using the sample provided below, it will obtain the
 from this secret.
 
 A [sample](/weblogic-kubernetes-operator/blob/master/kubernetes/samples/scripts/create-rcu-credentials/README.md)
-is provided that demonstrates how to create the secret.
-
+is provided that demonstrates how to create the secret. The schema owner user name required will be the
+`schemaPrefix` value followed by an underscore and a component name, such as `FMW1_STB`. The schema owner
+password will be the password you provided for regular schema users during RCU creation.
 
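To illustrate the naming rule above: the schema owner user name is simply the prefix joined to a component suffix with an underscore. A sketch using the example values from the text (the component list in the comment is a general assumption, not taken from this document):

```shell
# Compose an RCU schema owner user name: <schemaPrefix>_<COMPONENT>.
SCHEMA_PREFIX="FMW1"   # the -schemaPrefix value passed to RCU
COMPONENT="STB"        # one RCU component; others commonly include OPSS, MDS, IAU

echo "${SCHEMA_PREFIX}_${COMPONENT}"
```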

 #### Creating a FMW Infrastructure domain

integration-tests/JRFUSECASES.MD

Lines changed: 61 additions & 0 deletions
@@ -0,0 +1,61 @@ (new file; added content shown below)

# Use Cases for FMW Infra Domain in the Operator

Java integration tests cover the following use cases:

## Basic test Configuration & Use Cases

| | |
| --- | --- |
| Operator Configuration | operator1 deployed in `weblogic-operator` namespace and manages domains in `default` and `jrfdomains` namespaces |
| Domain Configuration | Domain on PV using WLST, Traefik load balancer |

**Basic Use Cases**

1. Create operator `weblogic-operator`, which manages the `default` and `jrfdomains` namespaces; verify that it deploys successfully, its pods are created, the operator is ready, and the external REST service works, if configured.
2. Create domain `jrfdomain` in the `jrfdomains` namespace and verify that the pods and services are created and the servers are in the ready state.
3. Verify the admin external service by accessing the admin REST endpoint with the `nodeport` in the URL.
4. Verify exec into the admin pod and deploying a webapp using the admin port with WLST.
5. Verify web app load balancing by accessing the webapp using `loadBalancerWebPort`.

**Advanced Use Cases**

6. Verify that domain life cycle operations (destroy and create) have no impact on the operator managing the domain, web app load balancing, or the admin external service.
7. Cluster scale up/down using the operator REST endpoint; webapp load balancing should adjust accordingly.
8. Operator life cycle operations (destroy and create) should not impact the running domain.

The use cases below are also covered for each test:

9. Verify the liveness probe by killing the managed server 1 process 3 times to trigger pod auto-restart.
10. Shut down the domain by changing the domain `serverStartPolicy` to `NEVER`.

## Full test Configuration & Use Cases

Additional Operator Configuration:

| | |
| --- | --- |
| Operator Configuration | `operator2` deployed in `weblogic-operator2` namespace and manages domains in `test2` namespace |

The Basic Use Cases described above are verified in all the domain configurations. The use cases below are also covered:

| Domain | Use Case |
| --- | --- |
| Domain on PV using WLST | as above in basic use cases |
| Domain with ADMIN_ONLY | verify that only the admin server is started and managed servers are not started. Shut down the domain by deleting the domain CRD. Create a domain on an existing PV dir; the PV is already populated by a shut-down domain. |
| Domain with situational config | create a domain with the listen address not set for the admin server and t3 channel/NAP, and an incorrect file for the admin server log location. The introspector should override these with sit-config automatically. Also, with a junk value for the t3 channel public address, use a custom situational config override to replace it with a valid public address using a secret. On Jenkins, this domain uses NFS instead of HOSTPATH PV storage |
| Two domains managed by two operators | verify that scaling and restart of one domain doesn't impact another domain. Delete domain resources using the delete script from the samples. |
| Two domains in the same namespace managed by one operator | create two FMW Infra domains in the same namespace managed by one operator. Verify that scaling and restart of one domain doesn't impact another domain. |
| Two domains in different namespaces managed by one operator | create two FMW Infra domains in different namespaces managed by one operator. Domain1 uses the VOYAGER load balancer. Domain2 uses the TRAEFIK load balancer. Verify that scaling and restart of one domain doesn't impact another domain. |
| Domain with Recycle policy | create a domain with pvReclaimPolicy="Recycle" using a Configured cluster. Verify that the PV is deleted once the domain and PVC are deleted |
| Domain with default sample values | create a domain using mostly default values for inputs |

Test cases cover the bugs found:

| Domain | Use Case |
| --- | --- |
| Domain on PV using WLST | cluster rolling restart with restartVersion and maxUnavailable set to 2; verify that the not-ready server count cannot exceed the maxUnavailable value |
| Domain on PV using WLST | in the create domain input file, set exposeAdminNodePort to false and exposeAdminT3Channel to true; verify that the managed server pods are created |
| Domain on PV using WLST | in the create domain input file, set createDomainScriptsMountPath to a non-default value; verify that the create domain sample script works |
| Domain on PV using WLST | in the createFMWDomain.py file, set administration_port_enabled to true; verify that the admin server pod is running and ready and all managed server pods are created |
| Domain on PV using WLST | in the create domain input file, set exposeAdminT3Channel to true; verify that the admin t3 channel is exposed |

integration-tests/README.md

Lines changed: 44 additions & 1 deletion
@@ -16,6 +16,8 @@ Shared cluster runs only Quick test use cases, Jenkins runs both Quick and Full
 
 Use Cases covered in integration tests for the operator is available [here](USECASES.MD)
 
+JRF Use Cases are covered [here](JRFUSECASES.MD)
+
 # Directory Configuration and Structure
 
 Directory structure of source code:
Directory structure of source code:
@@ -226,8 +228,20 @@ SHARED_CLUSTER=true:
 | IMAGE_NAME_OPERATOR | Docker image name for operator. Default is weblogic-kubernetes-operator |
 | IMAGE_PULL_POLICY_OPERATOR | Default 'Never'. |
 | IMAGE_PULL_SECRET_OPERATOR | Default ''. |
-| IMAGE_PULL_SECRET_WEBLOGIC | Default ''.
+| IMAGE_PULL_SECRET_WEBLOGIC | Default ''. |
 
+The env variables below are required for SHARED_CLUSTER=true:
+
+| Variable | Description |
+| --- | --- |
+| REPO_REGISTRY | OCIR server to push/pull the operator image |
+| REPO_USERNAME | OCIR username |
+| REPO_PASSWORD | OCIR token |
+| REPO_EMAIL | OCIR email |
+| DOCKER_USERNAME | Docker username to pull the WebLogic image |
+| DOCKER_PASSWORD | Docker password |
+| DOCKER_EMAIL | Docker email |
+| K8S_NODEPORT_HOST | DNS name of a Kubernetes worker node |
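For a shared-cluster run, the variables above are plain environment exports. A hypothetical sketch — every value below is a placeholder, not a real registry, credential, or host:

```shell
# Placeholder values only -- substitute your own registry, credentials, and node.
export SHARED_CLUSTER=true
export REPO_REGISTRY="phx.ocir.io"                 # OCIR server
export REPO_USERNAME="mytenancy/user@example.com"  # OCIR username
export REPO_PASSWORD="<auth-token>"                # OCIR auth token
export REPO_EMAIL="user@example.com"
export DOCKER_USERNAME="mydockeruser"              # hub.docker.com account
export DOCKER_PASSWORD="<docker-password>"
export DOCKER_EMAIL="user@example.com"
export K8S_NODEPORT_HOST="worker0.example.com"     # DNS name of a worker node

echo "operator image registry: ${REPO_REGISTRY}; nodeport host: ${K8S_NODEPORT_HOST}"
```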
 
 Successful run will have the output like below:
 ```
@@ -282,6 +296,35 @@ Failed run will have the output like
 ```
 JUnit test results can be seen at "integration-tests/target/failsafe-reports/TEST-oracle.kubernetes.operator.ITOperator.xml". This file shows how much time each test case took to run and the failed test results, if any.
 
+# How to run JRF domain in Operator related tests
+* Set up docker access to the FMW Infrastructure 12c image and the Oracle Database 12c image
+
+Method 1
+  - Set up a personal account on hub.docker.com
+  - Then sign in to hub.docker.com and sign up for access to the Oracle Database 12c images via https://hub.docker.com/_/oracle-database-enterprise-edition
+  - Then export the following before running the tests:
+```
+export DOCKER_USERNAME=<docker_username>
+export DOCKER_PASSWORD=<docker_password>
+export DOCKER_EMAIL=<docker_email>
+```
+  - Set up an account in phx.ocir.io
+  - Then sign in to phx.ocir.io to get access to the FMW Infrastructure 12c image: **_phx.ocir.io/weblogick8s/oracle/fmw-infrastructure:12.2.1.3_**
+  - Export the following before running the tests:
+```
+export REPO_USERNAME=<ocir_username>
+export REPO_PASSWORD=<ocir_password>
+export REPO_EMAIL=<ocir_email>
+```
+
+Method 2
+  - Make sure the FMW Infrastructure image, i.e. **_phx.ocir.io/weblogick8s/oracle/fmw-infrastructure:12.2.1.3_**, and the Oracle database image, i.e. **_store/oracle/database-enterprise:12.2.0.1_**, already exist locally in a docker repository the k8s cluster can access
+
+* Command to run the tests:
+```
+mvn clean verify -P jrf-integration-tests 2>&1 | tee log.txt
+```
+
 # How to run a single test
 
 mvn -Dit.test="ITOperator#testDomainOnPVUsingWLST" -DfailIfNoTests=false integration-test -P java-integration-tests

integration-tests/USECASES.MD

Lines changed: 21 additions & 2 deletions
@@ -2,7 +2,7 @@ (whitespace-only change to the heading line)
 
 Java integration tests cover the below use cases:
 
-## Quick test Configuration & Use Cases -
+## Quick test Configuration & Use Cases -
 
 | | |
 | --- | --- |
@@ -101,4 +101,23 @@ Configuration Overrides Usecases
 | Sticky Session | Use Case |
 | --- | --- |
 | Server affinity | Use a web application deployed on WebLogic cluster to track HTTP session. Test server affinity by sending two HTTP requests to WebLogic and verify that all requests are directed to the same WebLogic server |
-| Session state isolation | Verify that values saved in a client session state are not visible to another client |
+| Session state isolation | Verify that values saved in a client session state are not visible to another client |
+
+| Monitoring Exporter | Use Case |
+| --- | --- |
+| Check metrics via Prometheus | build and deploy the webapp for Monitoring Exporter, start Prometheus, and verify the metrics were produced by using the Prometheus APIs |
+| Replace configuration via exporter console | verify that the configuration for the monitoring exporter can be replaced at runtime; check the applied metrics via the Prometheus APIs |
+| Append configuration via exporter console | verify that the configuration for the monitoring exporter can be appended at runtime; check the applied metrics via the Prometheus APIs |
+| Append configuration with various combinations of attributes via exporter console | append monitoring exporter configuration [a] to new config [a,b] and verify it was applied |
+| Replace configuration with only one attribute as array via exporter console | replace monitoring exporter configuration [a,b,c] attributes with new config [a] and verify it was applied |
+| Replace configuration with empty config file via exporter console | replace the monitoring exporter configuration with an empty config file; verify it was applied |
+| Replace/append configuration with config file written in non-YAML format via exporter console | try to replace/append the monitoring exporter configuration with a config file written in non-YAML format; verify the configuration has not changed |
+| Replace/append configuration with corrupted YAML file via exporter console | try to replace/append the monitoring exporter configuration with a config file written in corrupted YAML format; verify the configuration has not changed |
+| Replace/append configuration with duplicated values in the config file via exporter console | try to replace/append the monitoring exporter configuration with duplicated values in the config file; verify the configuration has not changed |
+| Replace/append configuration with invalid credentials via exporter console | try to replace/append the monitoring exporter configuration with various combinations of invalid credentials; verify the configuration has not changed and a `401 Unauthorized` exception was thrown |
+
+| Logging with Elastic Stack | Use Case |
+| --- | --- |
+| Search log level | use the Elasticsearch Count API to query logs of level=INFO and verify that the total number of logs for level=INFO is not zero and the failed count is zero |
+| Search Operator log | use the Elasticsearch Search APIs to query Operator log info and verify that log hits for type=weblogic-operator are not empty |
+| Search WebLogic log | use the Elasticsearch Search APIs to query WebLogic log info and verify that log hits for WebLogic servers are not empty |
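The Elastic Stack use cases above boil down to plain Elasticsearch REST calls. The sketch below composes the kind of Count API request the first row describes; the host, index, and field names are assumptions, not values taken from the tests, and the command is printed rather than executed because it needs a live cluster:

```shell
# Build (but do not run) an Elasticsearch Count API request for level=INFO logs.
# All names here are illustrative; substitute your cluster's values.
ES_HOST="elasticsearch.default.svc.cluster.local:9200"  # assumed service address
INDEX="logstash-*"                                      # assumed log index pattern
QUERY='{"query":{"match":{"level":"INFO"}}}'

echo "curl -s http://${ES_HOST}/${INDEX}/_count -H 'Content-Type: application/json' -d '${QUERY}'"
```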

integration-tests/pom.xml

Lines changed: 5 additions & 0 deletions
@@ -113,6 +113,11 @@
       <artifactId>jaxb-api</artifactId>
       <version>2.3.1</version>
     </dependency>
+    <dependency>
+      <groupId>net.sourceforge.htmlunit</groupId>
+      <artifactId>htmlunit</artifactId>
+      <version>2.32</version>
+    </dependency>
   </dependencies>
 
   <build>
