**docs-source/content/userguide/managing-domains/fmw-infra/_index.md** (25 additions, 19 deletions)
Starting with release 2.2, the operator supports FMW Infrastructure domains. This means domains that are created with the FMW Infrastructure installer rather than the WebLogic Server installer. These domains contain the Java Required Files (JRF) feature and are the prerequisite for "upper stack" products like Oracle SOA Suite, for example. These domains also require a database and the use of the Repository Creation Utility (RCU).

This section provides details about the special considerations for running FMW Infrastructure domains with the operator. Other than the considerations listed here, FMW Infrastructure domains work in the same way as WebLogic Server domains. That is, the remainder of the documentation in this site applies equally to FMW Infrastructure domains and WebLogic Server domains.

FMW Infrastructure domains are supported using both the "domain on a persistent volume" and the "domain in a Docker image" [models]({{< relref "/userguide/managing-domains/choosing-a-model/_index.md" >}}).
A [sample](https://github.com/oracle/docker-images/tree/master/OracleFMWInfrastructure) is provided in the Oracle GitHub account that demonstrates how to create a Docker image to run FMW Infrastructure.

Please consult the [README](https://github.com/oracle/docker-images/blob/master/OracleFMWInfrastructure/dockerfiles/12.2.1.3/README.md) file associated with this sample for important prerequisite steps, such as building or pulling the Server JRE Docker image and downloading the Fusion Middleware Infrastructure installer binary.

After cloning the repository and downloading the installer from Oracle Technology Network or e-delivery, you create your image by running the provided script:

```bash
cd docker-images/OracleFMWInfrastructure/dockerfiles
./buildDockerImage.sh -v 12.2.1.3 -s
```

The image produced will be named `oracle/fmw-infrastructure:12.2.1.3`.
… by running the provided script:

```bash
cd docker-images/OracleFMWInfrastructure/samples/12213-patch-fmw-for-k8s
./build.sh
```

This will produce an image named `oracle/fmw-infrastructure:12213-update-k8s`.
Notice that you can pass in environment variables to set the SID, the name of the PDB, and so on. The documentation describes the other variables that are available. The `sys` password defaults to `Oradoc_db1`. Follow the instructions in the documentation to reset this password.

You should also create a service to make the database available within the Kubernetes cluster with a well known name. Here is an example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: oracle-db
  namespace: default
spec:
  ports:
  - name: tns
    port: 1521
```

In the example above, the database would be visible in the cluster using the address …
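Standard Kubernetes DNS makes a Service named `oracle-db` in the `default` namespace resolvable as `oracle-db.default.svc.cluster.local`. As a sketch, a JDBC connect string built on that name might look like the following (the PDB service name is a placeholder, not a value taken from this document):

```
jdbc:oracle:thin:@//oracle-db.default.svc.cluster.local:1521/<pdb-service-name>
```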
Java integration tests cover the following use cases:

## Basic test Configuration & Use Cases

|||
| --- | --- |
| Operator Configuration | operator1 deployed in `weblogic-operator` namespace and manages domains in `default` and `jrfdomains` namespaces |
| Domain Configuration | Domain on PV using WLST, Traefik load balancer |

**Basic Use Cases**

1. Create operator `weblogic-operator`, which manages the `default` and `jrfdomains` namespaces; verify that it is deployed successfully, its pods are created, and the operator is ready, and verify the external REST service, if configured.
2. Create domain `jrfdomain` in the `jrfdomains` namespace and verify that the pods and services are created and the servers are in the ready state.
3. Verify the admin external service by accessing the admin REST endpoint with the `nodeport` in the URL.
4. Verify exec into the admin pod and deploying a web app using the admin port with WLST.
5. Verify web app load balancing by accessing the web app using `loadBalancerWebPort`.

**Advanced Use Cases**

6. Verify that the domain life cycle (destroy and create) does not have any impact on the operator managing the domain, web app load balancing, or the admin external service.
7. Cluster scale up/down using the operator REST endpoint; web app load balancing should adjust accordingly.
8. Operator life cycle (destroy and create) should not impact the running domain.
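The cluster scaling above is driven through the operator REST endpoint; the same scale target can also be sketched declaratively in the domain resource (the cluster name `cluster-1` and replica count here are illustrative assumptions, not values from this document):

```yaml
# Excerpt of a Domain resource: the operator reconciles the cluster
# to the requested number of managed server replicas.
spec:
  clusters:
  - clusterName: cluster-1   # hypothetical cluster name
    replicas: 3              # desired managed server count
```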
Also, the use cases below are covered for each test:

9. Verify the liveness probe by killing the managed server 1 process three times to trigger pod auto-restart.
10. Shut down the domain by changing the domain `serverStartPolicy` to `NEVER`.
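The `serverStartPolicy` shutdown can be sketched as an edit to the domain resource (the surrounding fields are omitted; only the changed setting is shown):

```yaml
# Excerpt of a Domain resource: setting serverStartPolicy to NEVER
# tells the operator to shut down all WebLogic Server instances.
spec:
  serverStartPolicy: "NEVER"
```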
## Full test Configuration & Use Cases

Additional Operator Configuration:

|||
| --- | --- |
| Operator Configuration | `operator2` deployed in `weblogic-operator2` namespace and manages domains in `test2` namespace |

The Basic Use Cases described above are verified in all the domain configurations. The use cases below are also covered:

| Domain | Use Case |
| --- | --- |
| Domain on PV using WLST | As above in basic use cases |
| Domain with ADMIN_ONLY | Verify that only the admin server is started and the managed servers are not. Shut down the domain by deleting the domain CRD. Create the domain on an existing PV directory; the PV is already populated by a shut-down domain. |
| Domain with situational config | Create a domain with the listen address not set for the admin server and T3 channel/NAP, and an incorrect file for the admin server log location. The introspector should override these with sit-config automatically. Also, with a junk value for the T3 channel public address, use a custom situational config override to replace it with a valid public address using a secret. On Jenkins, this domain uses NFS instead of HOSTPATH PV storage. |
| Two domains managed by two operators | Verify that scaling and restart of one domain doesn't impact the other domain. Delete domain resources using the delete script from the samples. |
| Two domains in the same namespace managed by one operator | Create two FMW Infra domains in the same namespace managed by one operator. Verify that scaling and restart of one domain doesn't impact the other domain. |
| Two domains in different namespaces managed by one operator | Create two FMW Infra domains in different namespaces managed by one operator. Domain1 uses the VOYAGER load balancer; Domain2 uses the TRAEFIK load balancer. Verify that scaling and restart of one domain doesn't impact the other domain. |
| Domain with Recycle policy | Create a domain with pvReclaimPolicy="Recycle" and using a configured cluster. Verify that the PV is deleted once the domain and PVC are deleted. |
| Domain with default sample values | Create a domain using mostly default values for inputs |

Test cases also cover the bugs found:

| Domain | Use Case |
| --- | --- |
| Domain on PV using WLST | Cluster rolling restart with restartVersion and maxUnavailable set to 2; verify that the not-ready server count cannot exceed the maxUnavailable value |
| Domain on PV using WLST | In the create domain input file, set exposeAdminNodePort to false and exposeAdminT3Channel to true; verify that the managed server pods are created |
| Domain on PV using WLST | In the create domain input file, set createDomainScriptsMountPath to a non-default value; verify that the create domain sample script works |
| Domain on PV using WLST | In the createFMWDomain.py file, set administration_port_enabled to true; verify that the admin server pod is running and ready and all managed server pods are created |
| Domain on PV using WLST | In the create domain input file, set exposeAdminT3Channel to true; verify that the admin T3 channel is exposed |
**integration-tests/README.md** (44 additions, 1 deletion)
Shared cluster runs only Quick test use cases; Jenkins runs both Quick and Full.

Use cases covered in integration tests for the operator are available [here](USECASES.MD).

JRF use cases are covered [here](JRFUSECASES.MD).

# Directory Configuration and Structure

Directory structure of source code:
| Variable | Description |
| --- | --- |
| IMAGE_NAME_OPERATOR | Docker image name for the operator. Default is `weblogic-kubernetes-operator` |
| IMAGE_PULL_POLICY_OPERATOR | Default 'Never'. |
| IMAGE_PULL_SECRET_OPERATOR | Default ''. |
| IMAGE_PULL_SECRET_WEBLOGIC | Default ''. |

The env variables below are required when SHARED_CLUSTER=true:

| Variable | Description |
| --- | --- |
| REPO_REGISTRY | OCIR server to push/pull the operator image |
| REPO_USERNAME | OCIR username |
| REPO_PASSWORD | OCIR token |
| REPO_EMAIL | OCIR email |
| DOCKER_USERNAME | Docker username to pull the WebLogic image |
| DOCKER_PASSWORD | Docker password |
| DOCKER_EMAIL | Docker email |
| K8S_NODEPORT_HOST | DNS name of a Kubernetes worker node |
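A quick pre-flight check that the variables in the table above are set could look like the following sketch (`check_env` is a hypothetical helper, not part of the test harness):

```shell
# check_env VAR... -> succeeds if every named environment variable is
# set to a non-empty value; otherwise lists the missing names on stderr.
check_env() {
  missing=""
  for v in "$@"; do
    eval "val=\${$v:-}"
    [ -n "$val" ] || missing="$missing $v"
  done
  if [ -n "$missing" ]; then
    echo "Missing required env vars:$missing" >&2
    return 1
  fi
  return 0
}

# Example: verify the SHARED_CLUSTER=true prerequisites before running tests.
if check_env REPO_REGISTRY REPO_USERNAME REPO_PASSWORD REPO_EMAIL \
             DOCKER_USERNAME DOCKER_PASSWORD DOCKER_EMAIL K8S_NODEPORT_HOST; then
  echo "All required env vars are set"
fi
```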
A successful run will have output like the below:

```
...
```

A failed run will have output like:

```
...
```

JUnit test results can be seen at `integration-tests/target/failsafe-reports/TEST-oracle.kubernetes.operator.ITOperator.xml`. This file shows how much time each test case took to run and the failed test results, if any.
# How to run JRF domain in Operator related tests

* Set up Docker access to the FMW Infrastructure 12c image and the Oracle Database 12c image

  Method 1
  - Set up a personal account on hub.docker.com
  - Sign in to hub.docker.com and sign up for access to Oracle Database 12c images via https://hub.docker.com/_/oracle-database-enterprise-edition
  - Export the following before running the tests:
    ```
    export DOCKER_USERNAME=<docker_username>
    export DOCKER_PASSWORD=<docker_password>
    export DOCKER_EMAIL=<docker_email>
    ```
  - Set up an account in phx.ocir.io
  - Sign in to phx.ocir.io to get access to the FMW Infrastructure 12c image: **_phx.ocir.io/weblogick8s/oracle/fmw-infrastructure:12.2.1.3_**
  - Export the following before running the tests:
    ```
    export REPO_USERNAME=<ocir_username>
    export REPO_PASSWORD=<ocir_password>
    export REPO_EMAIL=<ocir_email>
    ```

  Method 2
  - Make sure the FMW Infrastructure image, that is, **_phx.ocir.io/weblogick8s/oracle/fmw-infrastructure:12.2.1.3_**, and the Oracle Database image, that is, **_store/oracle/database-enterprise:12.2.0.1_**, already exist locally in a Docker repository that the k8s cluster can access

* Command to run the tests:
  ```
  mvn clean verify -P jrf-integration-tests 2>&1 | tee log.txt
  ```
| Server affinity | Use a web application deployed on a WebLogic cluster to track HTTP sessions. Test server affinity by sending two HTTP requests to WebLogic and verify that all requests are directed to the same WebLogic server |
| Session state isolation | Verify that values saved in a client session state are not visible to another client |

| Monitoring Exporter | Use Case |
| --- | --- |
| Check metrics via Prometheus | Build and deploy the web app for the Monitoring Exporter, start Prometheus, and verify that the metrics were produced by using the Prometheus APIs |
| Replace configuration via the exporter console | Verify that the configuration for the Monitoring Exporter can be replaced at runtime; check the applied metrics via the Prometheus APIs |
| Append configuration via the exporter console | Verify that the configuration for the Monitoring Exporter can be appended at runtime; check the applied metrics via the Prometheus APIs |
| Append configuration with various combinations of attributes via the exporter console | Append the Monitoring Exporter configuration [a] to a new config [a,b] and verify that it was applied |
| Replace configuration with only one attribute as an array via the exporter console | Replace the Monitoring Exporter configuration [a,b,c] attributes with a new config [a] and verify that it was applied |
| Replace configuration with an empty config file via the exporter console | Replace the Monitoring Exporter configuration with an empty config file and verify that it was applied |
| Replace/append configuration with a config file in non-YAML format via the exporter console | Try to replace/append the Monitoring Exporter configuration with a config file written in a non-YAML format; verify that the configuration has not changed |
| Replace/append configuration with a corrupted YAML file via the exporter console | Try to replace/append the Monitoring Exporter configuration with a config file written in corrupted YAML; verify that the configuration has not changed |
| Replace/append configuration with duplicated values in the config file via the exporter console | Try to replace/append the Monitoring Exporter configuration with duplicated values in the config file; verify that the configuration has not changed |
| Replace/append configuration with invalid credentials via the exporter console | Try to replace/append the Monitoring Exporter configuration with various combinations of invalid credentials; verify that the configuration has not changed and a `401 Unauthorized` error was thrown |

| Logging with Elastic Stack | Use Case |
| --- | --- |
| Search log level | Use the Elasticsearch Count API to query logs with level=INFO and verify that the total number of logs for level=INFO is not zero and the failed count is zero |
| Search Operator log | Use the Elasticsearch Search APIs to query the Operator log info and verify that log hits for type=weblogic-operator are not empty |
| Search WebLogic log | Use the Elasticsearch Search APIs to query the WebLogic log info and verify that log hits for the WebLogic servers are not empty |
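The log-level check can be expressed against the Elasticsearch Count API; a sketch follows (the index pattern `logstash-*` and the `level` field name are assumptions, not values taken from this document):

```
GET /logstash-*/_count
{
  "query": {
    "match": { "level": "INFO" }
  }
}
```

The response's `count` field would then be asserted to be non-zero, and `_shards.failed` to be zero.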