
Commit 034ae51

Documentation
1 parent 019b0b4 commit 034ae51

4 files changed: +138 -28 lines changed


docs/examples/800_istio.adoc

Lines changed: 28 additions & 20 deletions
@@ -1,32 +1,28 @@
-= Istio Support
-
-== Overview
-
-Coherence Operator 3.1.6 and later works with Istio 1.9.1 and later. You can run the operator and Coherence cluster managed by the operator with Istio. Coherence caches can be accessed from outside the Coherence cluster via Coherence*Extend, REST, and other supported Coherence clients. Using Coherence clusters with Istio does not require the Coherence Operator to also be using Istio (and vice-versa) . The Coherence Operator can manage Coherence clusters independent of whether those clusters are using Istio or not.
-
-== Prometheus
-
-The coherence metrics that record and track the health of Coherence cluster using Prometheus are also available in Istio environment and can be viewed through Granfana. However, Coherence cluster traffic is not visible by Istio.
+///////////////////////////////////////////////////////////////////////////////

-== Traffic Visualization
+Copyright (c) 2021, Oracle and/or its affiliates.
+Licensed under the Universal Permissive License v 1.0 as shown at
+http://oss.oracle.com/licenses/upl.

-Istio provides traffic management capabilities, including the ability to visualize traffic in Kiali. You do not need to change your applications to use this feature. The Istio proxy (envoy) sidecar that is injected into your pods provides it. The image below shows an example with traffic flow. In this example, you can see how the traffic flows in from the Istio gateway on the left, to the cluster services, and then to the individual cluster members. This example has storage members (example-cluster-storage), a proxy member running proxy service (example-cluster-proxy), and a REST member running http server (example-cluster-rest). However, Coherence cluster traffic between members is not visible.
-
-image::../images/istioKiali.png[width=1024,height=512]
+///////////////////////////////////////////////////////////////////////////////
+= Istio Support

-To learn more, see https://istio.io/latest/docs/concepts/traffic-management/[Istio traffic management].
+== Istio Support

-== Limitations
+You can run Coherence clusters and manage them using the Coherence Operator alongside Istio. Coherence clusters managed with the Coherence Operator 3.2.0 and later work with Istio 1.9.1 and later. Coherence caches can be accessed from outside the Coherence cluster via Coherence*Extend, REST, and other supported Coherence clients. Using Coherence clusters with Istio does not require the Coherence Operator to also be using Istio (and vice-versa). The Coherence Operator can manage Coherence clusters independently of whether those clusters are using Istio or not.

+[IMPORTANT]
+====
 The current support for Istio has the following limitation:

-* Ports that are exposed in the ports list of pod spec are intercepted by Envoy proxies, thus break Coherence cluster traffic. As a result, Coherence cluster traffic must passthrough Envoy proxies.
+Ports that are exposed in the ports list of the container spec in a Pod will be intercepted by the Envoy proxy in the Istio sidecar container. Coherence cluster traffic must not pass through Envoy proxies as this will break Coherence, so the Coherence cluster port must never be exposed as a container port when using Istio. There is no real reason to expose the Coherence cluster port in a container because there is no requirement to have this port externally visible.
+====

-== Prerequisites
+=== Prerequisites

 The instructions assume that you are using a Kubernetes cluster with Istio installed and configured already.

-== Using the Coherence operator with Istio
+=== Using the Coherence operator with Istio

 To use the Coherence operator with Istio, you can deploy the operator into a namespace which has Istio automatic sidecar injection enabled. Before installing the operator, create the namespace in which you want to run the Coherence operator and label it for automatic injection.

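For example, a minimal sketch of creating and labelling such a namespace (the namespace name `coherence` here is only an example) might look like this:

[source,bash]
----
# Create the namespace that will hold the Coherence Operator
kubectl create namespace coherence

# Label the namespace so that Istio automatically injects the Envoy sidecar
kubectl label namespace coherence istio-injection=enabled
----

The operator can then be installed into this namespace in the usual way.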
@@ -53,7 +49,7 @@ coherence-operator-controller-manager-7d76f9f475-q2vwv 2/2 Running 1

 2/2 in the READY column means that there are 2 containers running in the operator Pod. One is the Coherence operator and the other is the Envoy proxy.

-== Creating a Coherence cluster with Istio
+=== Creating a Coherence cluster with Istio

 You can configure your cluster to run with Istio automatic sidecar injection enabled. Before creating your cluster, create the namespace in which you want to run the cluster and label it for automatic injection.

@@ -96,7 +92,7 @@ example-cluster-storage-2 2/2 Running 0 45

 You can see that the 3 members in the cluster are running in 3 Pods. 2/2 in the READY column means that there are 2 containers running in each Pod. One is the Coherence member and the other is the Envoy proxy.

-== TLS
+=== TLS

 A Coherence cluster works with mTLS. A Coherence client can also support TLS through an Istio Gateway with TLS termination to connect to a Coherence cluster running inside Kubernetes. For example, you can apply the following Istio Gateway and Virtual Service in the namespace of the Coherence cluster. Before applying the gateway, create a secret for the credential from the certificate and key (e.g. server.crt and server.key) to be used by the Gateway:

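A minimal sketch of creating such a secret with `kubectl` (the secret name `coherence-tls` and the namespace are assumptions; the name should match whatever the Gateway's `credentialName` refers to) might be:

[source,bash]
----
# Create a TLS secret from the certificate and key files
kubectl create secret tls coherence-tls \
    --cert=server.crt \
    --key=server.key \
    -n istio-system
----

In many Istio installations the credential secret must live in the same namespace as the ingress gateway workload (typically `istio-system`), but follow whatever the Gateway definition in this example expects.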
@@ -204,3 +200,15 @@ If you are using Docker for Desktop, $INGRESS_HOST is 127.0.0.1 and you can use
 ----
 kubectl port-forward -n istio-system <istio-ingressgateway-pod> 8043:8043
 ----
+
+=== Prometheus
+
+The Coherence metrics that record and track the health of the Coherence cluster using Prometheus are also available in an Istio environment and can be viewed through Grafana. However, Coherence cluster traffic is not visible to Istio.
+
+=== Traffic Visualization
+
+Istio provides traffic management capabilities, including the ability to visualize traffic in Kiali. You do not need to change your applications to use this feature. The Istio proxy (Envoy) sidecar that is injected into your Pods provides it. The image below shows an example of traffic flow. In this example, you can see how the traffic flows in from the Istio gateway on the left, to the cluster services, and then to the individual cluster members. This example has storage members (example-cluster-storage), a proxy member running a proxy service (example-cluster-proxy), and a REST member running an http server (example-cluster-rest). However, Coherence cluster traffic between members is not visible.
+
+image::../images/istioKiali.png[width=1024,height=512]
+
+To learn more, see https://istio.io/latest/docs/concepts/traffic-management/[Istio traffic management].
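As a hedged aside (not part of the original text), if the Kiali add-on is installed in the cluster and `istioctl` is available locally, one way to open the Kiali dashboard is:

[source,bash]
----
# Open the Kiali dashboard in a local browser (requires the Kiali addon)
istioctl dashboard kiali
----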
Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@
+///////////////////////////////////////////////////////////////////////////////
+
+Copyright (c) 2021, Oracle and/or its affiliates.
+Licensed under the Universal Permissive License v 1.0 as shown at
+http://oss.oracle.com/licenses/upl.
+
+///////////////////////////////////////////////////////////////////////////////
+
+= Performance Testing
+
+== Performance Testing in Kubernetes
+
+Many customers use Coherence because they want access to data at in-memory speeds. Customers who want the best performance from their application typically embark on performance testing and load testing of their application. When doing this sort of testing on Kubernetes, it is useful to understand the ways that your Kubernetes environment can impact your test results.
+
+== Where are your Nodes?
+
+When an application has been deployed into Kubernetes, Pods will typically be distributed over many nodes in the Kubernetes cluster.
+When deploying into a Kubernetes cluster in the cloud, for example on Oracle OKE, the nodes can be distributed across different availability zones. These zones are effectively different data centers, meaning that the network speed can differ considerably between nodes in different zones.
+Performance testing in this sort of environment can be difficult if you use default Pod scheduling. Different test runs could distribute Pods to different nodes, in different zones, and skew results depending on how "far" test clients and servers are from each other.
+For example, when testing a simple Coherence `EntryProcessor` invocation in a Kubernetes cluster spread across zones, we saw that the 95% response time when the client and server were in the same zone was 0.1 milliseconds. When the client and server were in different zones, the 95% response time could be as high as 0.8 milliseconds. This difference is purely down to the network distance between nodes. Depending on the actual use-cases being tested, this difference might not have much impact on overall response times, but for simple operations it can be a significant enough overhead to impact test results.
+
+The solution to the issue described above is to use Pod scheduling to fix the location of the Pods to be used for tests. In a cluster like Oracle OKE, this would ensure all the Pods will be scheduled into the same availability zone.
+
+=== Finding Node Zones
+
+This example talks about scheduling Pods to a single availability zone in a Kubernetes cluster in the cloud. Pod scheduling in this way uses Node labels, and in fact any label on the Nodes in your cluster could be used to fix the location of the Pods.
+
+To schedule all the Coherence Pods into a single zone we first need to know what zones we have and what labels have been used.
+The standard Kubernetes Node label for the availability zone is `topology.kubernetes.io/zone` (as documented in the https://kubernetes.io/docs/reference/labels-annotations-taints/[Kubernetes Labels Annotations and Taints] documentation). To slightly confuse the situation, prior to Kubernetes 1.17 the label was `failure-domain.beta.kubernetes.io/zone`, which has now been deprecated. Some Kubernetes clusters, even after 1.17, still use the deprecated label, so you need to know what labels your Nodes have.
+
+Run the following command to list the nodes in a Kubernetes cluster with the value of the two zone labels for each node:
+[source,bash]
+----
+kubectl get nodes -L topology.kubernetes.io/zone,failure-domain.beta.kubernetes.io/zone
+----
+
+The output will be something like this:
+[source]
+----
+NAME     STATUS   ROLES   AGE   VERSION   ZONE              ZONE
+node-1   Ready    node    66d   v1.19.7   US-ASHBURN-AD-1
+node-2   Ready    node    66d   v1.19.7   US-ASHBURN-AD-2
+node-3   Ready    node    66d   v1.19.7   US-ASHBURN-AD-3
+node-4   Ready    node    66d   v1.19.7   US-ASHBURN-AD-2
+node-5   Ready    node    66d   v1.19.7   US-ASHBURN-AD-3
+node-6   Ready    node    66d   v1.19.7   US-ASHBURN-AD-1
+----
+In the output above the first `ZONE` column has values, and the second does not. This means that the zone label used is the first in the label list in our `kubectl` command, i.e., `topology.kubernetes.io/zone`.
+
+If the nodes had been labeled with the second, deprecated, label in the `kubectl` command list, `failure-domain.beta.kubernetes.io/zone`, the output would look like this:
+[source]
+----
+NAME     STATUS   ROLES   AGE   VERSION   ZONE   ZONE
+node-1   Ready    node    66d   v1.19.7          US-ASHBURN-AD-1
+node-2   Ready    node    66d   v1.19.7          US-ASHBURN-AD-2
+node-3   Ready    node    66d   v1.19.7          US-ASHBURN-AD-3
+node-4   Ready    node    66d   v1.19.7          US-ASHBURN-AD-2
+node-5   Ready    node    66d   v1.19.7          US-ASHBURN-AD-3
+node-6   Ready    node    66d   v1.19.7          US-ASHBURN-AD-1
+----
+
+From the list of nodes above we can see that there are three zones, `US-ASHBURN-AD-1`, `US-ASHBURN-AD-2` and `US-ASHBURN-AD-3`.
+In this example we will schedule all the Pods to zone `US-ASHBURN-AD-1`.
+
+=== Scheduling Pods of a Coherence Cluster
+
+The `Coherence` CRD supports a number of ways to schedule Pods, as described in the <<other/090_pod_scheduling.adoc,Configure Pod Scheduling>> documentation. Using node labels is the simplest of the scheduling methods.
+In this case we need to schedule Pods onto nodes that have the label `topology.kubernetes.io/zone=US-ASHBURN-AD-1`.
+In the `Coherence` yaml we use the `nodeSelector` field.
+
+[source,yaml]
+.coherence-cluster.yaml
+----
+apiVersion: coherence.oracle.com/v1
+kind: Coherence
+metadata:
+  name: storage
+spec:
+  replicas: 3
+  nodeSelector:
+    topology.kubernetes.io/zone: US-ASHBURN-AD-1
+----
+
+When the yaml above is applied, a cluster of three Pods will be created, all scheduled onto nodes in the `US-ASHBURN-AD-1` availability zone.
+
+
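As a quick, hedged check (not part of the original text) that the Pods really did land in the intended zone, you could list the Pods with the nodes they were scheduled onto and then inspect a node's zone label:

[source,bash]
----
# Show which node each Pod was scheduled onto
kubectl get pods -o wide

# Confirm the zone label of a node reported above
kubectl get node <node-name> -L topology.kubernetes.io/zone
----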
+=== Other Performance Factors
+
+Depending on the Kubernetes cluster you are using there could be various other factors to bear in mind. Many Kubernetes clusters run on virtual machines, which can be poor for repeated performance comparisons unless you know what else might be running on the underlying hardware that the VM is on. If a test run happens at the same time as another VM is consuming a lot of the underlying hardware resource, this can obviously impact the results. Unfortunately bare-metal hardware, the best for repeated performance tests, is not always available, so it is useful to bear this in mind if there is suddenly a strange outlier in the tests.
+
+
+
+

examples/federation/README.adoc

Lines changed: 16 additions & 7 deletions
@@ -1,10 +1,19 @@
-== Coherence Operator Federation Example
+///////////////////////////////////////////////////////////////////////////////

-This simple example demostrates the Coherence federation feature. It shows how to deploy two Coherence clusters that federating data between them using the Coherence Operator. The Coherence federation feature requires Coherence Grid Edition. See https://oracle.github.io/coherence-operator/docs/latest/#/installation/04_obtain_coherence_images[Obtain Coherence Images] on how to get a commercial Coherence image.
+Copyright (c) 2021, Oracle and/or its affiliates.
+Licensed under the Universal Permissive License v 1.0 as shown at
+http://oss.oracle.com/licenses/upl.
+
+///////////////////////////////////////////////////////////////////////////////
+= Federation Example
+
+== Federation Example
+
+This simple example demonstrates the Coherence federation feature. It shows how to deploy two Coherence clusters that federate data between them using the Coherence Operator. The Coherence federation feature requires Coherence Grid Edition. See https://oracle.github.io/coherence-operator/docs/latest/#/installation/04_obtain_coherence_images[Obtain Coherence Images] on how to get a commercial Coherence image.

 You can find the source code in the https://github.com/oracle/coherence-operator/tree/master/examples/federation[Operator GitHub Repo].

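As a hedged convenience (the clone location is just an example), one way to get a local copy of the example source is:

[source,bash]
----
# Clone the operator repository and change into the federation example
git clone https://github.com/oracle/coherence-operator.git
cd coherence-operator/examples/federation
----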
-==== What the Example will Cover
+=== What the Example will Cover

 * <<install-operator,Install the Coherence Operator>>
 * <<create-the-example-namespace,Create the example namespace>>
@@ -13,7 +22,7 @@ You can find the source code in the https://github.com/oracle/coherence-operator
 * <<cleanup, Cleaning Up>>

 [#install-operator]
-== Install the Coherence Operator
+=== Install the Coherence Operator

 To run the examples below, you will need to have installed the Coherence Operator; do this using whatever method you prefer from the https://oracle.github.io/coherence-operator/docs/latest/#/installation/01_installation[Installation Guide].

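For example, a minimal, hedged sketch of installing the operator with Helm (the chart repository URL, release name, and namespace are assumptions; check the Installation Guide for the current instructions) might be:

[source,bash]
----
# Add the Coherence Operator chart repository (URL is an assumption, see the Installation Guide)
helm repo add coherence https://oracle.github.io/coherence-operator/charts
helm repo update

# Install the operator into its own namespace
helm install coherence-operator coherence/coherence-operator --namespace coherence --create-namespace
----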
@@ -39,7 +48,7 @@ namespace/coherence-example created
 ----

 [#create-secret]
-== Create image pull and configure store secrets
+=== Create image pull and configure store secrets

 This example requires two secrets:

@@ -67,7 +76,7 @@ kubectl create secret generic storage-config -n coherence-example \
 ----

 [#example]
-== Run the Example
+=== Run the Example

 Ensure you are in the `examples/federation` directory to run the example. This example uses the yaml files `src/main/yaml/primary-cluster.yaml` and `src/main/yaml/secondary-cluster.yaml`, which
 define a primary cluster and a secondary cluster.
@@ -207,7 +216,7 @@ secondaryvalue
 ----

 [#cleanup]
-== Cleaning up
+=== Cleaning up

 Use the following commands to delete the primary and secondary clusters:

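The exact cleanup commands live in the unchanged part of the README; as a hedged sketch using the yaml files named earlier, deletion might look like this:

[source,bash]
----
# Delete the two Coherence clusters created by the example
kubectl -n coherence-example delete -f src/main/yaml/primary-cluster.yaml
kubectl -n coherence-example delete -f src/main/yaml/secondary-cluster.yaml
----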

examples/federation/src/main/resources/storage-cache-config.xml

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@
       <autostart>true</autostart>
       <address-provider>
         <local-address>
-          <address system-property="coherence.extend.address"></address>
+          <address system-property="coherence.extend.address"/>
           <port system-property="coherence.federation.port">40000</port>
         </local-address>
       </address-provider>
