Commit 5b3856b

[hotfix] Fix typo in doc
1 parent 5608b25 commit 5b3856b

5 files changed: +9 -8 lines

docs/content/docs/concepts/overview.md

Lines changed: 1 addition & 1 deletion
@@ -94,7 +94,7 @@ The examples are maintained as part of the operator repo and can be found [here]
## Known Issues & Limitations

### JobManager High-availability
-The Operator supports both [Kubernetes HA Services](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/) and [Zookeeper HA Services](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/zookeeper_ha/) for providing High-availability for Flink jobs. The HA solution can benefit form using additional [Standby replicas](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/overview/), it will result in a faster recovery time, but Flink jobs will still restart when the Leader JobManager goes down.
+The Operator supports both [Kubernetes HA Services](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/) and [Zookeeper HA Services](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/zookeeper_ha/) for providing High-availability for Flink jobs. The HA solution can benefit from using additional [Standby replicas](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/overview/), it will result in a faster recovery time, but Flink jobs will still restart when the Leader JobManager goes down.

### JobResultStore Resource Leak
To mitigate the impact of [FLINK-27569](https://issues.apache.org/jira/browse/FLINK-27569) the operator introduced a workaround [FLINK-27573](https://issues.apache.org/jira/browse/FLINK-27573) by setting `job-result-store.delete-on-commit=false` and a unique value for `job-result-store.storage-path` for every cluster launch. The storage path for older runs must be cleaned up manually, keeping the latest directory always:
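As a rough sketch of the HA setup described above, the relevant `FlinkDeployment` fragment could look like the following; the HA type key, storage directory and replica count are illustrative assumptions, not values taken from this commit:

```yaml
# Illustrative fragment only: Kubernetes HA with one standby JobManager.
spec:
  flinkConfiguration:
    high-availability.type: kubernetes             # Kubernetes HA services (Flink 1.17+ key style)
    high-availability.storageDir: s3://flink-ha/   # hypothetical HA metadata location
  jobManager:
    replicas: 2                                    # leader plus one standby replica
```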

docs/content/docs/operations/helm.md

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@ The operator installation is managed by a helm chart. To install with the chart
helm install flink-kubernetes-operator helm/flink-kubernetes-operator
```

-To install from our Helm Chart Reporsitory run:
+To install from our Helm Chart Repository run:

```
helm repo add flink-operator-repo https://downloads.apache.org/flink/flink-kubernetes-operator-<OPERATOR-VERSION>/
@@ -112,7 +112,7 @@ The configurable parameters of the Helm chart and which default values as detail
| defaultConfiguration.create | Whether to enable default configuration to create for flink-kubernetes-operator. | true |
| defaultConfiguration.append | Whether to append configuration files with configs. | true |
| defaultConfiguration.flink-conf.yaml | The default configuration of flink-conf.yaml. | kubernetes.operator.metrics.reporter.slf4j.factory.class: org.apache.flink.metrics.slf4j.Slf4jReporterFactory<br/>kubernetes.operator.metrics.reporter.slf4j.interval: 5 MINUTE<br/>kubernetes.operator.reconcile.interval: 15 s<br/>kubernetes.operator.observer.progress-check.interval: 5 s |
-| defaultConfiguration.config.yaml | The newer configuration file format for flink that will enforced in Flink 2.0. Note this was introudced in flink 1.19. | kubernetes.operator.metrics.reporter.slf4j.factory.class: org.apache.flink.metrics.slf4j.Slf4jReporterFactory<br/>kubernetes.operator.metrics.reporter.slf4j.interval: 5 MINUTE<br/>kubernetes.operator.reconcile.interval: 15 s<br/>kubernetes.operator.observer.progress-check.interval: 5 s |
+| defaultConfiguration.config.yaml | The newer configuration file format for flink that will enforced in Flink 2.0. Note this was introduced in flink 1.19. | kubernetes.operator.metrics.reporter.slf4j.factory.class: org.apache.flink.metrics.slf4j.Slf4jReporterFactory<br/>kubernetes.operator.metrics.reporter.slf4j.interval: 5 MINUTE<br/>kubernetes.operator.reconcile.interval: 15 s<br/>kubernetes.operator.observer.progress-check.interval: 5 s |

| defaultConfiguration.log4j-operator.properties | The default configuration of log4j-operator.properties. | |
| defaultConfiguration.log4j-console.properties | The default configuration of log4j-console.properties. | |
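As an illustration of the `defaultConfiguration.*` parameters listed above, a `values.yaml` override might look roughly as follows; the nesting is an assumption based on the dotted parameter names, and the entries simply mirror the documented defaults:

```yaml
defaultConfiguration:
  create: true        # create the default configuration for flink-kubernetes-operator
  append: true        # append the entries below to the shipped configuration files
  flink-conf.yaml: |+
    kubernetes.operator.metrics.reporter.slf4j.factory.class: org.apache.flink.metrics.slf4j.Slf4jReporterFactory
    kubernetes.operator.metrics.reporter.slf4j.interval: 5 MINUTE
    kubernetes.operator.reconcile.interval: 15 s
    kubernetes.operator.observer.progress-check.interval: 5 s
```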

docs/content/docs/operations/metrics-logging.md

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ The Operator gathers aggregates metrics about managed resources.
| Namespace | FlinkDeployment.JmDeploymentStatus.&lt;Status&gt;.Count | Number of managed FlinkDeployment resources per &lt;Status&gt; per namespace. &lt;Status&gt; can take values from: READY, DEPLOYED_NOT_READY, DEPLOYING, MISSING, ERROR | Gauge |
| Namespace | FlinkDeployment.FlinkVersion.&lt;FlinkVersion&gt;.Count | Number of managed FlinkDeployment resources per &lt;FlinkVersion&gt; per namespace. &lt;FlinkVersion&gt; is retrieved via REST API from Flink JM. | Gauge |
| Namespace | FlinkDeployment/FlinkSessionJob.Lifecycle.State.&lt;State&gt;.Count | Number of managed resources currently in state &lt;State&gt; per namespace. &lt;State&gt; can take values from: CREATED, SUSPENDED, UPGRADING, DEPLOYED, STABLE, ROLLING_BACK, ROLLED_BACK, FAILED | Gauge |
-| System/Namespace | FlinkDeployment/FlinkSessionJob.Lifecycle.State.&lt;State&gt;.TimeSeconds | Time spent in state &lt;State$gt for a given resource. &lt;State&gt; can take values from: CREATED, SUSPENDED, UPGRADING, DEPLOYED, STABLE, ROLLING_BACK, ROLLED_BACK, FAILED | Histogram |
+| System/Namespace | FlinkDeployment/FlinkSessionJob.Lifecycle.State.&lt;State&gt;.TimeSeconds | Time spent in state &lt;State&gt; for a given resource. &lt;State&gt; can take values from: CREATED, SUSPENDED, UPGRADING, DEPLOYED, STABLE, ROLLING_BACK, ROLLED_BACK, FAILED | Histogram |
| System/Namespace | FlinkDeployment/FlinkSessionJob.Lifecycle.Transition.&lt;Transition&gt;.TimeSeconds | Time statistics for selected lifecycle state transitions. &lt;Transition&gt; can take values from: Resume, Upgrade, Suspend, Stabilization, Rollback, Submission | Histogram |

#### Lifecycle metrics

docs/content/docs/operations/plugins.md

Lines changed: 1 addition & 1 deletion
@@ -127,7 +127,7 @@ That folder is added to classpath upon initialization.

## Custom Flink Resource Mutators

-`FlinkResourceMutator`, an interface for ,mutating the resources of `FlinkDeployment` and `FlinkSessionJob`, is a pluggable component based on the [Plugins](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/filesystems/plugins) mechanism. During development, we can customize the implementation of `FlinkResourceMutator` and make sure to retain the service definition in `META-INF/services`.
+`FlinkResourceMutator`, an interface for mutating the resources of `FlinkDeployment` and `FlinkSessionJob`, is a pluggable component based on the [Plugins](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/filesystems/plugins) mechanism. During development, we can customize the implementation of `FlinkResourceMutator` and make sure to retain the service definition in `META-INF/services`.
The following steps demonstrate how to develop and use a custom mutator.

1. Implement `FlinkResourceMutator` interface:

docs/content/docs/operations/upgrade.md

Lines changed: 4 additions & 3 deletions
@@ -38,8 +38,9 @@ Please check the [related section](#upgrading-from-v1alpha1---v1beta1).
## Normal Upgrade Process

If you are upgrading from `kubernetes-operator-1.0.0` or later, please refer to the following two steps:
-1. Upgrading the CRDs
-2. Upgrading the Helm deployment
+1. Upgrading the Java client library
+2. Upgrading the CRDs
+3. Upgrading the Helm deployment

We will cover these steps in detail in the next sections.

@@ -150,7 +151,7 @@ Here is a reference example of upgrading a `basic-checkpoint-ha-example` deploym
```
5. Restore the job:

-Deploy the previously deleted job using this [FlinkDeployemnt](https://raw.githubusercontent.com/apache/flink-kubernetes-operator/main/examples/basic-checkpoint-ha.yaml) with `v1beta1` and explicitly set the `job.initialSavepointPath` to the savepoint location obtained from the step 1.
+Deploy the previously deleted job using this [FlinkDeployment](https://raw.githubusercontent.com/apache/flink-kubernetes-operator/main/examples/basic-checkpoint-ha.yaml) with `v1beta1` and explicitly set the `job.initialSavepointPath` to the savepoint location obtained from the step 1.

```
spec:
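Purely as an illustration of what step 5 describes, the relevant part of the `v1beta1` spec sets `job.initialSavepointPath`; the path below is a placeholder for the savepoint location obtained in step 1, not a value from this commit:

```yaml
spec:
  job:
    upgradeMode: savepoint
    # Placeholder path: replace with the savepoint location recorded in step 1.
    initialSavepointPath: s3://<bucket>/savepoints/savepoint-xxxx
```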
