Multi-node clusters with Embedded Cluster have the following limitations:

* High availability for Embedded Cluster is an Alpha feature. This feature is subject to change, including breaking changes. To get access to this feature, reach out to Alex Parker at [[email protected]](mailto:[email protected]).
* The same Embedded Cluster data directory used at installation is used for all nodes joined to the cluster. This is either the default `/var/lib/embedded-cluster` directory or the directory set with the [`--data-dir`](/reference/embedded-cluster-install#flags) flag. You cannot choose a different data directory for Embedded Cluster when joining nodes.
## Add Nodes to a Cluster (Beta) {#add-nodes}
You can add nodes to create a multi-node cluster in online (internet-connected) and air-gapped (limited or no outbound internet access) environments. The Admin Console provides the join command that you use to join nodes to the cluster.
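For illustration only, a join command copied from the Admin Console generally takes a form like the following. The address, port, and token below are placeholders invented for this sketch, not values from this document; always use the exact command that the Admin Console generates:

```bash
# Run on the node you want to join; the token is generated by the Admin Console
sudo ./APP_SLUG join 10.128.0.80:30000 JOIN_TOKEN
```

Where `APP_SLUG` is the unique slug of the installed application.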
# Updating Custom TLS Certificates in Embedded Cluster Installations
This topic describes how to update custom TLS certificates in Replicated Embedded Cluster installations.
## Update Custom TLS Certificates
Users can provide custom TLS certificates with Embedded Cluster installations and can update TLS certificates through the Admin Console.
:::important
Adding the `acceptAnonymousUploads` annotation temporarily creates a vulnerability for an attacker to maliciously upload TLS certificates. After TLS certificates have been uploaded, the vulnerability is closed again.
Replicated recommends that you complete this upload process quickly to minimize the vulnerability risk.
:::
To upload a new custom TLS certificate in Embedded Cluster installations:
1. SSH onto a controller node where Embedded Cluster is installed. Then, run the following command to start a shell so that you can access the cluster with kubectl:

   ```bash
   sudo ./APP_SLUG shell
   ```

   Where `APP_SLUG` is the unique slug of the installed application.

1. In the shell, run the following command to restore the ability to upload new TLS certificates by adding the `acceptAnonymousUploads` annotation:
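   The annotation command itself is not reproduced in this excerpt. On a typical installation it is added to the `kotsadm-tls` secret, along these lines (the secret name and the `default` namespace are assumptions; confirm them for your installation):

   ```bash
   # Assumed secret name and namespace; verify with: kubectl get secrets -A | grep kotsadm-tls
   kubectl -n default annotate secret kotsadm-tls acceptAnonymousUploads=1
   ```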
1. Run the following command to get the name of the kurl-proxy server:

   ```bash
   kubectl get pods -A | grep kurl-proxy | awk '{print $2}'
   ```

   :::note
   This server is named `kurl-proxy`, but is used in both Embedded Cluster and kURL installations.
   :::

1. Run the following command to delete the kurl-proxy pod. The pod automatically restarts after the command runs.

   ```bash
   kubectl delete pods PROXY_SERVER
   ```

   Replace `PROXY_SERVER` with the name of the kurl-proxy server that you got in the previous step.
1. After the pod has restarted, go to `http://<ip>:30000/tls` in your browser and complete the process in the Admin Console to upload a new certificate.

# Accessing Dashboards Using Port Forwarding

This topic includes information about how to access Prometheus, Grafana, and Alertmanager in Replicated KOTS existing cluster and Replicated kURL installations.
For information about how to configure Prometheus monitoring in existing cluster installations, see [Configuring Prometheus Monitoring in Existing Cluster KOTS Installations](monitoring-applications).
## Overview
The Prometheus [expression browser](https://prometheus.io/docs/visualization/browser/), Grafana, and some preconfigured dashboards are included with Kube-Prometheus for advanced visualization. Prometheus Alertmanager is also included for alerting. You can access the Prometheus, Grafana, and Alertmanager dashboards using `kubectl port-forward`.
:::note
You can also expose these pods on NodePorts or behind an ingress controller. This is an advanced use case. For information about exposing the pods on NodePorts, see [NodePorts](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/node-ports.md) in the kube-prometheus GitHub repository. For information about exposing the pods behind an ingress controller, see [Expose via Ingress](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/exposing-prometheus-alertmanager-grafana-ingress.md) in the kube-prometheus GitHub repository.
:::
## Prerequisite
For existing cluster KOTS installations, first install Prometheus in the cluster and configure monitoring. See [Configuring Prometheus Monitoring in Existing Cluster KOTS Installations](monitoring-applications).
## Access Prometheus
To access the Prometheus dashboard:
1. Run the following command to port forward the Prometheus service:
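   The original command is not shown in this excerpt. For a default Kube-Prometheus installation it generally looks like the following (the `monitoring` namespace and `prometheus-k8s` service name are assumptions based on kube-prometheus defaults):

   ```bash
   # Forward local port 9090 to the Prometheus service in the cluster
   kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
   ```

   Then visit `http://localhost:9090` in your browser.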

## Access Grafana

Users can access the Grafana dashboard by logging in using a default username and password. For information about configuring Grafana, see the [Grafana documentation](https://grafana.com/docs/). For information about constructing Prometheus queries, see [Querying Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/) in the Prometheus documentation.
To access the Grafana dashboard:
1. Run the following command to port forward the Grafana service:
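   The original command is not shown in this excerpt. For a default Kube-Prometheus installation it generally looks like the following (the `monitoring` namespace and `grafana` service name are assumptions based on kube-prometheus defaults):

   ```bash
   # Forward local port 3000 to the Grafana service in the cluster
   kubectl --namespace monitoring port-forward svc/grafana 3000
   ```

   Then visit `http://localhost:3000` in your browser.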
* **Existing cluster**: Use the default Grafana username and password: `admin:admin`.

* **kURL cluster**: The Grafana password is randomly generated by kURL and is displayed on the command line after kURL provisions the cluster. To log in, use the username `admin` and the password generated by kURL.

  To retrieve the password, run the following kubectl command:

## Access Alertmanager

Alerting with Prometheus has two phases:

* Phase 1: Alerting rules in Prometheus servers send alerts to an Alertmanager.
* Phase 2: The Alertmanager then manages those alerts, including silencing, inhibition, aggregation, and sending out notifications through methods such as email, on-call notification systems, and chat platforms.
For more information about configuring Alertmanager, see [Configuration](https://prometheus.io/docs/alerting/configuration/) in the Prometheus documentation.
To access the Alertmanager dashboard:
1. Run the following command to port forward the Alertmanager service:
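   The original command is not shown in this excerpt. For a default Kube-Prometheus installation it generally looks like the following (the `monitoring` namespace and `alertmanager-main` service name are assumptions based on kube-prometheus defaults):

   ```bash
   # Forward local port 9093 to the Alertmanager service in the cluster
   kubectl --namespace monitoring port-forward svc/alertmanager-main 9093
   ```

   Then visit `http://localhost:9093` in your browser.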
# Configuring Prometheus Monitoring in Existing Cluster KOTS Installations

This topic describes how to monitor applications and clusters with Prometheus in existing cluster installations with Replicated KOTS.

For information about how to access Prometheus, Grafana, and Alertmanager, see [Accessing Dashboards Using Port Forwarding](/enterprise/monitoring-access-dashboards).

For information about consuming Prometheus metrics externally in kURL installations, see [Consuming Prometheus Metrics Externally](monitoring-external-prometheus).

## Overview

<OverviewProm/>

## Configure Prometheus Monitoring

For existing cluster installations with KOTS, users can install Prometheus in the cluster and then connect the Admin Console to the Prometheus endpoint to enable monitoring.

### Step 1: Install Prometheus in the Cluster {#configure-existing}
Replicated recommends that you use CoreOS's Kube-Prometheus distribution for installing and configuring highly available Prometheus on an existing cluster. For more information, see the [kube-prometheus](https://github.com/coreos/kube-prometheus) GitHub repository.

To install Prometheus using the recommended Kube-Prometheus distribution:

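The individual installation steps are elided in this excerpt. As a sketch, the kube-prometheus quickstart applies the manifests in two passes from a checkout of the repository (commands are based on the kube-prometheus README; verify them against the release you use):

```bash
# Create the monitoring namespace and CRDs first, then wait for the CRDs to be established
kubectl apply --server-side -f manifests/setup
kubectl wait --for condition=Established --all CustomResourceDefinition --namespace=monitoring
# Deploy the monitoring stack itself
kubectl apply -f manifests/
```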
For more information about advanced Kube-Prometheus configuration options, see [Customizing Kube-Prometheus](https://github.com/coreos/kube-prometheus#customizing-kube-prometheus) in the kube-prometheus GitHub repository.

### Step 2: Connect to a Prometheus Endpoint

To view graphs on the Admin Console dashboard, provide the address of a Prometheus instance installed in the cluster.
To connect the Admin Console to a Prometheus endpoint:
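The individual steps are elided in this excerpt. The address to enter is the cluster-internal Prometheus service endpoint; for a default Kube-Prometheus installation that is typically the following (an assumption, since the steps are not shown here):

```
http://prometheus-k8s.monitoring.svc.cluster.local:9090
```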

Graphs appear on the dashboard shortly after saving the address.