Commit 252377a

Merge branch 'main' into dx/sc-118600/add-replicated-values-schema
2 parents: 0f3492c + a82ddbb

158 files changed: +1213 −942 lines

.github/workflows/algolia-crawl.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -10,7 +10,7 @@ jobs:
1010
runs-on: ubuntu-latest
1111
steps:
1212
- name: check out code 🛎d
13-
uses: actions/checkout@v3
13+
uses: actions/checkout@v4
1414
# when scraping the site, inject secrets as environment variables
1515
# then pass their values into the Docker container using "-e" syntax
1616
# and inject config.json contents as another variable

.github/workflows/app-manager-release-notes.yml

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -11,7 +11,7 @@ jobs:
1111
generate-release-notes-pr:
1212
runs-on: ubuntu-20.04
1313
steps:
14-
- uses: actions/checkout@v3
14+
- uses: actions/checkout@v4
1515

1616
- name: Generate Release Notes
1717
id: release-notes
@@ -38,7 +38,7 @@ jobs:
3838
rm -rf /tmp/release-notes.txt
3939
4040
- name: Create Pull Request # creates a PR if there are differences
41-
uses: peter-evans/create-pull-request@v3
41+
uses: peter-evans/create-pull-request@v7
4242
id: cpr
4343
with:
4444
token: ${{ secrets.REPLICATED_GH_PAT }}
@@ -55,7 +55,7 @@ jobs:
5555
echo "Pull Request URL - ${{ steps.cpr.outputs.pull-request-url }}"
5656
5757
- name: Slack Notification
58-
uses: slackapi/slack-github-action@v1.16.0
58+
uses: slackapi/slack-github-action@v2.0.0
5959
with:
6060
payload: |
6161
{

.github/workflows/auto-label.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -8,7 +8,7 @@ jobs:
88
label:
99
runs-on: ubuntu-latest
1010
steps:
11-
- uses: actions/github-script@v6
11+
- uses: actions/github-script@v7
1212
with:
1313
github-token: ${{ secrets.DOCS_GH_PAT }}
1414
script: |

.github/workflows/kubernetes-installer-release-notes.yml

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -11,7 +11,7 @@ jobs:
1111
generate-release-notes-pr:
1212
runs-on: ubuntu-20.04
1313
steps:
14-
- uses: actions/checkout@v3
14+
- uses: actions/checkout@v4
1515

1616
- name: Generate Release Notes
1717
id: release-notes
@@ -38,7 +38,7 @@ jobs:
3838
rm -rf /tmp/release-notes.txt
3939
4040
- name: Create Pull Request # creates a PR if there are differences
41-
uses: peter-evans/create-pull-request@v3
41+
uses: peter-evans/create-pull-request@v7
4242
id: cpr
4343
with:
4444
token: ${{ secrets.REPLICATED_GH_PAT }}
@@ -55,7 +55,7 @@ jobs:
5555
echo "Pull Request URL - ${{ steps.cpr.outputs.pull-request-url }}"
5656
5757
- name: Slack Notification
58-
uses: slackapi/slack-github-action@v1.16.0
58+
uses: slackapi/slack-github-action@v2.0.0
5959
with:
6060
payload: |
6161
{

.github/workflows/replicated-sdk-release-notes.yml

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -14,7 +14,7 @@ jobs:
1414
generate-release-notes-pr:
1515
runs-on: ubuntu-22.04
1616
steps:
17-
- uses: actions/checkout@v3
17+
- uses: actions/checkout@v4
1818

1919
- name: Generate Release Notes
2020
id: release-notes
@@ -42,7 +42,7 @@ jobs:
4242
rm -rf /tmp/release-notes.txt
4343
4444
- name: Create Pull Request # creates a PR if there are differences
45-
uses: peter-evans/create-pull-request@v3
45+
uses: peter-evans/create-pull-request@v7
4646
id: cpr
4747
with:
4848
token: ${{ secrets.REPLICATED_GH_PAT }}
@@ -59,7 +59,7 @@ jobs:
5959
echo "Pull Request URL - ${{ steps.cpr.outputs.pull-request-url }}"
6060
6161
- name: Slack Notification
62-
uses: slackapi/slack-github-action@v1.16.0
62+
uses: slackapi/slack-github-action@v2.0.0
6363
with:
6464
payload: |
6565
{

.github/workflows/vendor-portal-release-notes.yml

Lines changed: 4 additions & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -13,7 +13,7 @@ jobs:
1313
outputs:
1414
releaseNotes: ${{ steps.release-notes.outputs.release-notes }}
1515
steps:
16-
- uses: actions/checkout@v3
16+
- uses: actions/checkout@v4
1717

1818
- name: Generate Release Notes
1919
id: release-notes
@@ -32,7 +32,7 @@ jobs:
3232
needs: generate-release-notes
3333
if: ${{ needs.generate-release-notes.outputs.releaseNotes != '' || needs.generate-release-notes.outputs.releaseNotes != null }}
3434
steps:
35-
- uses: actions/checkout@v3
35+
- uses: actions/checkout@v4
3636
- name: Update Release Notes
3737
env:
3838
PATTERN: ".+RELEASE_NOTES_PLACEHOLDER.+"
@@ -45,7 +45,7 @@ jobs:
4545
rm -rf /tmp/release-notes.txt
4646
4747
- name: Create Pull Request # creates a PR if there are differences
48-
uses: peter-evans/create-pull-request@v3
48+
uses: peter-evans/create-pull-request@v7
4949
id: cpr
5050
with:
5151
token: ${{ secrets.REPLICATED_GH_PAT }}
@@ -62,7 +62,7 @@ jobs:
6262
echo "Pull Request URL - ${{ steps.cpr.outputs.pull-request-url }}"
6363
6464
- name: Slack Notification
65-
uses: slackapi/slack-github-action@v1.16.0
65+
uses: slackapi/slack-github-action@v2.0.0
6666
with:
6767
payload: |
6868
{

docs/enterprise/embedded-manage-nodes.mdx

Lines changed: 2 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -10,6 +10,8 @@ Multi-node clusters with Embedded Cluster have the following limitations:
1010

1111
* High availability for Embedded Cluster in an Alpha feature. This feature is subject to change, including breaking changes. To get access to this feature, reach out to Alex Parker at [[email protected]](mailto:[email protected]).
1212

13+
* The same Embedded Cluster data directory used at installation is used for all nodes joined to the cluster. This is either the default `/var/lib/embedded-cluster` directory or the directory set with the [`--data-dir`](/reference/embedded-cluster-install#flags) flag. You cannot choose a different data directory for Embedded Cluster when joining nodes.
14+
1315
## Add Nodes to a Cluster (Beta) {#add-nodes}
1416

1517
You can add nodes to create a multi-node cluster in online (internet-connected) and air-gapped (limited or no outbound internet access) environments. The Admin Console provides the join command that you use to join nodes to the cluster.
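
The new limitation above concerns the `--data-dir` install flag. A minimal sketch of how the data directory is fixed at install time (the directory path is hypothetical; the flag is documented in the Embedded Cluster install reference linked above):

```bash
# Hypothetical sketch: the data directory is set once, at install time.
# Nodes joined to the cluster later reuse this same path; there is no
# per-node override when running the join command.
sudo ./APP_SLUG install --license license.yaml --data-dir /opt/embedded-cluster
```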
Lines changed: 47 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,47 @@
1+
# Updating Custom TLS Certificates in Embedded Cluster Installations
2+
3+
This topic describes how to update custom TLS certificates in Replicated Embedded Cluster installations.
4+
5+
## Update Custom TLS Certificates
6+
7+
Users can provide custom TLS certificates with Embedded Cluster installations and can update TLS certificates through the Admin Console.
8+
9+
:::important
10+
Adding the `acceptAnonymousUploads` annotation temporarily creates a vulnerability for an attacker to maliciously upload TLS certificates. After TLS certificates have been uploaded, the vulnerability is closed again.
11+
12+
Replicated recommends that you complete this upload process quickly to minimize the vulnerability risk.
13+
:::
14+
15+
To upload a new custom TLS certificate in Embedded Cluster installations:
16+
17+
1. SSH onto a controller node where Embedded Cluster is installed. Then, run the following command to start a shell so that you can access the cluster with kubectl:
18+
19+
```bash
20+
sudo ./APP_SLUG shell
21+
```
22+
Where `APP_SLUG` is the unique slug of the installed application.
23+
24+
1. In the shell, run the following command to restore the ability to upload new TLS certificates by adding the `acceptAnonymousUploads` annotation:
25+
26+
```bash
27+
kubectl -n kotsadm annotate secret kotsadm-tls acceptAnonymousUploads=1 --overwrite
28+
```
29+
30+
1. Run the following command to get the name of the kurl-proxy server:
31+
32+
```bash
33+
kubectl get pods -A | grep kurl-proxy | awk '{print $2}'
34+
```
35+
:::note
36+
This server is named `kurl-proxy`, but is used in both Embedded Cluster and kURL installations.
37+
:::
38+
39+
1. Run the following command to delete the kurl-proxy pod. The pod automatically restarts after the command runs.
40+
41+
```bash
42+
kubectl delete pods PROXY_SERVER
43+
```
44+
45+
Replace `PROXY_SERVER` with the name of the kurl-proxy server that you got in the previous step.
46+
47+
1. After the pod has restarted, go to `http://<ip>:30000/tls` in your browser and complete the process in the Admin Console to upload a new certificate.
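
As a usage sketch, the lookup and delete steps above can be combined, assuming exactly one kurl-proxy pod is running (as the steps imply):

```bash
# Minimal sketch combining the lookup and delete steps above. Assumes a
# single kurl-proxy pod; with `kubectl get pods -A`, column 1 is the
# namespace and column 2 is the pod name. The pod restarts automatically.
NS="$(kubectl get pods -A | grep kurl-proxy | awk '{print $1}')"
POD="$(kubectl get pods -A | grep kurl-proxy | awk '{print $2}')"
kubectl -n "$NS" delete pod "$POD"
```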
Lines changed: 70 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,70 @@
1+
# Accessing Dashboards Using Port Forwarding
2+
3+
This topic includes information about how to access Prometheus, Grafana, and Alertmanager in Replicated KOTS existing cluster and Replicated kURL installations.
4+
5+
For information about how to configure Prometheus monitoring in existing cluster installations, see [Configuring Prometheus Monitoring in Existing Cluster KOTS Installations](monitoring-applications).
6+
7+
## Overview
8+
9+
The Prometheus [expression browser](https://prometheus.io/docs/visualization/browser/), Grafana, and some preconfigured dashboards are included with Kube-Prometheus for advanced visualization. Prometheus Altertmanager is also included for alerting. You can access Prometheus, Grafana, and Alertmanager dashboards using `kubectl port-forward`.
10+
11+
:::note
12+
You can also expose these pods on NodePorts or behind an ingress controller. This is an advanced use case. For information about exposing the pods on NodePorts, see [NodePorts](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/node-ports.md) in the kube-prometheus GitHub repository. For information about exposing the pods behind an ingress controller, see [Expose via Ingress](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/exposing-prometheus-alertmanager-grafana-ingress.md) in the kube-prometheus GitHub repository.
13+
:::
14+
15+
## Prerequisite
16+
17+
For existing cluster KOTS installations, first install Prometheus in the cluster and configure monitoring. See [Configuring Prometheus Monitoring in Existing Cluster KOTS Installations](monitoring-applications)
18+
19+
## Access Prometheus
20+
21+
To access the Prometheus dashboard:
22+
23+
1. Run the following command to port forward the Prometheus service:
24+
25+
```bash
26+
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
27+
```
28+
29+
1. Access the dashboard at http://localhost:9090.
30+
31+
## Access Grafana
32+
33+
Users can access the Grafana dashboard by logging in using a default username and password. For information about configuring Grafana, see the [Grafana documentation](https://grafana.com/docs/). For information about constructing Prometheus queries, see [Querying Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/) in the Prometheus documentation.
34+
35+
To access the Grafana dashboard:
36+
37+
1. Run the following command to port forward the Grafana service:
38+
39+
```bash
40+
kubectl --namespace monitoring port-forward deployment/grafana 3000
41+
```
42+
1. Access the dashboard at http://localhost:3000.
43+
1. Log in to Grafana:
44+
* **Existing cluster**: Use the default Grafana username and password: `admin:admin`.
45+
* **kURL cluster**: The Grafana password is randomly generated by kURL and is displayed on the command line after kURL provisions the cluster. To log in, use this password generated by kURL and the username `admin`.
46+
47+
To retrieve the password, run the following kubectl command:
48+
49+
```
50+
kubectl get secret -n monitoring grafana-admin -o jsonpath="{.data.admin-password}" | base64 -d
51+
```
52+
53+
## Access Alertmanager
54+
55+
Alerting with Prometheus has two phases:
56+
57+
* Phase 1: Alerting rules in Prometheus servers send alerts to an Alertmanager.
58+
* Phase 2: The Alertmanager then manages those alerts, including silencing, inhibition, aggregation, and sending out notifications through methods such as email, on-call notification systems, and chat platforms.
59+
60+
For more information about configuring Alertmanager, see [Configuration](https://prometheus.io/docs/alerting/configuration/) in the Prometheus documentation.
61+
62+
To access the Alertmanager dashboard:
63+
64+
1. Run the following command to port forward the Alertmanager service:
65+
66+
```
67+
kubectl --namespace monitoring port-forward svc/prometheus-alertmanager 9093
68+
```
69+
70+
1. Access the dashboard at http://localhost:9093.
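
For a quick check that any of the port forwards above is working, here is a minimal sketch, assuming the `monitoring` namespace and service names used in this topic:

```bash
# Minimal sketch: start the Prometheus port forward in the background,
# confirm the Prometheus HTTP API responds, then clean up. Adjust the
# service and port for Grafana (3000) or Alertmanager (9093) as needed.
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090 &
PF_PID=$!
sleep 2
curl -s 'http://localhost:9090/api/v1/query?query=up'
kill "$PF_PID"
```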
Lines changed: 12 additions & 88 deletions
Original file line numberDiff line numberDiff line change
@@ -1,26 +1,22 @@
11
import OverviewProm from "../partials/monitoring/_overview-prom.mdx"
2-
import LimitationEc from "../partials/monitoring/_limitation-ec.mdx"
32

4-
# Monitoring Applications with Prometheus
3+
# Configuring Prometheus Monitoring in Existing Cluster KOTS Installations
54

6-
This topic describes monitoring applications and clusters with Prometheus. It includes information about how to configure Prometheus monitoring for existing clusters and how to access the dashboard using a port forward.
5+
This topic describes how to monitor applications and clusters with Prometheus in existing cluster installations with Replicated KOTS.
76

8-
## Overview
9-
10-
<OverviewProm/>
7+
For information about how to access Prometheus, Grafana, and Alertmanager, see [Accessing Dashboards Using Port Forwarding](/enterprise/monitoring-access-dashboards).
118

12-
## Limitation
9+
For information about consuming Prometheus metrics externally in kURL installations, see [Consuming Prometheus Metrics Externally](monitoring-external-prometheus).
1310

14-
<LimitationEc/>
11+
## Overview
1512

16-
## Configure Monitoring in Existing Clusters {#configure-existing}
13+
<OverviewProm/>
1714

18-
To configure Prometheus monitoring for applications installed in an existing cluster, connect the Admin Console to the endpoint of an installed instance of Prometheus on the cluster. See the following sections:
15+
## Configure Prometheus Monitoring
1916

20-
* [Install Prometheus](#install-prometheus)
21-
* [Connect to a Prometheus Endpoint](#connect-to-a-prometheus-endpoint)
17+
For existing cluster installations with KOTS, users can install Prometheus in the cluster and then connect the Admin Console to the Prometheus endpoint to enable monitoring.
2218

23-
### Install Prometheus
19+
### Step 1: Install Prometheus in the Cluster {#configure-existing}
2420

2521
Replicated recommends that you use CoreOS's Kube-Prometheus distribution for installing and configuring highly available Prometheus on an existing cluster. For more information, see the [kube-prometheus](https://github.com/coreos/kube-prometheus) GitHub repository.
2622

@@ -43,9 +39,9 @@ To install Prometheus using the recommended Kube-Prometheus distribution:
4339

4440
For more information about advanced Kube-Prometheus configuration options, see [Customizing Kube-Prometheus](https://github.com/coreos/kube-prometheus#customizing-kube-prometheus) in the kube-prometheus GitHub repository.
4541

46-
### Connect to a Prometheus Endpoint
42+
### Step 2: Connect to a Prometheus Endpoint
4743

48-
To view graphs on the Admin Console dashboard, you must provide the address of the Prometheus instance that you installed on the cluster.
44+
To view graphs on the Admin Console dashboard, provide the address of a Prometheus instance installed in the cluster.
4945

5046
To connect the Admin Console to a Prometheus endpoint:
5147

@@ -54,76 +50,4 @@ To connect the Admin Console to a Prometheus endpoint:
5450

5551
![Configuring Prometheus](/images/kotsadm-dashboard-configureprometheus.png)
5652

57-
Graphs appear on the dashboard shortly after saving the address.
58-
59-
## Access the Dashboards with kubectl Port Forward
60-
61-
You can use the commands below to access Prometheus, Grafana, and Alertmanager dashboards using `kubectl port-forward` after you install the manifests.
62-
63-
You can also expose these pods on NodePorts or behind an ingress controller. This is an advanced use case. For information about exposing the pods on NodePorts, see [NodePorts](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/node-ports.md) in the kube-prometheus GitHub repository. For information about exposing the pods behind an ingress controller, see [Expose via Ingress](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/customizations/exposing-prometheus-alertmanager-grafana-ingress.md) in the kube-prometheus GitHub repository.
64-
65-
For Replicated kURL clusters, you can consume Prometheus metrics from an external monitoring solution by connecting to the Prometheus NodePort service running in the cluster. For more information, see [Consuming Prometheus Metrics Externally](monitoring-external-prometheus).
66-
67-
### Access Prometheus
68-
69-
To access the Prometheus dashboard with a port forward:
70-
71-
1. Run the following command to create the port forward:
72-
73-
```bash
74-
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
75-
```
76-
77-
1. Access the dashboard at http://localhost:9090.
78-
79-
### Access Grafana
80-
81-
To access the Grafana dashboard with a port forward:
82-
83-
1. Run the following command to create the port forward:
84-
85-
```bash
86-
kubectl --namespace monitoring port-forward deployment/grafana 3000
87-
```
88-
1. Access the dashboard at http://localhost:3000.
89-
1. Log in to Grafana:
90-
* **Existing cluster**: Use the default Grafana username and password: `admin:admin`.
91-
* **kURL cluster**: The Grafana password is randomly generated by kURL and is displayed on the command line after kURL provisions the cluster. To log in, use this password generated by kURL and the username `admin`.
92-
93-
To retrieve the password, run the following kubectl command:
94-
95-
```
96-
kubectl get secret -n monitoring grafana-admin -o jsonpath="{.data.admin-password}" | base64 -d
97-
```
98-
99-
### Access Alertmanager
100-
101-
To access the Alertmanager dashboard with a port forward:
102-
103-
1. Run the following command to create the port forward:
104-
105-
```
106-
kubectl --namespace monitoring port-forward svc/prometheus-alertmanager 9093
107-
```
108-
109-
1. Access the dashboard at http://localhost:9093.
110-
111-
## About Visualizing Metrics with Grafana
112-
113-
In addition to the Prometheus Expression Browser, Grafana and some preconfigured dashboards are included with Kube-Prometheus for advanced visualization.
114-
115-
For information about configuring Grafana, see the [Grafana documentation](https://grafana.com/docs/).
116-
117-
For information about constructing Prometheus queries, see the [Querying Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/) in the Prometheus documentation.
118-
119-
For information about the Prometheus Expression Browser, see [Expression Browser](https://prometheus.io/docs/visualization/browser/) in the Prometheus documentation.
120-
121-
122-
## About Alerting with Prometheus
123-
124-
Alerting with Prometheus has two phases:
125-
126-
1. Alerting rules in Prometheus servers send alerts to an Alertmanager.
127-
1. The Alertmanager then manages those alerts, including silencing, inhibition, aggregation, and sending out notifications through methods such as email, on-call notification systems, and chat platforms.
128-
129-
For more information about configuring Alertmanager, see [Configuration](https://prometheus.io/docs/alerting/configuration/) in the Prometheus documentation.
53+
Graphs appear on the dashboard shortly after saving the address.
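
For the endpoint step in the diff above, the address entered in the Admin Console is typically the in-cluster Prometheus service. A hypothetical sketch for locating it, assuming the kube-prometheus defaults used elsewhere in this commit:

```bash
# Hypothetical sketch: with kube-prometheus defaults, the service is
# prometheus-k8s in the monitoring namespace, so the address to enter
# in the Admin Console would typically be:
#   http://prometheus-k8s.monitoring.svc.cluster.local:9090
kubectl --namespace monitoring get svc prometheus-k8s
```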
