# Replicator Kuma 🏳️🌈
Replicator Kuma extends the [uptime-kuma](https://github.com/louislam/uptime-kuma) project by adding support for replicating monitors, status pages, incidents and other entities between uptime-kuma instances.
Replicator Kuma replicates data by dumping the SQLite database, using [Restic](https://github.com/restic/restic) to push this dump to a storage backend like S3, and then cloning it to replica instances.
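Under the hood this flow is plain restic. A minimal sketch of the two sides (the dump path is an illustrative assumption, not the project's actual script; `RESTIC_REPOSITORY` and `RESTIC_PASSWORD` are assumed to be exported already):

```shell
# Leader: push the fresh SQLite dump to the restic repository.
restic backup /tmp/replication-dump.sql

# Replica: pull the latest snapshot into a scratch directory,
# from where it is loaded back into the replica's database.
restic restore latest --target /tmp/restored
```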
Replicator Kuma only creates restic snapshots of data when it changes (the data for heartbeats, TLS, settings, etc., i.e. monitoring- and instance-specific data, is not replicated).
## Getting started & what you need to know
Look at the docker-compose files, modify the parameters to match your restic replication settings and hostname, and start the containers.
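As a rough illustration (the service name, image tag, bucket, and credential values are assumptions; check the repository's compose files for the real parameters), a leader service might look like:

```yaml
services:
  replicator-kuma:
    image: realsidsun/replicator-kuma:1.23.16-1
    hostname: kuma-leader            # prepended to notifications since 1.23.16-1
    environment:
      # Standard restic settings (S3 shown; restic supports many backends)
      RESTIC_REPOSITORY: "s3:s3.amazonaws.com/my-bucket/replicator-kuma"
      RESTIC_PASSWORD: "change-me"
      AWS_ACCESS_KEY_ID: "..."
      AWS_SECRET_ACCESS_KEY: "..."
    volumes:
      - ./data:/app/data
    ports:
      - "3001:3001"
```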
Running the replica and leader in completely separate environments** and adding them to monitor one another is recommended.
Do this by pointing the leader to monitor itself and all the replicas, then wait until the monitors are relayed across to the replicas.
You may add multiple replicas but not multiple leaders. Any changes made on the replicas will be overwritten when:
1. The replica restarts
2. The leader publishes a new snapshot (a change is made on the leader)
###### **separate environments: Ideally, separate geographies and cloud providers (hybrid, off-cloud works too) to provide maximum insulation from an outage taking out all your kumas at once.
## How it works
Replicator Kuma adds a script to the original uptime-kuma images to provide replication support.
Starting with 1.23.16-1, it also modifies monitor.js to prepend the container hostname to notification messages, making it easier to discern which replica a notification is firing from and to diagnose partial failures.
The new images are published to Docker Hub under realsidsun/replicator-kuma and follow the same semver numbering as uptime-kuma (starting from 1.23.13).
Alongside the uptime-kuma service, the replicator kuma container periodically runs a few functions to either:
1. Take a live dump of some tables (ex: the monitor table) from the SQLite DB, compare the current dump's SHA to the last backup from restic and, if it differs, take a restic backup of the new dump
```
...
maintenance_status_page
incident
```
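The change-detection step can be sketched as follows (the paths and the function name are placeholders for illustration; the real script takes the backup via restic rather than just reporting it):

```shell
#!/bin/sh
# Sketch: only trigger a backup when the dump's SHA256 differs from the
# SHA recorded for the previously backed-up dump.
backup_if_changed() {
    dump="$1"        # path to the fresh SQLite dump
    sha_file="$2"    # where the last backed-up SHA is remembered
    new_sha=$(sha256sum "$dump" | cut -d' ' -f1)
    old_sha=$(cat "$sha_file" 2>/dev/null || true)
    if [ "$new_sha" != "$old_sha" ]; then
        # replicator-kuma would run: restic backup "$dump"
        printf '%s' "$new_sha" > "$sha_file"
        echo "backup taken"
    else
        echo "no change"
    fi
}
```

An unchanged dump therefore produces no new restic snapshot, which keeps the repository small and the snapshot history meaningful.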
## Is Replicator Kuma Production Ready?
Production is a state of mind. With that said, I have been running Replicator Kuma for 8+ months to monitor my services and have not run into a problem.
You can skip S3 altogether and use another protocol for repositories as well; restic supports basically everything.
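For instance, the repository can point at any backend restic understands. A few illustrative `RESTIC_REPOSITORY` values (the bucket names, hosts, and paths here are made up):

```shell
# S3-compatible (AWS, Backblaze B2 via its S3 API, iDrive E2, MinIO, ...)
export RESTIC_REPOSITORY="s3:s3.amazonaws.com/my-bucket/replicator-kuma"
# Over SFTP:
# export RESTIC_REPOSITORY="sftp:user@backup-host:/srv/restic-repo"
# A restic REST server:
# export RESTIC_REPOSITORY="rest:https://backup.example.com/"
# A plain local path (handy for testing):
# export RESTIC_REPOSITORY="/var/lib/restic-repo"
```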
## Update Notes
Since replicator-kuma follows the same version as uptime-kuma, changes made mid-cycle get pushed to the same image; there is no plan to change this as I expect these to be few and far between.
However, there have been a few changes which (while they won't break your setup) you should note:
1. The backup and restore cadence was changed on 18 August 2024: backups happen every 5 minutes and restores every 6 minutes.
2. If you've used replicator-kuma prior to 22 September 2024, your restic version is very outdated and likely created a v1 format repo; the new image comes with a newer restic version. v1 repos still work with the new binary, but you should migrate to v2 by running `restic migrate upgrade_repo_v2`
3. As of release `1.23.15`, replicator-kuma supports notifying via ntfy.sh when backups are created and restores carried out.
4. As of `1.23.16-1`, we modify monitor.js to prepend the container hostname to notifications.
## Contributions
If you find any quirks in Replicator Kuma or want to enhance it, feel free to raise an issue, a PR, or whatever else is your favourite GitHub feature of the week.