
Commit 239ef37

Improve README
1 parent: 2a72bfa · commit: 239ef37

1 file changed: services/rabbit/README.md (17 additions, 17 deletions)
```diff
@@ -10,17 +10,13 @@ Perform update one node at a time. Never update all nodes at the same time (this
 
 Shutdown nodes one by one gracefully. Wait until the nodes is stopped and leaves the cluster. Then remove next node. When starting cluster, start nodes **in the reverse order**! For example, if you shutdown node01, then node02 and lastly node03, first start node03 then node02 and finally node01.
 
-If all Nodes were shutdown simultaneously, then you will see mnesia tables errors in node's logs. Restarting node solves the issue . Documentation also mentions force_boot CLI command in this case (see https://www.rabbitmq.com/docs/man/rabbitmqctl.8#force_boot)
+If all Nodes were shutdown simultaneously, then you will see mnesia tables errors in node's logs. Restarting node solves the issue. Documentation also mentions force_boot CLI command in this case (see https://www.rabbitmq.com/docs/man/rabbitmqctl.8#force_boot)
 
-#### Community discussions
-mnesia errors after all rabbit nodes (docker services) restart:
-* https://stackoverflow.com/questions/60407082/rabbit-mq-error-while-waiting-for-mnesia-tables
+## How to add / remove nodes
 
-official documentation mentionening restart scenarios
-* https://www.rabbitmq.com/docs/clustering#restarting-schema-sync
+The only supported way, is to completely shutdown the cluster (docker stack and most likely rabbit node volumes) and start brand new.
 
-all (3) cluster nodes go down simultaneosuly, cluster is broken:
-* https://groups.google.com/g/rabbitmq-users/c/owvanX2iSqA
+With manual effort this can be done on the running cluster, by adding 1 more rabbit node manually (as a separate docker stack or new service) and manually executing rabbitmqctl commands (some hints can be found here https://www.rabbitmq.com/docs/clustering#creating)
 
 ## Updating rabbitmq.conf / advanced.config (zero-downtime)
 
@@ -31,19 +27,23 @@ We do not support this automated (except starting from scratch with empty volume
 
 Source: https://www.rabbitmq.com/docs/next/configure#config-changes-effects
 
-## How to add / remove nodes
+## Enable node Maintenance mode
 
-The only supported way, is to completely shutdown the cluster (docker stack and most likely rabbit node volumes) and start brand new.
+1. Get inside container's shell (`docker exec -it <container-id> bash`)
+2. (Inside container) execute `rabbitmq-upgrade drain`
 
-With manual effort this can be done on the running cluster, by adding 1 more rabbit node manually (as a separate docker stack or new service) and manually executing rabbitmqctl commands (some hints can be found here https://www.rabbitmq.com/docs/clustering#creating)
+Source: https://www.rabbitmq.com/docs/upgrade#maintenance-mode
 
-## Autoscaling
+#### Troubleshooting
+mnesia errors after all rabbit nodes (docker services) restart:
+* https://stackoverflow.com/questions/60407082/rabbit-mq-error-while-waiting-for-mnesia-tables
 
-Not supported at the moment.
+official documentation mentioning restart scenarios
+* https://www.rabbitmq.com/docs/clustering#restarting-schema-sync
 
-## Enable node Maintenance mode
+all (3) cluster nodes go down simultaneosuly, cluster is broken:
+* https://groups.google.com/g/rabbitmq-users/c/owvanX2iSqA
 
-1. Get inside container's shell (`docker exec -it <container-id> bash`)
-2. (Inside container) execute `rabbitmq-upgrade drain`
+## Autoscaling
 
-Source: https://www.rabbitmq.com/docs/upgrade#maintenance-mode
+Not supported at the moment.
```
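The rolling-restart order described in the README diff above can be exercised roughly like this. This is a sketch only: it assumes a Docker Swarm stack named `rabbit` with one service per node (`rabbit_node01`..`rabbit_node03`); the stack and service names are illustrative, not taken from this repository.

```bash
# Stop one node at a time; wait for it to stop and leave the cluster before touching the next.
docker service scale rabbit_node01=0
docker exec -it <container-id> rabbitmqctl cluster_status   # check from a still-running node
docker service scale rabbit_node02=0
docker service scale rabbit_node03=0

# Start the nodes again in reverse order: the last one stopped comes up first.
docker service scale rabbit_node03=1
docker service scale rabbit_node02=1
docker service scale rabbit_node01=1
```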

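If every node went down at the same time and the mnesia-table errors from the troubleshooting links show up, restarting the affected node is usually enough; `force_boot` is the documented fallback. A minimal sketch, run against the node that should boot first:

```bash
# Tell this node to boot without waiting for its peers; when exactly to run it
# is described in the man page linked above.
docker exec -it <container-id> rabbitmqctl force_boot

# Once the node is up again, verify cluster membership.
docker exec -it <container-id> rabbitmqctl cluster_status
```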

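For the manual add-a-node path, the clustering guide linked in the diff boils down to a short rabbitmqctl sequence. A sketch, run inside the new node's container; `rabbit@node01` and `rabbit@node03` are illustrative node names:

```bash
# On the freshly started extra node:
rabbitmqctl stop_app
rabbitmqctl reset                        # discard local state before joining
rabbitmqctl join_cluster rabbit@node01   # any existing cluster member
rabbitmqctl start_app
rabbitmqctl cluster_status

# To remove a node that has already been stopped, run on any remaining member:
rabbitmqctl forget_cluster_node rabbit@node03
```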
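The maintenance-mode steps have a counterpart for bringing the node back: `rabbitmq-upgrade revive`, documented on the same upgrade page linked in the diff. Sketch:

```bash
docker exec -it <container-id> bash
# Inside the container:
rabbitmq-upgrade drain    # put the node into maintenance mode (transfers leaders, closes client connections)
# ...update / restart the node...
rabbitmq-upgrade revive   # take the node out of maintenance mode
```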