
Commit fd42ccf

Merge pull request #219 from github/doc-updates

Doc updates: subsecond throttling and more

Authored by Shlomi Noach
2 parents 3ee0069 + 736c8a0, commit fd42ccf

File tree

7 files changed: +73 −7 lines changed

doc/cheatsheet.md

Lines changed: 4 additions & 1 deletion
@@ -37,6 +37,7 @@ gh-ost \
 --allow-master-master \
 --cut-over=default \
 --exact-rowcount \
+--concurrent-rowcount \
 --default-retries=120 \
 --panic-flag-file=/tmp/ghost.panic.flag \
 --postpone-cut-over-flag-file=/tmp/ghost.postpone.flag \
@@ -72,6 +73,7 @@ gh-ost \
 --allow-master-master \
 --cut-over=default \
 --exact-rowcount \
+--concurrent-rowcount \
 --default-retries=120 \
 --panic-flag-file=/tmp/ghost.panic.flag \
 --postpone-cut-over-flag-file=/tmp/ghost.postpone.flag \
@@ -102,9 +104,10 @@ gh-ost \
 --initially-drop-old-table \
 --max-load=Threads_running=30 \
 --switch-to-rbr \
---chunk-size=2500 \
+--chunk-size=500 \
 --cut-over=default \
 --exact-rowcount \
+--concurrent-rowcount \
 --serve-socket-file=/tmp/gh-ost.test.sock \
 --panic-flag-file=/tmp/gh-ost.panic.flag \
 --execute

doc/command-line-flags.md

Lines changed: 18 additions & 0 deletions
@@ -32,6 +32,10 @@ user=gromit
 password=123456
 ```
 
+### concurrent-rowcount
+
+See `exact-rowcount`
+
 ### cut-over
 
 Optional. Default is `safe`. See more discussion in [cut-over](cut-over.md)
@@ -44,6 +48,7 @@ A `gh-ost` execution need to copy whatever rows you have in your existing table
 `gh-ost` also supports the `--exact-rowcount` flag. When this flag is given, two things happen:
 - An initial, authoritative `select count(*) from your_table`.
 This query may take a long time to complete, but is performed before we begin the massive operations.
+When `--concurrent-rowcount` is also specified, this runs in parallel to the row copy.
 - A continuous update to the estimate as we make progress applying events.
 We heuristically update the number of rows based on the queries we process from the binlogs.
 
@@ -63,6 +68,19 @@ We think `gh-ost` should not take chances or make assumptions about the user's tables
 
 See #initially-drop-ghost-table
 
+### max-lag-millis
+
+On a replication topology, this is perhaps the most important migration throttling factor: the maximum lag allowed for the migration to work. If lag exceeds this value, the migration throttles.
+
+When using [Connect to replica, migrate on master](cheatsheet.md), this lag is primarily tested on the very replica `gh-ost` operates on. Lag is measured by checking the heartbeat events injected by `gh-ost` itself on the utility changelog table. That is, to measure this replica's lag, `gh-ost` doesn't need to issue `show slave status` nor have any external heartbeat mechanism.
+
+When `--throttle-control-replicas` is provided, throttling also considers lag on the specified hosts. Measuring lag on these hosts works as follows:
+
+- If `--replication-lag-query` is provided, use the query and trust its result to indicate lag seconds (a fraction, i.e. float, is allowed)
+- Otherwise, issue `show slave status` and read `Seconds_behind_master` (`1sec` granularity)
+
+See also: [Sub-second replication lag throttling](subsecond-lag.md)
+
 ### migrate-on-replica
 
 Typically `gh-ost` is used to migrate tables on a master. If you wish to only perform the migration in full on a replica, connect `gh-ost` to said replica and pass `--migrate-on-replica`. `gh-ost` will briefly connect to the master but otherwise issue no changes on the master. Migration will be fully executed on the replica, while making sure to maintain a small replication lag.
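Putting the throttling flags above together, a hypothetical invocation might look like the following. This is a sketch only: the replica host names and the `mydb.heartbeat` table are placeholders, not values from this commit.

```shell
# Hypothetical fragment: replica hosts and mydb.heartbeat are placeholder names.
gh-ost \
  --max-lag-millis=1500 \
  --throttle-control-replicas="replica1.example.com:3306,replica2.example.com:3306" \
  --replication-lag-query="select unix_timestamp(now(6)) - unix_timestamp(ts) from mydb.heartbeat order by ts desc limit 1" \
  ...
```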

doc/cut-over.md

Lines changed: 4 additions & 0 deletions
@@ -15,3 +15,7 @@ This solution either:
 Also note:
 - With `--migrate-on-replica` the cut-over is executed in exactly the same way as on master.
 - With `--test-on-replica` the replication is first stopped; then the cut-over is executed just as on master, but then reverted (tables rename forth then back again).
+
+Internals of the atomic cut-over are discussed in [Issue #82](https://github.com/github/gh-ost/issues/82).
+
+At this time the command-line argument `--cut-over` is supported, and defaults to the atomic cut-over algorithm described above. Also supported is `--cut-over=two-step`, which uses the FB non-atomic algorithm. We recommend using the default cut-over that has been battle tested in our production environments.

doc/perks.md

Lines changed: 4 additions & 0 deletions
@@ -58,3 +58,7 @@ You begin a migration, and the ETA is for it to complete at 04:00am. Not a good
 Today, DBAs are coordinating the migration start time such that it completes in a convenient hour. `gh-ost` offers an alternative: postpone the final cut-over phase till you're ready.
 
 Execute `gh-ost` with `--postpone-cut-over-flag-file=/path/to/flag.file`. As long as this file exists, `gh-ost` will not take the final cut-over step. It will complete the row copy, and continue to synchronize the tables by continuously applying changes made on the original table onto the ghost table. It can do so on and on and on. When you're finally ready, remove the file and cut-over will take place.
+
+### Sub-second lag throttling
+
+With sub-second replication lag measurements, `gh-ost` is able to keep a fleet of replicas well below `1sec` lag throughout the migration. We encourage you to issue sub-second heartbeats. Read more on [sub-second replication lag throttling](subsecond-lag.md)
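The postponed cut-over flow described above can be sketched as shell steps. The flag path is the example path from the docs; the `gh-ost` run itself is elided.

```shell
# Create the postpone flag before starting the migration;
# while it exists, gh-ost completes the row copy but holds off cut-over.
flag=/tmp/ghost.postpone.flag
touch "$flag"

# ... gh-ost --postpone-cut-over-flag-file="$flag" ... runs meanwhile ...

# When you are ready for cut-over, remove the flag; gh-ost notices and proceeds.
rm "$flag"
test ! -e "$flag" && echo "flag removed: cut-over may proceed"
```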

doc/subsecond-lag.md

Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
+# Sub-second replication lag throttling
+
+`gh-ost` is able to utilize sub-second replication lag measurements.
+
+At GitHub, small replication lag is crucial, and we like to keep it below `1s` at all times. If you have a similar concern, we strongly urge you to implement sub-second lag throttling.
+
+`gh-ost` will do sub-second throttling when `--max-lag-millis` is smaller than `1000`, i.e. smaller than `1sec`.
+Replication lag is measured on:
+
+- The "inspected" server (the server `gh-ost` connects to; a replica is desired but not mandatory)
+- The `throttle-control-replicas` list
+
+For the inspected server, `gh-ost` uses an internal heartbeat mechanism. It injects heartbeat events onto the utility changelog table, then reads those events in the binary log, and compares times. This measurement is by default and by definition sub-second enabled.
+
+You can explicitly define how frequently `gh-ost` will inject heartbeat events, via `heartbeat-interval-millis`. You should set `heartbeat-interval-millis <= max-lag-millis`. It still works if not, but loses granularity and effect.
+
+On the `throttle-control-replicas`, `gh-ost` only issues SQL queries, and does not attempt to read the binary log stream. Perhaps those other replicas don't have binary logs in the first place.
+
+The standard way of getting replication lag on a replica is to issue `SHOW SLAVE STATUS` and read the `Seconds_behind_master` value. But that value has a `1sec` granularity.
+
+To be able to throttle on your production replica fleet when replication lag exceeds a sub-second threshold, you must provide a `replication-lag-query` that returns a sub-second resolution lag.
+
+As a common example, many use [pt-heartbeat](https://www.percona.com/doc/percona-toolkit/2.2/pt-heartbeat.html) to inject heartbeat events on the master. You would issue something like:
+
+    /usr/bin/pt-heartbeat -- -D your_schema --create-table --update --replace --interval=0.1 --daemonize --pid ...
+
+Note `--interval=0.1` to indicate `10` heartbeats per second.
+
+You would then provide:
+
+    gh-ost ... --replication-lag-query="select unix_timestamp(now(6)) - unix_timestamp(ts) as ghost_lag_check from your_schema.heartbeat order by ts desc limit 1"
+
+Our production migrations use sub-second lag throttling and are able to keep our entire fleet of replicas well below `1sec` lag.
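The arithmetic behind the lag query above, subtracting a heartbeat timestamp from a fractional-seconds "now", can be illustrated in plain shell. This mimics `unix_timestamp(now(6)) - unix_timestamp(ts)`; `date +%s.%N` assumes GNU date.

```shell
# Simulate a heartbeat timestamp, then measure fractional lag against "now",
# mirroring unix_timestamp(now(6)) - unix_timestamp(ts) from the query above.
ts_heartbeat=$(date +%s.%N)   # heartbeat written at this instant
sleep 0.2                     # pretend ~200ms of replication delay
ts_now=$(date +%s.%N)
lag=$(awk -v now="$ts_now" -v hb="$ts_heartbeat" 'BEGIN { printf "%.3f", now - hb }')
echo "lag_seconds=$lag"       # sub-second resolution, unlike Seconds_behind_master
```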

doc/testing-on-replica.md

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ $ gh-osc --host=myhost.com --conf=/etc/gh-ost.cnf --database=test --table=sample
 
 Elaborate:
 ```shell
-$ gh-osc --host=myhost.com --conf=/etc/gh-ost.cnf --database=test --table=sample_table --alter="engine=innodb" --chunk-size=2000 --max-load=Threads_connected=20 --switch-to-rbr --initially-drop-ghost-table --initially-drop-old-table --test-on-replica --postpone-cut-over-flag-file=/tmp/ghost-postpone.flag --exact-rowcount --allow-nullable-unique-key --verbose --execute
+$ gh-osc --host=myhost.com --conf=/etc/gh-ost.cnf --database=test --table=sample_table --alter="engine=innodb" --chunk-size=2000 --max-load=Threads_connected=20 --switch-to-rbr --initially-drop-ghost-table --initially-drop-old-table --test-on-replica --postpone-cut-over-flag-file=/tmp/ghost-postpone.flag --exact-rowcount --concurrent-rowcount --allow-nullable-unique-key --verbose --execute
 ```
 - Count exact number of rows (makes ETA estimation very good). This goes at the expense of paying the time for issuing a `SELECT COUNT(*)` on your table. We use this lovingly.
 - Automatically switch to `RBR` if replica is configured as `SBR`. See also: [migrating with SBR](migrating-with-sbr.md)

doc/throttle.md

Lines changed: 9 additions & 5 deletions
@@ -28,11 +28,15 @@ Otherwise you may specify your own list of replica servers you wish it to observe
 
 - `--max-lag-millis`: maximum allowed lag; any controlled replica lagging more than this value will cause throttling to kick in. When all control replicas have smaller lag than indicated, operation resumes.
 
-- `--replication-lag-query`: `gh-ost` will, by default, issue a `show slave status` query to find replication lag. However, this is a notoriously flaky value. If you're using your own `heartbeat` mechanism, e.g. via [`pt-heartbeat`](https://www.percona.com/doc/percona-toolkit/2.2/pt-heartbeat.html), you may provide your own custom query to return a single `int` value indicating replication lag.
+- `--replication-lag-query`: `gh-ost` will, by default, issue a `show slave status` query to find replication lag. However, this is a notoriously flaky value. If you're using your own `heartbeat` mechanism, e.g. via [`pt-heartbeat`](https://www.percona.com/doc/percona-toolkit/2.2/pt-heartbeat.html), you may provide your own custom query to return a single decimal (floating point) value indicating replication lag.
 
-Example: `--replication-lag-query="SELECT ROUND(NOW() - MAX(UNIX_TIMESTAMP(ts))) AS lag FROM mydb.heartbeat"`
+Example: `--replication-lag-query="SELECT UNIX_TIMESTAMP() - MAX(UNIX_TIMESTAMP(ts)) AS lag FROM mydb.heartbeat"`
 
-Note that you may dynamically change the `throttle-control-replicas` list via [interactive commands](interactive-commands.md)
+We encourage you to use [sub-second replication lag throttling](subsecond-lag.md). Your query may then look like:
+
+`--replication-lag-query="SELECT UNIX_TIMESTAMP(NOW(6)) - MAX(UNIX_TIMESTAMP(ts)) AS lag FROM mydb.heartbeat"`
+
+Note that you may dynamically change both `replication-lag-query` and the `throttle-control-replicas` list via [interactive commands](interactive-commands.md)
 
 #### Status thresholds
 
@@ -76,9 +80,9 @@ In addition to the above, you are able to take control and throttle the operation
 
 Any single factor in the above that suggests the migration should throttle - causes throttling. That is, once some component decides to throttle, you cannot override it; you cannot force continued execution of the migration.
 
-`gh-ost` will first check the low hanging fruits: user commanded; throttling files. It will then proceed to check replication lag, then status thesholds, and lastly it will check the throttle-query.
+`gh-ost` collects different throttle-related metrics at different times, independently. It asynchronously reads the collected metrics and checks whether they satisfy conditions/thresholds.
 
-The first check to suggest throttling stops the search; the status message will note the reason for throttling as the first satisfied check.
+The first check to suggest throttling stops the check; the status message will note the reason for throttling as the first satisfied check.
 
 ### Throttle status
 