Commit 07f6395

A bucket of typos.
Committed by @a.lakhin
1 parent 4cf0048 commit 07f6395

24 files changed (+63, -63 lines)

Cluster.pm

Lines changed: 1 addition & 1 deletion
@@ -438,7 +438,7 @@ sub is_data_identic()
 	}
 	elsif ($checksum ne $current_hash)
 	{
-		note("got different hashes: $checksum ang $current_hash");
+		note("got different hashes: $checksum and $current_hash");
 		return 0;
 	}
 }

README_old.md

Lines changed: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ While `multimaster` replicates data on the logical level, DDL is replicated on t
 The current implementation of logical replication sends data to subscriber nodes only after the local commit. In case of a heavy-write transaction, you have to wait for transaction processing twice: on the local node and on all the other nodes (simultaneously).
 
 * Isolation level.
-The `multimaster` extenstion currently supports only the _repeatable_ _read_ isolation level. This is stricter than the default _read_ _commited_ level, but also increases probability of serialization failure during commit. _Serializable_ level is not supported yet.
+The `multimaster` extension currently supports only the _repeatable_ _read_ isolation level. This is stricter than the default _read_ _commited_ level, but also increases probability of serialization failure during commit. _Serializable_ level is not supported yet.
 
 
 ## Compatibility

doc/specs/MtmGenerations.tla

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ VARIABLES
 \* local clog of node n
 acks, \* tx -> number of PREPARE acks
 clog, \* tx -> TRUE (commit) | FALSE (abort): global clog (paxos emulation)
-gens \* Sequence of all ever elected generations. Models possiblity of
+gens \* Sequence of all ever elected generations. Models possibility of
 \* learning newer gen existence at any moment.
 
 

doc/specs/commit.md

Lines changed: 1 addition & 1 deletion
@@ -357,7 +357,7 @@ More or less same algorithm is implemented in [commit.tla] and can be checked in
 
 Notes:
 
-[1] Now we do not abort in this case but just wait on lock. However if that local transaction will be successfully prepared then it will definetly catch a global deadlock with tx waiting for it. So it may be a good idea to abort one of them earlier -- probably a later one, since it didn't yet used resources of walsender/walreceivers and that will be cheaper for whole system.
+[1] Now we do not abort in this case but just wait on lock. However if that local transaction will be successfully prepared then it will definitely catch a global deadlock with tx waiting for it. So it may be a good idea to abort one of them earlier -- probably a later one, since it didn't yet used resources of walsender/walreceivers and that will be cheaper for whole system.
 
 Bibliography:

doc/specs/generations2.md

Lines changed: 1 addition & 1 deletion
@@ -691,7 +691,7 @@ generation whenever it is present in the clique and either
 present in current_gen. i.e. we should exclude someone to proceed.
 
 How to recover initially, to decrease the lag without forcing nodes to wait for
-us? The idea is to collect with heartbeats also last_online_in of neightbours.
+us? The idea is to collect with heartbeats also last_online_in of neighbours.
 And node always before initiaing voting for adding itself would either make sure
 it most probably (unless many events pass during voting period) won't need
 recovery at all (its last_online_in is the same as clique's max) or it first

multimaster--1.0.sql

Lines changed: 2 additions & 2 deletions
@@ -456,7 +456,7 @@ DECLARE
 	altered boolean := false;
 	saved_remotes text;
 BEGIN
-	-- with sparce node_id's max(node_id) can be bigger then n_nodes
+	-- with sparse node_id's max(node_id) can be bigger then n_nodes
 	select max(id) into max_nodes from mtm.nodes();
 	select current_setting('multimaster.remote_functions') into saved_remotes;
 	set multimaster.remote_functions to 'mtm.alter_sequences';
@@ -485,7 +485,7 @@ BEGIN
 	END LOOP;
 	EXECUTE 'set multimaster.remote_functions to ''' || saved_remotes || '''';
 	IF altered = false THEN
-		RAISE NOTICE 'All found sequnces have proper params.';
+		RAISE NOTICE 'All found sequences have proper params.';
 	END IF;
 	RETURN true;
 END

src/bgwpool.c

Lines changed: 1 addition & 1 deletion
@@ -527,7 +527,7 @@ BgwPoolShutdown(BgwPool *poolDesc)
 /*
  * Hard termination of workers on some WAL receiver error.
  *
- * On error WAL receiver woll begin new iteration. But workers need to be killed
+ * On error WAL receiver will begin new iteration. But workers need to be killed
  * without finish of processing.
  * The queue will kept in memory, but its state will reset.
  */

src/commit.c

Lines changed: 1 addition & 1 deletion
@@ -248,7 +248,7 @@ MtmBeginTransaction()
 	{
 		/*
 		 * Reject all user's transactions at offline cluster. Allow execution
-		 * of transaction by bg-workers to makeit possible to perform
+		 * of transaction by bg-workers to make it possible to perform
 		 * recovery.
 		 */
 		if (!MtmBreakConnection)

src/ddd.c

Lines changed: 4 additions & 4 deletions
@@ -39,7 +39,7 @@
  *
  * Situation when a transaction (say T1) in apply_worker (or receiver
  * itself) stucks on some lock created by a transaction in a local backend (say
- * T2) will definetly lead to a deadlock since T2 after beeing prepared and
+ * T2) will definitely lead to a deadlock since T2 after being prepared and
  * replicated will fail to obtain lock that is already held by T1.
  * Same reasoning may be applied to the situation when apply_worker (or
  * receiver) is waiting for an apply_worker (or receiver) belonging to other
@@ -48,12 +48,12 @@
  * Only case for distributed deadlock that is left is when apply_worker
  * (or receiver) is waiting for another apply_worker from same origin. However,
  * such situation isn't possible since one origin node can not have two
- * conflicting prepared transaction simultaneosly.
+ * conflicting prepared transaction simultaneously.
  *
  * So we may construct distributed deadlock avoiding mechanism by disallowing
  * such edges. Now we may ask inverse question: what amount of wait graphs
  * with such edges are actually do not represent distributed deadlock? That may
- * happen in cases when holding transaction is purely local since it holdind
+ * happen in cases when holding transaction is purely local since it holding
 * locks only in SHARED mode. Only lock levels that are conflicting with this
 * modes are EXCLUSIVE and ACCESS EXCLUSIVE. In all other cases proposed
 * avoiding scheme should not yield false positives.
@@ -62,7 +62,7 @@
 * may throw exception not in WaitOnLock() when we first saw forbidden edge
 * but later during first call to local deadlock detector. This way we still
 * have `deadlock_timeout` second to grab that lock and database user also can
- * increse it on per-transaction basis if there are long-living read-only
+ * increase it on per-transaction basis if there are long-living read-only
 * transactions.
 *
 * As a further optimization it is possible to check whether our lock is

src/ddl.c

Lines changed: 1 addition & 1 deletion
@@ -810,7 +810,7 @@ MtmProcessUtilityReceiver(PlannedStmt *pstmt, const char *queryString,
 			break;
 		}
 
-		/* disable functiob body check at replica */
+		/* disable function body check at replica */
 		case T_CreateFunctionStmt:
 			check_function_bodies = false;
 			break;
