
Commit 416a0f7

Use names with "unusual" symbols in behave tests (patroni#3162)
It'll hopefully prevent problems like patroni#3142 in future.
1 parent 94a592d commit 416a0f7

24 files changed: +503 −492 lines
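The point of the rename is that `postgres-0` is not a valid unquoted PostgreSQL identifier, so any code path that splices a member name into SQL (e.g. into `synchronous_standby_names`) without quoting will now fail in the test suite instead of in production. As a minimal sketch of the kind of quoting such names force — `quote_ident` here is an illustrative helper, not Patroni's actual implementation, and it ignores the reserved-keyword case:

```python
import re

def quote_ident(name: str) -> str:
    """Double-quote a name unless it is already a valid unquoted
    PostgreSQL identifier (lowercase letters/digits/underscores,
    not starting with a digit). Embedded quotes are doubled.
    Simplified: reserved keywords would also need quoting."""
    if re.fullmatch(r'[a-z_][a-z0-9_]*', name):
        return name
    return '"' + name.replace('"', '""') + '"'

print(quote_ident("postgres0"))   # postgres0  (safe unquoted)
print(quote_ident("postgres-0"))  # "postgres-0"  (hyphen forces quoting)
```

A name like `postgres-0` that reaches `synchronous_standby_names` unquoted produces a syntax error, which is exactly the class of bug the hyphenated test names are meant to surface early.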

features/basic_replication.feature

Lines changed: 40 additions & 40 deletions
@@ -2,84 +2,84 @@ Feature: basic replication
   We should check that the basic bootstrapping, replication and failover works.
 
   Scenario: check replication of a single table
-    Given I start postgres0
-    Then postgres0 is a leader after 10 seconds
+    Given I start postgres-0
+    Then postgres-0 is a leader after 10 seconds
     And there is a non empty initialize key in DCS after 15 seconds
     When I issue a PATCH request to http://127.0.0.1:8008/config with {"ttl": 20, "synchronous_mode": true}
     Then I receive a response code 200
-    When I start postgres1
-    And I configure and start postgres2 with a tag replicatefrom postgres0
-    And "sync" key in DCS has leader=postgres0 after 20 seconds
-    And I add the table foo to postgres0
-    Then table foo is present on postgres1 after 20 seconds
-    Then table foo is present on postgres2 after 20 seconds
+    When I start postgres-1
+    And I configure and start postgres-2 with a tag replicatefrom postgres-0
+    And "sync" key in DCS has leader=postgres-0 after 20 seconds
+    And I add the table foo to postgres-0
+    Then table foo is present on postgres-1 after 20 seconds
+    Then table foo is present on postgres-2 after 20 seconds
 
   Scenario: check restart of sync replica
-    Given I shut down postgres2
-    Then "sync" key in DCS has sync_standby=postgres1 after 5 seconds
-    When I start postgres2
-    And I shut down postgres1
-    Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds
-    When I start postgres1
-    Then "members/postgres1" key in DCS has state=running after 10 seconds
+    Given I shut down postgres-2
+    Then "sync" key in DCS has sync_standby=postgres-1 after 5 seconds
+    When I start postgres-2
+    And I shut down postgres-1
+    Then "sync" key in DCS has sync_standby=postgres-2 after 10 seconds
+    When I start postgres-1
+    Then "members/postgres-1" key in DCS has state=running after 10 seconds
     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds
     And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds
 
   Scenario: check stuck sync replica
     Given I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": true, "maximum_lag_on_syncnode": 15000000, "postgresql": {"parameters": {"synchronous_commit": "remote_apply"}}}
     Then I receive a response code 200
-    And I create table on postgres0
-    And table mytest is present on postgres1 after 2 seconds
-    And table mytest is present on postgres2 after 2 seconds
-    When I pause wal replay on postgres2
-    And I load data on postgres0
-    Then "sync" key in DCS has sync_standby=postgres1 after 15 seconds
-    And I resume wal replay on postgres2
+    And I create table on postgres-0
+    And table mytest is present on postgres-1 after 2 seconds
+    And table mytest is present on postgres-2 after 2 seconds
+    When I pause wal replay on postgres-2
+    And I load data on postgres-0
+    Then "sync" key in DCS has sync_standby=postgres-1 after 15 seconds
+    And I resume wal replay on postgres-2
     And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds
     And Status code on GET http://127.0.0.1:8010/async is 200 after 3 seconds
     When I issue a PATCH request to http://127.0.0.1:8008/config with {"pause": null, "maximum_lag_on_syncnode": -1, "postgresql": {"parameters": {"synchronous_commit": "on"}}}
     Then I receive a response code 200
-    And I drop table on postgres0
+    And I drop table on postgres-0
 
   Scenario: check multi sync replication
     Given I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 2}
     Then I receive a response code 200
-    Then "sync" key in DCS has sync_standby=postgres1,postgres2 after 10 seconds
+    Then "sync" key in DCS has sync_standby=postgres-1,postgres-2 after 10 seconds
     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds
     And Status code on GET http://127.0.0.1:8009/sync is 200 after 3 seconds
     When I issue a PATCH request to http://127.0.0.1:8008/config with {"synchronous_node_count": 1}
     Then I receive a response code 200
-    And I shut down postgres1
-    Then "sync" key in DCS has sync_standby=postgres2 after 10 seconds
-    When I start postgres1
-    Then "members/postgres1" key in DCS has state=running after 10 seconds
+    And I shut down postgres-1
+    Then "sync" key in DCS has sync_standby=postgres-2 after 10 seconds
+    When I start postgres-1
+    Then "members/postgres-1" key in DCS has state=running after 10 seconds
     And Status code on GET http://127.0.0.1:8010/sync is 200 after 3 seconds
     And Status code on GET http://127.0.0.1:8009/async is 200 after 3 seconds
 
   Scenario: check the basic failover in synchronous mode
     Given I run patronictl.py pause batman
     Then I receive a response returncode 0
     When I sleep for 2 seconds
-    And I shut down postgres0
+    And I shut down postgres-0
     And I run patronictl.py resume batman
     Then I receive a response returncode 0
-    And postgres2 role is the primary after 24 seconds
+    And postgres-2 role is the primary after 24 seconds
     And Response on GET http://127.0.0.1:8010/history contains recovery after 10 seconds
-    And there is a postgres2_cb.log with "on_role_change primary batman" in postgres2 data directory
+    And there is a postgres-2_cb.log with "on_role_change primary batman" in postgres-2 data directory
     When I issue a PATCH request to http://127.0.0.1:8010/config with {"synchronous_mode": null, "master_start_timeout": 0}
     Then I receive a response code 200
-    When I add the table bar to postgres2
-    Then table bar is present on postgres1 after 20 seconds
+    When I add the table bar to postgres-2
+    Then table bar is present on postgres-1 after 20 seconds
     And Response on GET http://127.0.0.1:8010/config contains master_start_timeout after 10 seconds
 
   Scenario: check rejoin of the former primary with pg_rewind
-    Given I add the table splitbrain to postgres0
-    And I start postgres0
-    Then postgres0 role is the secondary after 20 seconds
-    When I add the table buz to postgres2
-    Then table buz is present on postgres0 after 20 seconds
+    Given I add the table splitbrain to postgres-0
+    And I start postgres-0
+    Then postgres-0 role is the secondary after 20 seconds
+    When I add the table buz to postgres-2
+    Then table buz is present on postgres-0 after 20 seconds
 
   @reject-duplicate-name
   Scenario: check graceful rejection when two nodes have the same name
-    Given I start duplicate postgres0 on port 8011
-    Then there is one of ["Can't start; there is already a node named 'postgres0' running"] CRITICAL in the dup-postgres0 patroni log after 5 seconds
+    Given I start duplicate postgres-0 on port 8011
+    Then there is one of ["Can't start; there is already a node named 'postgres-0' running"] CRITICAL in the dup-postgres-0 patroni log after 5 seconds

features/cascading_replication.feature

Lines changed: 10 additions & 10 deletions
@@ -2,13 +2,13 @@ Feature: cascading replication
   We should check that patroni can do base backup and streaming from the replica
 
   Scenario: check a base backup and streaming replication from a replica
-    Given I start postgres0
-    And postgres0 is a leader after 10 seconds
-    And I configure and start postgres1 with a tag clonefrom true
-    And replication works from postgres0 to postgres1 after 20 seconds
-    And I create label with "postgres0" in postgres0 data directory
-    And I create label with "postgres1" in postgres1 data directory
-    And "members/postgres1" key in DCS has state=running after 12 seconds
-    And I configure and start postgres2 with a tag replicatefrom postgres1
-    Then replication works from postgres0 to postgres2 after 30 seconds
-    And there is a label with "postgres1" in postgres2 data directory
+    Given I start postgres-0
+    And postgres-0 is a leader after 10 seconds
+    And I configure and start postgres-1 with a tag clonefrom true
+    And replication works from postgres-0 to postgres-1 after 20 seconds
+    And I create label with "postgres-0" in postgres-0 data directory
+    And I create label with "postgres-1" in postgres-1 data directory
+    And "members/postgres-1" key in DCS has state=running after 12 seconds
+    And I configure and start postgres-2 with a tag replicatefrom postgres-1
+    Then replication works from postgres-0 to postgres-2 after 30 seconds
+    And there is a label with "postgres-1" in postgres-2 data directory

features/citus.feature

Lines changed: 54 additions & 54 deletions
@@ -2,79 +2,79 @@ Feature: citus
   We should check that coordinator discovers and registers workers and clients don't have errors when worker cluster switches over
 
   Scenario: check that worker cluster is registered in the coordinator
-    Given I start postgres0 in citus group 0
-    And I start postgres2 in citus group 1
-    Then postgres0 is a leader in a group 0 after 10 seconds
-    And postgres2 is a leader in a group 1 after 10 seconds
-    When I start postgres1 in citus group 0
-    And I start postgres3 in citus group 1
-    Then replication works from postgres0 to postgres1 after 15 seconds
-    Then replication works from postgres2 to postgres3 after 15 seconds
-    And postgres0 is registered in the postgres0 as the primary in group 0 after 5 seconds
-    And postgres1 is registered in the postgres0 as the secondary in group 0 after 5 seconds
-    And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds
-    And postgres3 is registered in the postgres0 as the secondary in group 1 after 5 seconds
+    Given I start postgres-0 in citus group 0
+    And I start postgres-2 in citus group 1
+    Then postgres-0 is a leader in a group 0 after 10 seconds
+    And postgres-2 is a leader in a group 1 after 10 seconds
+    When I start postgres-1 in citus group 0
+    And I start postgres-3 in citus group 1
+    Then replication works from postgres-0 to postgres-1 after 15 seconds
+    Then replication works from postgres-2 to postgres-3 after 15 seconds
+    And postgres-0 is registered in the postgres-0 as the primary in group 0 after 5 seconds
+    And postgres-1 is registered in the postgres-0 as the secondary in group 0 after 5 seconds
+    And postgres-2 is registered in the postgres-0 as the primary in group 1 after 5 seconds
+    And postgres-3 is registered in the postgres-0 as the secondary in group 1 after 5 seconds
 
   Scenario: coordinator failover updates pg_dist_node
-    Given I run patronictl.py failover batman --group 0 --candidate postgres1 --force
-    Then postgres1 role is the primary after 10 seconds
-    And "members/postgres0" key in a group 0 in DCS has state=running after 15 seconds
-    And replication works from postgres1 to postgres0 after 15 seconds
-    And postgres1 is registered in the postgres2 as the primary in group 0 after 5 seconds
-    And postgres0 is registered in the postgres2 as the secondary in group 0 after 15 seconds
-    And "sync" key in a group 0 in DCS has sync_standby=postgres0 after 15 seconds
-    When I run patronictl.py switchover batman --group 0 --candidate postgres0 --force
-    Then postgres0 role is the primary after 10 seconds
-    And replication works from postgres0 to postgres1 after 15 seconds
-    And postgres0 is registered in the postgres2 as the primary in group 0 after 5 seconds
-    And postgres1 is registered in the postgres2 as the secondary in group 0 after 15 seconds
-    And "sync" key in a group 0 in DCS has sync_standby=postgres1 after 15 seconds
+    Given I run patronictl.py failover batman --group 0 --candidate postgres-1 --force
+    Then postgres-1 role is the primary after 10 seconds
+    And "members/postgres-0" key in a group 0 in DCS has state=running after 15 seconds
+    And replication works from postgres-1 to postgres-0 after 15 seconds
+    And postgres-1 is registered in the postgres-2 as the primary in group 0 after 5 seconds
+    And postgres-0 is registered in the postgres-2 as the secondary in group 0 after 15 seconds
+    And "sync" key in a group 0 in DCS has sync_standby=postgres-0 after 15 seconds
+    When I run patronictl.py switchover batman --group 0 --candidate postgres-0 --force
+    Then postgres-0 role is the primary after 10 seconds
+    And replication works from postgres-0 to postgres-1 after 15 seconds
+    And postgres-0 is registered in the postgres-2 as the primary in group 0 after 5 seconds
+    And postgres-1 is registered in the postgres-2 as the secondary in group 0 after 15 seconds
+    And "sync" key in a group 0 in DCS has sync_standby=postgres-1 after 15 seconds
 
   Scenario: worker switchover doesn't break client queries on the coordinator
-    Given I create a distributed table on postgres0
-    And I start a thread inserting data on postgres0
+    Given I create a distributed table on postgres-0
+    And I start a thread inserting data on postgres-0
     When I run patronictl.py switchover batman --group 1 --force
     Then I receive a response returncode 0
-    And postgres3 role is the primary after 10 seconds
-    And "members/postgres2" key in a group 1 in DCS has state=running after 15 seconds
-    And replication works from postgres3 to postgres2 after 15 seconds
-    And postgres3 is registered in the postgres0 as the primary in group 1 after 5 seconds
-    And postgres2 is registered in the postgres0 as the secondary in group 1 after 15 seconds
-    And "sync" key in a group 1 in DCS has sync_standby=postgres2 after 15 seconds
+    And postgres-3 role is the primary after 10 seconds
+    And "members/postgres-2" key in a group 1 in DCS has state=running after 15 seconds
+    And replication works from postgres-3 to postgres-2 after 15 seconds
+    And postgres-3 is registered in the postgres-0 as the primary in group 1 after 5 seconds
+    And postgres-2 is registered in the postgres-0 as the secondary in group 1 after 15 seconds
+    And "sync" key in a group 1 in DCS has sync_standby=postgres-2 after 15 seconds
     And a thread is still alive
     When I run patronictl.py switchover batman --group 1 --force
     Then I receive a response returncode 0
-    And postgres2 role is the primary after 10 seconds
-    And replication works from postgres2 to postgres3 after 15 seconds
-    And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds
-    And postgres3 is registered in the postgres0 as the secondary in group 1 after 15 seconds
-    And "sync" key in a group 1 in DCS has sync_standby=postgres3 after 15 seconds
+    And postgres-2 role is the primary after 10 seconds
+    And replication works from postgres-2 to postgres-3 after 15 seconds
+    And postgres-2 is registered in the postgres-0 as the primary in group 1 after 5 seconds
+    And postgres-3 is registered in the postgres-0 as the secondary in group 1 after 15 seconds
+    And "sync" key in a group 1 in DCS has sync_standby=postgres-3 after 15 seconds
     And a thread is still alive
     When I stop a thread
-    Then a distributed table on postgres0 has expected rows
+    Then a distributed table on postgres-0 has expected rows
 
   Scenario: worker primary restart doesn't break client queries on the coordinator
-    Given I cleanup a distributed table on postgres0
-    And I start a thread inserting data on postgres0
-    When I run patronictl.py restart batman postgres2 --group 1 --force
+    Given I cleanup a distributed table on postgres-0
+    And I start a thread inserting data on postgres-0
+    When I run patronictl.py restart batman postgres-2 --group 1 --force
     Then I receive a response returncode 0
-    And postgres2 role is the primary after 10 seconds
-    And replication works from postgres2 to postgres3 after 15 seconds
-    And postgres2 is registered in the postgres0 as the primary in group 1 after 5 seconds
-    And postgres3 is registered in the postgres0 as the secondary in group 1 after 15 seconds
+    And postgres-2 role is the primary after 10 seconds
+    And replication works from postgres-2 to postgres-3 after 15 seconds
+    And postgres-2 is registered in the postgres-0 as the primary in group 1 after 5 seconds
+    And postgres-3 is registered in the postgres-0 as the secondary in group 1 after 15 seconds
     And a thread is still alive
     When I stop a thread
-    Then a distributed table on postgres0 has expected rows
+    Then a distributed table on postgres-0 has expected rows
 
   Scenario: check that in-flight transaction is rolled back after timeout when other workers need to change pg_dist_node
-    Given I start postgres4 in citus group 2
-    Then postgres4 is a leader in a group 2 after 10 seconds
-    And "members/postgres4" key in a group 2 in DCS has role=primary after 3 seconds
+    Given I start postgres-4 in citus group 2
+    Then postgres-4 is a leader in a group 2 after 10 seconds
+    And "members/postgres-4" key in a group 2 in DCS has role=primary after 3 seconds
     When I run patronictl.py edit-config batman --group 2 -s ttl=20 --force
     Then I receive a response returncode 0
     And I receive a response output "+ttl: 20"
-    Then postgres4 is registered in the postgres2 as the primary in group 2 after 5 seconds
-    When I shut down postgres4
-    Then there is a transaction in progress on postgres0 changing pg_dist_node after 5 seconds
-    When I run patronictl.py restart batman postgres2 --group 1 --force
+    Then postgres-4 is registered in the postgres-2 as the primary in group 2 after 5 seconds
+    When I shut down postgres-4
+    Then there is a transaction in progress on postgres-0 changing pg_dist_node after 5 seconds
+    When I run patronictl.py restart batman postgres-2 --group 1 --force
     Then a transaction finishes in 20 seconds

features/custom_bootstrap.feature

Lines changed: 11 additions & 11 deletions
@@ -2,16 +2,16 @@ Feature: custom bootstrap
   We should check that patroni can bootstrap a new cluster from a backup
 
   Scenario: clone existing cluster using pg_basebackup
-    Given I start postgres0
-    Then postgres0 is a leader after 10 seconds
-    When I add the table foo to postgres0
-    And I start postgres1 in a cluster batman1 as a clone of postgres0
-    Then postgres1 is a leader of batman1 after 10 seconds
-    Then table foo is present on postgres1 after 10 seconds
+    Given I start postgres-0
+    Then postgres-0 is a leader after 10 seconds
+    When I add the table foo to postgres-0
+    And I start postgres-1 in a cluster batman1 as a clone of postgres-0
+    Then postgres-1 is a leader of batman1 after 10 seconds
+    Then table foo is present on postgres-1 after 10 seconds
 
   Scenario: make a backup and do a restore into a new cluster
-    Given I add the table bar to postgres1
-    And I do a backup of postgres1
-    When I start postgres2 in a cluster batman2 from backup
-    Then postgres2 is a leader of batman2 after 30 seconds
-    And table bar is present on postgres2 after 10 seconds
+    Given I add the table bar to postgres-1
+    And I do a backup of postgres-1
+    When I start postgres-2 in a cluster batman2 from backup
+    Then postgres-2 is a leader of batman2 after 30 seconds
+    And table bar is present on postgres-2 after 10 seconds
