
Commit c36c162

qa: cut squid nightlies to one-per-week
Now that squid is released, we should go back to the typical release branch cadence (the teuthology queue is also growing). Also, update the priority (ahead of reef).

Signed-off-by: Patrick Donnelly <[email protected]>
1 parent: 5de994d

1 file changed: +9 −19 lines changed

qa/crontab/teuthology-cronjobs

Lines changed: 9 additions & 19 deletions
@@ -62,9 +62,6 @@ TEUTHOLOGY_SUITE_ARGS="--non-interactive --newest=100 --ceph-repo=https://git.ce
 
 
 ## main branch runs - weekly
-## suites rados and rbd use --subset arg and must be call with schedule_subset.sh
-## see script in https://github.com/ceph/ceph/tree/main/qa/machine_types
-
 # rados is massive and difficult to bring down to less than 300 jobs, use one higher priority
 00 20 * * 0 $CW $SS 100000 --ceph main --suite rados -p 101 --force-priority
 08 20 * * 1 $CW $SS 64 --ceph main --suite orch -p 950
@@ -76,25 +73,18 @@ TEUTHOLOGY_SUITE_ARGS="--non-interactive --newest=100 --ceph-repo=https://git.ce
 56 20 * * 6 $CW $SS 1 --ceph main --suite crimson-rados -p 101 --force-priority --flavor crimson
 
 
-## squid branch runs - twice weekly (crimson-rados is run weekly)
-## suites rados and rbd use --subset arg and must be call with schedule_subset.sh
-## see script in https://github.com/ceph/ceph/tree/main/qa/machine_types
-
+## squid branch runs - weekly
 # rados is massive and difficult to bring down to less than 300 jobs, use one higher priority
-# -p 94-
-00 21 * * 0 $CW $SS 100000 --ceph squid --suite rados -p 700 --force-priority
-08 21 * * 1,5 $CW $SS 64 --ceph squid --suite orch -p 100 --force-priority
-16 21 * * 2,6 $CW $SS 128 --ceph squid --suite rbd -p 100 --force-priority
-24 21 * * 3,0 $CW $SS 512 --ceph squid --suite fs -p 100 --force-priority
-32 21 * * 4,1 $CW $SS 4 --ceph squid --suite powercycle -p 100 --force-priority
-40 21 * * 5,2 $CW $SS 1 --ceph squid --suite rgw -p 100 --force-priority
-48 21 * * 6,3 $CW $SS 4 --ceph squid --suite krbd -p 100 --force-priority --kernel testing
-56 21 * * 6 $CW $SS 1 --ceph squid --suite crimson-rados -p 100 --force-priority --flavor crimson
+00 21 * * 0 $CW $SS 100000 --ceph squid --suite rados -p 921
+08 21 * * 1 $CW $SS 64 --ceph squid --suite orch -p 920
+16 21 * * 2 $CW $SS 128 --ceph squid --suite rbd -p 920
+24 21 * * 3 $CW $SS 512 --ceph squid --suite fs -p 920
+32 21 * * 4 $CW $SS 4 --ceph squid --suite powercycle -p 920
+40 21 * * 5 $CW $SS 1 --ceph squid --suite rgw -p 920
+48 21 * * 6 $CW $SS 4 --ceph squid --suite krbd -p 920 --kernel testing
+56 21 * * 6 $CW $SS 1 --ceph squid --suite crimson-rados -p 920 --flavor crimson
 
 ## reef branch runs - weekly
-## suites rados and rbd use --subset arg and must be call with schedule_subset.sh
-## see script in https://github.com/ceph/ceph/tree/main/qa/machine_types
-
 # rados is massive and difficult to bring down to less than 300 jobs, use one higher priority
 00 22 * * 0 $CW $SS 100000 --ceph reef --suite rados -p 931
 08 22 * * 1 $CW $SS 64 --ceph reef --suite orch -p 930
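
As background for the schedule lines above, a minimal annotated sketch follows (illustrative only, not part of the commit; it reuses the $CW/$SS wrappers and cron entries already defined in this crontab). The fifth cron field is the day of week, so collapsing a value such as "1,5" to "1" is what drops a suite from twice weekly to once weekly, and the -p value passed to teuthology-suite is a queue priority where, as the commit message and the 92x-vs-93x values suggest, lower numbers are scheduled sooner, placing squid ahead of reef.

# Sketch of the cadence and priority change (not part of the commit):
# cron fields: minute hour day-of-month month day-of-week (0 = Sunday)
#
# before: day-of-week "1,5" ran the squid orch suite on Monday and Friday
#   08 21 * * 1,5 $CW $SS 64 --ceph squid --suite orch -p 100 --force-priority
#
# after: a single day-of-week value runs it once a week (Monday), in the 92x
# priority band that queues ahead of reef's 93x entries
08 21 * * 1 $CW $SS 64 --ceph squid --suite orch -p 920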
