
Commit 3536f9a

[test times] Change query for periodic jobs (#7086)
Query should be faster now since it doesn't need to do a join with push, and it also queries a smaller table.

Reason: the query was hitting memory limits when I ran it locally.
Pros: the query no longer hits memory limits.
Cons: the query semantics change slightly, but that should be fine. I don't think we even need the jobs to be successful now that we continue on error in main.

This does not fix the asan timeout issue; that is a separate problem. If you don't know what this means, don't worry about it.

Testing: ran `python tools/torchci/update_test_times.py` locally.
1 parent 90f4b97 commit 3536f9a
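
For readers who find the diffs below hard to follow, here is a minimal sketch of the rewritten `good_periodic_sha` CTE on its own, reconstructed from the diff and wrapped in a trivial `SELECT` so it can be run standalone. The per-file/per-class aggregation that follows this CTE in the real queries is omitted, and the `default.workflow_run` columns are assumed to match what the diff references; this is not verified against the live ClickHouse schema.

```sql
-- Minimal sketch, not the full test-times query: only the CTE that this
-- commit rewrites, reconstructed from the diff below.
WITH good_periodic_sha AS (
    SELECT
        w.head_sha AS sha
    FROM
        default.workflow_run w
    WHERE
        w.name = 'periodic'
        AND w.head_branch = 'main'
        AND w.repository.'full_name' = 'pytorch/pytorch'
        AND w.conclusion = 'success'
        AND w.run_attempt = 1
    ORDER BY
        w.head_commit.'timestamp' DESC
    LIMIT
        3
)
-- Standalone usage for inspecting the CTE by itself; the real queries
-- join these SHAs against the test statistics tables instead.
SELECT sha FROM good_periodic_sha;
```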

File tree (2 files changed, +18 −30 lines)

  • torchci/clickhouse_queries/test_times/per_class_periodic_jobs/query.sql
  • torchci/clickhouse_queries/test_times/per_file_periodic_jobs/query.sql


torchci/clickhouse_queries/test_times/per_class_periodic_jobs/query.sql

Lines changed: 9 additions & 15 deletions

@@ -1,23 +1,17 @@
 -- same as test_time_per_file query except for the first select
 WITH good_periodic_sha AS (
-    SELECT job.head_sha AS sha
+    SELECT
+        w.head_sha AS sha
     FROM
-        default.workflow_job job
-        JOIN default.push ON job.head_sha = push.head_commit.'id'
+        default.workflow_run w
     WHERE
-        job.workflow_name = 'periodic'
-        AND job.head_branch LIKE 'main'
-        AND job.repository_full_name = 'pytorch/pytorch'
-    GROUP BY
-        job.head_sha,
-        push.head_commit.'timestamp'
-    HAVING
-        groupBitAnd(
-            job.conclusion = 'success'
-            AND job.conclusion IS NOT null
-        ) = 1
+        w.name = 'periodic'
+        AND w.head_branch = 'main'
+        AND w.repository.'full_name' = 'pytorch/pytorch'
+        and w.conclusion = 'success'
+        and w.run_attempt = 1
     ORDER BY
-        push.head_commit.'timestamp' DESC
+        w.head_commit.'timestamp' DESC
     LIMIT
         3
 ),

torchci/clickhouse_queries/test_times/per_file_periodic_jobs/query.sql

Lines changed: 9 additions & 15 deletions

@@ -1,23 +1,17 @@
 -- same as test_time_per_file query except for the first select
 WITH good_periodic_sha AS (
-    SELECT job.head_sha AS sha
+    SELECT
+        w.head_sha AS sha
     FROM
-        default.workflow_job job
-        JOIN default.push ON job.head_sha = push.head_commit.'id'
+        default.workflow_run w
     WHERE
-        job.workflow_name = 'periodic'
-        AND job.head_branch LIKE 'main'
-        AND job.repository_full_name = 'pytorch/pytorch'
-    GROUP BY
-        job.head_sha,
-        push.head_commit.'timestamp'
-    HAVING
-        groupBitAnd(
-            job.conclusion = 'success'
-            AND job.conclusion IS NOT null
-        ) = 1
+        w.name = 'periodic'
+        AND w.head_branch = 'main'
+        AND w.repository.'full_name' = 'pytorch/pytorch'
+        and w.conclusion = 'success'
+        and w.run_attempt = 1
     ORDER BY
-        push.head_commit.'timestamp' DESC
+        w.head_commit.'timestamp' DESC
     LIMIT
         3
 ),

0 commit comments