Feat; status page queries #2217
base: master
Conversation
Thanks! It's a bit weird to encode the knowledge of status page data in the database crate, but I guess that it is better than splitting it into multiple methods on the pool, as we want to get highly specialized results here, and ideally in an optimal manner. So fine by me.
ORDER BY
    completed_at
DESC LIMIT {max_completed_requests}
), jobs AS (
I thought about this a bit more, and I think we should just compute the request's duration at completion time and store it as a field in the benchmark request table. That lets us avoid running this job query every time we load the page, and it makes the duration immutable. If we load the jobs on every status page load, then the duration can change if the jobs disappear (which should be quite rare), but more importantly, if we backfill the request in the meantime, the duration of the request would suddenly jump to some ludicrously long value, because we'd get a new recent completed job while the old completed jobs could be e.g. 1-2 days old.
Computing the duration once at the time the request is completed (ideally in the mark_request_as_completed query) would avoid these issues, so I would prefer doing that.
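A minimal sketch of what that could look like, assuming a duration column is added to the benchmark_request table and that jobs reference their request via a request_tag column; these names are illustrative, not the PR's actual schema.

```sql
-- Hypothetical sketch: compute the duration once, when the request is marked
-- as completed, instead of re-deriving it from jobs on every page load.
UPDATE benchmark_request
SET completed_at = now(),
    duration = (
        SELECT MAX(job.completed_at) - MIN(job.started_at)
        FROM job
        WHERE job.request_tag = benchmark_request.tag
    )
WHERE benchmark_request.tag = $1;
```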
), artifacts AS (
    SELECT
        artifact.id,
        \"name\"
Why does this need to be quoted?
name is a keyword in Postgres, and to use keywords as column names you need to double-quote them. However, name is non-reserved, so I can remove the quotes. The editor (DBeaver) I was writing this SQL in added them automatically.
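For illustration (not code from the PR): quoting only matters for reserved keywords; a non-reserved keyword such as name can be used as a bare identifier.

```sql
-- "name" is a non-reserved keyword, so both forms are accepted and equivalent:
SELECT artifact.id, name FROM artifact;
SELECT artifact.id, "name" FROM artifact;

-- A reserved keyword (e.g. "order") would genuinely need the double quotes
-- to be usable as a column name.
```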
), errors AS (
    SELECT
        artifacts.name AS tag,
        ARRAY_AGG(error) AS errors
Huh, interesting. The error count should be low, so normally I would just do a join and call it a day, but if this works, then why not.
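For comparison, the plain-join approach mentioned here would look roughly like the sketch below: one result row per error, with the grouping left to application code (assuming the error table stores one row per (aid, error) and joins to artifact on aid).

```sql
-- Sketch of the join-and-group-in-code alternative to ARRAY_AGG:
-- return each error as its own row and group by artifact name in the caller.
SELECT
    artifact.name AS tag,
    error.error
FROM error
JOIN artifact ON artifact.id = error.aid;
```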
Something to note on the error table is that it needs a unique (krate, aid) pairing. If we are using it for arbitrary error collection, that may mean we need to think about this.
Well, it's called "krate", but it's really just an arbitrary string, so we can put "job-queue-error" or whatever in there.
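A sketch of what storing a queue-level error could look like, assuming the error table has (aid, krate, error) columns with the unique (krate, aid) pairing mentioned above; the upsert below is illustrative, not code from the PR.

```sql
-- Hypothetical: record an arbitrary queue-level error under a synthetic
-- "krate" label. Because (krate, aid) must be unique, a repeated error for
-- the same artifact is appended via an upsert rather than a second insert.
INSERT INTO error (aid, krate, error)
VALUES ($1, 'job-queue-error', $2)
ON CONFLICT (krate, aid) DO UPDATE
SET error = error.error || E'\n' || EXCLUDED.error;
```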
This should have no performance bottlenecks; both in_progress and completed are obtained in 2 queries. Roughly we have:
Then in the website we'd compose the queue by calling the method from the job_queue module. Annoyingly, we don't have any database deserialisation, so I've had to do it manually. The test probably gives the best example of why this approach is handy. I've tried to push all of the oddities into the postgres.rs file so nothing leaks out.