fix(syncBreaches): only upload logos that do not exist in s3 #6479
kschelonka wants to merge 2 commits into main from
Conversation
// MNTOR-5166
await uploadToS3(logoFilename, Buffer.from(await res.arrayBuffer()));
await updateBreachFaviconUrl(
// Check if logo exists in S3 and get its metadata
I was going to remove the "and get its metadata" part, but then the rest is just rephrasing the function call:
// Check if logo exists in S3 and get its metadata
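The existence check this PR introduces could be sketched roughly as follows. This is a hedged illustration, not the repository's actual code: `logoExistsInS3`, `uploadLogoIfMissing`, and the injected `s3Client` are hypothetical names standing in for the real helpers and the AWS SDK client (where the HEAD request would typically be a `HeadObjectCommand`).

```javascript
// Hypothetical sketch of "only upload logos that do not exist in S3".
// The S3 client is injected so the sketch stays self-contained.
async function logoExistsInS3(s3Client, key) {
  try {
    // Stand-in for an S3 HEAD request (e.g. HeadObjectCommand in the AWS SDK).
    await s3Client.headObject(key);
    return true;
  } catch (err) {
    // Treat a missing object as "does not exist"; rethrow anything else.
    if (err.name === "NotFound") return false;
    throw err;
  }
}

async function uploadLogoIfMissing(s3Client, key, bytes) {
  // Skip the upload entirely when the object is already present.
  if (await logoExistsInS3(s3Client, key)) {
    return { uploaded: false };
  }
  await s3Client.putObject(key, bytes);
  return { uploaded: true };
}
```

The design choice mirrored here is that an unchanged logo is never re-uploaded; as the PR description notes, updating a changed icon then requires a code change or a manual upload.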
}
async function run() {
// TODO: MNTOR-5188
Just wondering: if we're not writing these now, when would we ever? I'd also be fine with just marking these as having too many side effects for the juice to be worth the squeeze, i.e. just consciously choosing to not write tests for them. (Rather than doing so while pretending we're not :P)
The 100% requirement is really strict, so I assume the intention is that we should write tests wherever they are possible and helpful. I'd typically write a smoke/integration test for this job against a local database and Redis through Docker, but I didn't want to expand the scope of this ticket, so I put it in a TODO. If we're not planning on adding more integration tests, I wonder if it would be better to relax the requirement so we can also get a more accurate coverage report.
Yes, see the linked docs - the intention is that we consciously consider for 100% of our code whether unit tests make sense, and have predefined criteria for when exceptions are OK (which are indicated by the c8 ignore comments). The thresholds make sure it's easy to find which code one might have forgotten to consider, but if you'd like to know the actual coverage for whatever reason, we run the unit tests daily with the ignore comments removed, so you can inspect those coverage reports. (I'm of the opinion that those numbers are basically meaningless, but they are there.)
We didn't use to have integration tests exercising the database, but of course I'm not opposed to having them. It's mostly that, when I see a TODO like this one, I typically wonder: realistically, if we're not willing to spend the effort to write them now, will we ever? And if not, then maybe we should just consciously accept that?
And to emphasise: I already approved, so this is not a blocker, just food for thought. Feel free to merge with the TODO :)
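The c8 ignore comments mentioned in this thread look roughly like the following. This is a generic illustration of the convention, not code from this repository; `run` and `parseLogoFilename` are hypothetical examples of an ignored entry point versus a unit-testable helper.

```javascript
/* c8 ignore start -- job entry point with side effects (network, database);
   consciously excluded from unit-test coverage per the predefined criteria */
async function run() {
  // ...wire up database, Redis, S3, then sync breaches...
}
/* c8 ignore stop */

// Pure helper: no side effects, so it stays inside the coverage threshold.
function parseLogoFilename(url) {
  return new URL(url).pathname.split("/").pop();
}
```

The point of the convention is that the ignore comment itself documents the conscious decision to skip unit tests, so the 100% threshold flags only code nobody has considered yet.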
Co-authored-by: Vincent <Vinnl@users.noreply.github.com>
References:
Jira: MNTOR-4502, MNTOR-5166
Figma:
Description
Updates to syncBreaches:
Note that if an icon changes and we want to update it, we'll have to either update the code logic or do it manually by uploading directly to the S3 bucket. We could consider implementing a TTL/staleness check in the future if we want to refresh logos periodically, but less frequently than every 10 minutes.
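The TTL/staleness check floated above could be as simple as comparing the stored object's age against a maximum. A minimal sketch, assuming `lastModified` would come from the S3 HEAD response; `isLogoStale` and `maxAgeMs` are illustrative names, not existing code:

```javascript
// Hypothetical staleness check: refresh a logo only when the stored copy
// is older than maxAgeMs. `now` is injectable for testing.
function isLogoStale(lastModified, maxAgeMs, now = Date.now()) {
  // No stored copy means we definitely need to upload.
  if (!lastModified) return true;
  return now - lastModified.getTime() > maxAgeMs;
}
```

Plugged in before the upload decision, this would let the job refresh changed icons on a schedule much slower than the every-10-minutes sync cadence.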
Screenshot (if applicable)
Not applicable.
How to test
npm run test
Checklist (Definition of Done)