Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff            @@
##           master    #7022     +/-   ##
=========================================
  Coverage   98.69%   98.69%
=========================================
  Files          79       79
  Lines       14677    14678       +1
=========================================
+ Hits        14486    14487       +1
  Misses        191      191

☔ View full report in Codecov by Sentry.
Generated via commit f11c022. Download link for the artifact containing the test results: atime-results.zip
.ci/atime/tests.R (Outdated)
    tmp_csv = tempfile()
    fwrite(DT, tmp_csv)
  },
  Faster = "60a01fa65191c44d7997de1843e9a1dfe5be9f72", # First commit of the PR (https://github.com/Rdatatable/data.table/pull/6925/commits) that reduced time usage
Maybe a name like FasterFS/FasterDiskIO to convey the circumstances under which this might be faster? WDYT about a targeted benchmark that might draw out the difference a bit more, e.g. "read a sharded file", something like:
    # write 100 small CSV shards into a fresh directory, then read them all back
    dir.create(td <- tempfile())
    setwd(td)
    for (ii in 1:100) fwrite(iris, paste0(ii, ".csv"))
    lapply(list.files(), fread)
MichaelChirico left a comment:
Great, this looks more faithful as a test -- scaling the number of fread() calls scales the number of file.info() calls. Even if this doesn't catch the specific improvement we were after, having something like this will still be a useful performance regression test.
@MichaelChirico suggested adding a performance test for the improvement in #6925 (comment). The test cases already existed; I added a Faster commit.
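For reference, the sharded-file benchmark suggested above could be expressed as an atime test entry along these lines. This is only a sketch of the kind of entry that goes in .ci/atime/tests.R: the exact atime::atime_test() argument names, the N grid, and the shard filenames are assumptions, while the Faster SHA is the one quoted in the diff.

    # Hypothetical entry for .ci/atime/tests.R; argument names and shard
    # naming are illustrative, not taken from the actual PR.
    atime::atime_test(
      N = 10^seq(1, 3),
      setup = {
        # write N small CSV shards into a fresh directory
        dir.create(td <- tempfile())
        setwd(td)
        for (ii in seq_len(N)) data.table::fwrite(iris, paste0(ii, ".csv"))
      },
      # one fread() per shard, so file.info() call count scales with N
      expr = lapply(list.files(), data.table::fread),
      Faster = "60a01fa65191c44d7997de1843e9a1dfe5be9f72" # first commit of PR 6925
    )

The idea is that growing N grows the number of fread() calls, so any per-call overhead (such as file.info()) shows up in the timing curve across the named commits.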