Batch test execution #324
Conversation
🦙 MegaLinter status: ✅ SUCCESS
See detailed report in MegaLinter reports
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##             main     #324      +/-   ##
==========================================
+ Coverage   93.84%   95.69%   +1.84%
==========================================
  Files          27       27
  Lines        1593     1602       +9
==========================================
+ Hits         1495     1533      +38
+ Misses         98       69      -29

... and 1 file with indirect coverage changes. Continue to review the full report in Codecov by Sentry.
@jmafoster1 I agree, definitely not the most elegant solution. If it is because of the …

Can you explain a few things for me first? You say:
1 and 2. Yes and no. Of course, we need the model to evaluate the test case, but we don't need it to be assigned to …
@f-allian I've removed the …
@jmafoster1 Excellent, nice one Michael!
@jmafoster1 Is this ready to be merged?

Yeah I think so. I must have missed your review or something.
Closes #323

For the EA case study, trying to execute 87,000 test cases fills up the RAM. The `run_tests_in_batches` method was meant to fix this, but didn't. I really want to be able to run all 87K tests at once, so I dug a bit deeper into this. The problem is that, when we execute a test case, the `estimator.model` attribute becomes non-None. It is this that fills up the RAM, but we don't use it here, so I just reset it to None after the test is executed. I'm not sure how I feel about this as a solution going forward, so any better suggestions would be much appreciated. I think `test.estimator.model` is used elsewhere, though (perhaps as part of test adequacy?), but we should be able to do everything we need to do before resetting it to None. On the other hand, it just feels a bit inelegant somehow.
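To make the memory fix concrete, here is a minimal sketch of the pattern described above: execute tests in batches and reset the fitted model once each result has been recorded. Only `estimator.model` / `test.estimator.model` come from the discussion; the `execute()` method, the function signature, and the batching details are illustrative assumptions, not the framework's actual API.

```python
from typing import Any, List


def run_tests_in_batches(tests: List[Any], batch_size: int = 1000) -> List[Any]:
    """Execute tests in fixed-size batches, bounding peak RAM.

    Sketch only: assumes each test exposes an execute() method and a
    test.estimator.model attribute that is populated during execution
    (hypothetical names standing in for the real interface).
    """
    results = []
    for start in range(0, len(tests), batch_size):
        for test in tests[start:start + batch_size]:
            results.append(test.execute())
            # The fitted model is what fills the RAM. Once the result has
            # been recorded (and anything that needs the model, such as
            # test adequacy checks, has run), drop the reference so the
            # memory can be reclaimed.
            test.estimator.model = None
    return results
```

Whether to reset inside the inner loop or once per batch is a design choice; resetting per test keeps at most one fitted model in memory at a time, which is what matters for the 87K-test run.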