
Conversation

@audreyyeoCH (Collaborator)

closes #120

@audreyyeoCH changed the title issue 120 issues 120 issues on Sep 25, 2025
github-actions bot (Contributor) commented Sep 25, 2025


Code Coverage Summary

Filename                 Stmts    Miss  Cover    Missing
---------------------  -------  ------  -------  -----------
R/betadiff.R                78       0  100.00%
R/boundsPostprob.R          44       0  100.00%
R/boundsPredprob.R          56       1  98.21%   83
R/dbetabinom.R              99       3  96.97%   32, 62, 154
R/oc2.R                    162     162  0.00%    93-326
R/oc3.R                    146     146  0.00%    91-308
R/ocPostprob.R             117       0  100.00%
R/ocPostprobDist.R         122       0  100.00%
R/ocPredprob.R             205       0  100.00%
R/ocPredprobDist.R         257       5  98.05%   363-367
R/ocRctPostprobDist.R      174       0  100.00%
R/ocRctPredprobDist.R      302       0  100.00%
R/plotBeta.R                97       9  90.72%   124-132
R/plotBounds.R              52      52  0.00%    33-90
R/plotDecision.R            73       0  100.00%
R/plotOc.R                  55       2  96.36%   99-100
R/postprob.R                34       1  97.06%   113
R/postprobDist.R            77       1  98.70%   204
R/predprob.R                36       0  100.00%
R/predprobDist.R           140       1  99.29%   269
R/runShinyPhase1b.R          4       4  0.00%    8-13
R/sumBetaDiff.R             80      15  81.25%   102-120
R/sumTable.R                36       0  100.00%
TOTAL                     2446     402  83.57%

Diff against main

Filename          Stmts    Miss  Cover
--------------  -------  ------  --------
R/dbetabinom.R       +8       0  +0.27%
R/predprob.R        +12       0  +100.00%
TOTAL               +20       0  +0.14%

Results for commit: ab6746a

Minimum allowed coverage is 80%

♻️ This comment has been updated with latest results.

github-actions bot (Contributor) commented Sep 25, 2025

Unit Test Performance Difference

Test Suite          Status  Time on main    ±Time  ±Tests  ±Skipped  ±Failures  ±Errors
-----------------  -------  ------------  -------  ------  --------  ---------  -------
ocPostprob            💚           24.98    -1.43       0         0          0        0
ocPredprob            💚          191.52    -1.02       0         0          0        0
ocPredprobDist        💚           93.27   -10.39       0         0          0        0
ocRctPredprobDist     💚           28.29    -2.95       0         0          0        0
plotOc                💚          207.05   -20.88       0         0          0        0
predprobDist          💔            1.13   +20.62   +2426         0          0        0
Additional test case details
Test Suite         Status  Time on main   ±Time  Test Case
-----------------  ------  ------------  ------  -------------------------------------------------------------------------------------------
ocPredprob            💚        188.90    -1.04  ocPredprob_correctly_shows_maximum_sample_size_when_no_decision_reached
ocPredprobDist        💚         24.32    -2.74  ocPredprobDist_gives_higher_PrEfficacy_with_more_efficacy_looks
ocPredprobDist        💚         59.97    -6.68  ocPredprobDist_gives_higher_PrFutility_with_more_futility_looks
ocRctPredprobDist     💚         11.31    -1.23  ocRctPredprobDist_gives_higher_PrEfficacy_with_increased_pE
ocRctPredprobDist     💚         11.22    -1.10  ocRctPredprobDist_gives_higher_PrFutility_with_decreased_pE
plotOc                💚         55.64    -6.12  h_get_dataframe_oc_gives_correct_results_for_ocPredprobDist_when_relativeDelta_FALSE
plotOc                💚         47.84    -5.23  h_get_dataframe_oc_gives_correct_results_for_ocPredprobDist_when_relativeDelta_TRUE
plotOc                💚        100.63    -9.30  plotOc_gives_expected_results_for_ocPredprobDist_with_different_relativeDelta_status
postprob              👶                  +0.01  postprob_from_beta_mixture_priors_utilise_updated_weights
predprob              👶                  +0.01  Warning_length_equal_to_length_of_x
predprob              💀          0.02    -0.02  predprob_gives_correct_list
predprobDist          💔          0.18    +1.77  h_predprobdist_gives_correct_list
predprobDist          💔          0.32    +5.32  h_predprobdist_gives_higher_predictive_probability_when_thetaT_is_lower
predprobDist          💔          0.31    +9.26  predprobDist_gives_higher_predictive_probability_when_thetaT_is_lower_in_a_single_arm_trial
predprobDist          💔          0.17    +3.75  predprobDist_gives_the_correct_results_in_a_two_arm_study

Results for commit 31d6238

♻️ This comment has been updated with latest results.

github-actions bot (Contributor) commented Sep 25, 2025

Unit Tests Summary

    1 file      20 suites   8m 57s ⏱️
  140 tests   140 ✅ 0 💤 0 ❌
3 764 runs  3 764 ✅ 0 💤 0 ❌

Results for commit ab6746a.

♻️ This comment has been updated with latest results.

@danielinteractive (Collaborator) left a comment

Thanks @audreyyeoCH, please see my comments below.

@audreyyeoCH (Collaborator, Author)

@danielinteractive, when users don't assign weights to two pairs of alphas and betas, a weight of 1 is assigned to each pair, so sum(weights) != 1 and the warning comes up. Is this warning valuable?

@danielinteractive (Collaborator)

@audreyyeoCH no, that would be confusing, but I guess it should be easy to fix on the package side, in the sense that we can default the weights not to 1 but to 1/nComponents?

@audreyyeoCH (Collaborator, Author)

> @audreyyeoCH no, that would be confusing, but I guess it should be easy to fix on the package side, in the sense that we can default the weights not to 1 but to 1/nComponents?

Great, thanks. Would it be good to also move this into h_getBetamixPost() instead of having the user-facing function do it?

@danielinteractive (Collaborator)

No, better to keep the assignment of default weights in the user-facing functions.

@audreyyeoCH (Collaborator, Author)

> No, better to keep the assignment of default weights in the user-facing functions.

alright, thanks!
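To make the agreed default concrete, here is a minimal sketch, assuming a user-facing function that takes a parE matrix with one row of beta parameters per mixture component and an optional weights vector (argument and helper names follow the discussion above, not necessarily the package's exact signatures):

```r
# Sketch only, not the package's actual code: the user-facing function defaults
# the mixture weights to 1 / nComponents (rather than 1 per component), so that
# sum(weights) == 1 and no renormalization warning is triggered downstream.
postprob_sketch <- function(x, n, parE = c(1, 1), weights = NULL) {
  parE <- if (is.vector(parE)) t(parE) else parE  # one row per mixture component
  if (is.null(weights)) {
    weights <- rep(1 / nrow(parE), nrow(parE))  # proposed default
  }
  # ... the defaulted weights would then be passed on to the helper,
  # e.g. h_getBetamixPost(x, n, parE, weights), which does the actual work.
  weights
}

postprob_sketch(x = 16, n = 23, parE = rbind(c(0.6, 0.4), c(1, 1)))
#> [1] 0.5 0.5
```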

@audreyyeoCH (Collaborator, Author) commented Sep 29, 2025

> No, better to keep the assignment of default weights in the user-facing functions.

Also, for predprob: when we put the warning in the helper, testing it is rather complicated (it is passing now) because expect_warning() only refers to the first warning, whereas this is "solved" when we put it in the user-facing function. I've found a way now, but do we still keep it in the helper?
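For reference, one common testthat (edition 3) pattern for pinning down a specific warning when a call emits several is to nest expect_warning() calls with a regexp, since unmatched warnings bubble up to the enclosing expectation. The function below is a hypothetical stand-in, not the package's helper:

```r
library(testthat)

# Hypothetical stand-in that emits the renormalization warning plus another one:
h_demo <- function(weights) {
  if (sum(weights) != 1) {
    warning("weights do not sum to 1, renormalizing")
    weights <- weights / sum(weights)
  }
  warning("another, unrelated warning")
  weights
}

test_that("the renormalization warning is raised", {
  # Each expect_warning() consumes one matching warning; the regexp pins down
  # which warning is meant, rather than relying on which one comes first.
  expect_warning(
    expect_warning(h_demo(c(1, 1)), "do not sum to 1"),
    "unrelated"
  )
})
```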

@danielinteractive (Collaborator)

The warning should stay in the helper function, because that is where the renormalization is done.
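A sketch of the helper side being described here, i.e. renormalization and the warning living together, assuming the helper receives the per-component beta parameters and the weights from the user-facing function (names and warning text are illustrative, not the package's exact code):

```r
# Sketch only: the helper renormalizes the mixture weights and therefore also
# owns the warning, as decided above.
h_getBetamixPost_sketch <- function(x, n, par, weights) {
  stopifnot(is.matrix(par), length(weights) == nrow(par))
  if (!isTRUE(all.equal(sum(weights), 1))) {
    warning("weights do not sum to 1, they will be renormalized")
    weights <- weights / sum(weights)
  }
  # Conjugate update of each beta component: (a, b) -> (a + x, b + n - x).
  post_par <- par + matrix(c(x, n - x), nrow = nrow(par), ncol = 2, byrow = TRUE)
  # The posterior mixture weights would additionally be reweighted by each
  # component's marginal likelihood; that step is omitted in this sketch.
  list(par = post_par, weights = weights)
}
```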

Co-authored-by: Daniel Sabanes Bove <[email protected]>
@audreyyeoCH merged commit a0c19aa into main on Oct 1, 2025
23 checks passed
@audreyyeoCH deleted the 120_one_more_unit_test_postprob branch on October 1, 2025, 08:54

Development

Successfully merging this pull request may close these issues.

Unit testing for postprob() in Mixture scenario
