Enh/bootstrapping for ci estimation #897
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

| Coverage Diff | develop | #897 | +/- |
|---|---|---|---|
| Coverage | 80.27% | 80.67% | +0.39% |
| Files | 104 | 107 | +3 |
| Lines | 12769 | 13301 | +532 |
| Hits | 10250 | 10730 | +480 |
| Misses | 2519 | 2571 | +52 |
Gui-FernandesBR
left a comment
This PR looks amazing!!!
This is a feature we have always wondered about, so I'm extremely happy to see it being implemented here.
At first glance, the only thing I would suggest is to update the documentation to describe this method.
You could extend the docs/user/ pages. This is important so users can easily learn how to use the new method.
Pull request overview
This PR enhances the MonteCarlo class by implementing a bootstrapping-based confidence interval estimation method, enabling users to quantify uncertainty in their simulation results and assess convergence.
Key Changes:
- Adds `estimate_confidence_interval()` method to the MonteCarlo class using `scipy.stats.bootstrap`
- Implements comprehensive unit tests using a mock MonteCarlo class to validate CI calculations (a test sketch follows the file table below)
- Updates CHANGELOG to document the new feature
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 11 comments.
| File | Description |
|---|---|
| rocketpy/simulation/monte_carlo.py | Adds new public method estimate_confidence_interval() for bootstrapping-based statistical analysis of Monte Carlo results |
| tests/unit/simulation/test_monte_carlo.py | Introduces MockMonteCarlo class and four test functions covering basic CI calculation, custom statistics, error handling, and confidence level consistency |
| CHANGELOG.md | Documents the new bootstrapping feature in the Unreleased section |
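To make the testing approach concrete, here is a minimal sketch of the kind of unit test described in the table above: a lightweight stand-in for a finished MonteCarlo run with canned results, checked directly against `scipy.stats.bootstrap`. The `MockMonteCarlo` name comes from this PR, but its internals and the `results` dict layout below are assumptions for illustration, not the merged test code.

```python
import numpy as np
from scipy.stats import bootstrap


class MockMonteCarlo:
    """Stand-in for a finished MonteCarlo run (layout assumed for this sketch)."""

    def __init__(self, seed=42, n_samples=500):
        rng = np.random.default_rng(seed)
        # Pretend these are apogee samples [m] collected from n_samples simulations.
        self.results = {"apogee": rng.normal(loc=3000.0, scale=100.0, size=n_samples)}


def test_confidence_interval_brackets_sample_mean():
    mc = MockMonteCarlo()
    data = np.asarray(mc.results["apogee"])
    # Bootstrap the mean apogee; scipy expects the data wrapped in a sequence.
    res = bootstrap((data,), np.mean, confidence_level=0.95, random_state=0)
    ci = res.confidence_interval
    assert ci.low < data.mean() < ci.high
```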
More specifically: search within the docs/user files to see where the MonteCarlo class is being described. You can add a new block of documentation to an already existing file.
Hey, I updated the documentation. Also, do you want me to address the Copilot review too?
Absolutely! Let me know when all the comments are marked as resolved.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
I think I resolved all the comments!
Gui-FernandesBR
left a comment
LGTM
Pull request type
Checklist
- Code style (`black rocketpy/ tests/`) has passed locally
- All tests (`pytest tests -m slow --runslow`) have passed locally
- `CHANGELOG.md` has been updated (if relevant)

Current behavior
Currently, the MonteCarlo class allows for running simulations and storing results, but it lacks built-in statistical tools to assess the reliability of these results. There is no native method to calculate confidence intervals, forcing users to manually extract data and perform external calculations to verify simulation convergence (e.g., to ensure the mean apogee has stabilized).
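For context, here is a rough sketch of the manual workaround described above, where the user pulls samples out of a finished run and computes a confidence interval externally. The `apogees` array is a placeholder standing in for data exported from a Monte Carlo results file; it is not part of this PR.

```python
import numpy as np
from scipy import stats

# Placeholder for apogee samples [m] exported from a Monte Carlo results file.
rng = np.random.default_rng(0)
apogees = rng.normal(loc=3000.0, scale=150.0, size=200)

# Normal-approximation (Student's t) confidence interval for the mean apogee.
mean = apogees.mean()
sem = stats.sem(apogees)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, apogees.size - 1, loc=mean, scale=sem)
print(f"Mean apogee: {mean:.1f} m, 95% CI: [{ci_low:.1f}, {ci_high:.1f}] m")
```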
New behavior
This PR implements the estimate_confidence_interval method within the MonteCarlo class, using bootstrapping (via scipy.stats.bootstrap) to calculate confidence intervals for any result attribute (e.g., apogee, max_velocity). This enables users to directly quantify simulation uncertainty and determine if the iteration count is sufficient. Documentation has been updated to explain CI interpretation, and unit tests have been added to validate the calculations.
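As a rough illustration, the following is a minimal sketch of how such a method can be built on `scipy.stats.bootstrap`. The function name matches the PR, but the free-function form, the parameter names (`attribute`, `statistic`, `confidence_level`, `n_resamples`), and the `results` dict layout are assumptions made for this sketch; the merged method's signature may differ.

```python
import numpy as np
from scipy.stats import bootstrap


def estimate_confidence_interval(
    results, attribute, statistic=np.mean, confidence_level=0.95, n_resamples=9999
):
    """Bootstrap a confidence interval for one Monte Carlo result attribute."""
    data = np.asarray(results[attribute], dtype=float)
    if data.size < 2:
        raise ValueError("At least two samples are required for bootstrapping.")
    res = bootstrap(
        (data,),                        # scipy expects a sequence of samples
        statistic,
        confidence_level=confidence_level,
        n_resamples=n_resamples,
        method="BCa",                   # bias-corrected and accelerated bootstrap
    )
    ci = res.confidence_interval
    return ci.low, ci.high, res.standard_error
```

With a shape like this, checking convergence of, say, the mean apogee reduces to calling the method on `"apogee"` after each batch of simulations and watching the interval shrink as the iteration count grows.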
Breaking change
Additional information
I haven’t included the documentation yet, and I’m not entirely sure where it should go (in a notebook or a new file). Could you please indicate the appropriate place for it?