Allow for different metrics in NPV calculation #1027
Conversation
Codecov Report

❌ Patch coverage is […]

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main    #1027      +/-   ##
==========================================
+ Coverage   82.09%   82.18%   +0.09%
==========================================
  Files          53       53
  Lines        7310     7383      +73
  Branches     7310     7383      +73
==========================================
+ Hits         6001     6068      +67
- Misses       1019     1025       +6
  Partials      290      290
```
```rust
let (metric_precedence, metric) = match annual_fixed_cost.value() {
    // If AFC is zero, use total surplus as the metric (strictly better than nonzero AFC)
    0.0 => (0, -profitability_index.total_annualised_surplus.value()),
    // If AFC is non-zero, use profitability index as the metric
    _ => (1, -profitability_index.value().value()),
```
I'm not fully convinced by this. The profitability index is dimensionless, but the annualised surplus is Money. Even though it doesn't matter from the typing perspective, since you're getting the underlying value (a float) in both cases, I wonder if this choice makes sense from a logic perspective.
Not that I have a better suggestion.
```rust
// calculate metric and precedence depending on asset parameters
// note that metric will be minimised so if larger is better, we negate the value
let (metric_precedence, metric) = match annual_fixed_cost.value() {
```
Thinking again on this, I think this logic (whatever it becomes, see my other comment) should be within the ProfitabilityIndex.value, also adding a ProfitabilityIndex.precedence method that returns 0 or 1 depending on the value of AFC.
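For illustration, a minimal self-contained sketch of that suggestion -- the struct and field here are hypothetical stand-ins, not the project's actual types:

```rust
// Hypothetical stand-in for the real ProfitabilityIndex type.
struct ProfitabilityIndex {
    annualised_fixed_cost: f64,
}

impl ProfitabilityIndex {
    /// Returns 0 if the annualised fixed cost is zero (strictly better), 1 otherwise.
    fn precedence(&self) -> u8 {
        if self.annualised_fixed_cost == 0.0 { 0 } else { 1 }
    }
}

fn main() {
    let pi = ProfitabilityIndex { annualised_fixed_cost: 0.0 };
    assert_eq!(pi.precedence(), 0);
}
```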
tsmbland
left a comment
I think this is ok, I have an alternative suggestion though.
The idea would be to introduce a new trait for appraisal metrics:
```rust
pub trait MetricTrait {
    fn value(&self) -> f64;
    fn compare(&self, other: &Self) -> Ordering;
}

pub struct AppraisalOutput {
    pub metric: Box<dyn MetricTrait>,
    // ...
}
```
You could add this trait to your ProfitabilityIndex struct, and add a custom compare method there. You'd also have to make an equivalent struct for LCOX -- it would probably be very simple, although there may be some edge cases we haven't thought of yet. I think this would help to contain the comparison logic and make the code cleaner. We'd also no longer have to make the profitability index negative, as the custom compare method could be written to look for the maximum -- I always found this a bit hacky, and it makes the output files confusing.
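As a rough sketch of what the equivalent LCOX struct might look like under this design (the names and the lower-is-better assumption are assumptions for illustration, not from the PR):

```rust
use std::cmp::Ordering;

pub trait MetricTrait {
    fn value(&self) -> f64;
    fn compare(&self, other: &Self) -> Ordering;
}

// Hypothetical wrapper around an LCOX value; not the project's actual type.
pub struct LCOXMetric(pub f64);

impl MetricTrait for LCOXMetric {
    fn value(&self) -> f64 {
        self.0
    }

    fn compare(&self, other: &Self) -> Ordering {
        // LCOX is a cost, so a plain ascending comparison ranks lower values first.
        self.0.partial_cmp(&other.0).expect("Cannot compare NaN values")
    }
}

fn main() {
    let (a, b) = (LCOXMetric(1.0), LCOXMetric(2.0));
    assert_eq!(a.compare(&b), Ordering::Less);
}
```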
Or this: […]. I think there are various pros and cons of each option which I don't fully understand. I think possibly the latter is better if you don't need to store […]
alexdewar
left a comment
I've taken the liberty of jumping in for a review here @Aurashk 🙃. Hope that's ok!
I agree with @tsmbland's suggestion that it would be better to use traits for this instead though -- I just think it will make it a bit clearer and more maintainable.
I'm wondering if it might be best to define a supertrait instead (that's just an alias for a combination of traits). In our case, we just need things which can be compared (Ord) and written to file (Serialize). We did talk about having an Ord implementation for unit types (#717), and I think I've actually done that somewhere but didn't open a PR as we didn't need it -- I can do so if that would be useful! Unit types would then automatically implement the supertrait.
I think the problem with having a value() method returning f64, as @tsmbland suggested, is that it wouldn't be obvious which value was being returned for the NPV case.
E.g.:

```rust
trait ComparisonMetric: Ord + Serialize {}

pub struct AppraisalOutput {
    pub metric: Box<dyn ComparisonMetric>,
    // ...
}
```

What do people think?
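To make the "unit types would automatically define the supertrait" point concrete, a blanket implementation suffices -- a sketch assuming serde's Serialize is the serialisation bound:

```rust
use serde::Serialize;

trait ComparisonMetric: Ord + Serialize {}

// Any type that already implements Ord and Serialize gets the supertrait for free.
impl<T: Ord + Serialize> ComparisonMetric for T {}
```

One caveat with this sketch: Ord has methods taking Self, so ComparisonMetric as written is not object-safe and cannot be used as Box<dyn ComparisonMetric> -- which is where the follow-up below picks up.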
```rust
/// Where there is more than one possible metric for comparing appraisals, this integer
/// indicates the precedence of the metric (lower values have higher precedence).
/// Only metrics with the same precedence should be compared.
pub metric_precedence: u8,
```
I'd probably make this a u32 instead. I know we won't ever need more than 256 different values here (if we did, that would be horrible!), but I think it's best to use 32-bit integers as a default, unless there's a good reason not to.
```rust
// Calculate profitability index for the hypothetical investment
let annual_fixed_cost = annual_fixed_cost(asset);
if annual_fixed_cost.value() < 0.0 {
    bail!("The current NPV calculation does not support negative annual fixed costs");
```
Can this actually happen? I'm struggling to think... @tsmbland?
If it's more of a sanity check instead (still worth doing!) then I'd change this to an assert! instead.
If it is then something has gone badly wrong! Agree, better to change to assert!
From my understanding of this comment from Adam, it may be possible in the future. And in that case a negative AFC isn't necessarily wrong, just the resulting profitability index will be wrong.
Ah ok. I think I'd still make it a panic! for now though, because if it happens it's a coding bug, not a user-facing error.
PS -- this isn't super important, but for the NPV metric, I'd include the numerator and denominator in the output CSV file (e.g. "1.0/2.0" or something) rather than just saving one of the two values. We want users to be able to see why the algo has made whatever decisions it has.
Actually, on reflection, my suggestion won't work after all 😞. I still think we should use traits, but it's not as simple as adding a single supertrait. The problem is that I think what you want is something like this:

```rust
pub trait AppraisalMetric {
    fn compare(&self, other: &dyn Any) -> Ordering;
    // ...
}

impl AppraisalMetric for LCOXMetric {
    fn compare(&self, other: &dyn Any) -> Ordering {
        let other = other
            .downcast_ref::<Self>()
            .expect("Cannot compare metrics of different types");
        // ...
    }
}
```

(This assumes you have an LCOXMetric struct.) I still think it might make sense to have a supertrait which is […]. It's a little convoluted, but that's the downside of using a statically typed language 😛
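A self-contained sketch of the object-safe variant of this pattern, with an as_any accessor on the trait enabling the downcast (LCOXMetric here is a stub for illustration):

```rust
use std::any::Any;
use std::cmp::Ordering;

trait AppraisalMetric {
    fn compare(&self, other: &dyn AppraisalMetric) -> Ordering;
    fn as_any(&self) -> &dyn Any;
}

struct LCOXMetric(f64);

impl AppraisalMetric for LCOXMetric {
    fn compare(&self, other: &dyn AppraisalMetric) -> Ordering {
        // Recover the concrete type; panics if two different metric types are mixed.
        let other = other
            .as_any()
            .downcast_ref::<Self>()
            .expect("Cannot compare metrics of different types");
        self.0.partial_cmp(&other.0).expect("Cannot compare NaN values")
    }

    fn as_any(&self) -> &dyn Any {
        self
    }
}

fn main() {
    let a: Box<dyn AppraisalMetric> = Box::new(LCOXMetric(1.0));
    let b: Box<dyn AppraisalMetric> = Box::new(LCOXMetric(2.0));
    assert_eq!(a.compare(b.as_ref()), Ordering::Less);
}
```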
Btw, I know I'm a bit late to the party on this one, but I'm not sure why we need to ensure consistent ordering for assets with identical appraisal outputs. If they're identical -- or approximately identical -- then, by definition, we should just return Ordering::Equal. I'm only raising it because I think the added complexity will make @Aurashk's life harder here and I can't really see what benefit it brings. Is there something I'm missing?
It's not actually that unlikely: […]
I agree with you that it feels a little hacky for identical metrics not to return Ordering::Equal.
Ok, good point.
Can I just check what the motivation for this is? On reflection, I'm guessing that the idea was that it would make it easier to figure out why a particular asset was chosen over another. Is that right? If so, that seems reasonable. Initially I thought it was to make choosing between two assets with identical metrics less arbitrary, which I'm less convinced about. A lot of things in MUSE2 are arbitrary, e.g. how HiGHS distributes activity across time slices, and the results fluctuate as we change the code anyway, so it seemed overkill to try to make guarantees to users about this when we can't do the same for so much of the rest of the model. Anyway, in terms of the code, I think the problem is that it's not a good separation of concerns. It would be better if the […]
That's part of it at least. E.g. before working on this, I hadn't considered that multiple existing assets from different commission years might have the same metric. If it's going to have to pick one over the other, I'd at least like some consistency so that the decision is explainable.
I don't think we can/should try to guarantee to users that the model is completely unarbitrary, but we can still do our best, and if there's an easy way to make certain behaviours just a little bit more predictable/explainable then I don't think there's an excuse not to.
I think that's a good idea, do you want to give it a try?
I think your other point is that the warning we're currently raising, if it ultimately does have to make an arbitrary decision, isn't really something users should have to be concerned about. I think that's fair, so I'm happy if you'd rather change that to a debug message.
@tsmbland Ok cool. Seems like we're on the same page now. I'll have a go at the refactoring.
Sorry, just catching up with the discussion. I could do with a bit more convincing on the traits approach to help me understand the benefits. I do see the elegance of it (particularly over the existing approach), but in the end we just have three possible comparable metrics -- the LCOX one and the two NPV ones (profitability index and total annualised surplus) -- and adding others would be a rare occasion. It seems like a metric for our purposes is always going to be a number and one of three (maybe more in the future) labels; we only need to compare like-for-like labels on the same logical path, and we are always comparing an f64. Based on the requirements above, it feels overly general to make the metric an arbitrary data type that has the property of being comparable. If the implementation was really simple I might feel differently, but it feels like you're having to navigate through too much abstraction for a relatively simple bit of logic. I realise I'm outvoted on this and happy to go with the majority, just want to understand the reasoning better.
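For concreteness, a sketch of the label-plus-number shape described here (all names are hypothetical; LCOX treated as lower-is-better, the NPV metrics as higher-is-better):

```rust
use std::cmp::Ordering;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum MetricKind {
    Lcox,
    ProfitabilityIndex,
    TotalAnnualisedSurplus,
}

struct Metric {
    kind: MetricKind,
    value: f64,
}

impl Metric {
    fn compare(&self, other: &Self) -> Ordering {
        // Only like-for-like labels are ever compared on the same logical path.
        assert_eq!(self.kind, other.kind, "Cannot compare metrics of different kinds");
        match self.kind {
            // LCOX is a cost: lower is better.
            MetricKind::Lcox => self.value.partial_cmp(&other.value),
            // The NPV metrics are benefits: higher is better.
            _ => other.value.partial_cmp(&self.value),
        }
        .expect("Cannot compare NaN values")
    }
}
```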
You're right that we won't be adding new metrics all that often. There is an open issue to add support for "equivalent annual cost" (#524), but I guess the output of that will be similar to the other two. I think the main rationale for using traits was to have clearer code with better separation of concerns, so you'd have a dedicated function to handle the logic for comparing NPV outputs (for example). Another upside is that we could represent these metrics in more intuitive ways in the output file (currently we output negative NPV, which is potentially pretty confusing), but I don't think this is super important. I hear what you're saying about it being clunky and overcomplex though. Another way to do this would be to keep your current approach, but have a generic […]
I'd probably still favour the traits approach, for the reasons that @alexdewar mentioned, but if it's too much work then don't worry about it (I didn't think it would be too difficult when I suggested it here, but maybe I'm missing something). We're also going to have to revisit this when we come to allow multiple objectives, so no point over-engineering things right now.
I don't think it's actually too difficult -- there's just slightly more boilerplate. Maybe try it and see how it looks @Aurashk? If it's a pain, let me know and I can try to help or we can go with the other approach.
Good point!
@Aurashk Actually, would you be able to sort the serialisation stuff in this PR too 👼? You just need to update the code in […]
I've had a go at this, but I'm a bit confused about what argument to apply to the […], because the writer works row by row. Am I missing something obvious here? In the latest commit I tried what seems like the simplest solution -- you can serialize them to JSON, then just write the JSON as a string. It requires another dependency, but I wasn't able to find anything more straightforward. We could alternatively explicitly add the fields (profitability index etc.) to the results row and downcast, but it seems like that defeats the object of doing the […]
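For reference, the JSON-as-string approach only needs serde_json on top of serde; a minimal sketch (the struct and field names are made up for illustration):

```rust
use serde::Serialize;

// Stand-in for an appraisal metric; the real type and fields will differ.
#[derive(Serialize)]
struct NpvRecord {
    profitability_index: f64,
    total_annualised_surplus: f64,
}

fn main() -> serde_json::Result<()> {
    let metric = NpvRecord {
        profitability_index: 1.5,
        total_annualised_surplus: 200.0,
    };
    // Serialise the metric to a JSON string, which can then be written
    // into a single column of the CSV results row.
    let cell = serde_json::to_string(&metric)?;
    println!("{cell}");
    Ok(())
}
```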
```rust
fn compare(&self, other: &dyn ComparableMetric) -> Ordering {
    let other = other
        .as_any()
        .downcast_ref::<Self>()
        .expect("Cannot compare metrics of different types");

    // Handle comparison based on fixed cost status
    match (self.is_zero_fixed_cost(), other.is_zero_fixed_cost()) {
        // Both have zero fixed cost: compare total surplus (higher is better)
        (true, true) => {
            let self_surplus = self.0.total_annualised_surplus;
            let other_surplus = other.0.total_annualised_surplus;

            compare_approx(other_surplus, self_surplus)
        }
        // Both have non-zero fixed cost: compare profitability index (higher is better)
        (false, false) => {
            let self_pi = self.0.value();
            let other_pi = other.0.value();

            compare_approx(other_pi, self_pi)
        }
        // Zero fixed cost is always better than non-zero fixed cost
        (true, false) => Ordering::Less,
        (false, true) => Ordering::Greater,
    }
}
```
Copilot (AI) · Jan 13, 2026
The new comparison logic in NPVMetric::compare handles an important edge case (zero fixed costs) with different comparison semantics than normal NPV comparison. However, there are no tests added to verify this behavior works correctly. Consider adding tests that verify:
- Two NPVMetrics with zero fixed cost are compared correctly by surplus
- Two NPVMetrics with non-zero fixed cost are compared correctly by profitability index
- An NPVMetric with zero fixed cost is always considered better than one with non-zero fixed cost (lines 203-204)
- The comparison correctly returns Ordering::Equal for approximately equal values
src/input/process/flow.rs (outdated)

```diff
 ensure!(
     number_of_years[&(commodity_id.clone(), region_id.clone())]
-        == required_years.len().try_into().unwrap(),
+        == u32::try_from(required_years.len()).unwrap(),
```
Copilot (AI) · Jan 13, 2026
This refactoring from .try_into().unwrap() to u32::try_from(...).unwrap() improves code clarity but appears unrelated to the PR's main purpose of allowing different metrics in NPV calculation. Consider keeping such unrelated refactorings in separate commits or PRs to maintain a clear change history.
Suggested change:

```diff
-        == u32::try_from(required_years.len()).unwrap(),
+        == required_years.len().try_into().unwrap(),
```
```rust
assert!(
    annual_fixed_cost >= MoneyPerCapacity(0.0),
    "The current NPV calculation does not support negative annual fixed costs"
);
```
Copilot (AI) · Jan 13, 2026
The assertion that checks annual fixed cost is non-negative only runs in debug builds. If negative annual fixed costs could occur in production (even if they shouldn't), this would silently pass in release builds and could lead to incorrect behavior. Consider using ensure! or returning a Result error instead of assert! to enforce this constraint in all builds.
Again, this is fine.
```rust
assert!(
    self.annualised_fixed_cost != Money(0.0),
    "Annualised fixed cost cannot be zero when calculating profitability index."
);
```
Copilot (AI) · Jan 13, 2026
The assertion that checks annualised fixed cost is non-zero only runs in debug builds. In release builds, if this condition is violated, the division on line 49 would produce infinity or NaN, leading to incorrect behavior. Consider using a runtime check (like ensure!) that returns an error instead of assert! to enforce this constraint in all builds.
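The infinity/NaN half of this observation is easy to demonstrate in isolation -- float division by zero in Rust never panics:

```rust
fn main() {
    // Dividing a non-zero float by zero yields infinity, not a panic...
    assert_eq!(1.0_f64 / 0.0, f64::INFINITY);
    // ...and 0.0 / 0.0 yields NaN.
    assert!((0.0_f64 / 0.0).is_nan());
}
```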
```diff
 assert!(
-    !(self.metric.is_nan() || other.metric.is_nan()),
+    !(self.metric.value().is_nan() || other.metric.value().is_nan()),
     "Appraisal metric cannot be NaN"
 );
```
Copilot (AI) · Jan 13, 2026
The assertion checking that metric values are not NaN only runs in debug builds. In release builds, if NaN values occur, the comparison logic could produce unexpected results. Consider using a runtime check that returns an error or handles NaN values explicitly to ensure correct behavior in all builds.
That's... not true. assert! also works in release builds in Rust (unlike assert in other languages). There is a debug_assert! macro for checks that you really only want in debug builds
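A quick illustration of the difference:

```rust
fn main() {
    // assert! is compiled into both debug and release builds.
    assert!(2 + 2 == 4, "checked in every build");

    // debug_assert! is compiled out of release builds
    // (it is only active when debug_assertions is enabled).
    debug_assert!(2 + 2 == 4, "checked in debug builds only");
}
```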
I don't think you're missing anything obvious -- just seems it's more of a faff than I anticipated 🙃. Sorry for dragging this out! In the interests of getting this merged, how about just reverting that last commit and then we can do a re-review? We can come back to the serialisation stuff later.
Pull request overview
Copilot reviewed 5 out of 6 changed files in this pull request and generated 7 comments.
```rust
let other = other
    .as_any()
    .downcast_ref::<Self>()
    .expect("Cannot compare metrics of different types");
```
Copilot (AI) · Jan 14, 2026
The downcast operation with expect will panic at runtime if metrics of different types are compared. While this shouldn't happen in practice (as all appraisals in a round use the same objective type), consider adding a debug_assert or a more descriptive error message that includes the actual types being compared to aid debugging if this invariant is ever violated.
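If a more descriptive message were wanted, std::any::type_name could at least name the expected type (the concrete type behind a &dyn Any is not directly printable); a sketch with a hypothetical helper:

```rust
use std::any::{type_name, Any};

// Hypothetical helper: downcast, naming the expected type in the panic message.
fn downcast_or_explain<T: Any>(other: &dyn Any) -> &T {
    other.downcast_ref::<T>().unwrap_or_else(|| {
        panic!(
            "Cannot compare metrics of different types (expected {})",
            type_name::<T>()
        )
    })
}

fn main() {
    let value: &dyn Any = &42_u32;
    assert_eq!(*downcast_or_explain::<u32>(value), 42);
}
```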
```rust
///
/// # Returns
///
/// An `AppraisalOutput` containing the hypothetical capacity, activity profile and unmet demand.
```
Copilot (AI) · Jan 14, 2026
The documentation comment has been removed but was actually helpful in explaining the metric transformation. Consider keeping documentation that explains the relationship between metrics and comparison semantics, particularly noting that NPVMetric handles the "higher is better" logic internally so external code doesn't need to negate values.
Suggested change:

```diff
 /// An `AppraisalOutput` containing the hypothetical capacity, activity profile and unmet demand.
+/// The returned `metric` encapsulates the NPV value for this appraisal. For NPV, higher values
+/// are better, and the associated `NPVMetric` type encodes this "higher is better" comparison
+/// semantics internally, so callers can compare `metric` values directly without negating them.
```
```rust
#[test]
#[should_panic(expected = "Annualised fixed cost cannot be zero")]
fn profitability_index_panics_on_zero_cost() {
    let result = profitability_index(
        Capacity(0.0),
        MoneyPerCapacity(100.0),
        &indexmap! {},
        &indexmap! {},
    );
    result.value();
}
```
Copilot (AI) · Jan 14, 2026
The removed test case for zero capacity with infinite profitability index has been replaced with a panic test, but there's no test coverage for the new NPVMetric behavior when annualised_fixed_cost is zero. Consider adding integration tests that verify the entire appraisal flow correctly handles assets with zero annual fixed costs and compares them properly against assets with non-zero fixed costs.
```rust
fn compare(&self, other: &dyn ComparableMetric) -> Ordering {
    let other = other
        .as_any()
        .downcast_ref::<Self>()
        .expect("Cannot compare metrics of different types");

    // Handle comparison based on fixed cost status
    match (self.is_zero_fixed_cost(), other.is_zero_fixed_cost()) {
        // Both have zero fixed cost: compare total surplus (higher is better)
        (true, true) => {
            let self_surplus = self.0.total_annualised_surplus;
            let other_surplus = other.0.total_annualised_surplus;
            compare_approx(other_surplus, self_surplus)
        }
        // Both have non-zero fixed cost: compare profitability index (higher is better)
        (false, false) => {
            let self_pi = self.0.value();
            let other_pi = other.0.value();
            compare_approx(other_pi, self_pi)
        }
        // Zero fixed cost is always better than non-zero fixed cost
        (true, false) => Ordering::Less,
        (false, true) => Ordering::Greater,
    }
}

fn as_any(&self) -> &dyn Any {
    self
}
```
Copilot (AI) · Jan 14, 2026
There are no unit tests for the new NPVMetric comparison logic, particularly for the critical cases: (1) comparing two assets with zero fixed cost, (2) comparing zero fixed cost vs non-zero fixed cost, and (3) comparing two assets with non-zero fixed cost. The complex branching logic in the compare method should be thoroughly tested to ensure correct behavior.
```diff
 fn value(&self) -> f64 {
     if self.is_zero_fixed_cost() {
         self.0.total_annualised_surplus.value()
     } else {
-        self.metric.partial_cmp(&other.metric).unwrap()
+        self.0.value().value()
     }
 }
```
Copilot (AI) · Jan 14, 2026
The value method returns different semantics depending on whether fixed cost is zero. When fixed cost is zero, it returns total_annualised_surplus (an absolute money value). When fixed cost is non-zero, it returns the profitability index (a dimensionless ratio). This inconsistency could be confusing for code that uses the value for purposes other than comparison. Consider documenting this behavior clearly or adding a separate method that consistently returns a comparable numeric representation.
Pull request overview
Copilot reviewed 5 out of 6 changed files in this pull request and generated 1 comment.
```rust
            compare_approx(other_surplus, self_surplus)
        }
        // Both have non-zero fixed cost: compare profitability index (higher is better)
        (false, false) => {
            let self_pi = self.0.value();
            let other_pi = other.0.value();
            compare_approx(other_pi, self_pi)
```
Copilot (AI) · Jan 14, 2026
The order of arguments to compare_approx is reversed (other, self) compared to the pattern used in LCOXMetric::compare at line 141 which uses (self, other). For consistency across metric implementations, consider using the same argument order and applying .reverse() where needed to achieve the desired ordering semantics.
Suggested change:

```diff
-                compare_approx(other_surplus, self_surplus)
+                compare_approx(self_surplus, other_surplus).reverse()
             }
             // Both have non-zero fixed cost: compare profitability index (higher is better)
             (false, false) => {
                 let self_pi = self.0.value();
                 let other_pi = other.0.value();
-                compare_approx(other_pi, self_pi)
+                compare_approx(self_pi, other_pi).reverse()
```
@alexdewar I've reverted the commit and added a few tests for the compare methods.
Cool. Is it ready for re-review?
Yes |
@Aurashk I'll review now. For future reference, you can re-request review by clicking the button with the arrows by a reviewer's name (I've already done it for myself). That way it'll appear in my list of things to review.
alexdewar
left a comment
Just one small thing, then I think this is good to merge. Thanks for bearing with -- sorry it ended up dragging out.
I've just suggested adding additional test cases for the NPV comparison code; should be easy if you use a parametrised test with rstest (see comment)
```rust
if a.approx_eq(b, F64Margin::default()) {
    Ordering::Equal
} else {
    a.partial_cmp(&b).unwrap()
```
An error message might be good for debugging:
Suggested change:

```diff
-    a.partial_cmp(&b).unwrap()
+    a.partial_cmp(&b).expect("Cannot compare NaN values")
```
```rust
let other = other
    .as_any()
    .downcast_ref::<Self>()
    .expect("Cannot compare metrics of different types");
```
Seems a bit verbose to me...
```rust
}

#[test]
fn npv_compare_both_zero_fixed_cost() {
```
There are some test cases missing here, as Copilot has pointed out. It would be good to have total coverage.
There are 11 branches:
- AFC is non-zero for both: less than, greater than, equal
- AFC is zero for both: less than, greater than, equal
- AFC is zero for self but not other, and vice versa
You could also add a test or two for the case where AFC is approx zero, but not exactly.
I'd write these as parametrised tests with rstest: https://docs.rs/rstest/latest/rstest/#creating-parametrized-tests
I wouldn't normally be this picky, but this actually is a function where we can test all branches without too much hassle, so let's do it!
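A sketch of how the parametrised tests could look with rstest; NpvStub and its compare are simplified stand-ins for the real NPVMetric (note that Ordering::Less means "self is better", since the ordering is minimised):

```rust
use rstest::rstest;
use std::cmp::Ordering;

struct NpvStub {
    afc: f64,
    surplus: f64,
    pi: f64,
}

impl NpvStub {
    fn compare(&self, other: &Self) -> Ordering {
        match (self.afc == 0.0, other.afc == 0.0) {
            // Both zero AFC: higher surplus is better (ranked earlier).
            (true, true) => other.surplus.partial_cmp(&self.surplus).unwrap(),
            // Both non-zero AFC: higher profitability index is better.
            (false, false) => other.pi.partial_cmp(&self.pi).unwrap(),
            // Zero AFC always beats non-zero AFC.
            (true, false) => Ordering::Less,
            (false, true) => Ordering::Greater,
        }
    }
}

#[rstest]
// Both zero AFC: the higher surplus wins.
#[case(NpvStub { afc: 0.0, surplus: 10.0, pi: 0.0 }, NpvStub { afc: 0.0, surplus: 5.0, pi: 0.0 }, Ordering::Less)]
// Both non-zero AFC with equal profitability indices.
#[case(NpvStub { afc: 1.0, surplus: 0.0, pi: 2.0 }, NpvStub { afc: 1.0, surplus: 0.0, pi: 2.0 }, Ordering::Equal)]
// Zero AFC beats non-zero AFC regardless of the numbers.
#[case(NpvStub { afc: 0.0, surplus: 1.0, pi: 0.0 }, NpvStub { afc: 1.0, surplus: 0.0, pi: 100.0 }, Ordering::Less)]
fn npv_compare(#[case] a: NpvStub, #[case] b: NpvStub, #[case] expected: Ordering) {
    assert_eq!(a.compare(&b), expected);
}
```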
Thanks @alexdewar, I've tagged you for a final check of the new tests, feel free to merge it if you're happy with them
alexdewar
left a comment
Looks ace! Good work.

Description
If annual fixed costs (AFC) are zero, this currently makes the metric in appraisal comparisons NaN. This PR modifies the behaviour so that we use the total annualised surplus to appraise assets in this case instead. Since there are two different possible metrics in this case, with one strictly better (AFC == 0 is always better than AFC > 0), I've added a metric_precedence to AppraisalOutput which ranks the metrics by the order in which they can be used. Then, select_best_assets will disregard all appraisals which have a precedence higher than the minimum. Another way of doing this may be to make the metric itself a struct and implement more sophisticated comparison logic, but since LCOX uses the same struct it might end up getting too involved.

Fixes #1012
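As a sketch of the precedence filter described above (names hypothetical; each appraisal is reduced here to a (precedence, metric) pair):

```rust
// Keep only the appraisals whose precedence value equals the minimum present;
// appraisals with a higher precedence value (a strictly worse metric kind) are discarded.
fn filter_by_precedence(mut appraisals: Vec<(u32, f64)>) -> Vec<(u32, f64)> {
    if let Some(min) = appraisals.iter().map(|(p, _)| *p).min() {
        appraisals.retain(|(p, _)| *p == min);
    }
    appraisals
}

fn main() {
    let kept = filter_by_precedence(vec![(1, 0.5), (0, 2.0), (0, 1.0)]);
    assert_eq!(kept, vec![(0, 2.0), (0, 1.0)]);
}
```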
Type of change
Key checklist
$ cargo test
$ cargo doc

Further checks