Conversation
…ated gitignore, and updated __init__.py
earth-chris
left a comment
Thank you for submitting this PR, @PC-FSU. Including this metric is a nice addition to the package. I have a few comments and requests.
- Thank you very much for the detailed docstring and jupyter examples.
- Thank you very very much for including tests.
- Please apply proper code formatting. To do this, review the contributing guidelines and set up a dev environment with `pre-commit` installed. Then run `pre-commit run --all` to apply formatting.
- To simplify the code, and to better align with other `sklearn` metrics, I'd recommend breaking the `boyce_index` function into smaller components: one to compute the intervals, one for the P/E ratio, one for plotting, and one for returning the correlation coefficient. As a user, I'd expect a single value returned from the `boyce_index` function.
- I haven't had much time to evaluate the scientific merit of the code yet, so I'll likely provide another review once the key software comments I've made are addressed. But what strikes me at the moment is that:
  - The relationships between `nclass`, `window` and `res` are not super clear to me, and the results appear very sensitive to these parameters.
  - It is very easy to return `nan` values, despite the many `nan` checks that are applied. This makes me think that something is not sufficiently robust.
Thanks for your submission, and I'll look forward to your updates.
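A hypothetical sketch of the suggested decomposition (function names, signatures, and binning logic here are illustrative only, not the final elapid API):

```python
import numpy as np
from scipy.stats import spearmanr


def compute_intervals(values, nclass=10):
    """Split the range of suitability values into nclass equal-width bins."""
    edges = np.linspace(np.min(values), np.max(values), nclass + 1)
    return np.column_stack((edges[:-1], edges[1:]))


def pe_ratio(obs, fit, interval):
    """Predicted-to-expected frequency ratio for one suitability interval."""
    lo, hi = interval
    predicted = np.mean((obs >= lo) & (obs <= hi))  # frequency at presence points
    expected = np.mean((fit >= lo) & (fit <= hi))   # frequency across all evaluated points
    return predicted / expected if expected > 0 else np.nan


def boyce_index(obs, fit, nclass=10):
    """Return a single value: the Spearman correlation between
    P/E ratios and interval midpoints."""
    intervals = compute_intervals(fit, nclass)
    ratios = np.array([pe_ratio(obs, fit, iv) for iv in intervals])
    mids = intervals.mean(axis=1)
    valid = ~np.isnan(ratios)
    corr, _ = spearmanr(ratios[valid], mids[valid])
    return corr
```

Plotting would then live in a separate function that consumes the ratios and midpoints, keeping the metric itself side-effect free.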
elapid/evaluate.py
Outdated
# implement Boyce index as described in https://www.whoi.edu/cms/files/hirzel_etal_2006_53457.pdf (Eq. 4)
def boycei(interval, obs, fit):
I might prefer renaming these functions `boyce_index` and `continuous_boyce_index` to differentiate (`boycei`/`boyce_index` are easily confused)
elapid/evaluate.py
Outdated
Args:
    interval (tuple or list): Two elements representing the lower and upper bounds of the interval.
    obs (numpy.ndarray): Observed suitability values (i.e., predictions at presence points).
    fit (numpy.ndarray): Suitability values (e.g., from a raster), i.e., predictions at presence + background points.
to better align with sklearn API design, prefer renaming the variables and ordering them as `boyce_index(yobs, ypred, interval)`
elapid/evaluate.py
Outdated
    return fi


def boyce_index(fit, obs, nclass=0, window="default", res=100, PEplot=False):
please rename and reorder `fit`, `obs` as `yobs`, `ypred`
elapid/evaluate.py
Outdated
    # Remove NaNs from fit
    fit = fit[~np.isnan(fit)]
elapid/evaluate.py
Outdated
    print(vec_mov)
    print(intervals)
remove debug print statements
elapid/evaluate.py
Outdated
        vec_mov = np.linspace(mini, maxi, num=nclass + 1)
        intervals = np.column_stack((vec_mov[:-1], vec_mov[1:]))
    else:
        raise ValueError("Invalid nclass value.")
a comment or two in this section would be useful to elucidate the different methods for computing the intervals
I refactored the code: intervals are now calculated in a new function, and it turned out `res` was not required at all. I also updated the arguments to more logical names.
elapid/evaluate.py
Outdated
    corr, _ = spearmanr(f_valid, intervals_mid)

    if PEplot:
I would prefer to keep plotting as a separate function, which makes for cleaner code.
elapid/evaluate.py
Outdated
    # Remove NaNs
    valid = ~np.isnan(f)
there seems to be a lot of nan checking. if the nans are removed from the initial arrays, what would lead to 'invalid' values? are we sure this isn't covering up some other issue in the calculations?
elapid/evaluate.py
Outdated
    results = {
        'F.ratio': f,
        'Spearman.cor': round(corr, 3) if not np.isnan(corr) else np.nan,
another suspicious nan check here. also, it seems unnecessary to round the correlation coefficient.
elapid/evaluate.py
Outdated
    predicted = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])

    # Observed presence suitability scores (e.g., predictions at presence points)
    observed = np.array([0.3, 0.7, 0.8, 0.9])
based on this example, it's not clear to me why users would want to pass in arrays of different lengths (where predicted and observed are not matching: one is presence+background, the other is just presence).
So I misinterpreted the equation. For the P/E ratio, you need the predicted frequency for each habitat suitability class, computed from presence-only data. The expected frequency is calculated from the random distribution only, i.e. at background points, and doesn't require predictions at presence points. I changed the code and docs to reflect this.
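A minimal worked example of the corrected ratio for a single class (the arrays and class bounds below are illustrative, not from the package):

```python
import numpy as np

# Sketch of the corrected P/E ratio for one habitat-suitability class
# (Hirzel et al. 2006, Eq. 4): the predicted frequency uses predictions
# at presence points only; the expected frequency uses predictions at
# background points only.
presence = np.array([0.3, 0.7, 0.8, 0.9])  # predictions at presence points
background = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])

lo, hi = 0.6, 0.8  # one suitability class (hypothetical bounds)
predicted = np.mean((presence >= lo) & (presence <= hi))     # P_i = 2/4
expected = np.mean((background >= lo) & (background <= hi))  # E_i = 3/10
f_ratio = predicted / expected  # > 1: presences over-represented in this class
```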
…e made at presence and background points, not presence and combined presence + background (Thanks for pointing that out, I misinterpreted the paper). Removed redundant NaN checks. Updated test cases, and example notebook.
Any updates?
elapid/evaluate.py
Outdated
    nbins (int | list, optional): Number of classes or a list of class thresholds. Defaults to 0.
    bin_size (float | str, optional): Width of the bin. Defaults to 'default', which sets the width to 1/10th of the fit range.
the interactions between these parameters aren't super clear to me. The default behavior sets nbins to 10, but setting 0 here doesn't mean that zero bins are estimated; it instead behaves as if no bins were passed at all, which depends on the 'default' parameter.
I might recommend one of the following:
- set the defaults as `nbins=10` and `bin_size=None`, support passing None for each, and throw an error if both are passed (since they are mutually exclusive)
- drop the `bin_size` argument and just go with `nbins`
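A rough sketch of the first option, using None sentinels so that passing both arguments can be detected (the name `make_intervals` and the fallback logic are illustrative, not a prescribed implementation):

```python
import numpy as np


def make_intervals(values, nbins=None, bin_size=None):
    """Build bin edges from either a bin count or a bin width, never both."""
    if nbins is not None and bin_size is not None:
        raise ValueError("nbins and bin_size are mutually exclusive; pass only one")
    lo, hi = np.min(values), np.max(values)
    if bin_size is not None:
        # derive the bin count from the requested width
        nbins = int(np.ceil((hi - lo) / bin_size))
    elif nbins is None:
        nbins = 10  # default behavior when neither is passed
    edges = np.linspace(lo, hi, nbins + 1)
    return np.column_stack((edges[:-1], edges[1:]))
```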
elapid/evaluate.py
Outdated
    mini, maxi = range

    if isinstance(bin_size, float):
        nbins = (maxi - mini) / bin_size
I get the following warning that appears to be driven by floating point precision. I don't think it's a problem, just flagging it:
ela.evaluate.continuous_boyce_index(y, ypred, bin_size=0.25)
/home/cba/src/elapid/elapid/evaluate.py:59: UserWarning: bin_size has been adjusted to nearest appropriate size using ceil, as range/bin_size : 0.9999999999983606 / 0.25 is not an integer.
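One way to avoid the spurious warning would be to snap near-integer ratios before applying ceil; a sketch (the helper name and tolerance are arbitrary choices, not part of elapid):

```python
import math


def nbins_from_bin_size(lo, hi, bin_size, rel_tol=1e-9):
    """Convert a bin width to a bin count, tolerating floating point noise."""
    ratio = (hi - lo) / bin_size
    if math.isclose(ratio, round(ratio), rel_tol=rel_tol):
        return int(round(ratio))  # treat e.g. 3.9999999999 as exactly 4 bins
    return math.ceil(ratio)       # genuinely fractional: round up
```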
elapid/evaluate.py
Outdated
    results = {
        "F.ratio": f_scores,
        "Spearman.cor": corr,
        "HS": intervals,
    }

    return results
my personal preference is to avoid returning a dictionary, but instead return the three values as a tuple, so users can specify:
f_ratio, cor, hs = ela.evaluate.continuous_boyce_index(presence, background)
earth-chris
left a comment
Looks much better, @PC-FSU, thank you for the improvements! One more round and I think it's there. There are a few small updates I've requested in code, but I'll make one more here regarding the notebook.
The example provided uses some dummy array data, which does not provide much insight into what the index does in a real-life context. Would you please use modeled predictions from the notebook to demonstrate what it can tell you beyond what you see from the AUC results?
elapid/evaluate.py
Outdated
    yobs: Union[np.ndarray, pd.Series, gpd.GeoSeries],
    ypred: Union[np.ndarray, pd.Series, gpd.GeoSeries],
I think I had misunderstood the original implementation, and it looks like these are both actually predicted values, just predictions at different locations (ypred at presence and background sites).
I might prefer we rename these variables to be a bit more clear (like `ypred_observed` and `ypred_background`).
@PC-FSU let me know if you plan to finish up this contribution. If not, I'll go ahead and merge it and make the remaining updates. Cheers,
@earth-chris Please give me some time. I will be able to push it after 2nd April.
I've added the suggestions you mentioned. I also spent some time thinking about how to improve the explanation of the Boyce Index in the notebook, but I couldn't come up with a concise way to do it. Could you clarify what you had in mind? The example in the notebook feels fairly limited.
I've reviewed this PR again and want to document the next steps. @PC-FSU has contributed a large PR with several pieces. There are still a number of changes I want to make to simplify these new features, however. I think this will best be done by merging this PR and then submitting the new changes. These include:
PR Description: Add Continuous Boyce Index Calculation and Test Cases
Summary:
In this pull request, I have added functionality to calculate the continuous Boyce index as described in Hirzel et al. (2006). This method provides a reliable way to evaluate habitat suitability models, specifically for presence-only data. Along with the implementation, I have also added test cases to ensure the correctness and robustness of the new function.
Key Updates:
Boyce Index Calculation:
Test Cases:
Notebook Update:
Updated `WorkingWithGeospatialData.ipynb` to include a detailed example demonstrating how to use the continuous Boyce index function.
Testing:
The test cases ensure that the continuous Boyce index function works as expected. The test cases cover:
This PR enhances the project by providing a robust and well-tested method to evaluate habitat suitability models using presence-only data, with clear examples in the updated notebook.