
Robust matched filter #138

Open

jpolchlo wants to merge 4 commits into master from feature/robust-matched-filter

Conversation

@jpolchlo
Contributor

Overview

Target detection requires a target spectrum to search for, but it is not always easy to acquire one that is likely to match what appears in a given scene. Available spectra may be lab-sampled, and so lack the characteristics of the target substance as it appears in the environment or as it is observed by a specific remote sensor. This can be mitigated to some extent by using a robust matched filter, which allows the detected signature to deviate from the supplied signature by a small amount (a bound is placed on ‖s - s0‖, where s0 is the supplied target spectrum and s is the true target spectrum).
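As a rough illustration of the closed form this approach leads to (names here are made up for the sketch and are not the PR's API; cf. the eq.-16 form from Manolakis, et al. discussed in the review below):

```python
import numpy as np

def rmf_weights(Sigma, s0, zeta):
    """Robust matched filter weights for a fixed Lagrange multiplier zeta.

    Uses the closed form w = M^{-1} s0 / (s0^T M^{-1} Sigma M^{-1} s0)
    with M = Sigma - (1/zeta) I. Requires zeta large enough that M stays
    positive definite.
    """
    k = Sigma.shape[0]
    Minv = np.linalg.inv(Sigma - (1.0 / zeta) * np.eye(k))
    return (Minv @ s0) / (s0 @ Minv @ Sigma @ Minv @ s0)

# Toy usage on synthetic data (all values illustrative).
rng = np.random.default_rng(0)
k = 6
Sigma = np.diag(rng.uniform(0.5, 2.0, size=k))  # stand-in background covariance
s0 = rng.normal(size=k)                         # supplied target spectrum
w = rmf_weights(Sigma, s0, zeta=1e4)
```

As zeta grows, M⁻¹ approaches Σ⁻¹ and the weights reduce to the ordinary matched filter, so zeta effectively controls how far the detected signature may drift from s0.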

This PR provides an implementation of the robust matched filter.

Closes #118

Demo

See the included notebook.

Notes

When dealing with smaller samples in the generation of a background covariance model, it is advisable to use a shrinkage estimator for the covariance. There was an implementation in this library, but it hadn't been used extensively. It is not a numerically robust estimator, and it especially seems to fail for higher-dimensional inputs, which is unfortunately our use case. The recommendation is to switch to sklearn.covariance.LedoitWolf to obtain a shrunk estimate.
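For reference, a minimal sketch of what that switch could look like (synthetic data; this is not the PR's code):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(42)
# Few background samples relative to the number of bands -- the regime
# where the plain sample covariance is ill-conditioned and shrinkage helps.
X = rng.normal(size=(60, 40))  # 60 background pixels, 40 spectral bands

lw = LedoitWolf().fit(X)
Sigma = lw.covariance_  # shrunk covariance estimate, shape (40, 40)

# Ledoit-Wolf shrinks toward a scaled identity, which keeps the
# estimate well-conditioned even when samples are scarce.
print(lw.shrinkage_)    # estimated shrinkage intensity in [0, 1]
```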

Checklist

  • CI passes after rebase
  • README.md updated if necessary to reflect the changes

@jamesmcclain
Contributor

👀

@jamesmcclain
Contributor

I just have a few general comments.

  1. If the MIF notebook changed in this PR, please describe how.
  2. It was nice to see the maximum likelihood spectrum make an appearance in the RMF notebook. It would also be interesting to test the default plastic spectrum from the library. Just looking at the inferred spectrum from the gist (picture below), it looks like more iteration might have been needed to get a good spectrum (assuming that the method is capable of providing one).
  3. Since the general scheme in Manolakis, et al. is pretty formal (but also has parameters), it might be interesting to empirically estimate some of those parameters using Pytorch (I am thinking of the loading factor and the positive definite matrix that defines the ellipse, in particular).
  4. The general idea of trading decreased variance for increased bias is interesting. I think that the Pytorch-based optimization of the whitening operator can be thought of as expressing a similar idea -- the Pytorch code tries to minimize the BCE loss (which is kind of like variance) and has the option to do so by squeezing some of that into a bias parameter. It might be interesting to constrain that a bit more formally using some of the ideas from these papers.

[Image: inferred spectrum from the gist]
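To make point 3 concrete, here is a toy sketch (entirely illustrative: synthetic data, made-up names, and plain diagonal loading δI standing in for the ellipse matrix) of estimating a loading factor by gradient descent on a BCE loss in Pytorch:

```python
import torch

torch.manual_seed(0)
k = 8
Sigma = 0.5 * torch.eye(k)               # toy background covariance
s0 = torch.randn(k)                      # toy target spectrum
bg = torch.randn(200, k) * (0.5 ** 0.5)  # background pixels ~ N(0, Sigma)
tg = bg[:20] + s0                        # pixels containing the target
X = torch.cat([bg, tg])
y = torch.cat([torch.zeros(200), torch.ones(20)])

# Learn the loading factor delta (log-parameterized to keep it positive).
log_delta = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([log_delta], lr=0.1)

for _ in range(100):
    delta = log_delta.exp()
    Minv = torch.linalg.inv(Sigma + delta * torch.eye(k))  # diagonal loading
    w = (Minv @ s0) / (s0 @ Minv @ s0)   # loaded matched filter weights
    logits = X @ w - 0.5                 # crude detection score + offset
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Since torch.linalg.inv is differentiable, the loss gradient flows through the filter weights back to the loading factor; the same pattern could in principle extend to a full learnable positive definite matrix.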


@jamesmcclain left a comment


I think that a few more comments in the library code could help. I tried to map some of the functions to equations in Manolakis, et al.: if those mappings are correct I think they should appear as comments in the code.

r0 = np.trace(np.dot(Ginv, S))
return (p * math.log(2 * math.pi) + math.log(detG)
        + math.log(1 - β * r0) + r0 / (1 - β * r0)) / 2

This looks useful.

return np.divide(np.einsum('i,rci->rc', s̃, X̃),
                 np.sqrt(np.einsum('rci,rci->rc', X̃, X̃))) / math.sqrt(np.inner(s̃, s̃))


def quad_form(ζ, s̃, λ, ε):

Some discussion/description in a comment would be welcome here. It is not clear which, if any, of the references this is associated with.


It looks like you are finding the Lagrange multiplier as in equation 13 of Manolakis, et al.?


Minv = np.linalg.inv(Σ - (1/ζ) * np.eye(k))

return np.matmul(Minv, s0) / np.inner(s0, np.matmul(np.matmul(np.matmul(Minv, Σ), Minv), s0))

Equation 16 from Manolakis, et al.

@jamesmcclain
Contributor

@jpolchlo I added the PRISMA notebooks to this PR branch; please feel free to remove the commit if you do not want those notebooks in this branch.

@jamesmcclain
Contributor

@jpolchlo Should we try to get this merged?



Development

Successfully merging this pull request may close these issues.

Implement Robust Matched Filter

2 participants