Update the Fitting of Peaks in Texture Analysis Pipeline #39931
Conversation
Force-pushed 04df3b3 to 6d6974d
…39929)

### Description of work

Currently the system test is failing on macOS, giving quite different parameter values. This initial PR just fixes the issue by not validating parameter values on macOS and adding a temporary warning to users on macOS (at least, that is what I am trying to have it do). I have a follow-up PR, #39931, which was looking at improving fit reliability anyway, so that will hopefully go in soon as a better long-term solution.

### To test:

Please try running the system test `TextureAnalysisScriptTest.py` on macOS and check it doesn't fail.

Co-authored-by: thomashampson <[email protected]>
👋 Hi, @andy-bridger, Conflicts have been detected against the base branch. Please rebase your branch against the base branch.
Force-pushed 6d6974d to 6168b28
Force-pushed 6168b28 to 63230ef
Force-pushed efc9763 to 8ef3f2c
Another thing that might improve the fitting by reducing noise in the data (possibly separate to this PR) is using the algorithm
Van norm is just being applied as it would be in the Engineering Diffraction interface, so it is applied in the
Force-pushed 0ad183b to 877d655
Thanks, the results look a lot better, and fitting in TOF is, I think, more numerically stable (though I have also fitted in d). I know there's a rush, so I don't want to hold things up too much - feel free to push back on any of the comments!
I still have a nagging feeling that it should be possible to use exactly the same code as `IntegratePeaks1DProfile` for fitting. I know you have inherited from the relevant class, `TexturePeakFunctionGenerator(PeakFunctionGenerator)` - perhaps `PeakFunctionGenerator` could have the common bits refactored out. In any case, that's an issue for maintenance!
```python
lower.append(xdat.min())
upper.append(xdat.max())
step.append(np.diff(xdat).max())
return f"{np.max(lower)}, {np.max(step)}, {np.min(upper)}"
```
This is a difficult thing to do in practice, but I have a few thoughts.

- I think `Rebin` will accept these parameters as a list, which might make your life easier!
- Is it expected that the bin edges of a given spectrum will be different in different workspaces (i.e. can you just use the first workspace)?
- Have these workspaces already been cropped to the region around the Bragg peak? If not, then you could lose a lot of resolution by taking the maximum bin-width over the range of the data - this will particularly affect low d-spacing peaks at backscattering, and perhaps even affect the fit (despite improved stats).
- If you do want to do this over the whole of the data range, it would probably be better to preserve the log-binning rather than use a constant bin width.
- I think one could do something similar (perhaps not exactly equivalent) using `extractX` (for behaviour with ragged workspaces see #39932 (comment)):
```python
lower, upper, bin_width = -np.inf, np.inf, -np.inf
for ws in wss:
    xdat = ws.extractX()  # loop variable, not ws_ceria, is intended here
    lower = max(xdat[xdat > 0].min(), lower)
    upper = min(xdat.max(), upper)
    bin_width = max(np.diff(xdat, axis=1).max(), bin_width)
```
or get dx from the number of bins in each spectrum, or even better, actually save the maximum dx/x for log binning?
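As a sketch of that log-binning idea (assuming Mantid's `Rebin` convention that a negative step means logarithmic bins, a hypothetical workspace object exposing `extractX()`, and strictly positive bin edges), the maximum fractional bin width dx/x could be turned into a params string like this:

```python
import numpy as np

def log_rebin_params(ws_list):
    """Build a Rebin params string 'lower, -dx_over_x, upper' spanning the
    full range of all workspaces, preserving log-binning via a negative step.

    Illustrative only: assumes each ws.extractX() returns a 2D array of
    positive bin edges, one row per spectrum.
    """
    lower, upper, frac = np.inf, -np.inf, 0.0
    for ws in ws_list:
        x = ws.extractX()
        lower = min(lower, x[x > 0].min())
        upper = max(upper, x.max())
        # largest fractional bin width dx/x seen in any spectrum (row-wise diff)
        frac = max(frac, float((np.diff(x, axis=1) / x[:, :-1]).max()))
    return f"{lower}, {-frac}, {upper}"
```

The resulting string could then be passed to `Rebin` so every spectrum keeps at least its coarsest logarithmic resolution.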
A different approach could be to estimate I/sigma for all focussed spectra for a given peak (using my favourite summation...after rough estimate of avg. bg) then fit spectrum with max I/sigma to better constrain parameters? Then you don't need to worry about rebinning?
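The I/sigma ranking suggested here could look something like the sketch below (illustrative names, not code from this PR; assumes spectra already cropped around the peak, a flat background estimated from the outermost bins, and errors summed in quadrature):

```python
import numpy as np

def i_over_sigma(y, e, n_edge=5):
    # flat background from the outermost bins on each side of the cropped window
    bg = 0.5 * (y[:n_edge].mean() + y[-n_edge:].mean())
    intensity = (y - bg).sum()           # background-subtracted summed counts
    sigma = np.sqrt((e ** 2).sum())      # errors added in quadrature
    return intensity / sigma

def best_spectrum(ys, es):
    # index of the focussed spectrum with the largest I/sigma for this peak
    return int(np.argmax([i_over_sigma(y, e) for y, e in zip(ys, es)]))
```

Fitting only the highest-I/sigma spectrum first to constrain the shared parameters would indeed sidestep the rebinning question.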
I like the idea of keeping the summed spectra rather than just the best individual one (feels like it should give better stats if you have a peak that is frequent but never particularly strong?), but agree on all the other points and have updated to crop around peaks first and then work from there.
Yeh definitely important to get decent stats - but fitting the summed spectra could be problematic for textured data where the peak centre is very different. This will make the peak shape very weird and broad. You're also averaging over different resolutions, but I don't think that matters as much as the effect of the peak centres, as ENGINX does not have a large range of two-theta measured (though I know fitting a single strongest spectrum will also have a different resolution to the other two-theta groupings). I guess we could also be thinking ahead to other instruments (e.g. GEM) where a larger range of two-theta are measured and we wouldn't want to focus the entire instrument (or we could cross that bridge when/if we get to it!)
I'm happy to stick with fitting summed spectrum for now though!
```python
        fit_wss.append(_rebin_and_rebunch(ws_tof, smooth_val))
        bkg_is_tied.append(tied_bkgs[i])
    else:
        # if no smoothing values are given, the initial fit should just be on the ws
```
Does this mean there is no option to initially fit summed spectrum without rebunching?
Sorry, probably not very clear - I've updated it a bit. The initial summed fit is always done, and this is without rebunching.
You can then do the individual spectra fits, and these can be done either with an iterative series of rebunches or without rebunching entirely.
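For reference, the core of a single rebunch (summing each group of n adjacent bins, in the spirit of Mantid's `Rebunch` algorithm for counts data) can be sketched as below; the iterative variant would simply repeat the fit with increasing n. This is an illustrative sketch, not the PR's implementation, and unlike the real algorithm it simply drops any remainder bins:

```python
import numpy as np

def rebunch_counts(y, n):
    # sum each group of n adjacent bins; trailing bins that don't fill a
    # whole group are dropped in this sketch
    m = (len(y) // n) * n
    return y[:m].reshape(-1, n).sum(axis=1)
```

Each rebunch improves counting statistics per bin at the cost of resolution, which is why a single aggressive rebunch may wash out narrow peaks that a gentler, iterative approach could still fit.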
Why the need for iterative rebunches, rather than a single rebunch?
Functional testing worked, I think!
Here are the pole figures I get for the peak at ~2.035 Ang in a few runs (364902, 11, 20, 26, 29, 35, 44) - which I think are aluminium?

[pole figure screenshot]

Does that look right?
It might be hard to tell, as there are not many points on the pole figure.

[second pole figure screenshot]
A couple of thoughts (to be addressed in a different PR):

- It's difficult to inspect the fits - so it's difficult to know whether I can unfix any parameters.
- Thinking about it, fitting all spectra from a given run is not ideal - we want to be fitting a given group/spectrum across all runs, such that e.g. A and B are expected to be the same (as these depend on the instrument, not the sample) and can be global in the fit. Practically, I don't know whether that will work when you don't have that many rotations measured (e.g. at the beginning of an experiment or in a system test). To be discussed!
- As discussed many times, it would be better to fix A and B in a given group at values extracted from ceria fits! Either by parameterising the resolution/peak profile across the detector, or with a lookup table as per Florencia!

I think we can do B and S well (though S from ceria would be a lower bound on the S of a typical sample). A is a bit more tricky - but possibly not as important?
Approving on the assumption that unit tests pass.
The base branch was changed.
Force-pushed 60d0184 to 052a1fe
RichardWaiteSTFC left a comment:
Approved - just loosened the tolerance on 2 system tests (some small differences between my local windows and linux).
I think something's gone wrong with a rebase here - the commits tab is showing merge commits from
# Conflicts:
#   scripts/Engineering/texture/TextureUtils.py
Test passes locally on Windows, but requires a slightly larger tolerance on Linux. Also put the Mixin classes first in the inheritance in the class def.
Force-pushed 3674af4 to 64feef5
cailafinn left a comment:
Test still passing, code looks good to me.
Description of work

Currently, fitting is somewhat flaky and unreliable, largely owing to two factors: peaks might be very weak or nonexistent (i.e. the sample has texture), and fitting in d-spacing doesn't give a nice order of magnitude for the step size of A/B.
To try to address these I've done a bit of an update: firstly, fitting in TOF and then converting back to d; and secondly, providing better starting parameters and generally improving the fit approach.
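For context on the fit-in-TOF-then-convert-back approach (not code from this PR): Mantid relates d-spacing and TOF through GSAS-style diffractometer constants, so peak centres fitted in TOF can be mapped back to d afterwards. A minimal sketch:

```python
def d_to_tof(d, difc, difa=0.0, tzero=0.0):
    # GSAS convention used by Mantid: TOF = DIFC*d + DIFA*d**2 + TZERO
    return difc * d + difa * d ** 2 + tzero

def tof_to_d(tof, difc):
    # inverse for the common DIFA = TZERO = 0 case
    return tof / difc
```

Because DIFC is typically of order 10^3-10^4 us/Ang, working in TOF also gives the A/B profile parameters more convenient orders of magnitude for the minimizer's step sizes.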
To test:

Firstly, ideally you would run the system test (`TextureAnalysisScriptTest.py`) on macOS. Secondly, func test this script:
Reviewer
Your comments will be used as part of the gatekeeper process. Comment clearly on what you have checked and tested during your review. Provide an audit trail for any changes requested.
As per the review guidelines:
Gatekeeper
As per the gatekeeping guidelines: