
Conversation

@andy-bridger
Collaborator

Description of work

Currently, fitting is somewhat flaky and unreliable, largely owing to two factors: peaks may be very weak or nonexistent (i.e. the sample has texture), and fitting in d-spacing doesn't give a nice order of magnitude for the step size of A/B.

To address these I've made an update: firstly, fitting in TOF and then converting back to d-spacing; and secondly, providing better starting parameters and generally improving the fit approach.
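For reference, moving a focussed spectrum between d-spacing and TOF can be done with the standard ConvertUnits algorithm. This is only a generic sketch, not the actual TextureUtils code, and the workspace name is a placeholder:

# Minimal sketch only: converting a focussed spectrum between d-spacing and TOF.
from mantid.simpleapi import ConvertUnits

ws_tof = ConvertUnits(InputWorkspace="focussed_dspacing_ws", Target="TOF", OutputWorkspace="ws_tof")
# ...fit the peaks in TOF here...
ws_d = ConvertUnits(InputWorkspace=ws_tof, Target="dSpacing", OutputWorkspace="ws_d")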

To test:

Firstly, ideally you would run the system tests (TextureAnalysisScriptTest.py) on macOS.

Secondly, functionally test this script:

# import mantid algorithms, numpy and matplotlib
from mantid.simpleapi import *
import matplotlib.pyplot as plt
import numpy as np
from mantid.api import AnalysisDataService as ADS
from os import path
from Engineering.texture.TextureUtils import find_all_files, fit_all_peaks, mk
from Engineering.common.calibration_info import CalibrationInfo
from Engineering.EnggUtils import GROUP

############### ENGINEERING DIFFRACTION INTERFACE FITTING ANALOGUE #######################

######################### EXPERIMENTAL INFORMATION ########################################

# First, you need to specify your file directories. If you are happy to use the same root from experiment
# to experiment, you can just change this experiment name.
exp_name = "PostExp-SteelCentre"

# otherwise set root directory here:
root_dir = fr"C:\Users\kcd17618\Engineering_Mantid\User\{exp_name}"

# Next, the folder containing the workspaces you want to fit
file_folder = "Focus"
# These are likely within a sub-folder specified by the detector grouping
grouping = "Texture30"
prm_path = None
groupingfile_path = None

# You also need to specify a name for the folder the fit parameters will be saved in
fit_save_folder = "ScriptFitParameters-FitTest2"

# Provide a list of peaks that you want to be fit within the spectra
peaks = [1.44] # steel
#peaks = [2.8, 2.575, 2.455, 1.89, 1.62, 1.46] # zr

# The fitting has a couple of parameters that deal with peaks that are missing as a result of the texture.
# The first parameter is i_over_sigma_thresh - this sets the minimum value of I/sigma for a fit to be considered a valid peak.
# Any invalid peak will have its parameters set to nan by default, but these nans can be overwritten by no_fit_value_dict and nan_replacement.
# no_fit_value_dict maps fitted parameter names to the value an unfit peak should take, e.g. {"I": 0.0} - if you can't fit the intensity,
# set the value directly to 0.0.
# nan_replacement then happens after this: if a nan_replacement method is given, any parameters without an unfit value provided will have their nans replaced
# either with "zeros", or with the min/max/mean value of that parameter (Note: if all the values are nan, the value will remain nan).
# An illustrative sketch of this replacement step is shown after the script.

i_over_sigma_thresh = 3.0
no_fit_value_dict = {"I": 0.0, "I_est": 0.0}
nan_replacement = "mean"

######################### RUN SCRIPT ########################################

# create output directory
fit_save_dir = path.join(root_dir, fit_save_folder)
mk(fit_save_dir)

# find and load peaks

# get grouping directory name
calib_info = CalibrationInfo(group=GROUP(grouping))
if groupingfile_path:
    calib_info.set_grouping_file(groupingfile_path)
elif prm_path:
    calib_info.set_prm_filepath(prm_path)
group_folder = calib_info.get_group_suffix()
focussed_data_dir = path.join(root_dir, file_folder, group_folder, "CombinedFiles")
focus_ws_paths = find_all_files(focussed_data_dir)
focus_wss = [path.splitext(path.basename(fp))[0] for fp in focus_ws_paths]
for iws, ws in enumerate(focus_wss):
    if not ADS.doesExist(ws):
        Load(Filename=focus_ws_paths[iws], OutputWorkspace=ws)


# execute the fitting
fit_all_peaks(
    focus_wss,
    peaks,
    0.05,
    fit_save_dir,
    i_over_sigma_thresh=i_over_sigma_thresh,
    nan_replacement=nan_replacement,
    no_fit_value_dict=no_fit_value_dict,
    smooth_vals=(3, 2),
    tied_bkgs=(False, False),
    final_fit_raw=False,
)
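For clarity, here is a minimal numpy sketch of the intended nan_replacement behaviour described above. It is illustrative only: replace_nans is a hypothetical helper, and the real logic lives inside fit_all_peaks in Engineering.texture.TextureUtils.

# Illustrative sketch only (not the actual TextureUtils implementation) of the
# nan_replacement semantics, applied to one fitted parameter across spectra.
import numpy as np

def replace_nans(values, method="mean"):
    values = np.asarray(values, dtype=float).copy()
    mask = np.isnan(values)
    if not mask.any() or mask.all():
        # nothing to replace, or every value is nan (in which case the nans remain)
        return values
    if method == "zeros":
        fill = 0.0
    else:
        fill = {"min": np.nanmin, "max": np.nanmax, "mean": np.nanmean}[method](values)
    values[mask] = fill
    return values

print(replace_nans([1.0, np.nan, 3.0], "mean"))  # -> [1. 2. 3.]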










Reviewer

Your comments will be used as part of the gatekeeper process. Comment clearly on what you have checked and tested during your review. Provide an audit trail for any changes requested.

As per the review guidelines:

  • Is the code of an acceptable quality? (Code standards/GUI standards)
  • Has a thorough functional test been performed? Do the changes handle unexpected input/situations?
  • Are appropriately scoped unit and/or system tests provided?
  • Do the release notes conform to the guidelines and describe the changes appropriately?
  • Has the relevant (user and developer) documentation been added/updated?

Gatekeeper

As per the gatekeeping guidelines:

  • Has a thorough first line review been conducted, including functional testing?
  • At a high-level, is the code quality sufficient?
  • Are the base, milestone and labels correct?

@andy-bridger andy-bridger force-pushed the texture-fit-tof-fitting branch from 04df3b3 to 6d6974d Compare September 12, 2025 10:22
rboston628 pushed a commit that referenced this pull request Sep 12, 2025
…39929)

### Description of work

Currently the system test is failing on macOS, giving quite different
parameter values. This initial PR just fixes the issue by not validating
parameter values on mac and adding a temporary warning to users on mac
(at least that is what I am trying to have it do).

I have a follow-up PR #39931 which was looking at improving fit
reliability anyway, so that will hopefully go in soon as a better
long-term solution.

### To test:

Please try running the system test `TextureAnalysisScriptTest.py` on
macOS and check it doesn't fail.


Co-authored-by: thomashampson <[email protected]>
@github-actions
Contributor

👋 Hi, @andy-bridger,

Conflicts have been detected against the base branch. Please rebase your branch against the base branch.

@github-actions github-actions bot added the Has Conflicts label Sep 12, 2025
@andy-bridger andy-bridger force-pushed the texture-fit-tof-fitting branch from 6d6974d to 6168b28 Compare September 16, 2025 12:39
@github-actions github-actions bot removed the Has Conflicts label Sep 16, 2025
@github-actions github-actions bot added the Has Conflicts label Oct 27, 2025
@github-actions
Contributor

👋 Hi, @andy-bridger,

Conflicts have been detected against the base branch. Please rebase your branch against the base branch.

@andy-bridger andy-bridger force-pushed the texture-fit-tof-fitting branch from 6168b28 to 63230ef Compare January 12, 2026 11:06
@github-actions github-actions bot removed the Has Conflicts label Jan 12, 2026
@andy-bridger andy-bridger force-pushed the texture-fit-tof-fitting branch from efc9763 to 8ef3f2c Compare January 13, 2026 07:11
@andy-bridger andy-bridger added this to the Release 6.16 milestone Jan 19, 2026
@RichardWaiteSTFC
Contributor

RichardWaiteSTFC commented Jan 20, 2026

Another thing that might improve the fitting by reducing noise in the data (possibly separate to this PR) is using the algorithm SplineBackground on the vanadium, e.g. something like the sketch below (though you'd hope the vanadium is counted a lot longer than the sample runs!). Is the vanadium normalisation happening on the raw data or the focussed data?
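A minimal sketch only - the workspace names and NCoeff value are placeholders, not part of this PR:

# Smooth a focussed vanadium spectrum with SplineBackground before it is used for normalisation.
from mantid.simpleapi import SplineBackground

SplineBackground(
    InputWorkspace="van_focussed",  # focussed vanadium workspace (assumed name)
    OutputWorkspace="van_smoothed",
    WorkspaceIndex=0,               # spectrum to smooth
    NCoeff=20,                      # number of b-spline coefficients; tune to the noise level
)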

@andy-bridger
Collaborator Author

Van norm is just being applied as it would be in the Engineering Diffraction interface, so it is applied in the focus call, and I believe it happens after the data has been focussed...

@github-actions github-actions bot added the Has Conflicts label Jan 21, 2026
@github-actions
Contributor

👋 Hi, @andy-bridger,

Conflicts have been detected against the base branch. Please rebase your branch against the base branch.

@andy-bridger andy-bridger force-pushed the texture-fit-tof-fitting branch from 0ad183b to 877d655 Compare January 21, 2026 15:25
@andy-bridger andy-bridger marked this pull request as ready for review January 21, 2026 15:25
@andy-bridger andy-bridger removed the Has Conflicts label Jan 21, 2026
@andy-bridger
Collaborator Author

andy-bridger commented Jan 21, 2026

Improvement: results from nearby points in different scans are more coherent, and fewer points are being dropped:

Before Al (200):
[images: 200_ENGIN-X_364749-364857_Texture30 pole figure I contour, I scatter, X0 scatter]

After Al (200):
[images: the same three plots after the change]

Before Al (222):
[images: 222_ENGIN-X_364749-364857_Texture30 pole figure I contour, I scatter, X0 scatter]

After Al (222):
[images: the same three plots after the change]

Contributor

@RichardWaiteSTFC RichardWaiteSTFC left a comment


Thanks, the results look a lot better, and I think fitting in TOF is more numerically stable (though I have also fitted in d). I know there's a rush so I don't want to hold things up too much, so feel free to push back on any of the comments!

I still have a nagging feeling that it should be possible to use exactly the same code as IntegratePeaks1DProfile for fitting. I know you have inherited from the relevant class TexturePeakFunctionGenerator(PeakFunctionGenerator) - perhaps PeakFunctionGenerator could have the common bits refactored out. In any case that's an issue for maintenance!

lower.append(xdat.min())
upper.append(xdat.max())
step.append(np.diff(xdat).max())
return f"{np.max(lower)}, {np.max(step)}, {np.min(upper)}"
Contributor


This is a difficult thing to do in practice, but I have a few thoughts.

  1. I think Rebin will accept these parameters as a list, which might make your life easier (see the small sketch after the code below)!
  2. Is it expected that the bin edges of a given spectrum will be different in different workspaces (i.e. can you just use the first workspace)?
  3. Have these workspaces already been cropped to the region around the Bragg peak? If not, then you could lose a lot of resolution by taking the maximum bin width over the range of the data - this will particularly affect low d-spacing peaks at backscattering and perhaps even affect the fit (despite improved stats).
  4. If you do want to do this over the whole of the data range it would probably be better to preserve the log-binning rather than use a constant bin width.
  5. I think one could do something similar (perhaps not exactly equivalent) using extractX (for behaviour with ragged workspaces see #39932 (comment), "New instrument view single plotter in GUI and detector table"):
lower, upper, bin_width = -np.inf, np.inf, -np.inf
for ws in wss:
    xdat = ws.extractX()
    lower = max(xdat[xdat > 0].min(), lower)
    upper = min(xdat.max(), upper)
    bin_width = max(np.diff(xdat, axis=1).max(), bin_width)

or get the bin width from the number of bins in each spectrum, or even better save the maximum dx/x for log binning?
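As a minimal sketch of point 1 (reusing the limits computed in the loop above and assuming wss[0] as the target workspace), Rebin's Params property accepts the list directly:

from mantid.simpleapi import Rebin

# no need to build the "min, step, max" string - Params accepts [xmin, dx, xmax]
rebinned = Rebin(InputWorkspace=wss[0], Params=[lower, bin_width, upper], OutputWorkspace="rebinned_ws")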

A different approach could be to estimate I/sigma for all focussed spectra for a given peak (using my favourite summation... after a rough estimate of the avg. background) and then fit the spectrum with max I/sigma to better constrain parameters? Then you don't need to worry about rebinning?

Collaborator Author


I like the idea of keeping the summed spectra rather than just the best individual one (it feels like it should give better stats if you have a peak that is frequent but never particularly strong?), but I agree on all the other points and have updated to crop around peaks first and then work from there.

Contributor

@RichardWaiteSTFC RichardWaiteSTFC Jan 22, 2026


Yeh definitely important to get decent stats - but fitting the summed spectra could be problematic for textured data where the peak centre is very different. This will make the peak shape very weird and broad. You're also averaging over different resolutions, but I don't think that matters as much as the effect of the peak centres, as ENGINX does not have a large range of two-theta measured (though I know fitting a single strongest spectrum will also have a different resolution to the other two-theta groupings). I guess we could also be thinking ahead to other instruments (e.g. GEM) where a larger range of two-theta are measured and we wouldn't want to focus the entire instrument (or we could cross that bridge when/if we get to it!)

Contributor


I'm happy to stick with fitting summed spectrum for now though!

fit_wss.append(_rebin_and_rebunch(ws_tof, smooth_val))
bkg_is_tied.append(tied_bkgs[i])
else:
# if no smoothing values are given, the initial fit should just be on the ws
Contributor


Does this mean there is no option to initially fit the summed spectrum without rebunching?

Collaborator Author


Sorry, probably not very clear; I've updated it a bit. The initial summed fit is always done, and this is without rebunching.
You can then do the individual spectra fits, and these can be done with an iterative series of rebunches (roughly as sketched below) or without rebunching entirely.
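As an illustration only - ws_tof and the fit step are placeholders, and the real logic lives in _rebin_and_rebunch / fit_all_peaks:

from mantid.simpleapi import Rebunch

smooth_vals = (3, 2)  # e.g. as passed to fit_all_peaks
for n_bunch in smooth_vals:
    # sum groups of n_bunch adjacent bins to smooth the spectrum before refitting
    ws_smoothed = Rebunch(InputWorkspace=ws_tof, NBunch=n_bunch, OutputWorkspace=f"ws_bunch_{n_bunch}")
    # ...fit ws_smoothed here, feeding the result in as starting parameters for the next pass...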

Contributor


Why the need for iterative rebunches, rather than a single rebunch?

Contributor

@RichardWaiteSTFC RichardWaiteSTFC left a comment


Functional testing worked I think!
Here are the pole figures I get for the peak at ~2.035 Ang in a few runs (364902, 11, 20, 26, 29, 35, 44) - which I think are aluminium?
[image: pole figure]
Does that look right?
It might be hard to tell as there are not many points on the pole figure.
[image: pole figure]

A couple of thoughts (to be addressed in a different PR):

  1. It's difficult to inspect the fits - so difficult to know whether I can unfix any parameters
  2. Thinking about it, fitting all spectra from a given run is not ideal - we want to be fitting a given group/spectrum across all runs - such that e.g. A and B are expected to be the same (as these depend on the instrument not the sample) and can be global in the fit. Practically I don't know whether that will work when you don't have that many rotations measured (e.g. at the beginning of an experiment or in a system test). To be discussed!
  3. As discussed many times - it would be better to fix A and B in a given group at values extracted from ceria fits! Either by parameterising the resolution/peak profile across the detector, or with a lookup table as per Florencia!

I think we can do B and S well (though S from ceria would be a lower bound on the S on a typical sample). A is a bit more tricky - but possibly not as important?


Approving on the assumption that unit tests pass.

@thomashampson thomashampson changed the base branch from main to release-next January 22, 2026 15:47
@thomashampson thomashampson dismissed RichardWaiteSTFC’s stale review January 22, 2026 15:47

The base branch was changed.

@RichardWaiteSTFC RichardWaiteSTFC self-assigned this Jan 23, 2026
Contributor

@RichardWaiteSTFC RichardWaiteSTFC left a comment


Approved - just loosened the tolerance on 2 system tests (some small differences between my local Windows and Linux).

@jclarkeSTFC
Contributor

I think something's gone wrong with a rebase here; the commits tab is showing merge commits from main as well as this branch.

@andy-bridger andy-bridger force-pushed the texture-fit-tof-fitting branch from 3674af4 to 64feef5 Compare January 27, 2026 11:29
cailafinn
cailafinn previously approved these changes Jan 27, 2026
Contributor

@cailafinn cailafinn left a comment


Test still passing, code looks good to me.

@cailafinn cailafinn enabled auto-merge (squash) January 27, 2026 14:31
@cailafinn cailafinn merged commit f12e9b0 into release-next Jan 27, 2026
10 checks passed
@cailafinn cailafinn deleted the texture-fit-tof-fitting branch January 27, 2026 15:05