
Add flag to FitPeaks and PDCalibration to control the parameter inheritance from successful fits #41021

Open

andy-bridger wants to merge 8 commits into main from update-fit-peaks-parameter-sharing

Conversation

@andy-bridger
Collaborator

@andy-bridger andy-bridger commented Mar 6, 2026

Description of work

I'm not sure whether this is a bug or just a slightly misleading comment, but the previous behaviour was:

Peaks are fit sequentially (either left to right or right to left), with the peak parameters initially taken from

  1. The same peak in another spectrum
  2. Either the algorithm's parameter estimation routines, or the defaults for that peak function (which can be good starting values if there is an appropriate formula in the INSTR_Parameter.xml file).

(This PR is meant to address when the first check is unsuccessful, either when the fit is performed on the first spectrum of the workspace, or it just happens that the peak in question hasn't been successfully fit elsewhere.)

Once a peak in the given spectrum has been fit, however, the parameters are then taken in the order:

  1. The same peak in another spectrum
  2. The parameters of the last peak in this spectrum to be successfully fit (with any parameters marked as fixed in the Parameter File overwriting those copied from the nearest peak)
  3. The defaults for that peak function if they exist on the IDF (which have the ability to be good if there is an appropriate formula in the INSTR_Parameter.xml file).
  4. The algorithm's internal parameter estimation routines

This copying from the last peak in this spectrum is, in general, a good way of setting physically sensible starting parameters for new fits, and it has been retained as the default behaviour.

The new functionality optionally turns off this parameter copying in situations where the peak parameters are highly wavelength dependent but are not marked as fixed in the Parameters File, so that copying the parameters from a peak at a potentially very different wavelength gives a worse starting position than the default or data-driven estimation. (It essentially lets the user skip step 2 and go straight to step 3.)
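As a rough illustration of the seeding order described above, here is a hypothetical Python sketch (the function and argument names are illustrative only; the real logic lives in Framework/Algorithms/src/FitPeaks.cpp):

```python
def seed_peak_parameters(same_peak_other_spectrum, last_good_in_spectrum,
                         idf_defaults, estimate, copy_last_good=True):
    """Pick starting parameters for a peak fit, in priority order.

    Hypothetical sketch of the cascade described above, not the actual
    FitPeaks implementation.
    """
    # 1. The same peak successfully fitted in another spectrum
    if same_peak_other_spectrum is not None:
        return same_peak_other_spectrum
    # 2. The last successful peak fit in this spectrum (skipped entirely
    #    when the CopyLastGoodPeakParameters flag is off)
    if copy_last_good and last_good_in_spectrum is not None:
        return last_good_in_spectrum
    # 3. Defaults from the instrument parameter file, if present
    if idf_defaults is not None:
        return idf_defaults
    # 4. Fall back to the algorithm's internal estimation routines
    return estimate()
```

With `copy_last_good=False`, a peak whose neighbours have very different (e.g. wavelength-dependent) parameters starts from the parameter-file defaults or the data-driven estimate instead.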

Specifically, this seems like it will be useful for setting up the calibration for IMAT (#40842)

To test:


Reviewer

Your comments will be used as part of the gatekeeper process. Comment clearly on what you have checked and tested during your review. Provide an audit trail for any changes requested.

As per the review guidelines:

  • Is the code of an acceptable quality? (Code standards/GUI standards)
  • Has a thorough functional test been performed? Do the changes handle unexpected input/situations?
  • Are appropriately scoped unit and/or system tests provided?
  • Do the release notes conform to the guidelines and describe the changes appropriately?
  • Has the relevant (user and developer) documentation been added/updated?
  • If the PR author isn’t in the mantid-developers or mantid-contributors teams, add a review comment rerun ci to authorize/rerun the CI

Gatekeeper

As per the gatekeeping guidelines:

  • Has a thorough first line review been conducted, including functional testing?
  • At a high-level, is the code quality sufficient?
  • Are the base, milestone and labels correct?

@andy-bridger andy-bridger added the Diffraction Issues and pull requests related to diffraction label Mar 6, 2026
@github-actions github-actions bot added this to the Release 6.16 milestone Mar 6, 2026
@RichardWaiteSTFC
Contributor

I think it might be a bit more complicated than in the PR description: the best peak to copy from is actually the one at the nearest d-spacing in an adjacent spectrum (usually). I think I tried to do that at one point? Perhaps we can discuss offline?

@andy-bridger
Collaborator Author

andy-bridger commented Mar 6, 2026

I think it might be a bit more complicated than in the PR description: the best peak to copy from is actually the one at the nearest d-spacing in an adjacent spectrum (usually). I think I tried to do that at one point? Perhaps we can discuss offline?

Sorry, yes, you are right: initially there is a check to see if it can copy good fit parameters from the same peak in another spectrum (this PR doesn't affect that behaviour). When this check fails (notably for the first spectrum), the sequence is as I describe in the description, and it is for this situation that I would like the option not to copy the last successful fit.

(I have updated the description to reflect)

@andy-bridger
Collaborator Author

andy-bridger commented Mar 6, 2026

Having looked at it more closely, we could do without this PR completely for my specific use case: I hadn't set the BackToBackExponential parameters A and B in the IMAT_Parameters.xml file to fixed.

I still think it is worth including this option, because I don't think it is very clean to have the fixed flag (nominally marking which parameters should not be refined) also double as the flag for a parameter which should not be copied across between fits but should still be refined.

@RichardWaiteSTFC
Contributor

Having looked at it more closely, we could do without this PR completely for my specific use case: I hadn't set the BackToBackExponential parameters A and B in the IMAT_Parameters.xml file to fixed.

I still think it is worth including this option, because I don't think it is very clean to have the fixed flag (nominally marking which parameters should not be refined) also double as the flag for a parameter which should not be copied across between fits but should still be refined.

Agreed. I want to be able to fix the parameters which have been explicitly set on setMatrixWorkspace, but let the nominated FWHM parameter (S in this case) be free to be optimised. This option would make the fitting a lot more robust, for the BackToBackExponential peak shape at least.

@andy-bridger
Collaborator Author

andy-bridger commented Mar 13, 2026

For completeness I did a deeper look at the copying of <fixed /> parameters and I don't think it is working exactly as the comments in the code suggest (#41065 (comment)). It does basically seem to be doing this:

Agreed. I want to be able to fix the parameters which have been explicitly set on setMatrixWorkspace, but let the nominated FWHM parameter (S in this case) be free to be optimised. This option would make the fitting a lot more robust, for the BackToBackExponential peak shape at least.

I don't think this PR needs to address that issue. For now I would like the option of having the parameter file contain good starting values which still require refinement, and for this I need to be able to ensure the starting values are the ones from the parameter file, not from a previous peak in the spectrum.

@andy-bridger andy-bridger marked this pull request as ready for review March 13, 2026 10:22
@thomashampson thomashampson moved this to Waiting for Review in ISIS core workstream v6.16.0 Mar 18, 2026
@thomashampson thomashampson moved this from Waiting for Review to In Review in ISIS core workstream v6.16.0 Mar 19, 2026
@jclarkeSTFC jclarkeSTFC requested a review from Copilot March 19, 2026 10:12
Contributor

Copilot AI left a comment


Pull request overview

This PR introduces a new boolean flag, CopyLastGoodPeakParameters, to FitPeaks (and plumbs it through PDCalibration) to optionally disable inheriting initial peak-shape parameters from the most recently successful peak fit within the same spectrum.

Changes:

  • Add CopyLastGoodPeakParameters property to FitPeaks, defaulting to true, and use it to gate intra-spectrum parameter inheritance.
  • Add the same property to PDCalibration and forward it to the internal FitPeaks child algorithm.
  • Add unit tests validating the new property default and exercising CopyLastGoodPeakParameters=false for both FitPeaks and PDCalibration.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 6 comments.

Summary per file:

  • Framework/Algorithms/src/FitPeaks.cpp: declares the new property and gates intra-spectrum parameter copying.
  • Framework/Algorithms/inc/MantidAlgorithms/FitPeaks.h: adds a new member to store the flag.
  • Framework/Algorithms/src/PDCalibration.cpp: declares the new property and forwards it to FitPeaks.
  • Framework/Algorithms/test/FitPeaksTest.h: adds tests for the property default and CopyLastGoodPeakParameters=false behaviour.
  • Framework/Algorithms/test/PDCalibrationTest.h: adds tests for the property default and CopyLastGoodPeakParameters=false behaviour.
Comments suppressed due to low confidence (1)

Framework/Algorithms/src/FitPeaks.cpp:1266

  • CopyLastGoodPeakParameters is only applied to the direct parameter-copy block here, but other logic later in fitSpectrumPeaks still uses neighborPeakSameSpectrum to decide whether to apply user-specified starting parameters / whether to do parameter observation. When CopyLastGoodPeakParameters=false, subsequent peaks in the same spectrum can unintentionally skip user-provided initial parameters and/or skip observation (because neighborPeakSameSpectrum becomes true after the first good fit even though no copying happens). Consider incorporating m_copyLastGoodPeakParameters into the later decision logic (e.g., treat "neighbor peak available" as false when copy is disabled) so the flag fully disables intra-spectrum inheritance.
    } else if (neighborPeakSameSpectrum && m_copyLastGoodPeakParameters) {
      // set the peak parameters from last good fit from ANY peak in the spectrum
      for (size_t i = 0; i < peakfunction->nParams(); ++i) {
        peakfunction->setParameter(i, lastGoodPeakParameters[prev_peak_index][i]);
      }
    }
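The gating concern above can be sketched in illustrative Python (the variable names mirror the C++, but the function itself is hypothetical, not part of FitPeaks):

```python
def intra_spectrum_seeding(neighbor_peak_same_spectrum, copy_last_good_peak_parameters):
    """Hypothetical sketch: feed the copy flag into every downstream decision.

    If copying is disabled, "a neighbour peak is available" should read as
    False everywhere, so that user-specified start values and parameter
    observation are not skipped for later peaks in the spectrum.
    """
    neighbor_available = neighbor_peak_same_spectrum and copy_last_good_peak_parameters
    return {
        "copy_from_last_good": neighbor_available,
        "apply_user_start_values_or_observe": not neighbor_available,
    }
```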


Comment on lines +1594 to +1609
    if (!fitpeaks.isExecuted())
      return;

    API::MatrixWorkspace_sptr main_out_ws = std::dynamic_pointer_cast<API::MatrixWorkspace>(
        AnalysisDataService::Instance().retrieve("PeakPositionsWS_noCopy"));
    TS_ASSERT(main_out_ws);
    TS_ASSERT_EQUALS(main_out_ws->getNumberHistograms(), 3);

    // Fitted peak positions must be correct regardless of how parameters are seeded.
    const auto &fitted_positions_0 = main_out_ws->histogram(0).y();
    TS_ASSERT_EQUALS(fitted_positions_0.size(), 2);
    TS_ASSERT_DELTA(fitted_positions_0[0], 5.0, 1.E-6);
    TS_ASSERT_DELTA(fitted_positions_0[1], 10.0, 1.E-6);

    const auto &fitted_positions_2 = main_out_ws->histogram(2).y();
    TS_ASSERT_EQUALS(fitted_positions_2.size(), 2);
Contributor


Perhaps the number of threads should be set in the setUp and reset in the tearDown to be safe (if it isn't already)?

Contributor


Sorry, just seen @jclarkeSTFC's comment!

Contributor

@jclarkeSTFC jclarkeSTFC left a comment


Looks good; the Copilot comments look sensible. I think for the OMP threads in the test, maybe using the setUp and tearDown methods would do the trick?

Needs a release note as well

@andy-bridger
Collaborator Author

I agree that setting the threads in the setUp and tearDown would work, but having looked into RAII I actually quite like that solution. Using setUp/tearDown would mean all tests run on one thread, which feels more likely to have adverse performance impacts, especially if more tests using larger datasets show up in future.

@RichardWaiteSTFC
Contributor

I agree that setting the threads in the setUp and tearDown would work, but having looked into RAII I actually quite like that solution. Using setUp/tearDown would mean all tests run on one thread, which feels more likely to have adverse performance impacts, especially if more tests using larger datasets show up in future.

Yeah, that's fair, although we shouldn't be testing with large datasets; any large datasets can go in a separate performance test class with a different setUp etc.

I think for this algorithm maybe we should be running all tests on one thread, if the result could depend on the order the spectra are looped over (which it definitely has at some points; I don't know whether it still does).

@andy-bridger
Collaborator Author

I think for this algorithm maybe we should be running all tests on one thread, if the result could depend on the order the spectra are looped over (which it definitely has at some points; I don't know whether it still does).

Good point, into the setup it goes!

@andy-bridger
Collaborator Author

I've added a release note and a section to the FitPeaks algorithm doc explaining what I understand to be the current priority of the parameter inheritance. In checking this, I think I found a small bug in the cross-spectrum comparison: the last good peak parameters are persistent, so they are simply, as the name suggests, the last good fit to that peak (not necessarily from the previous spectrum). The check for whether to apply them, however, was looking at the previous spectrum to see if the detector IDs were close, rather than at the spectrum the fit had actually come from:

// Check whether current spectrum's pixel (detector ID) is close to its
// previous spectrum's pixel (detector ID).
try {
  if (wi > 0 && samePeakCrossSpectrum) {
    // First spectrum or discontinuous detector ID: do not start from same
    // peak of last spectrum
    std::shared_ptr<const Geometry::Detector> pdetector =
        std::dynamic_pointer_cast<const Geometry::Detector>(m_inputMatrixWS->getDetector(wi - 1));
    std::shared_ptr<const Geometry::Detector> cdetector =
        std::dynamic_pointer_cast<const Geometry::Detector>(m_inputMatrixWS->getDetector(wi));
    // If they do have detector ID
    if (pdetector && cdetector) {
      auto prev_id = pdetector->getID();
      auto curr_id = cdetector->getID();
      if (prev_id + 1 != curr_id)
        samePeakCrossSpectrum = false;
    } else {
      samePeakCrossSpectrum = false;
    }
  } else {
    // first spectrum in the workspace: no peak's fitting result to copy
    // from
    samePeakCrossSpectrum = false;
  }
} catch (const std::runtime_error &) {
  // workspace does not have detector ID set: there is no guarantee that the
  // adjacent spectra can have similar peak profiles
  samePeakCrossSpectrum = false;
}
// Set starting values of the peak function
if (samePeakCrossSpectrum) { // somePeakFit
  // Get from local best result
  for (size_t i = 0; i < peakfunction->nParams(); ++i) {
    peakfunction->setParameter(i, lastGoodPeakParameters[peak_index][i]);
  }

I would appreciate it if, as part of the review, you could check that you agree with my summary of the precedence of peak parameter sharing, and that this checking of the previous spectrum's detector ID was a bug (I think it was previously letting more cross-spectrum peaks be shared than intended: in many scenarios the previous spectrum contains the previous detector ID, but it is not necessarily the spectrum sourcing the last good fit).

An argument could be made that if this is a bug fix it should be done separately, with a release note explaining the change in behaviour; I'm not sure how much of an issue it was. I fixed it here because otherwise it made the explanation of the peak parameter sharing even more obtuse: "copy parameters from the last good fit of the peak (in any previous spectrum) only if the immediately preceding spectrum has a close detector ID".
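A minimal Python sketch of the fix described above (illustrative names only, not the actual FitPeaks members): track which spectrum the last good fit for this peak came from, and test detector-ID adjacency against that spectrum rather than against wi - 1.

```python
def can_copy_cross_spectrum(det_ids, wi, source_wi):
    """Allow cross-spectrum copying only when the current spectrum's detector
    ID is adjacent to that of the spectrum which actually produced the last
    good fit for this peak (hypothetical sketch of the bug fix)."""
    if source_wi is None or wi == 0:
        return False  # no previous good fit to copy from
    if det_ids[wi] is None or det_ids[source_wi] is None:
        return False  # no detector IDs: adjacency cannot be established
    # Adjacency is checked against the source spectrum, not wi - 1
    return det_ids[source_wi] + 1 == det_ids[wi]
```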

@github-actions
Contributor

github-actions bot commented Mar 26, 2026

Unit test results

2 863 tests   2 863 ✅  13h 15m 59s ⏱️
    1 suites      0 💤
    1 files        0 ❌

Results for commit b8faed9.

♻️ This comment has been updated with latest results.

@github-actions
Contributor

github-actions bot commented Mar 26, 2026

System test results

808 tests   792 ✅  3h 1m 50s ⏱️
 11 suites   16 💤
 11 files      0 ❌

Results for commit b8faed9.

♻️ This comment has been updated with latest results.

Contributor

@jclarkeSTFC jclarkeSTFC left a comment


Looks good, thanks for adding the release note. I don't think the bug needs to go into a separate PR.

<value val="0,1,0"/>
</parameter>

<parameter name="BackToBackExponential:S" type="fitting">
Contributor


Did you mean to include these Engin-X changes?

Collaborator Author

@andy-bridger andy-bridger Mar 27, 2026


This was just to get a test route for a SampleWorkspace that didn't have any fixed parameters to contend with. I'll remove them, as I am not sure how they interact with real datasets in terms of fixing or not.

{EDIT: I won't do the below here; I'll do it with the IMAT changes, as the IC function needs updating.

I might replace this with some free IkedaCarpenterPV defaults, which I need to introduce anyway; they would serve the same purpose and have no adverse effect on existing scripts (the current default Ikeda-Carpenter values do not give a workable starting point for ENGIN-X data, so any script which did use them {I doubt there even are any} will be explicitly setting the parameters).}

