Dear Rosetta community,
I have another question about the flex ddG protocol – I am interested in whether the scores it predicts require additional re-weighting/re-scaling to better fit experimental benchmarks.
This question arose from a discussion I had about other Rosetta methods. For example, Rosetta Cartesian, as I understand it, outputs scores in Rosetta Energy Units (REU), which are then multiplied by a scaling factor (1.0/1.84 in the case of the talaris2014 function) to fit experimental benchmarks in kcal/mol – this is described on page 20 here:
https://pmc.ncbi.nlm.nih.gov/articles/instance/5515585/bin/NIHMS867713-supplement-Suppl__Mat_.pdf.
“The energy gap between the refined mutant structure and the refined wild-type structure, multiplied by a energy-function-specific scaling factor, becomes the predicted mutational ddG. Scaling factors are introduced to fit the overall scale of estimated values to actual experimental free energies measured in kcal/mole. A least-squares fit determined a scaling factor for talaris2014 of 1.0/1.84 and for opt-nov15 of 1.0/2.94. <...> Correct assignment of stabilizing mutation implies predicting ddG < -1.0 for a mutation with experimental ddG < -1.0, and correct assignment of destabilizing ones is the opposite (ddG > 1.0).” The experimental benchmark set used by Rosetta Cartesian was mentioned here: https://onlinelibrary.wiley.com/doi/epdf/10.1002/prot.22921?src=getftr&utm_source=acs&getft_integrator=acs
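To check my own understanding of the arithmetic, this is how I picture that post-hoc scaling step – the factor values are taken from the quoted supplement, but the function names used as dictionary keys are just my own labels:

```python
# Sketch of the linear REU -> kcal/mol rescaling described in the quoted
# supplement. Scaling factors are from the text above; the dictionary keys
# are my own labels, not official Rosetta identifiers.
SCALING = {
    "talaris2014": 1.0 / 1.84,
    "opt-nov15": 1.0 / 2.94,
}

def scale_ddg(ddg_reu: float, score_function: str) -> float:
    """Multiply a raw predicted ddG (in REU) by the least-squares factor."""
    return ddg_reu * SCALING[score_function]

print(scale_ddg(3.68, "talaris2014"))  # 3.68 REU -> 2.0 kcal/mol
```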
Flex ddG also uses the talaris2014 function; to the best of my understanding, its scores in REU are then re-weighted with a nonlinear GAM (generalized additive model) scheme. The objective of this re-weighting was to reduce the absolute error between flex ddG predictions and known experimental values. So, as I understand it: flex ddG was benchmarked on a different experimental dataset (ZEMu) than Rosetta Cartesian, the GAM re-weighting is already built into the reported scores, and the talaris2014-GAM ddG scores can seemingly be used as is (?).
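For concreteness, here is a toy sketch of what I understand a GAM-style re-weighting to mean: each per-term ddG contribution is passed through its own fitted nonlinear function before summing, instead of summing the raw terms. The logistic transform and every parameter value below are placeholders I made up for illustration – they are NOT the functions actually fitted in the flex ddG paper:

```python
import math

# Illustrative GAM-style re-weighting: sum of per-term nonlinear transforms
# f_i(x_i) rather than the raw sum of score terms. The transform shape and
# all parameters are hypothetical placeholders, not fitted values.
def logistic_transform(x: float, scale: float, steepness: float) -> float:
    # Maps a raw term value to a bounded, re-weighted contribution
    # (zero input maps to zero output).
    return scale * (2.0 / (1.0 + math.exp(-steepness * x)) - 1.0)

def gam_reweight(term_ddgs: dict, params: dict) -> float:
    """Sum the per-term transforms f_i(x_i) over all score terms."""
    return sum(
        logistic_transform(x, *params[name]) for name, x in term_ddgs.items()
    )

# Hypothetical (scale, steepness) parameters per score term:
params = {"fa_atr": (2.0, 0.8), "fa_rep": (1.5, 0.5), "fa_sol": (2.5, 0.6)}
terms = {"fa_atr": -0.7, "fa_rep": 0.3, "fa_sol": 0.4}
print(round(gam_reweight(terms, params), 3))
```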
Still, there are many deeply technical details to consider, and I am worried I might have misunderstood or missed something important. I kindly ask more experienced users to share their knowledge of these methods. Do you use the talaris2014-GAM scores from flex ddG as is, without any additional re-weighting? After GAM re-weighting, are these scores in kcal/mol, or still in REU? And do you interpret them using the criteria suggested by the authors: “stabilizing mutations are defined as those with ΔΔG ≤ −1.0 kcal/mol, neutral as those with −1.0 kcal/mol < ΔΔG < 1.0 kcal/mol, and destabilizing as those with ΔΔG ≥ 1.0 kcal/mol”?
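For reference, this is the classification I would apply, assuming the GAM-reweighted scores are indeed interpretable as kcal/mol and using the thresholds quoted above:

```python
# The authors' suggested interpretation thresholds, assuming the final
# flex ddG score is treated as kcal/mol.
def classify_mutation(ddg: float) -> str:
    if ddg <= -1.0:
        return "stabilizing"
    if ddg >= 1.0:
        return "destabilizing"
    return "neutral"

print(classify_mutation(-1.3))  # -> stabilizing
```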
Thank you so much in advance!
Yana