Add TGLFNNukaeaTransportModel #1436
Conversation
This is fine for now. Rotation in TORAX is coming this Q3.
We found an issue with the version of the NN we were using; things are making more sense now. The chi profiles are looking broadly good. I will look at De, Ve and the particle fluxes when I next have a chance: as the NN outputs the main-ion flux, I think there just needs to be some reweighting to get the electron flux.
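A minimal sketch of the reweighting mentioned above, under two assumptions not confirmed in this thread (a single main ion species and ambipolar particle transport): quasineutrality then gives Gamma_e = Z_i * Gamma_i, so the electron flux follows from the NN's main-ion flux by a charge factor. The helper name is hypothetical.

```python
def electron_flux_from_ion_flux(gamma_i: float, z_i: float) -> float:
    """Hypothetical helper: electron particle flux from the main-ion flux.

    Assumes a single main ion of charge z_i and ambipolar transport, so
    quasineutrality gives Gamma_e = z_i * Gamma_i.
    """
    return z_i * gamma_i

# For a pure deuterium plasma (z_i = 1) the two fluxes coincide.
```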
iterhybrid_predictor_corrector, with multi-machine TGLFNNukaea vs QLKNN. Getting reasonable matches for χₑ, χᵢ in ρ > 0.5. The mismatch for 0.2 < ρ < 0.5 looks like a difference in turbulent modes. Is the mismatch for ρ < 0.2 not particularly relevant, given the patch transport there? Any thoughts @lorenzozanisi @fcasson @jcitrin?
Progress update: I've been working on a toy case with @fcasson, and we've now fixed various problems with the output normalisations and the conversion to chis. The next step is to make sure we're doing the right thing with particle transport. (Toy tokamak, initial condition: comparison between JETTO and TORAX.)
```python
chi_i = -P_i / (
    core_profiles.n_i.face_value() * dT_i_drhon * geo.g1_over_vpr_face
)
```
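For context (a sketch, not taken from the diff itself): this conversion inverts the diffusive ansatz for the ion heat flux, with $g_1/V'$ the flux-surface geometric factor appearing in the snippet:

```math
P_i = -n_i \,\chi_i\, \frac{\partial T_i}{\partial \rho_N}\, \frac{g_1}{V'}
\quad\Longrightarrow\quad
\chi_i = -\frac{P_i}{n_i\,(\partial T_i / \partial \rho_N)\,(g_1 / V')}
```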
The denormalisation of TGLF is slightly different to QuaLiKiz.
In particular, TGLF normalises everything with respect to the electron temperature and density, which means that chi_i can't be denormalised in the same way.
Due to the difficulties I had tracking down bugs in this interface, I would strongly advocate for this sort of layout (denormalisation, then conversion to power, then conversion to chi) rather than what is done in QLKNN.
However, if there are any speed improvements that can be made by changing what is computed, they definitely take priority.
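A minimal sketch of the layout advocated above, with a deliberately simplified placeholder for the GyroBohm reference flux (the real normalisation has more factors); the function name and signature are hypothetical. The point is the ordering: denormalise the NN output, treat it as a power flux, then convert to a diffusivity. TGLF normalises to the *electron* temperature and density, so T_e and n_e appear even in the ion-channel denormalisation.

```python
def tglf_qi_to_chi_i(q_i_gb, n_e, T_e, n_i, dT_i_drhon, g1_over_vpr):
    # Step 1: denormalise with an electron-based reference flux
    # (placeholder expression; the true GyroBohm unit has more factors).
    q_gb_ref = n_e * T_e**1.5
    # Step 2: the denormalised output is a power flux.
    P_i = q_i_gb * q_gb_ref
    # Step 3: convert the power flux to a diffusivity chi.
    return -P_i / (n_i * dT_i_drhon * g1_over_vpr)
```

With this layout each step can be checked against the base code independently, which is what made the interface bugs easier to find.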
```python
# For stability, we only use purely diffusive transport above some minimum
# threshold of the density gradient
D_eff_mask &= abs(tglf_inputs.lref_over_lne) >= transport.An_min
```
The D/V splitting could plausibly be reused from QLK; I just found it easier to write it out explicitly when debugging.
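A hypothetical sketch of the kind of D/V split discussed here (names and signature made up, not the TORAX implementation): where the normalised density gradient |lref_over_lne| exceeds the threshold An_min, the model's (D, V) pair is replaced with purely diffusive transport, D_eff = -Gamma / (dn/drho_N) and V = 0; below the threshold D_eff is ill-conditioned, so the model's own decomposition is kept.

```python
import numpy as np

def split_d_v(gamma, dn_drhon, lref_over_lne, an_min, d_model, v_model):
    # Purely diffusive only where the density gradient is large enough
    # for D_eff = -Gamma / (dn/drho_N) to be well-conditioned.
    d_eff_mask = np.abs(lref_over_lne) >= an_min
    d_eff = -gamma / dn_drhon
    d = np.where(d_eff_mask, d_eff, d_model)
    v = np.where(d_eff_mask, 0.0, v_model)
    return d, v
```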
torax/tests/test_data/test_iterhybrid_predictor_corrector_tglfnn_ukaea.py
Summarised some open questions / details. @jcitrin this is now ready for review :)
Thanks @fcasson for all your in-depth support in getting this over the line!
Some of these questions (e.g. memoisation and weights loading, which is currently the cause of the failing test) may also be worth revisiting with @hamelphi on fusion-surrogates. Currently fusion_surrogates[tglfnnukaea] requires installing PyTorch (!) because our weights are distributed as .pt or .onnx files; at the start of this process it was deemed preferable to reimplement the network in JAX (which means loading weights from PyTorch .pt files) rather than rely on ONNX loaders that may not have a reliable maintenance pathway.
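On the weights-loading point, a minimal sketch of the kind of conversion involved, assuming (hypothetically) a flat PyTorch-style state dict with dotted parameter names. In practice the dict would come from `torch.load(...)` with each tensor converted via `.numpy()`; here plain arrays stand in so the sketch carries no PyTorch dependency.

```python
import numpy as np

def state_dict_to_params(state_dict):
    # Regroup flat PyTorch-style names ("layers.0.weight", "layers.0.bias")
    # into a nested dict of plain arrays, usable from JAX as a pytree.
    params = {}
    for name, value in state_dict.items():
        layer, _, leaf = name.rpartition(".")
        params.setdefault(layer, {})[leaf] = np.asarray(value)
    return params
```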
Yes that looks roughly as expected.
In general we see more impact of trapped electron drive in TGLF compared to QuaLiKiz. In lower-density reactor-relevant scenarios like the hybrid scenario, this will be more prevalent.
Intuitively, what this means is that while the plasma is ITG-dominated (see chi_i/chi_e > 1), the increased trapped-electron drive should increase the density peaking as we move closer to the ITG-TEM boundary. This is indeed what is seen in TGLF.
The increased density peaking should slightly decrease the ITG turbulence threshold, resulting in the lower Ti. The higher density due to peaking should result in more Ti-Te coupling, reducing Te/Ti, which is also what we see.
As of this morning, a proposed new method for distributing weights via a pip-installable package is under review in internal UKAEA repos. This would massively simplify loading here and in fusion_surrogates.
@jcitrin @hamelphi updated to reflect google-deepmind/fusion_surrogates#32. This fixes the problems with Git LFS and also simplifies the dependency tree. Tests are failing until the fusion_surrogates PR is merged and a new version is published to PyPI; tests pass on a local install with the version of fusion_surrogates from google-deepmind/fusion_surrogates#32.
Changes to TGLFBasedTransportModel:
- Override _make_core_transport with TGLF-specific calculation
- Correct output normalisations
- Correct particle transport

Other changes:
- Update dependencies
jcitrin
left a comment
Will start the internal review now.
Could we have a followup PR with documentation?
Forgot about docs! Added now.
Looking at the pyproject.toml: it looks like tglfnnukaea is not installed by default. Wouldn't it be easier for users to have it installed by default? It may become a workhorse model, so users may be surprised if they don't have it from the get-go.
Sounds good to me. We should keep in mind that this will add 60 MB to the dependencies, though the current default install is around 1 GB (the largest chunk being jaxlib at 300 MB), so it is relatively small. We could also make it a default for fusion_surrogates.
Yep, not averse to this! Before we do, it might be good to do some speed/performance testing, because I've found that some change over the last few commits has caused a big drop in performance. Could we keep it as an optional dependency for the time being, until it's a bit more stress-tested?
So shall we keep it as is for now, and then make it the default in a future PR once the performance issues are solved?

Sounds good to me!
-- f6eaa06 by Theo Brown <7982453+theo-brown@users.noreply.github.com>:
   Add TGLFNNukaeaTransportModel
   Changes to TGLFBasedTransportModel:
   - Override _make_core_transport with TGLF-specific calculation
   - Correct output normalisations
   - Correct particle transport
   Other changes:
   - Update dependencies

-- a07351b by Theo Brown <7982453+theo-brown@users.noreply.github.com>:
   Switch to using fusion_surrogates==0.4.3 and tglfnn_ukaea Python package

-- 3ffbc0f by Theo Brown <7982453+theo-brown@users.noreply.github.com>:
   Add documentation

FUTURE_COPYBARA_INTEGRATE_REVIEW=#1436 from google-deepmind:tglfnn-ukaea 3ffbc0f
PiperOrigin-RevId: 823087120
Thanks for the benchmark analysis. I am not surprised that QLKNN_7_11 is faster than TGLFNN, since it is a much smaller model; that specific model was picked with inference time as one of the metrics in mind. That also explains why GPUs do not help in that case: I suppose the dispatch overhead is large compared to the inference time. GPUs could become more relevant for QLKNN with large-batch inference. On the flip side, I do expect TGLFNN predictions to be more accurate, or at least to have a closer correlation with the base model.
Congratulations @theo-brown, @lorenzozanisi! It's a great milestone having this in TORAX. Looking forward to using it! Rotation will come soon.
Thanks @theo-brown @hamelphi @fcasson and @jcitrin for pulling this together - excited to see the NNs in action within TORAX!











Implementing TGLF-NN-UKAEA in TORAX.