Multiapp with keep_solution_during_restore=true problem with history dependent subapp materials
#32229
Replies: 3 comments 6 replies
@bwspenc @dschwen @hugary1995
The issue is here: https://github.com/idaholab/moose/blob/next/modules/solid_mechanics/src/materials/ComputeFiniteStrain.C#L57-L59. That code computes the old deformation gradient from the old displacement gradients instead of storing it as a stateful material property, which apparently saves some memory.
Taking a look.
Question
I made the input files below as a minimal working example to show the problem I'm having with a MultiApp and keep_solution_during_restore = true with Picard iterations. My main app is a transient simulation that calls a full-solve MultiApp performing a transient thermo-mechanical solve. The thermo-mechanical solve has history-dependent material properties (incremental deformation). I run the thermo-mechanical transient until it reaches equilibrium. During the Picard solves, I would like to use keep_solution_during_restore = true so that the sub-app starts closer to its equilibrated state. However, the displacement keeps growing by the same amount on every Picard solve; see the postprocessor values in the main-app output below.

Using keep_solution_during_restore = false gives me the correct behavior. I haven't really looked into the issue, and switching to the new mechanics kernel system avoids the problem, but I wanted to ask whether anyone knows the cause, or whether I should create an issue. Perhaps it should error out if you have history-dependent material properties.

main.i:
subapp.i:
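The attached main.i and subapp.i were not captured in this extraction, so for context, here is a rough, hypothetical sketch of the relevant coupling block only. The parameter and type names follow standard MOOSE MultiApp syntax; the block name and file name are placeholders, and all other sections of the inputs are omitted:

```
[MultiApps]
  [thermo_mech]
    # Full-solve sub-app doing the transient thermo-mechanical solve
    type = FullSolveMultiApp
    input_files = subapp.i
    execute_on = TIMESTEP_BEGIN
    # The flag under discussion: keep the sub-app solution between
    # Picard (fixed point) iterations instead of restoring it
    keep_solution_during_restore = true
  []
[]
```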