For the boundary conditions (BCs) we fix the knuckles, as they would be attached to the rocket, by setting the corresponding mesh vertices to Dirichlet. Additionally, we add an out-of-plane load at the very end of the fin geometry as a Neumann BC. Placing the load here allows us to approximate the aerodynamic forces experienced by the whole grid fin while de-coupling the load from the bars' movement. The boundary conditions are illustrated here:
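As a minimal sketch of how such BCs can be assembled, the snippet below selects Dirichlet (fixed) vertices at the knuckle side of a toy mesh and applies an out-of-plane Neumann point load at the fin tip. The vertex coordinates, the selection-by-coordinate logic, and the load magnitude are all illustrative assumptions, not the project's actual mesh or values:

```python
import numpy as np

# Toy stand-in for the grid-fin mesh: n_vertices x 3 coordinates.
vertices = np.array([
    [0.0, 0.0, 0.0],   # knuckle region (x == 0)
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 0.0],
    [1.0, 0.0, 0.0],   # fin tip (x == max)
    [1.0, 1.0, 0.0],
])

# Dirichlet BC: clamp all 3 DOFs of every vertex on the knuckle side.
knuckle_mask = np.isclose(vertices[:, 0], 0.0)
fixed_dofs = np.flatnonzero(np.repeat(knuckle_mask, 3))

# Neumann BC: out-of-plane (z) point loads at the very end of the fin.
tip_mask = np.isclose(vertices[:, 0], vertices[:, 0].max())
f = np.zeros(vertices.size)                    # one entry per DOF
f[np.flatnonzero(tip_mask) * 3 + 2] = -100.0   # assumed load in -z

print(fixed_dofs)        # DOFs held at zero displacement
print(f.reshape(-1, 3))  # per-vertex force vectors
```

In a real FEM pipeline the fixed DOFs would be eliminated from (or penalized in) the stiffness system, and `f` would enter the solve `K u = f` directly.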
To simulate the grid fin under Max-Q loads, we employ a linear elastic solver, assuming small deformations and Hookean material behavior. Our optimization goal is to maximize stiffness, which is mathematically equivalent to minimizing compliance, the work done by the applied loads.
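The stiffness/compliance equivalence can be made concrete on a toy linear-elastic system: with stiffness matrix K and load vector f, solving K u = f gives the compliance C = fᵀu = uᵀKu, which is twice the stored strain energy. The two-spring system below is an illustrative stand-in, not the project's actual FEM model:

```python
import numpy as np

# Two springs in series (stiffnesses k1, k2), fixed at one end:
# assembled stiffness matrix K, unit load f at the free end.
k1, k2 = 2.0, 1.0
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])
f = np.array([0.0, 1.0])

u = np.linalg.solve(K, f)        # displacements under load
compliance = f @ u               # C = f^T u
strain_energy = 0.5 * u @ K @ u  # C = 2 * strain energy

print(compliance, 2 * strain_energy)
```

A stiffer design deflects less under the same load, so u (and hence C) shrinks; that is why minimizing compliance maximizes stiffness.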
## Workflow
As a very first experiment we compare the compliance of the two initial conditions.
For the optimization we have tried two optimizers: the classical Adam optimizer and the Method of Moving Asymptotes (MMA) (Svanberg, 1987). We observed similar performance with the two optimizers, where both were very sensitive to the scale of the gradients and the learning rate.
When applying the optimization to the random initial conditions we see a steady decrease in the compliance, where the bars slowly arrange into a more optimal distribution.
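The gradient-scale sensitivity can be illustrated with a small Adam loop. The snippet below minimizes a quadratic surrogate standing in for the compliance (it is not the real FEM objective), and normalizes the gradient before each step, one common way to tame the scale sensitivity; the step sizes, iteration count, and surrogate are assumptions for illustration. MMA, which instead builds a convex approximation from moving asymptotes, is not sketched here:

```python
import numpy as np

def adam_step(x, g, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; m, v are the running first/second moments."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    m_hat = m / (1 - b1**t)          # bias-corrected moments
    v_hat = v / (1 - b2**t)
    return x - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Quadratic surrogate "compliance" in two design variables
# with badly scaled curvature (1 vs 10), mimicking ill-scaled gradients.
H = np.diag([1.0, 10.0])
compliance = lambda x: x @ H @ x
grad = lambda x: 2 * H @ x

x = np.array([1.0, 1.0])
m = v = np.zeros_like(x)
for t in range(1, 201):
    g = grad(x)
    g = g / (np.linalg.norm(g) + 1e-12)  # normalize the gradient scale
    x, m, v = adam_step(x, g, m, v, t)

print(compliance(x))  # steadily decreases from the initial value of 11
```

Without the normalization line, a learning rate tuned for one gradient scale diverges or stalls at another, which matches the sensitivity we observed with both optimizers.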