[
{
"objectID": "tutorials/synthIRF/synthIRF.html",
"href": "tutorials/synthIRF/synthIRF.html",
"title": "Synthesizing IRFs from mass, stiffness and damping",
"section": "",
"text": "In this tutorial we will introduce a few different methods that can be used to compute the impulse response function (IRF) of a numerical system when the mass, stiffness and damping matrices are known. Specifically, we will look at modal synthesis (for proportionally damped systems only), numerical integration, and inverse Fourier transform methods.",
"crumbs": [
"Tutorials",
"Synthesising Impulse Response Functions"
]
},
{
"objectID": "tutorials/synthIRF/synthIRF.html#a-little-bit-of-theory",
"href": "tutorials/synthIRF/synthIRF.html#a-little-bit-of-theory",
"title": "Synthesizing IRFs from mass, stiffness and damping",
"section": "A little bit of theory",
"text": "A little bit of theory\nSo, what is an impulse response function (IRF) and why might we want it? As the name suggests, the IRF of a system is its response when excited by a (unit) impulse, i.e. an external force of very short duration. More specifically, the unit impulse response of a system is the response obtained by considering the external forcing \\(F(t)\\) to be that of a Dirac delta function, defined mathematically as \\[\n\\begin{align}\n \\delta(t-a) &= 0 \\quad \\quad \\mbox{for } t\\neq a\\\\\n \\int_{-\\infty}^\\infty \\delta(t-a){\\rm d}t &=1\n\\end{align}\n\\tag{1}\\] For all intents and purposes we can think of this \\(\\delta\\) as a very sharp and sudden kick which disappears immediately after impact. The response of a system (say displacement, \\({x}(t)\\)) due to such excitation is labelled \\({g}(t)\\) and termed the IRF.\nNow, why might we want the IRF? It can be shown (see our previous discussion of forced single DoF systems) that, for a linear and time-invariant system, the response due to an arbitrary time-varying force \\(F(t)\\) can be expressed in the form of a convolution integral involving the IRF, \\[\nx(t) = \\int_0^t F(\\tau) g(t-\\tau) {\\rm d}\\tau\n\\tag{2}\\] In vibration analysis, Equation 2 is also called Duhamel’s integral; it states that the response of a system at time \\(t\\) is simply the superposition of all previously applied forces weighted appropriately by the IRF. 
Key to its application is knowing the IRF of the system.\nIn this tutorial we will consider general multi-DoF systems that are described by an equation of motion of the form, \\[\n \\mathbf{M}\\ddot{\\mathbf{x}}(t) + \\mathbf{C}\\dot{\\mathbf{x}}(t) + \\mathbf{K}\\mathbf{x}(t) = \\mathbf{f}(t).\n\\tag{3}\\] where we will seek the solution \\(\\mathbf{x}(t)\\rightarrow \\mathbf{g}(t)\\) for the case where \\(\\mathbf{f}(t) \\rightarrow \\delta_j(0)\\), where \\(\\delta_j(0)\\) is a delta function applied to the \\(j\\)th DoF only.\nTo illustrate the various methods for computing \\(\\mathbf{g}(t)\\), we will use the same numerical mass-spring system as in our previous Synthesising FRFs tutorial, as illustrated in Figure 1.\n\n\n\n\n\n\nFigure 1: Numerical mass-spring assembly used for FRF synthesis example.\n\n\n\nWe begin by re-defining the mass and stiffness matrices \\(\\mathbf{M}\\) and \\(\\mathbf{K}\\) for the system in Figure 1 and computing the system FRFs for inspection purposes.\n\n\nCode\nimport numpy as np\nnp.set_printoptions(linewidth=250)\nimport warnings\nwarnings.filterwarnings('ignore')\nimport matplotlib.pyplot as plt\nimport scipy as sp\nimport sys as sys\n\nf = np.linspace(0,100,1000) # Generate frequency array\nω = 2*np.pi*f # Convert to radians\n# Define component masses and build matrices\nm1, m2, m3, m4, m5 = 1, 2, 2, 1, 0.5\nm6, m7, m8, m9, m10, m11 = 0.5, 2, 1, 3, 5, 0.5\nM = np.diag(np.array([m1,m2,m3+m6,m4+m7,m5+m8,m9,m10,m11])) # Coupled AB\n\n# Define component stiffnesses and build matrices (complex stiffness -> structural damping)\nη = 0.05 # Structural loss factor\nk1, k2, k3, k4, k5 = 1, 1.5, 0.7, 1, 0.5\nk6, k7, k8, k9, k10, k11 = 0.5, 2, 1, 0.2, 1, 0.5\nK = np.array([[k1+k2+k5, -k2, 0, 0, -k5, 0, 0, 0],\n [-k2, k2+k3+k4, -k3, -k4, 0, 0, 0, 0],\n [0, -k3, k3+k6, 0, 0, -k6, 0, 0],\n [0, -k4, 0, k4+k7, 0, 0, -k7, 0],\n [-k5, 0, 0, 0, k5+k8, 0, -k8, 0],\n [0, 0, -k6, 0, 0, k6+k9, 0, -k9],\n [0, 0, 0, -k7, -k8, 0, k7+k8+k10, -k10],\n
[0, 0, 0, 0, 0, -k9, -k10, k9+k10+k11]])*1e5\n\nC_prop = 0.0001*K + 1*M # Proportional damping matrix\n\n# Direct solution:\nY_C_dir_prop = np.linalg.inv(K +1j* np.einsum('i,jk->ijk',ω, C_prop) - np.einsum('i,jk->ijk',ω**2, M))\n\nfig, ax = plt.subplots()\nax.semilogy(f, np.abs(Y_C_dir_prop[:,0,-1]),label='Direct')\nax.grid(True)\nax.set_xlabel('Frequency (Hz)')\nax.set_ylabel('Admittance (m/N)')\n# plt.legend()\nplt.show() \n\n\n\n\n\n\n\n\nFigure 2: Frequency Response Function (FRF) of the numerical mass-spring system considered in this tutorial.",
"crumbs": [
"Tutorials",
"Synthesising Impulse Response Functions"
]
},
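The record above leans on Duhamel's integral (Equation 2), which turns the IRF into a general-purpose solver via convolution, so it is worth a numerical sanity check. A minimal sketch, assuming illustrative single-DoF values for `m`, `k` and `zeta` (not the tutorial's 8-DoF assembly): convolving the analytic IRF with a step force should settle at the static deflection \(F_0/k\).

```python
import numpy as np

# Assumed 1-DoF parameters for illustration (not the tutorial's system)
m, k, zeta = 1.0, 1e4, 0.05
wn = np.sqrt(k / m)                  # Undamped natural frequency
wd = wn * np.sqrt(1 - zeta**2)       # Damped natural frequency

dt = 1e-4
t = np.arange(0.0, 2.0, dt)

# Analytic unit-impulse response g(t) of the 1-DoF system
g = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)

# Duhamel's integral x(t) = ∫ F(τ) g(t−τ) dτ as a discrete convolution
F0 = 5.0
F = np.full_like(t, F0)              # Step force of amplitude F0
x = np.convolve(F, g)[: t.size] * dt

# After the transient decays, x(t) approaches the static deflection F0/k
print(x[-1], F0 / k)
```

The same convolution applies per input/output pair of a multi-DoF IRF matrix.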
{
"objectID": "tutorials/synthIRF/synthIRF.html#modal-synthesis",
"href": "tutorials/synthIRF/synthIRF.html#modal-synthesis",
"title": "Synthesizing IRFs from mass, stiffness and damping",
"section": "Modal synthesis",
"text": "Modal synthesis\nWe saw in our treatment of single DoF systems that the IRF can be synthesised by applying an initial velocity that is inversely proportional to the mass, \\(\\dot{x}_0 = 1/M\\). The IRF is then given by, \\[\n\\begin{align}\n g(t) &= \\frac{1}{\\omega_d m} \\sin(\\omega_d t) e^{-\\zeta\\omega_n t} \\quad &t>0 \\\\\n &=0 &t<0\n\\end{align}\n\\] The same approach can be taken for a multi-DoF system by treating each mode as a single DoF system and superimposing their individual IRFs (we can do this because our system is linear and we are assuming proportional damping!). From our treatment of general multi-DoF systems we have for the system response due to an initial displacement \\(\\mathbf{x}_0\\) and velocity \\(\\dot{\\mathbf{x}}_0\\), \\[\n\\begin{align}\n \\mathbf{x}(t) &= \\sum_r^N \\mathbf{u}_r \\left({\\color{red}\\mathbf{u}^{\\rm T}_r\\mathbf{M} \\mathbf{x}_0 }\\, \\cos\\omega_{d,r} t + {\\color{green}\\frac{1}{\\omega_{d,r} }\\mathbf{u}^{\\rm T}_r\\mathbf{M} \\left(\\dot{\\mathbf{x}}_0 + \\zeta_r\\omega_{r} \\mathbf{x}_0 \\right)}\\, \\sin\\omega_{d,r} t \\right)e^{-\\zeta_r\\omega_{r} t}\n\\end{align}\n\\tag{4}\\] Considering the IRF, for which \\(\\mathbf{x}_0 = \\mathbf{0}\\) by definition, Equation 4 reduces to, \\[\n\\begin{align}\n \\mathbf{x}(t) &= \\sum_r^N \\mathbf{u}_r \\overbrace{\\frac{1}{\\omega_{d,r}}\\mathbf{u}^{\\rm T}_r\\mathbf{M} \\dot{\\mathbf{x}}_0}^{A_r}\\, \\sin(\\omega_{d,r} t) \\, e^{-\\zeta_r\\omega_{r} t}\n\\end{align}\n\\tag{5}\\] where \\(A_r\\) is the amplitude of each modal contribution, \\(\\omega_{d,r} = \\omega_{n,r}\\sqrt{1-\\zeta_r^2}\\) and \\(\\omega_{n,r}\\) are the damped and undamped natural frequencies of the \\(r\\)th mode, and \\(\\zeta_{r}\\) is the corresponding damping factor. 
The undamped natural frequencies \\(\\omega_{n,r}\\) are obtained from the solution of the eigen-problem, \\[\n(\\mathbf{K} - \\omega_{n,r}^2\\mathbf{M}) \\mathbf{u}_r = \\mathbf{0}\n\\] Since we have assumed that the damping is proportional (i.e. \\(\\mathbf{C} = \\alpha \\mathbf{M} + \\beta \\mathbf{K}\\)), the damping factor for mode \\(r\\) is given by, \\[\n\\zeta_r = \\frac{\\mathbf{u}_r^{\\rm T}\\mathbf{C}\\mathbf{u}_r}{2\\omega_{n,r}}\n\\] Note that the mass term usually found in the denominator of \\(\\zeta\\) corresponds to modal mass, which owing to our mass normalisation is simply 1.\nShown in Figure 3 is the IRF generated for the mass-spring system in Figure 1 using the modal synthesis in Equation 5.\n\n\nCode: compute impulse response function using modal summation\n# Modal solution:\nλ, Φ_ = sp.linalg.eig(K, M) # Get mode shapes Φ_ and natural frequencies λ=ω_n**2\nΦ = Φ_@np.diag(1/np.sqrt(np.diag((Φ_.T@M@Φ_)))) # Mass normalise mode shapes\nΦCΦ = Φ.T @ C_prop @ Φ # Check modal damping matrix is diagonal\n\nnumDoF = 8 # Number of coupled DoFs\nT = 10 # Total time \nnsamp = 10000 # Number of samples\ndt = (T/nsamp) # Time step\nt = np.linspace(0, T, nsamp) # Time vector\n\nx = np.zeros((numDoF, nsamp)) # Initialize displacement array for each mass\nx0 = np.zeros((numDoF,1)) # Initial displacement\nxd0 = np.zeros((numDoF,1)) # Initial velocity\nxd0[0] = 1/M[0,0] # Apply impulse to specific mass\n\nfor n in range(numDoF): # Loop over modal contributions\n ωr = np.sqrt(λ[n]) # Undamped natural frequency from Eigen-solution\n ζ = ΦCΦ[n,n]/(2*ωr) # Calculate modal damping factor\n ωd = ωr * np.sqrt(1-ζ**2) # Damped natural frequency\n\n A = 1/ωd*Φ[:,n].T@M@xd0 # Amplitude of nth mode\n x = x + np.outer(Φ[:,n],(A*np.sin(ωd*t))*np.exp(-ζ*ωr*t)) # Add modal contribution to total.\n\nfig, ax = plt.subplots()\nax.plot(t, x[5,:],label='Modal')\nax.grid(True)\nax.set_xlabel('Time (s)')\nax.set_ylabel('Displacement 
(m)')\nax.set_xlim([-0.05,5])\nplt.legend()\nplt.show() \n\n\n\n\n\n\n\n\nFigure 3: Impulse response of mass-spring system generated by modal synthesis. Excitation at mass 1, response at mass 5.",
"crumbs": [
"Tutorials",
"Synthesising Impulse Response Functions"
]
},
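The superposition in Equation 5 is valid because, for proportional damping, the mass-normalised mode shapes diagonalise \(\mathbf{C}\) as well as \(\mathbf{M}\) and \(\mathbf{K}\). A small sketch with an assumed 3-DoF chain (illustrative values, not the tutorial's 8-DoF assembly) verifies this; here `scipy.linalg.eigh` is used, which returns mass-normalised mode shapes directly for the symmetric generalised eigenproblem.

```python
import numpy as np
import scipy.linalg as sla

# Assumed 3-DoF system for illustration (not the tutorial's assembly)
M = np.diag([1.0, 2.0, 1.5])
K = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.5]]) * 1e4
C = 1e-4 * K + 0.5 * M               # Proportional (Rayleigh) damping

# Generalised symmetric eigenproblem; eigh mass-normalises: Phi.T @ M @ Phi = I
lam, Phi = sla.eigh(K, M)            # lam = ω_n²
PhiCPhi = Phi.T @ C @ Phi            # Modal damping matrix

# Off-diagonal terms vanish for proportional damping, so each mode decouples
offdiag = PhiCPhi - np.diag(np.diag(PhiCPhi))
wn = np.sqrt(lam)
zeta = np.diag(PhiCPhi) / (2 * wn)   # ζ_r = Φ_rᵀ C Φ_r / (2 ω_{n,r})
print(np.max(np.abs(offdiag)), zeta)
```

For a non-proportional \(\mathbf{C}\), `offdiag` would not vanish and the modal summation of Equation 5 would no longer be exact.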
{
"objectID": "tutorials/synthIRF/synthIRF.html#numerical-integration",
"href": "tutorials/synthIRF/synthIRF.html#numerical-integration",
"title": "Synthesizing IRFs from mass, stiffness and damping",
"section": "Numerical integration",
"text": "Numerical integration\nWhilst the modal synthesis approach is straightforward and easy to implement, it is quite limited. First of all, it requires a proportional damping model to ensure the underlying EoM can be diagonalised. Second, although we are mostly interested in linear structures, it would be nice to have the ability to deal with non-linearities if they arise; this is generally not possible, or at least very difficult, with the modal formulation. As such, we would like a more general method for computing the system impulse response encompassing fewer assumptions. This is achieved by working directly with the EoM in the physical domain and using numerical integration techniques.\nNow, numerical integration is a massive topic with loads of different techniques available. In this tutorial we will limit ourselves to two particular approaches. The first is a second-order direct integration scheme, the Newmark-\\(\\beta\\) method. The second is a first-order integration method that we implement using established algorithms/functions. Both methods can be applied to systems with non-proportional damping and, with some minor modifications, to non-linear systems also.\nBroadly speaking, we can categorise integration schemes by the type of equation they are applied to. Direct methods are applied directly to the governing ODE, for example our second order EoM in Equation 15. The Newmark-\\(\\beta\\) method described below is an example of a direct method. There also exists a suite of first-order methods that are applicable to first-order ODEs, including the popular family of Runge-Kutta methods, among many others.\nA further categorisation of integration methods is whether they are explicit or implicit. In short, explicit methods solve for the response at a future time step using the solution from the current and previous time steps only. 
Implicit methods instead determine the response at the future time step by solving an equation that also depends on the response at the future time step. Explicit methods are generally more computationally efficient than implicit methods, though have stricter requirements regarding their stability.\n\nInitial conditions for IRF computation\nThe numerical integration schemes described below take as input an arbitrary discretised forcing signal \\(\\mathbf{f}_n\\) and output the resulting system response. In order to obtain the system Impulse Response Function (IRF), it is necessary to force the system in such a way as to approximate the ideal Dirac function described above. This can be done in a few different ways as detailed below.\n\nApplication of an initial force at \\(n=0\\)\nTo approximate a Dirac excitation we can set the initial displacement and velocity to zero, \\[\n\\mathbf{x}_0 = \\mathbf{0}\n\\] \\[\n\\dot{\\mathbf{x}}_0 = \\mathbf{0}\n\\] and consider a force of amplitude \\(\\frac{2}{\\Delta t}\\) applied at timestep 0, \\[\n \\mathbf{f}_0 = \\frac{2}{\\Delta t} \\mathbf{1}_j\n\\] The reason for the force amplitude of \\(\\frac{2}{\\Delta t}\\) is to ensure that when integrated over one whole time step, the resulting impulse has a value of 1 (as per the definition of the Dirac, see Equation 1). 
By setting the initial force \\(\\mathbf{f}_0\\) to \\(\\frac{2}{\\Delta t}\\) and all remaining forces to 0, we approximate the Dirac \\(\\delta(t=0)\\) as a right-angled triangle with base \\(\\Delta t\\) and height \\(\\frac{2}{\\Delta t}\\); the total area under this triangle is 1.\nWith the initial displacement and velocity set to zero, the EoM reduces to, \\[\n\\mathbf{M}\\ddot{\\mathbf{x}}_0 = \\frac{2}{\\Delta t} \\mathbf{1}_j\n\\] which can be solved for the initial acceleration produced by the applied force \\(\\mathbf{f}_0\\), \\[\n\\ddot{\\mathbf{x}}_0 = \\frac{2}{\\Delta t} \\mathbf{M}^{-1} \\mathbf{1}_j\n\\tag{6}\\] Equation 6 is the initial acceleration generated by the applied force (i.e. approximation of \\(\\delta(0)\\)), and can be used as an initial condition (instead of using \\(\\mathbf{f}_0\\) directly) to compute the system IRF.\n\n\nApplication of an initial force at \\(n=1\\)\nA similar idea to the previous approach is to apply an initial force, but instead at the second time step. In this case we set the initial displacement, velocity, and acceleration to zero, \\[\n\\mathbf{x}_0 = \\mathbf{0}\n\\] \\[\n\\dot{\\mathbf{x}}_0 = \\mathbf{0}\n\\] \\[\n\\ddot{\\mathbf{x}}_0 = \\mathbf{0}\n\\] and define the force at \\(n=1\\) as, \\[\n \\mathbf{f}_1 = \\frac{1}{\\Delta t}\\mathbf{1}_j\n\\] Note that the factor of two is no longer in the force amplitude. This is because our force signal is now approximating the Dirac as an isosceles triangle with base length of \\(2\\Delta t\\); the force is 0 at \\(n=0\\), rises linearly up to a value of \\(\\frac{1}{\\Delta t}\\) at time \\(\\Delta t\\), and then back down to 0 at a time \\(2\\Delta t\\). 
Again, the total area under this triangle is 1.\nAlso, unlike the previous case, with \\(\\mathbf{f}_1=\\frac{1}{\\Delta t}\\mathbf{1}_j\\) our Dirac is effectively being applied on the second time step; hence the IRFs obtained must all be shifted back by 1 sample to align correctly.\nThe main advantage of this particular approach is that we avoid having to compute the inverse of the mass matrix \\(\\mathbf{M}\\), which for a large numerical model might have some computationally non-negligible expense.\n\n\nApplication of a step velocity at \\(n=0\\)\nThe final method of approximating a Dirac is by considering a step velocity at \\(n=0\\). To do so we first integrate our EoM over the infinitesimal interval \\([-\\delta t, \\, +\\delta t]\\), \\[\n \\int_{-\\delta t}^{+\\delta t} \\left[\\mathbf{M}\\ddot{\\mathbf{x}}(t) + \\mathbf{C}\\dot{\\mathbf{x}}(t) + \\mathbf{K}\\mathbf{x}(t) \\right] {\\rm d}t = \\int_{-\\delta t}^{+\\delta t} \\mathbf{f}(t) {\\rm d}t\n\\] If we consider the force to be a Dirac \\(\\mathbf{f}(t) = \\delta(0)\\) the right hand side integrates to 1 for the DoF being excited, i.e. \\(\\mathbf{1}_j\\). Integrating each term on the left hand side, \\[\n \\mathbf{M}\\int_{-\\delta t}^{+\\delta t}\\ddot{\\mathbf{x}}(t) {\\rm d}t+ {\\color{gray}\\mathbf{C}\\int_{-\\delta t}^{+\\delta t}\\dot{\\mathbf{x}}(t) {\\rm d}t+ \\mathbf{K}\\int_{-\\delta t}^{+\\delta t}\\mathbf{x}(t) {\\rm d}t} = \\mathbf{1}_j\n\\] whilst noting that since our system is initially at rest (i.e. \\(\\mathbf{x}_0 = \\mathbf{0}\\)), the displacement and absement (integral of displacement) terms resulting from the damping and stiffness integrals (in gray) must be 0, we obtain, \\[\n \\mathbf{M}(\\dot{\\mathbf{x}}(+\\delta t)-\\dot{\\mathbf{x}}(-\\delta t)) = \\int_{-\\delta t}^{+\\delta t} \\mathbf{f}(t) {\\rm d}t\n\\] Again, since our system is initially at rest, \\(\\dot{\\mathbf{x}}(-\\delta t)=0\\). 
This leaves us with the initial velocity relation, \\[\n\\mathbf{M}\\dot{\\mathbf{x}}_0 = \\mathbf{1}_j \\rightarrow \\dot{\\mathbf{x}}_0 = \\mathbf{M}^{-1} \\mathbf{1}_j\n\\] Note that the initial velocity \\(\\dot{\\mathbf{x}}_0\\) is not enough to completely represent the Dirac forcing; we must also include the initial acceleration that results. Noting that \\(\\mathbf{x}_0 = \\mathbf{0}\\) and that the Dirac forcing is accounted for by \\(\\dot{\\mathbf{x}}_0\\), our initial EoM reduces to, \\[\n \\mathbf{M}\\ddot{\\mathbf{x}}_0 + \\mathbf{C}\\dot{\\mathbf{x}}_0 = \\mathbf{0}\n\\] from which we obtain for the initial acceleration, \\[\n\\ddot{\\mathbf{x}}_0 = -\\mathbf{M}^{-1}\\mathbf{C}\\dot{\\mathbf{x}}_0\n\\] In summary, by using the initial conditions: \\[\n\\mathbf{x}_0 = \\mathbf{0}\n\\] \\[\n\\dot{\\mathbf{x}}_0 = \\mathbf{M}^{-1} \\mathbf{1}_j\n\\] \\[\n\\ddot{\\mathbf{x}}_0 = -\\mathbf{M}^{-1}\\mathbf{C}\\dot{\\mathbf{x}}_0\n\\] we can compute the IRF of a system through numerical integration.\n\n\n\nNewmark-\\(\\beta\\) method\nFrom Equation 15, the dynamic equilibrium at timestep \\(n+1\\) can be written as, \\[\n \\mathbf{M}\\ddot{\\mathbf{x}}_{n+1} + \\mathbf{C}\\dot{\\mathbf{x}}_{n+1} + \\mathbf{K}\\mathbf{x}_{n+1} = \\mathbf{f}_{n+1}.\n\\tag{7}\\]\nTo solve Equation 7, the Newmark-\\(\\beta\\) method approximates the displacement and velocity at \\(n+1\\) by, \\[\n\\mathbf{x}_{n+1} = \\overbrace{\\mathbf{x}_n + \\Delta_t \\dot{\\mathbf{x}}_n + \\Delta_t^2(0.5-\\beta) \\ddot{\\mathbf{x}}_n}^{{\\rm Predictor: \\,} \\mathbf{\\tilde{x}}_{n+1} } + \\overbrace{\\Delta_t^2 \\beta \\ddot{\\mathbf{x}}_{n+1}}^{{\\rm Corrector}}\n\\tag{8}\\] and \\[\n\\dot{\\mathbf{x}}_{n+1} = \\overbrace{\\dot{\\mathbf{x}}_{n} + \\Delta_t (1-\\gamma) \\ddot{\\mathbf{x}}_n}^{{\\rm Predictor: \\,} \\dot{\\mathbf{\\tilde{x}}}_{n+1} } + \\overbrace{\\Delta_t \\gamma \\ddot{\\mathbf{x}}_{n+1}}^{{\\rm Corrector}}\n\\tag{9}\\] where the constants \\(0\\leq\\gamma\\leq 1\\) and 
\\(0\\leq\\beta\\leq 1\\) control the relative weighting of the \\(n\\) and \\(n+1\\) acceleration contributions to \\(\\mathbf{x}_{n+1}\\) and \\(\\dot{\\mathbf{x}}_{n+1}\\), and also the stability properties of the scheme. A common choice that yields an implicit and unconditionally stable scheme is \\(\\gamma = 1/2\\) and \\(\\beta = 1/4\\) (termed the average constant acceleration method, equivalent to the trapezoidal integration rule).\n\nNote that at a given time step \\(n\\), the acceleration term \\(\\ddot{\\mathbf{x}}_{n+1}\\) is not available and so Equation 8 and Equation 9 cannot be computed directly. The idea behind the Newmark-\\(\\beta\\) method is to approximate \\(\\mathbf{x}_{n+1}\\) and \\(\\dot{\\mathbf{x}}_{n+1}\\) using the terms that are available – these are termed the predictors \\(\\mathbf{\\tilde{x}}_{n+1}\\) and \\(\\dot{\\mathbf{\\tilde{x}}}_{n+1}\\) – and later correct for the missing terms when an estimate for the acceleration \\(\\ddot{\\mathbf{x}}_{n+1}\\) is available.\nSubstituting Equation 8 and Equation 9 into Equation 7 we obtain, \\[\n \\mathbf{M}\\ddot{\\mathbf{x}}_{n+1} + \\mathbf{C}\\left[\\dot{\\mathbf{\\tilde{x}}}_{n+1} + \\Delta_t \\gamma \\ddot{\\mathbf{x}}_{n+1} \\right] + \\mathbf{K}\\left[\\mathbf{\\tilde{x}}_{n+1} + \\Delta_t^2 \\beta \\ddot{\\mathbf{x}}_{n+1} \\right] = \\mathbf{f}_{n+1}\n\\tag{10}\\] and moving all predictor terms to the right hand side we have, \\[\n \\mathbf{M}\\ddot{\\mathbf{x}}_{n+1} + \\mathbf{C} \\Delta_t \\gamma \\ddot{\\mathbf{x}}_{n+1} + \\mathbf{K} \\Delta_t^2 \\beta \\ddot{\\mathbf{x}}_{n+1} = \\mathbf{f}_{n+1} - \\mathbf{C}\\dot{\\mathbf{\\tilde{x}}}_{n+1} - \\mathbf{K}\\mathbf{\\tilde{x}}_{n+1}\n\\tag{11}\\] Factoring out the acceleration term on the left hand side, \\[\n \\overbrace{\\left(\\mathbf{M} + \\mathbf{C} \\Delta_t \\gamma + \\mathbf{K} \\Delta_t^2 \\beta \\right)}^{\\mathbf{A}} \\ddot{\\mathbf{x}}_{n+1} = \\mathbf{f}_{n+1} - \\mathbf{C}\\dot{\\mathbf{\\tilde{x}}}_{n+1} - 
\\mathbf{K}\\mathbf{\\tilde{x}}_{n+1}\n\\tag{12}\\] we can compute the inverse of the constant terms and solve for \\(\\ddot{\\mathbf{x}}_{n+1}\\), \\[\n\\ddot{\\mathbf{x}}_{n+1} = \\mathbf{A}^{-1}\\left(\\mathbf{f}_{n+1} - \\mathbf{C}\\dot{\\mathbf{\\tilde{x}}}_{n+1} - \\mathbf{K}\\mathbf{\\tilde{x}}_{n+1}\\right)\n\\tag{13}\\] After computing the acceleration \\(\\ddot{\\mathbf{x}}_{n+1}\\), we can correct our estimates of displacement and velocity, \\[\n\\mathbf{x}_{n+1} = \\mathbf{\\tilde{x}}_{n+1} + \\overbrace{\\Delta_t^2 \\beta \\ddot{\\mathbf{x}}_{n+1}}^{{\\rm Corrector}}\n\\] and \\[\n\\dot{\\mathbf{x}}_{n+1} = \\dot{\\mathbf{\\tilde{x}}}_{n+1} + \\overbrace{\\Delta_t \\gamma \\ddot{\\mathbf{x}}_{n+1}}^{{\\rm Corrector}}\n\\] We now have estimates for \\(\\mathbf{x}_{n+1}\\), \\(\\dot{\\mathbf{x}}_{n+1}\\) and \\(\\ddot{\\mathbf{x}}_{n+1}\\) and can proceed to the next time step.\nIn the code snippet below we implement the Newmark-\\(\\beta\\) method and apply it to our discrete mass-spring system to compute its IRF. To do this we use the step-velocity initial condition \\(\\dot{\\mathbf{x}}_0 = \\mathbf{M}^{-1}\\mathbf{1}_j\\) as described above. 
We see that the resulting IRF is in agreement with that of the modal synthesis before.\n\n\nCode: compute the IRF using the Newmark method and compare against the modal solution\n# Define Newmark functions:\ndef newmarkImp(M, C, K, L, dt, e, gamma=0.5, beta=0.25, type='initForce'):\n\n ndof = M.shape[0] # Number of DOFs\n tSteps = int(L/dt)\n\n if type == 'initForce': # Initial force applied by acceleration IC\n Pj = np.zeros((ndof,))\n Pj[e] = 1\n u = np.zeros((ndof, tSteps))\n ud = np.zeros((ndof, tSteps))\n udd = np.zeros((ndof, tSteps))\n udd[:, 0] = np.linalg.inv(M)@Pj*2/dt\n f = np.zeros((ndof, tSteps))\n f[e, 0] = 0\n elif type == 'stepVel': # Initial force applied by equivalent initial velocity IC\n Pj = np.zeros((ndof,))\n Pj[e] = 1\n u = np.zeros((ndof, tSteps))\n ud = np.zeros((ndof, tSteps))\n udd = np.zeros((ndof, tSteps))\n ud[:, 0] = np.linalg.inv(M)@Pj\n udd[:, 0] = -np.linalg.inv(M)@C@ud[:, 0]\n f = np.zeros((ndof, tSteps))\n f[e, 0] = 0\n elif type == 'secondForce': # Dirac force at second time step\n Pj = np.zeros((ndof,))\n Pj[e] = 1\n u = np.zeros((ndof, tSteps))\n ud = np.zeros((ndof, tSteps))\n udd = np.zeros((ndof, tSteps))\n f = np.zeros((ndof, tSteps))\n f[e, 1] = 1/dt\n\n t = np.linspace(0, L-dt, int(L/dt)) # Time array\n A = np.linalg.inv(M+gamma*dt*C+beta*(dt**2)*K) # Acceleration update matrix\n\n for tn in range(1, t.shape[0]): # Start loop at second time step\n # Prediction step:\n # Displacement prediction based on previous u, ud and udd\n u[:, tn] = u[:, tn-1]+dt*ud[:, tn-1]+(0.5-beta)*dt**2*udd[:, tn-1]\n # Velocity prediction based on previous ud and udd\n ud[:, tn] = ud[:, tn-1]+(1-gamma)*dt*udd[:, tn-1]\n # Acceleration solve (Equation 13) using predicted u and ud\n udd[:, tn] = A@(f[:, tn]-K@u[:, tn]-C@ud[:, tn])\n\n # Correction step:\n # Displacement update using predicted udd\n u[:, tn] = u[:, tn]+dt**2*beta*udd[:, tn]\n # Velocity update using predicted udd\n ud[:, tn] = ud[:, tn]+dt*gamma*udd[:, tn]\n \n return u, 
ud, udd\n\n# %% Define system parameters and initial conditions\nndof = 8 # Number of DoFs\n\nu_NM, ud_NM, udd_NM = newmarkImp(M, C_prop, K, T, dt, 0, gamma=0.5, beta=0.25,\n type='stepVel')\n\n\nfig, ax = plt.subplots()\nax.plot(t, x[5,:],label='Modal')\nax.plot(t, u_NM[5,:],'--',label='Newmark')\nax.grid(True)\nax.set_xlabel('Time (s)')\nax.set_ylabel('Displacement (m)')\nax.set_xlim([0,5])\nplt.legend()\nplt.show() \n\n\n\n\n\n\n\n\nFigure 4: Impulse response of mass-spring system generated using second-order Newmark integration. Excitation at mass 1, response at mass 5.\n\n\n\n\n\n\n\nFirst order integration\nThere exists a wide range of numerical methods for solving ordinary differential equations (ODEs). Many of these are designed to solve general first-order Initial Value Problems (IVPs). These are problems that take the form, \\[\n\\frac{dy}{dt} = f(y,t) \\quad \\quad \\mbox{with} \\quad \\quad y(t_0) = y_0\n\\tag{14}\\] where \\(y(t)\\) is the sought-after solution, \\(f(y,t)\\) is a function that describes its derivative with respect to \\(t\\) and \\(y_0\\) is the initial value of \\(y\\) at \\(t=0\\). The fact that we have limited ourselves to first-order ODEs here (meaning that no higher order derivatives are present in Equation 14) at first sounds somewhat limiting. However, as we will see shortly, higher-order ODEs can be converted into a larger system of first-order equations by introducing extra variables, and so Equation 14 is broadly applicable to most IVPs.\nThe advantage of working with a first order equation, as in Equation 14, is that we can avoid having to code our own time-stepping integration algorithm and instead use one of the many preexisting functions/packages such as ode45 in MATLAB, or scipy.integrate.odeint in Python. It is beyond the scope of this tutorial to give a detailed description of the various methods available to solve Equation 14; this information is available through the relevant MATLAB/scipy documentation. 
In the numerical example below we will use the standard scipy.integrate.odeint function.\nOur first job is to convert our second-order ODE into a first-order ODE. Recall our EoM for a general MDOF system, \\[\n \\mathbf{M}\\ddot{\\mathbf{x}}(t) + \\mathbf{C}\\dot{\\mathbf{x}}(t) + \\mathbf{K}\\mathbf{x}(t) = \\mathbf{f}(t)\n\\tag{15}\\] To convert Equation 15 into a first-order ODE we combine the displacement and velocity into a single solution vector, as with their respective derivatives, \\[\n\\mathbf{y} = \\left(\\begin{array}{c} \\mathbf{x} \\\\ \\dot{\\mathbf{x}} \\end{array}\\right) \\quad \\mbox{and} \\quad \\dot{\\mathbf{y}} = \\left(\\begin{array}{c} \\dot{\\mathbf{x}} \\\\ \\ddot{\\mathbf{x}} \\end{array}\\right)\n\\] Inspecting Equation 15, we can rewrite our ODE in the form, \\[\n\\dot{\\mathbf{y}} = \\mathbf{A} \\mathbf{y} + \\mathbf{B}\\mathbf{f} \\quad \\quad \\left(\\begin{array}{c} \\dot{\\mathbf{x}} \\\\ \\ddot{\\mathbf{x}} \\end{array}\\right) = \\left[\\begin{array}{cc}\\mathbf{0} & \\mathbf{I} \\\\ -\\mathbf{M}^{-1}\\mathbf{K} & -\\mathbf{M}^{-1}\\mathbf{C} \\end{array}\\right]\\left(\\begin{array}{c} \\mathbf{x} \\\\ \\dot{\\mathbf{x}} \\end{array}\\right) + \\left[\\begin{array}{c} \\mathbf{0} \\\\ \\mathbf{M}^{-1} \\end{array}\\right]\\mathbf{f}\n\\tag{16}\\] where it is noted that the top row of Equation 16 simply states that \\(\\dot{\\mathbf{x}} = \\dot{\\mathbf{x}}\\). Notice that we have doubled the number of unknowns in our solution (\\(\\mathbf{x}\\in\\mathbb{R}^N\\) but \\(\\mathbf{y}\\in\\mathbb{R}^{2N}\\)), but reduced the problem to first order in \\(\\mathbf{y}\\). Equation 16 can now be dealt with using standard IVP solvers such as ode45 in MATLAB, or scipy.integrate.odeint in Python.\nIn the code snippet below we use the Python scipy.integrate.odeint function to solve our ODE. 
Note that to do this we must write a function that describes our first-order ODE, taking as inputs the solution vector \\(\\mathbf{y}\\) and the current time step \\(t_i\\), as well as the system properties (\\(\\mathbf{M}\\), \\(\\mathbf{C}\\) and \\(\\mathbf{K}\\)). To obtain the impulse response function as the solution, we set \\(\\mathbf{f} = \\mathbf{0}\\) and the initial velocity of the \\(j\\)th mass as \\(\\dot{\\mathbf{x}}_{0,j} = 1/M_j\\). As expected we get a near-identical IRF compared to the modal summation from before. Note, however, that the numerical IRF is only an approximation in the sense that numerical integration is an approximate method for computing the solution to an IVP, requiring a balance between accuracy and computation effort through the specification of error tolerances, etc.\n\nCode: compute the IRF using scipy.integrate.odeint and compare against modal solution\n# Imports\nfrom scipy.integrate import odeint\n\n# Define ODE function\ndef mass_spring(y, ti, M, C, K, f, om=0):\n invM = np.linalg.inv(M) # Inverse mass matrix (recomputed on every call)\n A = np.block([[np.zeros(M.shape), np.eye(M.shape[0])],\n [-invM@K, -invM@C]]) # State matrix\n\n b = np.vstack((np.zeros((M.shape[0], 1)),\n invM.dot(f*np.cos(2*np.pi*om*ti)))) # Forcing term (zero here; impulse enters via the initial velocity)\n dydt = np.squeeze(A@y) + np.squeeze(b) # Compute derivative\n return dydt\n\n\n# Initial conditions (displacement, velocity):\nu0 = np.zeros((numDoF,))\nud0 = np.zeros((numDoF,))\nud0[0] = 1/M[0, 0]\ny0 = np.concatenate((u0, ud0))\n# Applied force:\nf = np.zeros((numDoF, 1))\nom = np.zeros((numDoF, 1)) # Excitation frequency\n\n# ODE integration using first-order representation\nsol = odeint(mass_spring, y0, t, args=(M, C_prop, K, f, om), rtol=1e-5, atol=1e-5)\nu_ODE = sol[:, 0:8].T # Displacement IRF\nud_ODE = sol[:, 8:].T # Velocity IRF\n\nfig, ax = plt.subplots()\nax.plot(t, x[5,:],label='Modal')\nax.plot(t, u_NM[5,:],'--',label='Newmark')\nax.plot(t, u_ODE[5,:],'-.',label='ODE')\nax.grid(True)\nax.set_xlabel('Time (s)')\nax.set_ylabel('Displacement (m)')\nax.set_xlim([0,5])\nplt.legend()\nplt.show() \n\n\n\n\n\n\n\n\nFigure 5: Impulse response of mass-spring system generated using first-order numerical integration. Excitation at mass 1, response at mass 5.",
"crumbs": [
"Tutorials",
"Synthesising Impulse Response Functions"
]
},
{
"objectID": "tutorials/synthIRF/synthIRF.html#inverse-fourier-transform",
"href": "tutorials/synthIRF/synthIRF.html#inverse-fourier-transform",
"title": "Synthesizing IRFs from mass, stiffness and damping",
"section": "Inverse Fourier transform",
"text": "Inverse Fourier transform\nFinally, we can compute the system impulse response directly from its Frequency Response Function (FRF) \\(H(\\omega)\\) by using the inverse Fourier transform. To do so we simply need to reconstruct the complex double sided spectrum from the single sided spectrum that is \\(H(\\omega)\\), \\[\nS(\\vec{\\omega}) = \\left[\\begin{array}{ccccc|cccc}\nH(0) & H(\\omega_0) & H(\\omega_1) & \\cdots & H(\\omega_N) & H^*(\\omega_N) & H^*(\\omega_{N-1}) & \\cdots & H^*(\\omega_0)\n\\end{array}\\right]\n\\] With \\(S(\\vec{\\omega})\\) built, the IRF is then given by, \\[\nG(\\,\\vec{t}\\,) = \\frac{1}{\\Delta t} \\mbox{iFT}\\left(S(\\vec{\\omega}) \\right)\n\\] where \\(\\Delta t\\) is the sampling interval of the IRF and \\(\\mbox{iFT}(\\square)\\) is any standard implementation of the inverse Fourier Transform (for example using scipy.fft.ifft).\nNote that there are some important relations between the sampling interval \\(\\Delta t\\), impulse length \\(T\\), frequency resolution \\(\\Delta f\\) and maximum frequency \\(f_N\\) of the FRF that must be satisfied to obtain a correctly scaled IRF in both time and amplitude:\n\nThe sampling interval \\(\\Delta t\\) determines the so-called Nyquist frequency, which for our purposes corresponds to the maximum frequency of the measured FRF \\(f_{N} = 1/(2 \\Delta t)\\).\nThe total number of samples \\(N_{samp}\\) is related to the sampling interval \\(\\Delta t\\) and the impulse length \\(T\\) by \\(N_{samp} = T/\\Delta t\\).\nThe frequency resolution \\(\\Delta f\\) is related to the impulse length by \\(\\Delta f = 1/T = 2f_{N}/N_{samp}\\)\n\nIn the code snippet below we synthesise the FRF of the assembly with an appropriate \\(\\Delta f\\) and \\(f_N\\) and compute the IRF by the iFFT. 
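These bookkeeping relations can be sketched numerically. A minimal example, with assumed values of a 10 s impulse sampled with 10,000 points:

```python
# Assumed example values: 10 s impulse, 10,000 samples
T = 10.0                # impulse length (s)
nsamp = 10000           # total number of samples
dt = T / nsamp          # sampling interval, so nsamp = T/dt
f_N = 1 / (2 * dt)      # Nyquist limit = maximum FRF frequency (Hz)
df = 1 / T              # frequency resolution (Hz), equivalently 2*f_N/nsamp
```

Any one of these quantities is fixed once the others are chosen, which is why the FRF must be synthesised on a frequency grid consistent with the desired IRF length and sampling interval.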
We see that the IRF agrees with all previous methods.\n\n\nCode: compute the IRF using inverse Fourier transform of the Frequency Response Function\nT = 10 # Total impulse time \nnsamp = 10000 # Number of samples\nΔt = T/nsamp # Sampling interval\nf_nq = 1/Δt/2 # Nyquist limit (max FRF frequency)\nΔf = 1/T # Frequency resolution\n\nf = np.linspace(0,f_nq,int(nsamp/2)) # Generate frequency array\nω = 2*np.pi*f # Convert to radians\n# Direct solution:\nY_C_dir_prop = np.linalg.inv(K +1j* np.einsum('i,jk->ijk',ω, C_prop) - np.einsum('i,jk->ijk',ω**2, M))\n\ndoubleSidedH = np.hstack((Y_C_dir_prop[:,0,5], np.flip(Y_C_dir_prop[1:,0,5].conj()))) # Build double sided spectrum\n\nu_FFT = 1/Δt * np.real(sp.fft.ifft(doubleSidedH))\n\nfig, ax = plt.subplots()\nax.plot(t, x[5,:],label='Modal')\nax.plot(t, u_NM[5,:],'--',label='Newmark')\nax.plot(t, u_ODE[5,:],'-.',label='ODE')\nax.plot(t[0:-1], u_FFT[:],':',label='iFFT')\nax.grid(True)\nax.set_xlabel('Time (s)')\nax.set_ylabel('Displacement (m)')\nax.set_xlim([0,5])\nplt.legend()\nplt.show() \n\n\n\n\n\n\n\n\nFigure 6: Impulse response of mass-spring system generated using inverse Fourier transform of the system FRF. Excitation at mass 1, response at mass 5.",
"crumbs": [
"Tutorials",
"Synthesising Impulse Response Functions"
]
},
{
"objectID": "tutorials/synthIRF/synthIRF.html#summary",
"href": "tutorials/synthIRF/synthIRF.html#summary",
"title": "Synthesizing IRFs from mass, stiffness and damping",
"section": "Summary",
"text": "Summary\nIn this tutorial we have introduced the Impulse Response Function (IRF) as a time domain representation of a system’s dynamics. Knowing the IRF of a system allows us to predict its response to an arbitrary time varying input force. It is a very useful tool in structural dynamics and forms the basis of Impulse-based Substructuring (IBS, a substructuring technique formulated in the time domain).\nWe have covered three different approaches for computing IRFs, including modal synthesis (only applicable to proportionally damped systems), numerical integration (including both Newmark multi-step and first order methods) and inverse Fourier transform of the system FRF. All methods were demonstrated on a simple discrete mass-spring system, and shown to return the same IRF.",
"crumbs": [
"Tutorials",
"Synthesising Impulse Response Functions"
]
},
{
"objectID": "tutorials/synthFRF/synthFRF_1.html",
"href": "tutorials/synthFRF/synthFRF_1.html",
"title": "Synthesising FRFs from mass, stiffness and damping",
"section": "",
"text": "In this tutorial we introduce the key equations for synthesising FRFs from pre-existing numerical models, specifically physical domain representations where mass and stiffness matrices are known (e.g. by the Finite Element method). The reader is referred to the many textbooks in the field for further detail on the modelling itself, see for example Petyt (2010).",
"crumbs": [
"Tutorials",
"Synthesising Frequency Response Functions"
]
},
{
"objectID": "tutorials/synthFRF/synthFRF_1.html#a-little-bit-of-theory",
"href": "tutorials/synthFRF/synthFRF_1.html#a-little-bit-of-theory",
"title": "Synthesising FRFs from mass, stiffness and damping",
"section": "A little bit of theory",
"text": "A little bit of theory\nIn the physical domain, a component is characterised by its mass, stiffness and damping characteristics, which together form the following equation of motion, \\[\n \\mathbf{M}\\mathbf{\\ddot{x}}(t) + \\mathbf{C}\\mathbf{\\dot{x}}(t) + \\mathbf{K}\\mathbf{x}(t) = \\mathbf{f}(t).\n\\tag{1}\\] To obtain a component FRF matrix from Equation 1, two approaches will be discussed; the direct (matrix inversion) method and modal synthesis.\nIn either case, we begin by assuming a harmonic forcing such that \\(\\mathbf{f}(t) = \\mathbf{f} e^{i\\omega t}\\). Owing to linearity, the displacement, velocity and acceleration response will also be periodic with the same frequency. Equation 1 can thus be transformed into the frequency domain, and expressed in terms of displacement \\(\\mathbf{x}(\\omega)\\) as, \\[\n \\left[-\\omega^2\\mathbf{M} + i\\omega\\mathbf{C} + \\mathbf{K}\\right]\\mathbf{x}(\\omega) = \\mathbf{D}(\\omega) \\mathbf{x}(\\omega) = \\mathbf{f}(\\omega).\n\\tag{2}\\] where \\(\\mathbf{D}(\\omega)\\) represents the dynamic stiffness of the component.\n\nDirect method\nThe direct determination of the mobility involves computing the matrix inverse of \\(\\mathbf{D}\\) and pre-multiplying both sides of Equation 2 by it, giving, \\[\n \\mathbf{x}(\\omega) = \\mathbf{R}(\\omega)\\mathbf{f}(\\omega)\n\\] where, \\[\n \\mathbf{R}(\\omega) =\\mathbf{D}^{-1}(\\omega) = \\left[-\\omega^2\\mathbf{M} + i\\omega\\mathbf{C} + \\mathbf{K}\\right]^{-1}\n\\tag{3}\\] is the dynamic compliance of the component. The mobility is then obtained by differentiation, i.e. scaling by \\(i\\omega\\), \\[\n \\mathbf{Y}(\\omega) = i\\omega\\mathbf{R}(\\omega).\n\\tag{4}\\]\nThis direct method is appropriate when working with small models containing not too many DoFs. However, as the matrix inversion has to be performed at each frequency, for large systems this becomes computationally expensive and is not recommended. 
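As a sketch of the direct method, the loop below inverts the dynamic stiffness at each frequency (Equation 3) and scales by \\(i\\omega\\) to obtain the mobility (Equation 4). The 2-DoF matrices are made up purely for illustration:

```python
import numpy as np

# Hypothetical 2-DoF system (illustrative values only)
M = np.diag([1.0, 2.0])
C = np.array([[0.4, -0.2], [-0.2, 0.4]])
K = np.array([[3.0, -1.0], [-1.0, 2.0]]) * 1e3

omega = 2 * np.pi * np.linspace(1, 50, 200)   # frequency grid (rad/s)
Y = np.zeros((omega.size, 2, 2), dtype=complex)
for i, w in enumerate(omega):
    D = -w**2 * M + 1j * w * C + K            # dynamic stiffness, Equation 2
    Y[i] = 1j * w * np.linalg.inv(D)          # mobility, Equation 4
```

Because \\(\\mathbf{M}\\), \\(\\mathbf{C}\\) and \\(\\mathbf{K}\\) are symmetric, the resulting mobility matrix is reciprocal (\\(Y_{12} = Y_{21}\\)), which is a useful sanity check on any FRF synthesis.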
Furthermore, this direct approach provides the full mobility matrix that includes every DoF within the component model. For virtual prototyping applications, this is usually not necessary since the black box models adopted for components only require interface DoFs to be described in full.\nA reduced mobility matrix can be obtained by considering Equation 2 in a partitioned form (omitting frequency dependence for clarity), \\[\n \\left(\\begin{array}{c}\n \\mathbf{f}_r \\\\ \\mathbf{f}_d\n \\end{array}\\right) = \\left[\\begin{array}{cc}\n \\mathbf{D}_{rr} & \\mathbf{D}_{rd} \\\\\n \\mathbf{D}_{dr} & \\mathbf{D}_{dd} \n \\end{array}\\right] \\left(\\begin{array}{c}\n \\mathbf{x}_r \\\\ \\mathbf{x}_d\n \\end{array}\\right)\n\\tag{5}\\] where \\(r\\) denotes the set of DoFs we wish to retain, and \\(d\\) those that are required for computation but are not needed in the dynamic description of the component. Assuming no external forces are applied to any of the \\(d\\) DoFs, \\(\\mathbf{f}_d = \\mathbf{0}\\), Equation 5 can be rearranged to yield, \\[\n \\mathbf{f}_r = \\left(\\mathbf{D}_{rr} - \\mathbf{D}_{rd}\\mathbf{D}_{dd}^{-1} \\mathbf{D}_{dr}\\right)\\mathbf{x}_r = \\mathbf{\\hat{D}}_{rr} \\mathbf{x}_r\n\\tag{6}\\] where \\(\\mathbf{\\hat{D}}_{rr}\\) is the reduced dynamic stiffness matrix, from which the reduced mobility matrix is obtained by inversion and scaling, \\[\n\\mathbf{\\hat{Y}}_{rr} = i\\omega \\mathbf{\\hat{D}}_{rr}^{-1}.\n\\] Note that although reduced in size to include only the retained DoFs \\(r\\), Equation 6 still involves the inversion of the dynamic stiffness sub-matrix \\(\\mathbf{D}_{dd}\\), which for large systems is still computationally demanding. 
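A sketch of this condensation step, using an arbitrary made-up 4-DoF dynamic stiffness matrix at a single frequency. The reduction in Equation 6 is a Schur complement, so inverting the reduced matrix reproduces the retained partition of the full compliance exactly (provided no forces act on the discarded DoFs):

```python
import numpy as np

# Stand-in dynamic stiffness at one frequency (made up, symmetric, invertible)
rng = np.random.default_rng(0)
Arand = rng.normal(size=(4, 4))
D = Arand + Arand.T + 10 * np.eye(4) + 1j * np.eye(4)

r = [0, 1]                                   # retained DoFs
d = [2, 3]                                   # discarded DoFs
Drr, Drd = D[np.ix_(r, r)], D[np.ix_(r, d)]
Ddr, Ddd = D[np.ix_(d, r)], D[np.ix_(d, d)]

# Equation 6: reduced dynamic stiffness (Schur complement of Ddd)
D_hat = Drr - Drd @ np.linalg.inv(Ddd) @ Ddr
```

Comparing `np.linalg.inv(D_hat)` against the \\((r, r)\\) partition of `np.linalg.inv(D)` confirms that the condensation is exact, not an approximation.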
More efficient reduction schemes are available at the cost of introducing some approximation in the estimation of the component dynamics.\n\n\nModal synthesis\nThe modal synthesis approach involves determining the mode shapes and natural frequencies of a component, and expressing its FRF as a truncated summation of modes.\nTo formulate the modal synthesis of a mobility matrix, we begin by considering the case of an undamped and unforced system, whose equation of motion reads, \\[\n \\mathbf{K}\\mathbf{x}(\\omega) = \\omega^2\\mathbf{M}\\mathbf{x}(\\omega).\n\\tag{7}\\] Physically, we are interested in a special type of solution to Equation 7, where all coordinates execute synchronous motion, i.e. they have the same time dependence so that the ratio between the motion of any two points remains constant. For an \\(N\\) dimensional system, there exist \\(N\\) such solutions to Equation 7 in the form, \\[\n \\mathbf{K}\\mathbf{\\tilde{\\phi}}_i = \\lambda_i\\mathbf{M}\\mathbf{\\tilde{\\phi}}_i.\n\\tag{8}\\] where \\(\\lambda_i = \\omega_i^2\\) are the so-called system eigenvalues (representing the squared natural frequencies of the component), and \\(\\mathbf{\\tilde{\\phi}}_i\\) are the corresponding eigenvectors (representing the relative deformations of the component at frequency \\(\\omega_i\\) that satisfy Equation 7). The eigenvectors \\(\\mathbf{\\tilde{\\phi}}_i\\) are unique only in the sense that the ratio between the motion at any two points is constant. In other words, it is only the shape of the component’s deflection that is well defined, not its amplitude. To render the eigenvectors unique a normalisation process is required. 
The most common normalisation scheme is that of mass normalisation, whereby the normalised eigenvectors are obtained as \\(\\phi_i = {\\tilde{\\phi}_i}\\left(\\tilde{\\phi}_i^{\\rm T}\\mathbf{M}\\tilde{\\phi}_i\\right)^{-1/2}\\), or in matrix form \\(\\boldsymbol{\\Phi} = \\boldsymbol{\\tilde{\\Phi}}\\,\\mbox{diag}\\left(\\boldsymbol{\\tilde{\\Phi}}^{\\rm T}\\mathbf{M}\\boldsymbol{\\tilde{\\Phi}}\\right)^{-1/2}\\). With this mass normalisation we have that, \\[\n \\mathbf{\\phi}_i^{\\rm T}\\mathbf{M}\\mathbf{\\phi}_i=1\n\\tag{9}\\] and \\[\n \\mathbf{\\phi}_i^{\\rm T}\\mathbf{K}\\mathbf{\\phi}_i=\\omega_i^2.\n\\] This latter relation can be easily proven by pre-multiplying both sides of Equation 8 by \\(\\mathbf{\\phi}_i^{\\rm T}\\). Similarly, from Equation 8 and the uniqueness of eigenvalues, it is straightforward to show that the component eigenvectors/mode shapes are orthogonal, such that, \\[\n \\mathbf{\\phi}^{\\rm T}_r\\mathbf{M}\\mathbf{\\phi}_s = 0\n\\] and \\[\n \\mathbf{\\phi}^{\\rm T}_r\\mathbf{K}\\mathbf{\\phi}_s = 0, r\\neq s.\n\\tag{10}\\] The normalised eigenvectors described above are referred to as the component’s normal modes, or simply its modes.\nNoting that any \\(N\\) dimensional vector can be represented as a linear combination of \\(N\\) linearly independent vectors, and that normal modes are themselves orthogonal and linearly independent, the displacement \\(\\mathbf{x}\\) can be expressed in the form, \\[\n \\mathbf{x} = \\Sigma_i^N \\mathbf{\\phi}_i q_i = \\boldsymbol{\\Phi}\\mathbf{q}\n\\tag{11}\\] where \\(\\boldsymbol{\\Phi} = \\left[ \\mathbf{\\phi}_1, \\mathbf{\\phi}_2, \\cdots, \\mathbf{\\phi}_N\\right]\\) is a modal matrix whose columns represent the normal modes and \\(\\mathbf{q}=\\mathbf{q}_0e^{i\\omega t}\\) is a vector of modal amplitudes describing the contribution of each mode to the response \\(\\mathbf{x}\\). 
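The mass normalisation and orthogonality properties can be checked numerically on a made-up 3-DoF system. Here we use scipy.linalg.eigh, which for symmetric \\(\\mathbf{K}\\) and positive definite \\(\\mathbf{M}\\) solves the generalised eigenproblem and returns mass-normalised eigenvectors directly:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 3-DoF system (illustrative values only)
M = np.diag([1.0, 2.0, 0.5])
K = np.array([[4.0, -2.0, 0.0],
              [-2.0, 5.0, -1.0],
              [0.0, -1.0, 3.0]]) * 1e3

# Eigenvalues lam_i = omega_i**2 and mass-normalised mode shapes Phi
lam, Phi = eigh(K, M)
```

With this normalisation, \\(\\boldsymbol{\\Phi}^{\\rm T}\\mathbf{M}\\boldsymbol{\\Phi} = \\mathbf{I}\\) and \\(\\boldsymbol{\\Phi}^{\\rm T}\\mathbf{K}\\boldsymbol{\\Phi} = \\boldsymbol{\\Lambda}\\), i.e. Equations 9 and 10 hold to machine precision.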
With the modal matrix \\(\\boldsymbol{\\Phi}\\) being constant, we have also that \\(\\mathbf{\\dot{x}} = \\boldsymbol{\\Phi}\\mathbf{\\dot{q}}\\) and \\(\\mathbf{\\ddot{x}} = \\boldsymbol{\\Phi}\\mathbf{\\ddot{q}}\\).\nReturning now to the damped and forced equation of motion, \\[\n \\mathbf{M}\\mathbf{\\ddot{x}}(t) + \\mathbf{C}\\mathbf{\\dot{x}}(t) + \\mathbf{K}\\mathbf{x}(t) = \\mathbf{f}(t)\n\\tag{12}\\] substituting the displacement, velocity and acceleration by their modal representations from Equation 11 and its time derivatives, whilst pre-multiplying Equation 12 by the transposed mode shape matrix \\(\\boldsymbol{\\Phi}^{\\rm T}\\) yields, \\[\n \\mathbf{\\bar{M}}\\mathbf{\\ddot{q}}(t) + \\mathbf{\\bar{C}}\\mathbf{\\dot{q}}(t) + \\mathbf{\\bar{K}}\\mathbf{q}(t) = \\mathbf{Q}(t)\n\\tag{13}\\] with, \\[\n \\mathbf{\\bar{M}} = \\boldsymbol{\\Phi}^{\\rm T}\\mathbf{{M}}\\boldsymbol{\\Phi}, \\quad \\mathbf{\\bar{C}} = \\boldsymbol{\\Phi}^{\\rm T}\\mathbf{{C}}\\boldsymbol{\\Phi}, \\quad \\mathbf{\\bar{K}} = \\boldsymbol{\\Phi}^{\\rm T}\\mathbf{{K}}\\boldsymbol{\\Phi}, \\quad\\mathbf{Q} = \\boldsymbol{\\Phi}^{\\rm T}\\mathbf{{f}}.\n\\] \\(\\mathbf{\\bar{M}}\\), \\(\\mathbf{\\bar{K}}\\) and \\(\\mathbf{\\bar{C}}\\) are called the modal mass, stiffness and damping matrices, and \\(\\mathbf{{Q}}\\) the modal force vector. 
Recalling the mass normalisation and orthogonality conditions in Equation 9 and Equation 10, \\[\n \\mathbf{\\bar{M}} = \\mathbf{I}, \\quad \\mathbf{\\bar{K}}=\\mbox{diag}\\left[\\omega^2_1, \\omega^2_2, \\cdots, \\omega^2_N\\right]=\\boldsymbol{\\Lambda}\n\\] and assuming harmonic excitation \\(\\mathbf{Q}(t) = \\mathbf{Q}e^{i\\omega t}\\), Equation 13 becomes, \\[\n \\left[-\\omega^2\\mathbf{I} + i\\omega\\mathbf{\\bar{C}} + \\boldsymbol{\\Lambda}\\right]\\mathbf{q} = \\mathbf{Q}.\n\\] Pre-multiplying both sides by the inverse of the bracketed term yields a solution for the modal amplitudes, \\[\n \\mathbf{q} = \\left[-\\omega^2\\mathbf{I} + i\\omega\\mathbf{\\bar{C}} + \\boldsymbol{\\Lambda}\\right]^{-1}\\mathbf{Q}.\n\\] Substituting the modal amplitudes into Equation 11, whilst recalling \\(\\mathbf{Q} = \\boldsymbol{\\Phi}^{\\rm T}\\mathbf{{f}}\\), \\[\n \\mathbf{x} = \\boldsymbol{\\Phi} \\left[-\\omega^2\\mathbf{I} + i\\omega\\mathbf{\\bar{C}} + \\boldsymbol{\\Lambda}\\right]^{-1}\\boldsymbol{\\Phi}^{\\rm T}\\mathbf{{f}}\n\\] from which we can identify the displacement FRF as, \\[\n \\mathbf{Y}(\\omega) = \\boldsymbol{\\Phi}\\left[\\boldsymbol{\\Lambda}-\\omega^2\\mathbf{I} + i\\omega \\mathbf{\\bar{C}}\\right]^{-1}\\boldsymbol{\\Phi}^{\\rm T}.\n\\tag{14}\\] Equation 14 provides a means of synthesising the FRF matrix of a component based on its natural frequencies, mode shapes and modal damping characteristics. To provide an efficient synthesis, there are two important considerations.\nFirst, the number of modes used. Whilst an \\(N\\) DoF system will have \\(N\\) natural frequencies and associated mode shapes, often only a reduced subset of these is required to provide a good estimate of the mobility matrix. This is achieved by truncating the modal matrix \\(\\boldsymbol{\\Phi}\\) to include only the first \\(n\\) columns \\(\\boldsymbol{\\Phi} = \\left[ \\mathbf{\\phi}_1, \\mathbf{\\phi}_2, \\cdots, \\mathbf{\\phi}_n\\right]\\) (i.e. the mode shapes of the first \\(n\\) natural frequencies). 
The matrix \\(\\boldsymbol{\\Lambda}\\) should similarly be truncated. It is generally suggested to include modes up to twice the highest frequency (\\(f_{max}\\)) of interest, however this general guidance may not be sufficient for VAVP applications. Going back to the modal matrix, the rows of \\(\\boldsymbol{\\Phi}\\) correspond to the component’s DoFs and unwanted rows can readily be discarded to obtain a reduced mobility matrix including only the DoFs of interest.\nSecond, we require the square bracketed term to be diagonal, such that its matrix inversion becomes computationally trivial. Whilst the \\(-\\omega^2\\mathbf{I}\\) and \\(\\boldsymbol{\\Lambda}\\) terms satisfy this requirement, \\(\\mathbf{\\bar{C}}\\) is generally fully populated. Hence, we are interested in simplified models of damping that yield diagonal modal damping matrices. Here we will discuss two popular representations of damping that yield a diagonal system of equations; proportional damping, and structural damping.\n\nProportional damping\nProportional damping is based on the notion that if the mass and stiffness matrices are both diagonalised by the modal matrix, then so too must be any linear combination of the two. Hence, a proportional damping matrix is defined as, \\[\n\\mathbf{C}_p = \\alpha\\mathbf{K}+\\beta\\mathbf{M}\n\\tag{15}\\] where \\(\\alpha\\) and \\(\\beta\\) are a pair of constants which can be determined based on experimental measurement. Based on Equation 15, the modal damping matrix becomes, \\[\n\\mathbf{\\bar{C}} = \\mathbf{\\Phi}^{\\rm T}\\mathbf{C}_p\\mathbf{\\Phi} = \\left[\\begin{array}{ccc}\\bar{C}_{11} & & \\\\\n& \\ddots & \\\\\n& & \\bar{C}_{NN}\n\\end{array}\\right]\n\\] where \\(\\bar{C}_{nn}\\) is the modal damping of the \\(n\\)th mode.\n\n\nStructural damping\nAnother appropriate (i.e. 
diagonalisable) model of damping is structural damping (also known as hysteretic damping) which is represented in the component equation of motion as, \\[\n \\left[-\\omega^2\\mathbf{M} + \\left(\\mathbf{K}+i\\mathbf{H}\\right)\\right]\\mathbf{x}(\\omega) = \\mathbf{f}(\\omega).\n\\tag{16}\\] The complex stiffness matrix \\(\\left(\\mathbf{K}+i\\mathbf{H}\\right)\\) is most easily obtained by replacing Young’s modulus \\(E\\) by a complex one of the form \\(E(1+i\\eta)\\) when deriving the component’s stiffness matrix, where \\(\\eta\\) is the loss factor. Equivalently, Equation 14 can be written as, \\[\n \\mathbf{Y}(\\omega) = \\boldsymbol{\\Phi}\\left[\\boldsymbol{\\Lambda}(1+i\\eta)-\\omega^2\\mathbf{I}\\right]^{-1}\\boldsymbol{\\Phi}^{\\rm T}.\n\\] The loss factor \\(\\eta\\) takes into account all losses, including material and coupling losses. Typical values are between 1% and 5% and tend to be higher where there are connections which allow sliding, such as bolted rather than welded joints. In the absence of a physical component to determine damping characteristics, structural damping with an estimated loss factor is the simplest option.",
"crumbs": [
"Tutorials",
"Synthesising Frequency Response Functions"
]
},
{
"objectID": "tutorials/synthFRF/synthFRF_1.html#mass-spring-system",
"href": "tutorials/synthFRF/synthFRF_1.html#mass-spring-system",
"title": "Synthesising FRFs from mass, stiffness and damping",
"section": "Mass-spring system",
"text": "Mass-spring system\nIn this section we apply both the direct inversion and modal summation approaches to synthesise the FRFs of the mass-spring system depicted in Figure 1. Given the discrete nature of the mass-spring system, we will only consider the case whereby all modes are included in the modal summation. In the next example we consider a simple FE model of a continuous beam; here we will address the notion of modal truncation.\n\n\n\n\n\n\nFigure 1: Numerical mass-spring assembly used for FRF synthesis example.\n\n\n\nWe begin by defining the mass and stiffness matrices \\(\\mathbf{M}\\) and \\(\\mathbf{K}\\) for the system in Figure 1.\n\n\nCode\nimport numpy as np\nnp.set_printoptions(linewidth=250)\nimport warnings\nwarnings.filterwarnings('ignore')\nimport matplotlib.pyplot as plt\nimport scipy as sp\nimport sys as sys\n\nf = np.linspace(0,100,1000) # Generate frequency array\nω = 2*np.pi*f # Convert to radians\n\n# Define component masses and build matrices\nm1, m2, m3, m4, m5 = 1, 2, 2, 1, 0.5\nm6, m7, m8, m9, m10, m11 = 0.5, 2, 1, 3, 5,0.5\nM_C = np.diag(np.array([m1,m2,m3+m6,m4+m7,m5+m8,m9,m10,m11])) # Coupled AB\n\n# Define component stiffnesses and build matrices (complex stiffness -> structural damping)\nη = 0.05 # Structural loss factor\nk1, k2, k3, k4, k5 = 1, 1.5, 0.7, 1, 0.5\nk6, k7, k8, k9, k10, k11 = 0.5, 2, 1, 0.2, 1, 0.5\nK_C = np.array([[k1+k2+k5, -k2, 0, 0, -k5, 0, 0, 0],\n [-k2, k2+k3+k4, -k3, -k4, 0, 0, 0, 0],\n [0, -k3, k3+k6, 0, 0, -k6, 0, 0],\n [0, -k4, 0, k4+k7, 0, 0, -k7, 0],\n [-k5, 0, 0, 0, k5+k8, 0, -k8, 0],\n [0, 0, -k6, 0, 0, k6+k9, 0, -k9],\n [0, 0, 0, -k7, -k8, 0, k7+k8+k10, -k10],\n [0, 0, 0, 0, 0, -k9, -k10, k9+k10+k11]])*1e5\n\n\nprint('Mass matrix of assembly (kg):\\n' + str(M_C))\nprint('')\nprint('Stiffness matrix of assembly (N/m):\\n' + str(K_C))\nprint('')\n\n\nMass matrix of assembly (kg):\n[[1. 0. 0. 0. 0. 0. 0. 0. ]\n [0. 2. 0. 0. 0. 0. 0. 0. ]\n [0. 0. 2.5 0. 0. 0. 0. 0. ]\n [0. 0. 0. 3. 0. 0. 0. 0. 
]\n [0. 0. 0. 0. 1.5 0. 0. 0. ]\n [0. 0. 0. 0. 0. 3. 0. 0. ]\n [0. 0. 0. 0. 0. 0. 5. 0. ]\n [0. 0. 0. 0. 0. 0. 0. 0.5]]\n\nStiffness matrix of assembly (N/m):\n[[ 300000. -150000. 0. 0. -50000. 0. 0. 0.]\n [-150000. 320000. -70000. -100000. 0. 0. 0. 0.]\n [ 0. -70000. 120000. 0. 0. -50000. 0. 0.]\n [ 0. -100000. 0. 300000. 0. 0. -200000. 0.]\n [ -50000. 0. 0. 0. 150000. 0. -100000. 0.]\n [ 0. 0. -50000. 0. 0. 70000. 0. -20000.]\n [ 0. 0. 0. -200000. -100000. 0. 400000. -100000.]\n [ 0. 0. 0. 0. 0. -20000. -100000. 170000.]]\n\n\n\nNext, we define the system’s damping matrix \\(\\mathbf{C}\\). We will consider three cases in this example; proportional damping, non-proportional damping and structural damping. We start with proportional damping.\n\nProportional damping\nWith proportional damping, the damping matrix is defined as a linear combination of the stiffness and mass matrices, \\[\n\\mathbf{C}_p = \\alpha\\mathbf{K}+\\beta\\mathbf{M}\n\\] The damping matrix \\(\\mathbf{C}_p\\) can be used directly in Equation 3 to compute the FRF by matrix inversion. Alternatively, since the mode shape matrix \\(\\mathbf{\\Phi}\\) diagonalises both the stiffness and mass matrices, we have that, \\[\n\\mathbf{\\bar{C}}_p = \\mathbf{\\Phi}^{\\rm T}\\mathbf{C}_p\\mathbf{\\Phi} = \\left[\\begin{array}{ccc}\\bar{C}_{11} & & \\\\\n& \\ddots & \\\\\n& & \\bar{C}_{NN}\n\\end{array}\\right]\n\\] is also a diagonal matrix. This allows us to use the more efficient modal summation to compute the FRF. In the code below we implement both these approaches.\nNote that to implement the modal solution we first solve the eigenproblem to obtain mode shapes and natural frequencies. We then mass normalise the mode shapes. 
To check everything is as it should be, plot the modal mass, stiffness and damping matrices to make sure they are diagonal - we see from Figure 2 they are.\n\n\nCode\n# Proportional damping\nC_C_prop = 0.0001*K_C + 1*M_C # Proportional damping matrix\nprint('Damping matrix (proportional αK+βM) of assembly (Ns/m):\\n' + str(C_C_prop))\n\n# Modal solution:\nλ, Φ_ = sp.linalg.eig(K_C, M_C) # Get mode shapes Φ_ and natural frequencies λ=ω_n**2\nΦ = Φ_@np.diag(1/np.sqrt(np.diag((Φ_.T@M_C@Φ_)))) # Mass normalise mode shapes\nΦCΦ = Φ.T @ C_C_prop @ Φ # Check modal damping matrix is diagonal\n\nfigs, axs = plt.subplots(1,3,figsize=(10, 30))\naxs[0].matshow(np.log(np.abs(Φ.T @ M_C @ Φ)))\naxs[0].set_xlabel('Mass matrix')\naxs[1].matshow(np.log(np.abs(Φ.T @ K_C @ Φ)))\naxs[1].set_xlabel('Stiffness matrix')\naxs[2].matshow(np.log(np.abs(ΦCΦ)))\naxs[2].set_xlabel('Prop. damping matrix')\nplt.show()\n\n\nDamping matrix (proportional αK+βM) of assembly (Ns/m):\n[[ 31. -15. 0. 0. -5. 0. 0. 0. ]\n [-15. 34. -7. -10. 0. 0. 0. 0. ]\n [ 0. -7. 14.5 0. 0. -5. 0. 0. ]\n [ 0. -10. 0. 33. 0. 0. -20. 0. ]\n [ -5. 0. 0. 0. 16.5 0. -10. 0. ]\n [ 0. 0. -5. 0. 0. 10. 0. -2. ]\n [ 0. 0. 0. -20. -10. 0. 45. -10. ]\n [ 0. 0. 0. 0. 0. -2. -10. 17.5]]\n\n\n\n\n\n\n\n\nFigure 2: Plots of modal mass, stiffness and prop. damping matrices. Off-diagonals are of the order of machine precision.\n\n\n\n\n\nHaving confirmed we have diagonal modal matrices, we compute the FRF matrix using Equation 14. Note that the modal summation avoids having to compute a matrix inverse. The inverse of a diagonal matrix is just the inverse of its diagonal elements. We also compute the FRF directly, which does require a matrix inversion. Figure 3 below compares the two FRFs. 
As expected, they’re identical.\n\n\nCode\n# Modal solution cont.:\nY_C_modal_prop = np.zeros((f.shape[0], 8, 8), dtype=complex) # Allocate space\nfor fi in range(f.shape[0]): # Loop over frequency and compute FRF using modal summation\n Y_C_modal_prop[fi, :, :] = Φ @ np.linalg.inv((np.diag(λ) - ω[fi]**2*np.eye(M_C.shape[0]) + 1j*ω[fi]*ΦCΦ)) @ Φ.T\n # Matrix inverse term is diagonal (up to machine precision), can be replaced with scalar inversion of the diagonal:\n Y_C_modal_prop[fi, :, :] = Φ @ np.diag(1/(λ - ω[fi]**2 + 1j*ω[fi]*np.diag(ΦCΦ))) @ Φ.T\n\n# Direct solution:\nY_C_dir_prop = np.linalg.inv(K_C +1j* np.einsum('i,jk->ijk',ω, C_C_prop) - np.einsum('i,jk->ijk',ω**2, M_C))\n\nfig, ax = plt.subplots()\nax.semilogy(f, np.abs(Y_C_dir_prop[:,0,-1]),label='Direct')\nax.semilogy(f, np.abs(Y_C_modal_prop[:,0,-1]),linestyle='dashed',label='Modal')\nax.grid(True)\nax.set_xlabel('Frequency (Hz)')\nax.set_ylabel('Admittance (m/N)')\nplt.legend()\nplt.show() \n\n\n\n\n\n\n\n\nFigure 3: Displacement FRF obtained by modal summation with proportional damping and directly by matrix inversion.\n\n\n\n\n\nAs well as being more computationally efficient, another advantage of the modal summation approach is that we can choose to inspect the mode shapes directly to visualise the deformation of the structure at each of the resonant frequencies. This can be quite useful, for example when choosing where to place a structural connection or modification. In Figure 4 below we plot each mode shape of the mass spring system as a row of quivers. From this we can see, for example, that at the first mode at 11 Hz, all the masses are moving in the same direction. Whereas for the second mode at 20 Hz, DoFs 3 and 6 are moving out of phase with the remaining masses. 
We can also see that mode 7 at 94 Hz is localised to DoF 8.\n\n\nCode\n# Visualise mode shapes\nsort_index = np.argsort(λ) # Get ordered index\nλ_sort = λ[sort_index] # Sort natural freqs\nΦ_sort = Φ[:,sort_index] # Sort mode shapes\n\nX,Y = np.meshgrid(range(8),range(1,9))\nfig = plt.figure()\nplt.quiver(Y,X,np.zeros_like(Φ_sort[:,:]), Φ_sort[:,:], Φ_sort[:,:]/np.abs(Φ_sort[:,:]) ,scale=10, width=0.003)\nplt.xlabel('DoF index')\nplt.ylabel('Mode frequency')\nplt.yticks(range(8),np.round(np.sqrt(np.real(λ_sort))/(2*np.pi),2))\nplt.ylim([-0.5, 8])\nplt.show()\n\n\n\n\n\n\n\n\nFigure 4: Mode shape visualisation for mass spring system in Figure 1. The x-axis denotes the DoF index, whilst the y-axis indicates the mode number. The arrows show the relative motion of the masses.\n\n\n\n\n\n\n\nNon-proportional damping\nNow let’s move on to non-proportional damping. For this case we simply define an arbitrary damping matrix, though with the same connectivity as the stiffness matrix. The corresponding FRF matrix can be obtained directly by matrix inversion as before. The modal summation approach, however, falters. 
We see in the modal damping plot in Figure 5, that the result is non-diagonal; there is coupling between the modes due to the non-proportional damping.\n\n\nCode\n# Non-proportional damping\nc1, c2, c3, c4, c5 = 2, 3.1, 1.7, 0.5, 1.5\nc6, c7, c8, c9, c10, c11 = 3.5, 5.2, 2, 2.2, 2, 1.5\nC_C = np.array([[c1+c2+c5, -c2, 0, 0, -c5, 0, 0, 0],\n [-c2, c2+c3+c4, -c3, -c4, 0, 0, 0, 0],\n [0, -c3, c3+c6, 0, 0, -c6, 0, 0],\n [0, -c4, 0, c4+c7, 0, 0, -c7, 0],\n [-c5, 0, 0, 0, c5+c8, 0, -c8, 0],\n [0, 0, -c6, 0, 0, c6+c9, 0, -c9],\n [0, 0, 0, -c7, -c8, 0, c7+c8+c10, -c10],\n [0, 0, 0, 0, 0, -c9, -c10, c9+c10+c11]])*1e1\n# Modal solution:\nλ, Φ_ = sp.linalg.eig(K_C, M_C) # Get mode shapes Φ_ and natural frequencies λ=ω_n**2\nΦ = Φ_@np.diag(1/np.sqrt(np.diag((Φ_.T@M_C@Φ_)))) # Mass normalise mode shapes\nΦCΦ = Φ.T @ C_C @ Φ # Check whether modal damping matrix is diagonal\n\nfigs, axs = plt.subplots(1,3,figsize=(10, 30))\naxs[0].matshow(np.log(np.abs(Φ.T @ M_C @ Φ)))\naxs[0].set_xlabel('Mass matrix')\naxs[1].matshow(np.log(np.abs(Φ.T @ K_C @ Φ)))\naxs[1].set_xlabel('Stiffness matrix')\naxs[2].matshow(np.log(np.abs(ΦCΦ)))\naxs[2].set_xlabel('Non-prop. damping matrix')\nplt.show()\n\n\n\n\n\n\n\n\nFigure 5: Plots of modal mass, stiffness and non-prop. damping matrices. Off-diagonal mass and stiffness terms are the same as before. The off-diagonal damping terms are no longer close to zero.\n\n\n\n\n\nIf we consider the diagonal elements of the damping matrix alone, and compute the FRF by modal summation as before, we see in Figure 6 a noticeable deviation from that of the direct FRF.\n\n\nCode\n# Modal solution cont.:\nY_C_modal_nonprop = np.zeros_like(Y_C_modal_prop) # Allocate space\nfor fi in range(f.shape[0]): # Loop over frequency and compute FRF using modal summation\n Y_C_modal_nonprop[fi, :, :] = Φ @ np.diag(1/(λ - ω[fi]**2 + 1j*ω[fi]*np.diag(ΦCΦ))) @ Φ.T\n # The above solution gives an incorrect FRF - ΦCΦ is not diagonal and so off-diagonals are missing. 
\n # The following solution gives the exact solution, \n # Y_C_modal_prop[fi, :, :] = Φ @ np.linalg.inv((np.diag(λ) - ω[fi]**2*np.eye(M_C.shape[0]) + 1j*ω[fi]*ΦCΦ)) @ Φ.T\n # but requires a matrix inverse, so no benefit over the direct solution.\n\n# Direct solution:\nY_C_dir_prop = np.linalg.inv(K_C +1j* np.einsum('i,jk->ijk',ω, C_C) - np.einsum('i,jk->ijk',ω**2, M_C))\n\nfig, ax = plt.subplots()\nax.semilogy(f, np.abs(Y_C_dir_prop[:,0,-1]),label='Direct')\nax.semilogy(f, np.abs(Y_C_modal_nonprop[:,0,-1]),linestyle='dashed',label='Modal')\nax.grid(True)\nax.set_xlabel('Frequency (Hz)')\nax.set_ylabel('Admittance (m/N)')\nplt.legend()\nplt.show() \n\n\n\n\n\n\n\n\nFigure 6: Displacement FRF obtained by modal summation with non-proportional damping and directly by matrix inversion.\n\n\n\n\n\nNote that if we replaced the diagonal inverse with a full matrix inverse and retain the full damping matrix, we obtain the exact same result as the direct method, as we have included the cross coupling in the modal damping matrix. Though of course we lose the efficiency we get by avoiding a full matrix inverse.\n\n\nStructural damping\nLet’s now consider structural damping. For structural damping we define the so-called structural loss factor \\(\\eta\\) (typically \\(0.01\\leq \\eta \\leq 0.1\\)). For the direct method, we use the structural loss factor to compute a complex stiffness matrix, before proceeding as normal. Note that we no longer have a damping matrix to deal with, just mass and (complex) stiffness. For the modal approach we compute the mode shapes and natural frequencies using the mass and initial stiffness matrices. Then, in the modal summation we apply the structural damping directly to the squared natural frequencies, \\(\\mathbf{\\Lambda} \\rightarrow \\mathbf{\\Lambda} (1+ i\\eta)\\). Alternatively, we could compute the natural frequencies and mode shapes using the complex stiffness matrix. 
In this case the resulting natural frequencies will be complex, already having the loss factor term included.\n\n\nCode\n# Structural damping\nK_Cη= K_C*(1+1j*η) # Complex stiffness matrix including structural damping\n\n# Modal solution using real stiffness (i.e. undamped):\nλ, Φ_ = sp.linalg.eig(K_C, M_C) # Get mode shapes Φ_ and natural frequencies λ=ω_n**2\nΦ = Φ_@np.diag(1/np.sqrt(np.diag((Φ_.T@M_C@Φ_)))) # Mass normalise mode shapes\n\nY_C_modal_struct = np.zeros_like(Y_C_modal_prop) # Allocate space\nfor fi in range(f.shape[0]): # Loop over frequency and compute FRF using modal summation\n Y_C_modal_struct[fi, :, :] = Φ @ np.diag(1/(λ*(1+1j*η) - ω[fi]**2)) @ Φ.T\n # We would get the same result if the complex stiffness K_Cη was used to solve for complex eigenvalues, λη:\n # Y_C_modal_struct[fi, :, :] = Φ @ np.diag(1/(λη - ω[fi]**2)) @ Φ.T\n\n# Direct solution:\nY_C_dir_struct = np.linalg.inv(K_Cη - np.einsum('i,jk->ijk',ω**2, M_C))\n\nfig, ax = plt.subplots()\nax.semilogy(f, np.abs(Y_C_dir_prop[:,0,-1]),color='gray',label='Prop. damping')\nax.semilogy(f, np.abs(Y_C_dir_struct[:,0,-1]),label='Direct')\nax.semilogy(f, np.abs(Y_C_modal_struct[:,0,-1]),linestyle='dashed',label='Modal')\nax.grid(True)\nax.set_xlabel('Frequency (Hz)')\nax.set_ylabel('Admittance (m/N)')\nplt.legend()\nplt.show()\n\n\n\n\n\n\n\n\nFigure 7: Displacement FRF obtained by modal summation with structural damping and directly by matrix inversion.\n\n\n\n\n\nAs seen in Figure 7, the direct and modal solutions yield identical FRFs. Also shown in the figure is the FRF obtained using the proportional damping, as in Figure 3. The difference observed between the structural and proportional damping is due to the factor of \\(i\\omega\\) that accompanies the proportional damping, leading to greater levels of damping as frequency increases. Structural damping, being proportional to stiffness, is constant with frequency.",
"crumbs": [
"Tutorials",
"Synthesising Frequency Response Functions"
]
},
{
"objectID": "tutorials/synthFRF/synthFRF_1.html#finite-element-beam",
"href": "tutorials/synthFRF/synthFRF_1.html#finite-element-beam",
"title": "Synthesising FRFs from mass, stiffness and damping",
"section": "Finite element beam",
"text": "Finite element beam\nIn this section we consider the FRFs of a ‘continuous’ system, in particular a beam. We again compare the FRFs obtained by direct matrix inversion and those obtained by modal summation. We do this for both proportional and structural damping. In addition to this, we investigate the effect of modal truncation, i.e. discarding the contribution of modes occurring above the maximum frequency of interest (here taken to be 1 kHz).\n\n\n\n\n\n\nFigure 8: Finite element beam model, including translational and rotational DoFs.\n\n\n\n\nFE mass and stiffness matrices\nTo model our continuous beam, we use the Finite Element method. A beam of length \\(L\\) is discretised into \\(N_e\\) short sections (termed finite elements), each of which is characterised by four DoFs, a translation and rotation at each end. These individual finite elements are described by a pair of matrices that describe, respectively, their discretised inertial and elastic properties, \\[\nm_e = \\frac{\\rho A L}{420}\\left[\\begin{array}{cccc}\n156 & 22L & 54 & -13L\\\\\n22L & 4L^2 & 13L & -3L^2\\\\\n54 & 13L&156 & -22L \\\\\n-13L & -3L^2 & -22L & 4L^2\n\\end{array}\\right] \\quad \\quad\nk_e = \\frac{EI}{L^3}\\left[\\begin{array}{cccc}\n12 & 6L & -12 & 6L \\\\\n6L & 4L^2 & -6L & 2L^2 \\\\\n-12 & -6L & 12 & -6L \\\\\n6L & 2L^2 & -6L & 4L^2\n\\end{array}\\right]\n\\] To model the complete beam (of length \\(L\\)), \\(N_e\\) element matrices are assembled by summing together the overlapping DoFs of neighbouring elements (e.g. the translational stiffness on the right hand side of element \\(n\\) is added to the translational stiffness on the left hand side of element \\(n+1\\), etc.). 
For brevity, we omit further technical details here; the interested reader is referred to the excellent introductory book by Petyt (2010), on which I have based the FE code below.\n\n\nCode\ndef beamMK(dim, matProp, numE=100):\n # This function takes as input the dimensions and material properties of the beam, \n # and outputs the global mass and stiffness matrices. \n\n E = matProp[1] # Young's modulus\n ρ = matProp[0] # Density\n L = dim[0]/(numE) # Element length\n A = dim[2]*dim[1] # Cross-sectional area\n I_ = dim[1] * dim[2]**3/12 # Moment of inertia\n # Allocate space:\n M = np.zeros((2*numE+2, 2*numE+2))\n K = np.zeros((2*numE+2, 2*numE+2))\n # Element mass matrix:\n m_e = ρ*A*L/420 * np.array([[156, 22*L, 54, -13*L],\n [22*L, 4*L**2, 13*L, -3*L**2],\n [54, 13*L, 156, -22*L],\n [-13*L, -3*L**2, -22*L, 4*L**2]])\n # Element stiffness matrix:\n k_e = E*I_/(L**3) * np.array([[12, 6*L, -12, 6*L],\n [6*L, 4*L**2, -6*L, 2*L**2],\n [-12, -6*L, 12, -6*L],\n [6*L, 2*L**2, -6*L, 4*L**2]])\n # Assemble global mass and stiffness matrices\n for elm in range(0, numE*2-1, 2):\n M[elm:elm+4, elm:elm+4] = M[elm:elm+4, elm:elm+4] + m_e\n K[elm:elm+4, elm:elm+4] = K[elm:elm+4, elm:elm+4] + k_e\n # Define clamped left boundary by removing left most translational and rotational DoFs:\n M = np.delete(M, (1, 0), 0)\n M = np.delete(M, (1, 0), 1)\n K = np.delete(K, (1, 0), 0)\n K = np.delete(K, (1, 0), 1)\n return M, K\n\n# Define beam properties\nL = 1 # Length\nW = 0.05 # Width\nH = 0.01 # Height\nE = 200e9 # Young's modulus\nρ = 7000 # Density\n\n# Get mass and stiffness matrices for beam with 50 elements:\nM, K = beamMK([L, W, H], [ρ, E], numE=50)\n\n# Plot structure of global mass and stiffness matrices\nfigs, axs = plt.subplots(1,2,figsize=(10, 20))\naxs[0].matshow(np.log(np.abs(M)))\naxs[0].set_xlabel('Mass matrix')\naxs[1].matshow(np.log(np.abs(K)))\naxs[1].set_xlabel('Stiffness matrix')\nplt.show()\n\n\n\n\n\n\n\n\nFigure 9: Structure of the global mass and stiffness 
matrices for a beam. Both matrices are sparse since only neighbouring elements are coupled to one another.\n\n\n\n\n\nIn Figure 9 we plot the structure of the global mass and stiffness matrices. We see that these matrices are largely sparse. This is typical of a finite element mass/stiffness matrix, as only neighbouring elements are coupled to one another.\n\n\nFRF functions\nNow that we have a means of building its mass and stiffness matrices, we are able to compute the beam's FRF matrix. In the code below we define two FRF functions; one using the direct matrix inversion method, and the other a modal summation.\n\n\nCode\ndef directFRF(M,K,C,f):\n # This function computes the FRF matrix directly by matrix inversion over the frequency range of f.\n ω = 2*np.pi*f # Radian frequency\n return np.linalg.inv(K + 1j* np.einsum('i,jk->ijk', ω, C) - np.einsum('i,jk->ijk',ω**2, M))\n\ndef modalFRF(M,K,C,f, nm=None):\n # This function computes the FRF by modal summation (using the first nm modes; all by default) over the frequency range of f.\n ω = 2*np.pi*f # Radian frequency\n λ, Φ_ = sp.linalg.eig(K, M) # Get mode shapes Φ_ and natural frequencies λ=ω_n**2\n sort_index = np.argsort(np.real(λ)) # Get ordered index\n λ = λ[sort_index] # Sort natural freqs\n Φ_ = Φ_[:,sort_index] # Sort mode shapes\n Φ = Φ_@np.diag(1/np.sqrt(np.diag((Φ_.T @ M @ Φ_)))) # Mass normalise mode shapes\n ΦCΦ = Φ.T @ C @ Φ # Modal damping matrix (assumed diagonal)\n Y = np.zeros((f.shape[0],M.shape[0],M.shape[0]),dtype=complex) # Allocate space\n for fi in range(f.shape[0]): # Loop over frequency and compute FRF using modal summation\n Y[fi, :, :] = Φ[:,0:nm] @ np.diag(1/(λ[0:nm] - ω[fi]**2 + 1j*ω[fi]*np.diag(ΦCΦ[0:nm,0:nm]))) @ Φ[:,0:nm].T\n return Y, λ, Φ \n\n\n\n\nProportional damping\nWe first consider the case of proportional damping. The FRFs are computed using both direct and modal approaches over the frequency range 1-1000 Hz. 
For the modal summation, we include all 100 modes (based on the 100 DoFs that define the clamped beam), 93 of which are above our max frequency of interest (1 kHz); the maximum modal frequency included is around 367.6 kHz. Shown in Figure 10 is the relation between the mode number and corresponding frequency.\n\n\nCode\nC = 0.00001*K + 1*M # Proportional damping matrix\nf = np.linspace(1,1000,1000) # Frequency vector\nY_direct = directFRF(M,K,C,f) # FRF by direct method\nY_modal, λ, Φ = modalFRF(M,K,C,f) # FRF by modal summation\n\nprint('Number of DoFs: ' + str(M.shape[0]))\nprint('Max resonant frequency: ' + str(round(np.sqrt(np.real(λ[-1]))/(2*np.pi))))\n\nfig, axs = plt.subplots()\naxs.semilogy(range(0,λ.shape[0]), np.sqrt(np.real(λ))/(2*np.pi))\naxs.set_xlabel('Mode number')\naxs.set_ylabel('Modal frequency (Hz)')\nplt.show()\n\n\nNumber of DoFs: 100\nMax resonant frequency: 367621\n\n\n\n\n\n\nFigure 10: Relation between the mode number and corresponding frequency for the finite element beam.\n\n\n\n\n\nShown in Figure 11 are the translational and rotational point FRFs for the tip of the clamped beam (DoFs \\(N-1\\) and \\(N\\)) obtained by direct and modal methods.\n\n\nCode\nfig, axs = plt.subplots(1,2)\naxs[0].semilogy(f, np.abs(Y_direct[:,-2,-2]),label='Direct')\naxs[0].semilogy(f, np.abs(Y_modal[:,-2,-2]),label='Modal',linestyle='dashed')\naxs[0].set_xlabel('Frequency (Hz)')\naxs[0].set_ylabel('Admittance (m/N)')\naxs[1].semilogy(f, np.abs(Y_direct[:,-1,-1]))\naxs[1].semilogy(f, np.abs(Y_modal[:,-1,-1]),linestyle='dashed')\naxs[1].set_xlabel('Frequency (Hz)')\naxs[1].set_ylabel('Admittance (1/Nm)')\naxs[0].legend()\nplt.tight_layout()\nplt.show()\n\n\n\n\n\n\n\n\nFigure 11: Translational (left) and rotational (right) point FRFs at the end of the clamped beam obtained using direct and modal summation approaches with proportional damping.\n\n\n\n\n\nWe see from Figure 11 that the direct and modal methods give near identical FRFs.\n\n\nStructural 
damping\nWe now consider structural damping. We adopt the same configuration as above, except that the proportional damping matrix is now set to zero, and a complex stiffness matrix is defined using the structural loss factor \\(\\eta\\).\n\n\nCode\nη = 0.02 # Structural loss factor\nC = np.zeros_like(K) # Damping matrix set to zero\nKη = K*(1+1j*η) # Complex stiffness matrix\nY_direct = directFRF(M,Kη,C,f) # FRF by direct method\nY_modal, λ, Φ = modalFRF(M,Kη,C,f) # FRF by modal summation\n\nfig, axs = plt.subplots(1,2)\naxs[0].semilogy(f, np.abs(Y_direct[:,-2,-2]),label='Direct')\naxs[0].semilogy(f, np.abs(Y_modal[:,-2,-2]),label='Modal',linestyle='dashed')\naxs[0].set_xlabel('Frequency (Hz)')\naxs[0].set_ylabel('Admittance (m/N)')\naxs[1].semilogy(f, np.abs(Y_direct[:,-1,-1]))\naxs[1].semilogy(f, np.abs(Y_modal[:,-1,-1]),linestyle='dashed')\naxs[1].set_xlabel('Frequency (Hz)')\naxs[1].set_ylabel('Admittance (1/Nm)')\naxs[0].legend()\nplt.tight_layout()\nplt.show()\n\n\n\n\n\n\n\n\nFigure 12: Translational (left) and rotational (right) point FRFs at the end of the clamped beam obtained using direct and modal summation approaches with structural damping.\n\n\n\n\n\nAgain, we see from Figure 12 that the direct and modal approaches yield the same FRFs.\n\n\nTruncation error\nWe have seen from the above sections that the modal summation approach is able to provide an FRF in excellent agreement with the direct matrix inversion. So far, our modal summation has been ‘complete’, i.e. we have included all modes in our summation. It is often convenient (for computational efficiency) to reduce the number of modes required to be estimated when solving the eigenproblem, i.e. truncating the modal summation to some reduced order \\(N_t < N\\). The argument is that the modes occurring outside of the bandwidth of interest likely contribute a negligible amount to the FRF within the bandwidth. 
A typical rule of thumb is to include modes up to twice the highest frequency of interest (for our example this would correspond to around 2 kHz).\nShown in Figure 13 are the translational and rotational FRFs obtained by modal summation with a truncation of 7, 14 and 28 modes, compared against the direct FRF. These modal truncations correspond to maximum modal frequencies of 1 kHz, 4.4 kHz, and 18.4 kHz, respectively. We see that with only 7 modes, both translational and rotational FRFs provide poor FRF estimates, particularly around anti-resonances (where out of band modes contribute the most). With 14 modes, the translational FRFs fall into agreement, whilst the rotational FRF estimates remain poor even with 28 modes included.\n\n\nCode\nY_modal_7 = modalFRF(M,Kη,C,f,7)[0]\nY_modal_14 = modalFRF(M,Kη,C,f,14)[0]\nY_modal_28 = modalFRF(M,Kη,C,f,28)[0]\n\nfig, axs = plt.subplots(1,2)\naxs[0].semilogy(f, np.abs(Y_direct[:,-2,-2]),label='Direct')\naxs[0].semilogy(f, np.abs(Y_modal_7[:,-2,-2]),label='N=7: f_max=' + str(round(np.sqrt(np.real(λ[6]))/(2*np.pi),1)),linestyle='dashed')\naxs[0].semilogy(f, np.abs(Y_modal_14[:,-2,-2]),label='N=14: f_max=' + str(round(np.sqrt(np.real(λ[13]))/(2*np.pi),1)),linestyle='dashed')\naxs[0].semilogy(f, np.abs(Y_modal_28[:,-2,-2]),label='N=28: f_max=' + str(round(np.sqrt(np.real(λ[27]))/(2*np.pi),1)),linestyle='dashed')\naxs[0].set_xlabel('Frequency (Hz)')\naxs[0].set_ylabel('Admittance (m/N)')\naxs[1].semilogy(f, np.abs(Y_direct[:,-1,-1]))\naxs[1].semilogy(f, np.abs(Y_modal_7[:,-1,-1]),linestyle='dashed')\naxs[1].semilogy(f, np.abs(Y_modal_14[:,-1,-1]),linestyle='dashed')\naxs[1].semilogy(f, np.abs(Y_modal_28[:,-1,-1]),linestyle='dashed')\naxs[1].set_xlabel('Frequency (Hz)')\naxs[1].set_ylabel('Admittance (1/Nm)')\naxs[0].legend()\nplt.tight_layout()\nplt.show()\n\n\n\n\n\n\n\n\nFigure 13: Effect of modal truncation (7, 14 and 28 modes) on the estimation of translational (left) and rotational (right) FRFs.\n\n\n\n\n\nTo more clearly 
illustrate the error/convergence of the modal FRF, we compute an error for each level of truncation. For simplicity we compute the error considering the tip FRFs only. The total error is obtained considering both translational and rotational FRFs, i.e. using the FRF matrix, \\[\n\\mathbf{Y} = \\left[\\begin{array}{cc} Y_{zz} & Y_{z\\alpha} \\\\ Y_{\\alpha z} & Y_{\\alpha \\alpha}\n\\end{array}\\right]\n\\] We also compute the errors based on translational \\(Y_{zz}\\) and rotational \\(Y_{\\alpha \\alpha}\\) FRFs individually. The results are presented in the left of Figure 14. We see that the translational FRF converges quickly, and the total error is dominated by the rotational components.\n\n\nCode\nerrorY = []\nerrorY_z = []\nerrorY_α = []\nerrorZ = []\nerrorZ_z = []\nerrorZ_α = []\n\ntruncRange = range(7,100,1)\nY_modal, λ, Φ = modalFRF(M,Kη,C,f)\nω = 2*np.pi*f\n\nZ_direct = np.linalg.inv(Y_direct[:,-2:,-2:])\n\nfor nm in truncRange:\n Y_modal_trunc = np.zeros((f.shape[0],M.shape[0],M.shape[0]),dtype=complex) # Allocate space\n Z_modal_trunc = np.zeros((f.shape[0],2,2),dtype=complex) # Allocate space\n for fi in range(f.shape[0]): # Loop over frequency and compute FRF using modal summation\n Y_modal_trunc[fi, :, :] = Φ[:,0:nm] @ np.diag(1/(λ[0:nm] - ω[fi]**2)) @ Φ[:,0:nm].T\n Z_modal_trunc[fi,:,:] = np.linalg.inv(Y_modal_trunc[fi, -2:, -2:])\n errorY.append((np.sum((np.abs(Y_modal_trunc[:,-2:,-2:]-Y_direct[:,-2:,-2:])/np.abs(Y_direct[:,-2:,-2:])))))\n errorY_z.append((np.sum((np.abs(Y_modal_trunc[:,-2,-2]-Y_direct[:,-2,-2]))/np.abs(Y_direct[:,-2,-2]))))\n errorY_α.append((np.sum((np.abs(Y_modal_trunc[:,-1,-1]-Y_direct[:,-1,-1]))/np.abs(Y_direct[:,-1,-1]))))\n errorZ.append((np.sum((np.abs(Z_modal_trunc-Z_direct)/np.abs(Z_direct)))))\n errorZ_z.append((np.sum((np.abs(Z_modal_trunc[:,0,0]-Z_direct[:,0,0]))/np.abs(Z_direct[:,0,0]))))\n errorZ_α.append((np.sum((np.abs(Z_modal_trunc[:,1,1]-Z_direct[:,1,1]))/np.abs(Z_direct[:,1,1]))))\n\n\nfig, axs = 
plt.subplots(1,2)\naxs[0].semilogx(np.sqrt(λ[truncRange])/(2*np.pi), errorY, label='Total error')\naxs[0].semilogx(np.sqrt(λ[truncRange])/(2*np.pi), errorY_z, label='Trans. error')\naxs[0].semilogx(np.sqrt(λ[truncRange])/(2*np.pi), errorY_α, label = 'Rot. error')\naxs[0].set_xlabel('Max f_N in modal summation (Hz)')\naxs[0].set_ylabel('Total error in Y')\naxs[0].legend()\naxs[1].semilogx(np.sqrt(λ[truncRange])/(2*np.pi), errorZ, label='Total error')\naxs[1].semilogx(np.sqrt(λ[truncRange])/(2*np.pi), errorZ_z, label='Trans. error')\naxs[1].semilogx(np.sqrt(λ[truncRange])/(2*np.pi), errorZ_α, label = 'Rot. error')\naxs[1].set_ylabel('Total error in Z')\naxs[1].set_xlabel('Max f_N in modal summation (Hz)')\n# axs[1].legend()\nplt.tight_layout()\nplt.show()\n\n\n\n\n\n\n\n\nFigure 14: Truncation error in the tip FRF matrix \\(\\mathbf{Y}\\) (left) and its inverse \\(\\mathbf{Z}\\) (right) as a function of the maximum modal frequency included in the summation.\n\n\n\n\n\nRecall that we are often interested in the inverse of the FRF matrix, for example when substructuring two or more components together. On the right of Figure 14 we show the total, translational and rotational errors for the inverse of the tip FRF matrix \\(\\mathbf{Z}=\\mathbf{Y}^{-1}\\). We see that the truncation error has increased by around an order of magnitude, and that now the translational and rotational components have relatively similar levels of error. This result is to be expected, as each element of the impedance matrix depends on all elements of the FRF matrix, i.e. the truncation error on the rotational FRFs contributes to the translational errors of the impedance. This result highlights the importance of checking the convergence of any modal-based FRF matrices, especially if their inverse is to be computed, e.g. for substructuring.\n\n\nMode shapes\nFinally, in Figure 15 we plot the first 5 mode shapes of our clamped beam. 
No particular reason why; it's just nice to see some mode shapes!\n\n\nCode\nfig, ax = plt.subplots()\nax.plot(range(0,49*2,2), 1+Φ[0:-2:2,0]*0.5 ,label='1')\nax.plot(range(0,49*2,2), 2+Φ[0:-2:2,1]*0.5 ,label='2')\nax.plot(range(0,49*2,2), 3+Φ[0:-2:2,2]*0.5 ,label='3')\nax.plot(range(0,49*2,2), 4+Φ[0:-2:2,3]*0.5 ,label='4')\nax.plot(range(0,49*2,2), 5+Φ[0:-2:2,4]*0.5 ,label='5')\n# ax.grid(True)\nax.set_xlabel('DoF number')\nax.set_ylabel('Mode number')\nplt.show()\n\n\n\n\n\n\n\n\nFigure 15: Illustration of the first 5 modes for the clamped FE beam.",
"crumbs": [
"Tutorials",
"Synthesising Frequency Response Functions"
]
},
{
"objectID": "tutorials/synthFRF/synthFRF_1.html#summary",
"href": "tutorials/synthFRF/synthFRF_1.html#summary",
"title": "Synthesising FRFs from mass, stiffness and damping",
"section": "Summary",
    "text": "Summary\nIn this tutorial we introduced the direct (matrix inverse) and modal summation approaches for computing the FRFs of a system that is described by its mass and stiffness matrices.\nThe modal summation approach has the advantage of reduced computational cost; it avoids the need to explicitly compute a matrix inverse and can be truncated to capture the modes within a particular bandwidth. Furthermore, it can provide additional physical insight into the system behaviour, for example by inspecting the system’s mode shapes.\nKey to the modal approach is the notion that the natural modes of a system diagonalise its equations of motion. Whilst this is generally the case for the mass and stiffness matrices, it is not so for the damping matrices. Therefore, to adopt the modal approach, an appropriate (diagonalisable) form of damping must be adopted. We considered two common representations, proportional and structural.\nWe presented two numerical examples. The first, a discrete mass-spring system, and the second a continuous beam discretised by the Finite Element method. In both cases we compared the FRFs obtained by direct and modal approaches, showing that they yield identical FRFs. For the beam case, we further investigated the effect of modal truncation on the error in an FRF estimate.",
"crumbs": [
"Tutorials",
"Synthesising Frequency Response Functions"
]
},
{
"objectID": "tutorials/blockedForce/blockedForce_1.html",
"href": "tutorials/blockedForce/blockedForce_1.html",
"title": "Blocked force: in situ determination and validation",
"section": "",
    "text": "In this tutorial we will apply the in situ method for blocked force estimation to the numerical mass-spring system depicted in Figure 1. Note that this is the same coupled assembly considered in the substructuring example. As before, the system is comprised of two components; a vibration source \\(S\\) in green, and a passive receiver structure \\(R\\) in blue. These components are characterised by their free-interface FRFs \\(\\mathbf{Y}_S\\in\\mathbb{C}^{5\\times 5}\\) and \\(\\mathbf{Y}_R\\in\\mathbb{C}^{6\\times 6}\\), respectively. The coupled assembly \\(C\\) is similarly characterised by its FRF matrix, \\(\\mathbf{Y}_C\\in\\mathbb{C}^{8\\times 8}\\). The source and receiver components are connected along the interface \\(c\\), composed of the coupled DoFs 3, 4 and 5.\nTo simulate a vibration source, two ‘internal’ forces are applied within the source component, to DoFs 1 and 2. These forces induce an ‘operational response’, which is used in conjunction with the interface FRF matrix \\(\\mathbf{Y}_{Ccc}\\in\\mathbb{C}^{3\\times 3}\\) to infer the blocked force. To illustrate the key properties of the blocked force - independence and transferability - we obtain the blocked force from two different assemblies; these assemblies comprise the same source component, but different receivers. We also consider on-board and modified receiver validations.",
"crumbs": [
"Tutorials",
"In situ blocked force (mass-spring example)"
]
},
{
"objectID": "tutorials/blockedForce/blockedForce_1.html#build-model-of-coupled-assembly",
"href": "tutorials/blockedForce/blockedForce_1.html#build-model-of-coupled-assembly",
"title": "Blocked force: in situ determination and validation",
"section": "Build model of coupled assembly",
    "text": "Build model of coupled assembly\nRight, the first thing we need to do before we can start looking at the in situ method is to build a model of the coupled assembly depicted in Figure 1. For reasons that will become clear shortly, we will do this using a substructuring approach. That is, we will build the source and receiver FRFs \\(\\mathbf{Y}_S\\) and \\(\\mathbf{Y}_R\\) separately, and then use substructuring to couple them together. These steps are covered in more detail in the substructuring example. This approach will allow us to build quite easily a second assembly with a modified receiver; we just modify the mass matrix of \\(B\\), and redo the substructuring step.\nIn the code below we define the mass and stiffness matrices for the source, and two receivers. We compute their respective FRF matrices, before substructuring together two assemblies, denoted \\(C1\\) and \\(C2\\) here. The figure below shows the point FRFs at each of the interface DoFs (3,4 and 5) for the two assemblies. 
We can clearly see that their dynamics are quite different (as expected given the modified receiver mass values in \\(C2\\).)\n\n\nCode\nimport numpy as np\nnp.set_printoptions(linewidth=100)\nimport matplotlib.pyplot as plt\nimport scipy as sp\nimport sys as sys\n\n# Define component masses and build matrices\nm1, m2, m3, m4, m5 = 1, 2, 2, 1, 0.5\nm6, m7, m8, m9, m10, m11 = 0.5, 2, 1, 3, 5,0.5\nM_A = np.diag(np.array([m1,m2,m3,m4,m5])) # Comp A\nM_B1 = np.diag(np.array([m6,m7,m8,m9,m10,m11])) # Comp B\nM_B2 = np.diag(np.array([m6*2,m7*4,m8/3,m9,m10*2,m11/2])) # Comp B\n\n# Define component stiffnesses and build matrices (complex stiffness -> structural damping)\nη = 0.05 # Structural loss factor\nk1, k2, k3, k4, k5 = 1e5*(1+1j*η), 1.5e5*(1+1j*η), 0.7e5*(1+1j*η), 1e5*(1+1j*η), 0.5e5*(1+1j*η)\nk6, k7, k8, k9, k10, k11 = 0.5e5*(1+1j*η), 2e5*(1+1j*η), 1e5*(1+1j*η), 0.2e5*(1+1j*η), 1e5*(1+1j*η),0.5e5*(1+1j*η)\nK_A = np.array([[k1+k2+k5, -k2, 0, 0, -k5],\n [-k2, k2+k3+k4, -k3, -k4, 0],\n [0, -k3, k3, 0, 0],\n [0, -k4, 0, k4, 0],\n [-k5, 0, 0, 0, k5]])\nK_B = np.array([[k6, 0, 0, -k6, 0, 0],\n [0, k7, 0, 0, -k7, 0],\n [0, 0, k8, 0, -k8, 0],\n [-k6, 0, 0, k6+k9, 0, -k9],\n [0, -k7, -k8, 0, k7+k8+k10, -k10],\n [0, 0, 0, -k9, -k10, k9+k10+k11]])\n\nf = np.linspace(0,100,1000) # Generate frequency array\nω = 2*np.pi*f # Convert to radians\n# Compute FRF matrices over frequency range (have used einsum( ) to show an alternative loop-free approach)\nY_A = np.linalg.inv(K_A - np.einsum('i,jk->ijk',ω**2, M_A))\nY_B1 = np.linalg.inv(K_B - np.einsum('i,jk->ijk',ω**2, M_B1))\nY_B2 = np.linalg.inv(K_B - np.einsum('i,jk->ijk',ω**2, M_B2))\n\nY1 = np.zeros((f.shape[0],11,11), dtype='complex') # Allocate space for uncoupled assembly 1\nY2 = np.zeros((f.shape[0],11,11), dtype='complex') # Allocate space for uncoupled assembly 2\nfor fi in range(f.shape[0]): # Build block diagonal (uncoupled) FRF matrix\n Y1[fi,:,:] = sp.linalg.block_diag(Y_A[fi,:,:],Y_B1[fi,:,:])\n Y2[fi,:,:] = 
sp.linalg.block_diag(Y_A[fi,:,:],Y_B2[fi,:,:])\n# Boolean equilibrium matrix:\nL = np.array([[1,0,0,0,0,0,0,0,0,0,0],\n [0,1,0,0,0,0,0,0,0,0,0],\n [0,0,1,0,0,1,0,0,0,0,0],\n [0,0,0,1,0,0,1,0,0,0,0],\n [0,0,0,0,1,0,0,1,0,0,0],\n [0,0,0,0,0,0,0,0,1,0,0],\n [0,0,0,0,0,0,0,0,0,1,0],\n [0,0,0,0,0,0,0,0,0,0,1]]).T\n\n# Allocate space for coupled FRF matrices\nY_C1 = np.zeros((f.shape[0],8,8), dtype='complex') # Allocate space for coupled assembly 1\nY_C2 = np.zeros((f.shape[0],8,8), dtype='complex') # Allocate space for coupled assembly 2\nfor fi in range(f.shape[0]): # Loop over frequency and compute coupled FRF using primal and dual formulations\n Y_C1[fi,:,:] = np.linalg.inv(L.T@np.linalg.inv(Y1[fi,:,:])@L)\n Y_C2[fi,:,:] = np.linalg.inv(L.T@np.linalg.inv(Y2[fi,:,:])@L)\n\n\nfig, ax = plt.subplots(1,3)\nax[0].semilogy(f, np.abs(Y_C1[:,2,2]),label='C1',linewidth=2)\nax[1].semilogy(f, np.abs(Y_C1[:,3,3]),linewidth=2)\nax[2].semilogy(f, np.abs(Y_C1[:,4,4]),linewidth=2)\nax[0].semilogy(f, np.abs(Y_C2[:,2,2]),label='C2',linewidth=2,linestyle='dashed')\nax[1].semilogy(f, np.abs(Y_C2[:,3,3]),linewidth=2,linestyle='dashed')\nax[2].semilogy(f, np.abs(Y_C2[:,4,4]),linewidth=2,linestyle='dashed')\nax[0].set_ylim([1e-7,1e-3])\nax[1].set_ylim([1e-7,1e-3])\nax[2].set_ylim([1e-7,1e-3])\nax[1].set_yticks([])\nax[2].set_yticks([])\nax[0].set_xlabel('Frequency (Hz)')\nax[1].set_xlabel('Frequency (Hz)')\nax[2].set_xlabel('Frequency (Hz)')\nax[0].set_ylabel('Admittance (m/N)')\nax[0].legend()\nplt.show() \n\n\n\n\n\n\n\n\nFigure 2: Coupled point FRFs at the interface DoFs 3, 4 and 5 (left to right) in C1 and C2. We see that the two assemblies have significantly different dynamics (owing to the changes made in the receiver of C2).\n\n\n\n\n\nThe FRF matrices \\(\\mathbf{Y}_{C1cc}\\) and \\(\\mathbf{Y}_{C2cc}\\), though crucial for the in situ blocked force method, are not enough by themselves. 
We also need an operational response \\(\\mathbf{v}_c\\) (obtained over the same set of interface DoFs; 3,4 and 5) for each assembly. We can easily synthesise such a response by applying some arbitrary internal forces within the source component. Importantly, we must apply the same internal forces for both assemblies, otherwise we will have effectively different sources.\nIn the code below we define the arbitrary internal forces, build a force vector, and apply it to both assemblies' coupled FRFs. The result is two operational response vectors, from which we plot those obtained at the interface DoFs. As expected given their different receivers, the two sets of operational responses display quite different dynamics.\n\n\nCode\n# Define internal forces:\nf1 = 0.5+1j*0.2\nf2 = 1.2-1j*0.7\nfi = np.array([f1,f2,0,0,0,0,0,0])\n# Compute operational response in both assemblies due to internal forces\nv1 = np.einsum('ijk,k->ij', Y_C1, fi)\nv2 = np.einsum('ijk,k->ij', Y_C2, fi)\n\nfig, ax = plt.subplots(1,3)\nax[0].semilogy(f, np.abs(v1[:,2]),label='C1',linewidth=2)\nax[1].semilogy(f, np.abs(v1[:,3]),linewidth=2)\nax[2].semilogy(f, np.abs(v1[:,4]),linewidth=2)\nax[0].semilogy(f, np.abs(v2[:,2]),label='C2',linewidth=2,linestyle='dashed')\nax[1].semilogy(f, np.abs(v2[:,3]),linewidth=2,linestyle='dashed')\nax[2].semilogy(f, np.abs(v2[:,4]),linewidth=2,linestyle='dashed')\nax[0].set_ylim([1e-7,1e-3])\nax[1].set_ylim([1e-7,1e-3])\nax[2].set_ylim([1e-7,1e-3])\nax[1].set_yticks([])\nax[2].set_yticks([])\nax[0].set_xlabel('Frequency (Hz)')\nax[1].set_xlabel('Frequency (Hz)')\nax[2].set_xlabel('Frequency (Hz)')\nax[0].set_ylabel('Displacement (m)')\nax[0].legend()\nplt.show() \n\n\n\n\n\n\n\n\nFigure 3: Operational response at the interface DoFs 3, 4 and 5 (left to right) in C1 and C2, due to the internal forces applied to DoFs 1 and 2.",
"crumbs": [
"Tutorials",
"In situ blocked force (mass-spring example)"
]
},
{
"objectID": "tutorials/blockedForce/blockedForce_1.html#blocked-force-identification",
"href": "tutorials/blockedForce/blockedForce_1.html#blocked-force-identification",
"title": "Blocked force: in situ determination and validation",
"section": "Blocked force identification",
    "text": "Blocked force identification\nNow that we have the coupled interface FRF and operational response, we are ready to apply the in situ blocked force method. As a reminder, the in situ equation takes the form, \\[\n\\mathbf{\\bar{f}}_c = \\mathbf{Y}_{Ccc}^{-1} \\mathbf{v}_c\n\\] Now according to the blocked force theory, if we implement this equation on our two different assemblies, we should return exactly the same blocked force, as it is an independent property of the vibration source.\nIn the code below we implement the in situ method (note that we have to be careful to select the correct DoFs for our FRF matrix and operational response vector) for both assemblies, and plot the results together. As expected we get the same set of forces!\nNote that we see only two resonant frequencies in the blocked force. This makes sense, right? We have blocked the interface DoFs 3, 4 and 5, leaving only DoFs 1 and 2 free to move. Hence, only two resonant frequencies.\n\n\nCode\n# Apply in situ relation to obtain blocked force from both assemblies\nblkF1 = np.einsum('ijk,ik->ij', np.linalg.inv(Y_C1[:,2:5,2:5]), v1[:,2:5])\nblkF2 = np.einsum('ijk,ik->ij', np.linalg.inv(Y_C2[:,2:5,2:5]), v2[:,2:5])\n\nfig, ax = plt.subplots(1,3)\nax[0].semilogy(f, np.abs(blkF1[:,0]),label='C1',linewidth=2)\nax[1].semilogy(f, np.abs(blkF1[:,1]),linewidth=2)\nax[2].semilogy(f, np.abs(blkF1[:,2]),linewidth=2)\nax[0].semilogy(f, np.abs(blkF2[:,0]),label='C2',linewidth=2,linestyle='dashed')\nax[1].semilogy(f, np.abs(blkF2[:,1]),linewidth=2,linestyle='dashed')\nax[2].semilogy(f, np.abs(blkF2[:,2]),linewidth=2,linestyle='dashed')\nax[0].set_ylim([0.1e0,1e2])\nax[1].set_ylim([0.1e0,1e2])\nax[2].set_ylim([0.1e0,1e2])\nax[1].set_yticks([])\nax[2].set_yticks([])\nax[0].set_xlabel('Frequency (Hz)')\nax[1].set_xlabel('Frequency (Hz)')\nax[2].set_xlabel('Frequency (Hz)')\nax[0].set_ylabel('Force (N)')\nax[0].legend()\nplt.show() \n\n\n\n\n\n\n\n\nFigure 4: Comparison of blocked forces obtained from C1 
and C2. From left to right we have the blocked force at DoFs 3, 4 and 5. The same blocked force is obtained from C1 and C2, illustrating invariance of blocked force to receiver dynamics.\n\n\n\n\n\nThough applied to a relatively simple mass-spring system, this is quite a remarkable result; we have extracted an independent property of the vibration source (its blocked force) using solely quantities obtained from the coupled assembly. What's even more remarkable is that this also works when applied to real physical assemblies! Though as you might expect, there are some additional challenges related to obtaining good quality measurement data… As this is intended to be a simple introductory tutorial, we won't dwell on the practicalities of implementing the in situ method experimentally; perhaps we will in another tutorial sometime.",
"crumbs": [
"Tutorials",
"In situ blocked force (mass-spring example)"
]
},
{
"objectID": "tutorials/blockedForce/blockedForce_1.html#validation",
"href": "tutorials/blockedForce/blockedForce_1.html#validation",
"title": "Blocked force: in situ determination and validation",
"section": "Validation",
    "text": "Validation\nNow since this is a numerical example, there isn't really much benefit in further validating the blocked forces, especially since we have already compared them from two assemblies. In practice, however, we rarely have access to two sets of blocked forces, or the true blocked force, yet we still want some confirmation that our blocked forces are correct, or at least able to reproduce an assembly response. To verify this capability we can use an on-board validation; we take the blocked force obtained from assembly \\(C1\\), and use it to predict the response at some other DoF in the receiver (DoF 8 in our case). If the response prediction agrees with the direct response, we can be confident that we have the correct blocked force. A similar verification can be achieved using a modified receiver validation; we take the blocked force obtained from assembly \\(C1\\), and use it to predict the response in an assembly with a modified receiver (\\(C2\\) in our case).\nIn the code below we compute the on-board and modified receiver validation for both sets of forces. For brevity, we only plot those related to the forces obtained from assembly \\(C1\\). As expected, we get perfect agreement from both validations. These results further verify that not only is the blocked force an independent source property, but it is also transferable and can be used to reproduce the response field within a different receiver structure.\n\n\nCode\n# On-board validation: use blocked force to predict response in same assembly\nobVal1 = np.einsum('ik,ik->i', Y_C1[:,-1,2:5], blkF1)\nobVal2 = np.einsum('ik,ik->i', Y_C2[:,-1,2:5], blkF2)\n# Mod. 
receiver validation: use blocked force to predict response in modified assembly\nmodVal12 = np.einsum('ik,ik->i', Y_C1[:,-1,2:5], blkF2)\nmodVal21 = np.einsum('ik,ik->i', Y_C2[:,-1,2:5], blkF1)\n\n\nfig, ax = plt.subplots(1,2)\nax[0].semilogy(f, np.abs(v1[:,-1]),color='black',label='Direct',linewidth=4)\nax[0].semilogy(f, np.abs(obVal1),label='Pred.')\nax[0].set_xlabel('Frequency (Hz)')\nax[0].set_ylabel('Displacement (m)')\nax[0].legend()\nax[0].set_ylim([1e-8,1e-3])\nax[1].semilogy(f, np.abs(v2[:,-1]),color='black',label='Direct',linewidth=4)\nax[1].semilogy(f, np.abs(modVal21),label='Pred.')\nax[1].set_xlabel('Frequency (Hz)')\nax[1].set_ylim([1e-8,1e-3])\nplt.show() \n\n\n\n\n\n\n\n\nFigure 5: On-board (left) and modified receiver (right) validation. For the on-board validation, the blocked force from C1 is used to predict \\(v_8\\) in C1. For the modified receiver validation, the blocked force from C1 is used to predict \\(v_8\\) in C2.",
"crumbs": [
"Tutorials",
"In situ blocked force (mass-spring example)"
]
},
{
"objectID": "tutorials/blockedForce/blockedForce_1.html#summary",
"href": "tutorials/blockedForce/blockedForce_1.html#summary",
"title": "Blocked force: in situ determination and validation",
"section": "Summary",
"text": "Summary\nIn this tutorial we have implmented the in situ method for blocked force estimation on a pair of numerical mass-spring assemblies with different recevier structures. We have verfied that the blocked froce obtained from the two assemblies are identical, even though the dynamics of the assemblies themselves are quite different (recall that they have the same source, but different receivers). We further verify the correctness of the blocked force by performing on-board and modified receiver validations, which involved using the blocked force to predict the operational response at some new location in the same assembly and one with a modified recevier, respectively.",
"crumbs": [
"Tutorials",
"In situ blocked force (mass-spring example)"
]
},
{
"objectID": "introductions/VAVPs/interfaces/interfaces.html",
"href": "introductions/VAVPs/interfaces/interfaces.html",
"title": "Interface representations",
"section": "",
"text": "In the preceding discussion of components, it was described how both passive and active components are characterised almost exclusively at their interfaces. The need for an accurate description of the interface dynamics has been emphasised but, so far, we have not discussed how this might be achieved. The subject is in fact so important that we now devote an entire page to it: in component characterisation and substructuring it is said that everything goes right or wrong at the interface!",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Interface representations"
]
},
{
"objectID": "introductions/VAVPs/interfaces/interfaces.html#single-point-interfaces",
"href": "introductions/VAVPs/interfaces/interfaces.html#single-point-interfaces",
"title": "Interface representations",
"section": "Single point interfaces",
"text": "Single point interfaces\nIn physical assemblies, a range of contact types occur, often comprising bolted connections, spot welds, among other fixings. When modeling connections, we encounter the problem that the real contact always occurs over a finite area; this ‘continuous’ form must be reduced to a ‘discrete’ form to be compatible with our FRF-based VAVP framework. When the contact is physically small compared with the structural wavelength we can treat the connection simply as a discreet point. The point-like* connection is the most commonly adopted model when building a VAVP (or at least the most common starting point). It is described entirely by the 6 rigid interface DoFs; translations in \\(x\\), \\(y\\) and \\(z\\), and rotations about each of these axes; \\(\\alpha\\), \\(\\beta\\) and \\(\\gamma\\). For interfaces that extend over more than a fraction of a wavelength, this point-like model is no longer sufficient and some form of higher order model is required.\nRepresenting contact geometry is only part of the story however – the overarching aim of an interface model is to capture all important dynamics occuring at the interface. Whilst 6 DoFs might be sufficient for a point-like connection, it is often not known if the physical connection satisfies the point-like requirements aprior. Indeed, higher order ‘flexible’ DoFs might be required to sufficiently caputre the interface dynamics. This raises the important questions of a) how many DoFs are necessary to describe the interface and b) which ones they are. Later we will introduce the concept of interface completeness as a means of addressing these questions.\nHaving chosen a suitable representation of the contact, we must somehow populate it with data obtained from measurement. 
The VAVP framework requires that the interface representations of connected components be compatible; component FRFs (and blocked forces) at an interface must share a matching set of DoFs to be coupled successfully. Unfortunately, practical limitations, such as lack of access, mean that the required DoFs cannot usually be measured directly. Therefore, the measured data must be transformed from measurement coordinates to those of the interface model. An important concept, then, is that of interface transformations.\n\nTransformation of measured FRFs\nThe idea of an interface transformation is quite general and by no means limited to the point-like representation. In short, it is the process of populating an interface model (a chosen set of co-located DoFs that are thought to best describe the physical interface) with data obtained from a set of translational-only FRF measurements that surround the connection point. We will use the point-like interface as a means of introducing the concept; generalisation to more complex representations will become obvious later.\nA single point-like interface is characterised by 6 DoFs; the translational \\(x\\), \\(y\\) and \\(z\\) DoFs, and their respective rotations \\(\\alpha\\), \\(\\beta\\), and \\(\\gamma\\). 
The equation governing the 6 DoF motion of a single point interface (left of Figure 1) is then, \\[\n \\left(\\begin{array}{c}\n v_{x} \\\\ v_{y} \\\\ v_{z} \\\\ v_{\\alpha}\\\\ v_{\\beta}\\\\ v_{\\gamma}\n \\end{array}\\right)=\\left[\\begin{array}{cccccc} Y_{{xx}} & Y_{{xy}} & Y_{{xz}} & Y_{{x\\alpha}} & Y_{{x\\beta}} & Y_{{x\\gamma}} \\\\\n Y_{{yx}} & Y_{{yy}} & Y_{{yz}} & Y_{{y\\alpha}} & Y_{{y\\beta}} & Y_{{y\\gamma}} \\\\\n Y_{{zx}} & Y_{{zy}} & Y_{{zz}} & Y_{{z\\alpha}} & Y_{{z\\beta}} & Y_{{z\\gamma}} \\\\\n Y_{{\\alpha x}} & Y_{{\\alpha y}} & Y_{{\\alpha z}} & Y_{{\\alpha\\alpha}} & Y_{{\\alpha\\beta}} & Y_{{\\alpha\\gamma}} \\\\\n Y_{{\\beta x}} & Y_{{\\beta y}} & Y_{{\\beta z}} & Y_{{\\beta\\alpha}} & Y_{{\\beta\\beta}} & Y_{{\\beta\\gamma}}\\\\\n Y_{{\\gamma x}} & Y_{{\\gamma y}} & Y_{{\\gamma z}} & Y_{{\\gamma\\alpha}} & Y_{{\\gamma\\beta}} & Y_{{\\gamma\\gamma}}\n \\end{array}\\right]\\left(\\begin{array}{c}\n f_{x} \\\\ f_{y} \\\\ f_{z} \\\\ f_{\\alpha}\\\\ f_{\\beta}\\\\ f_{\\gamma}\n \\end{array}\\right)\n\\] where for example \\(v_\\alpha\\) is the rotational response (say velocity) about the \\(x\\) axis, \\(f_\\gamma\\) is the moment force about the \\(z\\) axis, and \\(Y_{\\alpha\\gamma}\\) is the transfer FRF relating the two.\nRotational DoFs are generally not available by direct measurement (though rotational response sensors are available, their use is not very common). Even the direct measurement of a translational point FRF (e.g. \\(Y_{xx}\\)) is challenging; requiring an exact co-location of the applied force and measured response. 
In practice, to overcome these challenges it is typical to measure a series of translational FRFs surrounding the interface connection and, through an appropriate transformation, use these to estimate the translational and rotational DoFs of interest.\nThe matrix of measured FRFs (right of Figure 1) satisfies the equation, \\[\n \\left(\\begin{array}{c}\n v_{1} \\\\ v_{2} \\\\ \\vdots \\\\ v_{N_v}\n \\end{array}\\right) = \\left[\\begin{array}{c c c c}Y_{11} & Y_{12} & \\cdots & Y_{1N_f}\\\\\n Y_{21} & Y_{22} & \\cdots & Y_{2N_f}\\\\\n \\vdots & \\vdots & \\ddots & \\vdots\\\\\n Y_{N_v1} & Y_{N_v2} & \\cdots & Y_{N_vN_f}\\\\\n \\end{array}\\right] \\left(\\begin{array}{c}\n f_{1} \\\\ f_{2} \\\\ \\vdots \\\\ f_{N_f}\n \\end{array}\\right)\n\\] or \\[\n\\mathbf{\\hat{v}} = \\mathbf{\\hat{Y}}\\mathbf{\\hat{f}}\n\\tag{1}\\] where: \\(v_n\\) and \\(f_n\\) are, respectively, the response and applied force at the \\(n\\)th measurement position; \\(N_v\\) and \\(N_f\\) denote the number of measured responses and applied forces, respectively; and the \\(\\hat{\\square}\\) accent is used to denote a measured quantity, as opposed to a transformed one.\n\n\n\n\n\n\nFigure 1: Illustration of ‘transformed’ (left) vs. ‘measured’ (right) FRFs. Measured FRFs relate a set of discrete non-co-located forces and responses surrounding a connection point. 
Transformed FRFs relate the co-located forces and moments to the translational and rotational responses at a particular point.\n\n\n\nAssuming an entirely rigid interface, such that only translational and rotational motions are permitted, the set of measured translational responses \\(\\mathbf{\\hat{v}}\\in\\mathbb{C}^{N_v}\\) can be related to a set of mixed translational/rotational responses \\(\\mathbf{v}\\in\\mathbb{C}^{6}\\) (provided \\(N_v\\geq 6\\)) defined at some other location according to the linear transformation, \\[\n \\mathbf{{v}} = \\mathbf{T}_v \\mathbf{\\hat{v}}\n\\tag{2}\\] where \\(\\mathbf{T}_v\\in\\mathbb{C}^{6 \\times N_v}\\) is the so-called response transformation matrix.\nSimilarly, the set of applied (translational) forces \\(\\mathbf{\\hat{f}}\\in\\mathbb{C}^{N_f}\\) can be thought to generate a mixed set of translational and rotational forces \\(\\mathbf{{f}}\\in\\mathbb{C}^{6}\\) at some other location, again according to a linear transformation, \\[\n \\mathbf{f} = \\mathbf{R}_f^{\\rm T} \\mathbf{\\hat{f}} \\quad \\mbox{or} \\quad \\mathbf{\\hat{f}} = \\mathbf{T}_f^{\\rm T} \\mathbf{f}\n\\tag{3}\\] with \\(\\mathbf{T}_f^{} = \\mathbf{R}_f^{\\rm+} \\in\\mathbb{C}^{N_f \\times 6}\\) as the so-called force transformation matrix.\nSubstituting Equation 2 and Equation 3 into Equation 1 we obtain, \\[\n \\mathbf{{v}} = \\mathbf{T}_v \\mathbf{\\hat{Y}}\\mathbf{\\hat{f}} = \\overbrace{\\mathbf{T}_v \\mathbf{\\hat{Y}} \\mathbf{T}_f^{\\rm T} }^{\\mathbf{Y}}\\mathbf{f}\n\\tag{4}\\] which is a relation between the transformed force and response vectors. The matrix product \\(\\mathbf{Y}=\\mathbf{T}_v \\mathbf{\\hat{Y}} \\mathbf{T}_f^{\\rm T}\\in\\mathbb{C}^{6\\times 6}\\) thus describes the transformed FRF matrix or equivalently, the FRF matrix projected onto the chosen interface model. 
\\[\n \\left[\\begin{array}{cccccc} Y_{{x_ix_j}} & Y_{{x_iy_j}} & Y_{{x_iz_j}} & Y_{{x_i\\alpha_j}} & Y_{{x_i\\beta_j}} & Y_{{x_i\\gamma_j}} \\\\\n Y_{{y_ix_j}} & Y_{{y_iy_j}} & Y_{{y_iz_j}} & Y_{{y_i\\alpha_j}} & Y_{{y_i\\beta_j}} & Y_{{y_i\\gamma_j}} \\\\\n Y_{{z_ix_j}} & Y_{{z_iy_j}} & Y_{{z_iz_j}} & Y_{{z_i\\alpha_j}} & Y_{{z_i\\beta_j}} & Y_{{z_i\\gamma_j}} \\\\\n Y_{{\\alpha_i x_j}} & Y_{{\\alpha_i y_j}} & Y_{{\\alpha_i z_j}} & Y_{{\\alpha_i\\alpha_j}} & Y_{{\\alpha_i\\beta_j}} & Y_{{\\alpha_i\\gamma_j}} \\\\\n Y_{{\\beta_i x_j}} & Y_{{\\beta_i y_j}} & Y_{{\\beta_i z_j}} & Y_{{\\beta_i\\alpha_j}} & Y_{{\\beta_i\\beta_j}} & Y_{{\\beta_i\\gamma_j}}\\\\\n Y_{{\\gamma_i x_j}} & Y_{{\\gamma_i y_j}} & Y_{{\\gamma_i z_j}} & Y_{{\\gamma_i\\alpha_j}} & Y_{{\\gamma_i\\beta_j}} & Y_{{\\gamma_i\\gamma_j}}\n \\end{array}\\right]= \\mathbf{T}_v\\left[\\begin{array}{c c c c}Y_{11} & Y_{12} & \\cdots & Y_{1N_f}\\\\\n Y_{21} & Y_{22} & \\cdots & Y_{2N_f}\\\\\n \\vdots & \\vdots & \\ddots & \\vdots\\\\\n Y_{N_v1} & Y_{N_v2} & \\cdots & Y_{N_vN_f}\\\\\n \\end{array}\\right] \\mathbf{T}_f^{\\rm T}\n\\tag{5}\\]\nConstruction of the transformation matrices \\(\\mathbf{T}_{v,f}\\) depends on the chosen interface representation. Examples include the finite difference approximation and the discrete Fourier transform (for continuous interfaces), among others. In what follows we will consider the particular case of the Virtual Point transformation, which has become the most widely used representation in the context of source characterisation, substructuring and VAVP development.\nRegardless of the approach taken, the interpretation of Equation 5 remains the same. Pre-multiplication by \\(\\mathbf{T}_v\\) is an operation that acts on the measured response DoFs (i.e. the rows of the FRF matrix), projecting them onto the DoFs of the interface model. Similarly, post-multiplication by \\(\\mathbf{T}_f^{\\rm T}\\) acts to project the applied force DoFs (i.e. columns of the FRF matrix) onto the DoFs of the interface model. 
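As a minimal NumPy sketch of this projection (array sizes and contents here are hypothetical, and the transformation matrices are assumed to be already constructed), Equation 5 amounts to a pair of matrix products applied at every frequency line:

```python
import numpy as np

rng = np.random.default_rng(0)

n_freq, n_v, n_f = 128, 12, 9                      # hypothetical sizes
Y_hat = rng.standard_normal((n_freq, n_v, n_f))    # measured FRFs, one matrix per frequency
T_v = rng.standard_normal((6, n_v))                # response transformation matrix
T_f = rng.standard_normal((6, n_f))                # force transformation matrix

# Project onto the 6-DoF interface model: Y = T_v Y_hat T_f^T at each frequency line
Y = np.einsum('ij,fjk,lk->fil', T_v, Y_hat, T_f)
assert Y.shape == (n_freq, 6, 6)
```

The single einsum call applies \\(\\mathbf{T}_v \\mathbf{\\hat{Y}} \\mathbf{T}_f^{\\rm T}\\) across the whole frequency axis at once; the transformation matrices \\(\\mathbf{T}_v\\) and \\(\\mathbf{T}_f\\) themselves are frequency independent. 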
They take the general form shown in Figure 2,\n\n\n\n\n\n\nFigure 2: General form of interface transformation matrix. Rows correspond to DoFs of the chosen interface model. Columns correspond to the measurement DoFs (applied forces or measured responses).\n\n\n\nwhere, for example, \\(T_{xn}\\) describes the projection/contribution of the \\(n\\)th measurement DoF (representing an applied force or measured response) onto the \\(x\\) DoF of the interface model.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Interface representations"
]
},
{
"objectID": "introductions/VAVPs/interfaces/interfaces.html#virtual-point-transormation",
"href": "introductions/VAVPs/interfaces/interfaces.html#virtual-point-transormation",
"title": "Interface representations",
"section": "Virtual point transormation",
"text": "Virtual point transormation\nThe Virtual Point (VP) transformation is a rigid body kinematic projection that relates a set of arbitrarily positioned (translational) response sensors and excitation forces, to the translational and rotational dynamics of a user defined Virtual Point (VP).\n\n\n\n\n\n\n\n\n\n\n\n(a) Force position and orientation.\n\n\n\n\n\n\n\n\n\n\n\n(b) Triaxial accelerometer position and orientation.\n\n\n\n\n\n\n\nFigure 3: Example relative coordinates for virtual point transformations. a) - force transformation. b) - response transformation.\n\n\n\nIf we assume that, locally, an interface connection behaves rigidly (i.e. behaves point-like), the translational response of an arbitrary sensor \\(\\hat{v}_n\\in\\mathbb{C}\\) can be expressed in terms of the combined translational and rotational motion of a virtual point \\(\\mathbf{v}\\in\\mathbb{C}^{6}\\) as, \\[\n \\hat{v}_n = \\overbrace{\\left[\\begin{array}{c c c} e_x & e_y & e_z \\end{array} \\right]}^{\\mathbf{e}_m}\n \\overbrace{\\left[\\begin{array}{c c c c c c} 1 & 0 & 0 & 0 & r_z & -r_y \\\\\n 0 & 1 & 0 & -r_z & 0 & r_x\\\\\n0 & 0 & 1 & r_y & -r_x & 0\\end{array} \\right]}^{\\mathbf{r}_m}\\left(\\begin{array}{c} v_x \\\\ v_y \\\\ v_z \\\\ v_{\\alpha} \\\\ v_{\\beta} \\\\ v_{\\gamma} \\end{array} \\right) +\\epsilon = \\mathbf{R}_{v_m}\\mathbf{v}+\\epsilon\n\\tag{6}\\]\n\n\n\n\n\n\n\nFigure 4: Orientation vector \\(\\mathbf{e}\\) from spherical coordinates.\n\n\n\n\n\n\n\n\n\nFigure 5: Orientation vector \\(\\mathbf{e}\\) from cross-product.\n\n\n\n\nIn the above, \\(\\mathbf{R}_{v_n}\\) represents a pair of linear transformations that account for the position and orientation of the sensor relative to the virtual point, or more precisely its coordinate system. The rightmost matrix \\(\\mathbf{r}_n\\) is the so-called Interface Deformation Mode (IDM) matrix that projects the motion of the VP onto the position of the sensor. 
The first three columns, which together form an identity matrix, simply state that an \\(x\\), \\(y\\), or \\(z\\) translation of the VP will yield an identical translation of the measurement position. The remaining three columns describe how a rotation about each axis of the VP will induce a translation in the \\(x\\), \\(y\\) and \\(z\\) coordinates of the measurement position. The magnitude of the resulting translation clearly depends on the relative distance (in meters) between the VP and measurement position, described in the \\(x\\), \\(y\\) and \\(z\\) directions as \\(r_x\\), \\(r_y\\) and \\(r_z\\). The further away the VP and measurement position, the greater the translation will be from a given rotation.\nThe vector \\(\\mathbf{e}_n\\) then accounts for the orientation of the sensor, which may or may not be in line with the coordinate system used by the VP. Suppose, for example, that the sensor points vertically upwards in the \\(z\\) direction of the VP. The orientation vector would then be, \\[\n \\mathbf{e}_z = \\left[\\begin{array}{ccc}\n 0&0&1\n \\end{array}\\right].\n\\] More generally, the orientation vector can be determined from the sensor’s inclination and azimuthal angles as in Figure 4, \\[\n \\left[\\begin{array}{ccc} e_x & e_y & e_z \\end{array} \\right] = \\left[\\begin{array}{ccc} \\sin\\theta \\cos \\phi & \\sin \\theta \\sin \\phi & \\cos \\theta \\end{array} \\right]\n\\] or obtained from a CAD model using the normalised cross-product between two vectors \\(\\mathbf{a}\\) and \\(\\mathbf{b}\\) that define the orientation of the surface to which the sensor is attached, \\[\n \\mathbf{e} = \\frac{\\mathbf{a}\\times \\mathbf{b}}{||\\mathbf{a}\\times \\mathbf{b}||}\n\\] as in Figure 5.\nThe final term in Equation 6, \\(\\epsilon\\), represents the component of the response \\(\\hat{v}_n\\) that cannot be reproduced by the virtual point with its underlying rigid assumption. 
This can be considered a residual flexibility.\nEquation 6 can be written for each sensor, leading to the system of equations, \\[\n\\left(\\begin{array}{c}\n \\hat{v}_1 \\\\\n \\hat{v}_2 \\\\\n \\vdots \\\\\n \\hat{v}_{N_v}\n\\end{array}\\right) = \\left[\\begin{array}{c c c c}\n\\mathbf{e}_1 & & &\\\\ & \\mathbf{e}_2 & &\\\\\n& & \\ddots & \\\\\n& & & \\mathbf{e}_{N_v}\n\\end{array} \\right] \\left[\\begin{array}{c} \\mathbf{r}_1 \\\\ \\mathbf{r}_2 \\\\ \\vdots \\\\ \\mathbf{r}_{N_v} \\end{array}\\right]\\left(\\begin{array}{c} v_x \\\\ v_y \\\\ v_z \\\\ v_{\\alpha} \\\\ v_{\\beta} \\\\ v_{\\gamma} \\end{array} \\right) + \\left(\\begin{array}{c}\n\\epsilon_1 \\\\\n\\epsilon_2 \\\\\n\\vdots \\\\\n\\epsilon_{N_v}\n\\end{array}\\right)\n\\tag{7}\\] or simply \\[\n \\mathbf{\\hat{v}} = \\mathbf{R}_v\\mathbf{v}+ \\mathbf{\\epsilon}\n\\tag{8}\\] where \\(\\mathbf{\\hat{v}}\\) is a vector of measured (translational) responses, and \\(\\mathbf{v}\\) is the vector of translational and rotational responses at the virtual point.\nWhen using the VP transformation, it is particularly convenient to use tri-axial accelerometers, as in this case the \\(x\\), \\(y\\) and \\(z\\) responses are recorded at (approximately) the same location, and so they share \\(r_x\\), \\(r_y\\) and \\(r_z\\). 
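For illustration, the matrix \\(\\mathbf{R}_v\\) of Equation 7 can be assembled and pseudo-inverted in a few lines of NumPy. The sensor positions and orientations below are hypothetical; the check at the end confirms that a rigid VP motion is recovered exactly by the least-squares inverse when the arrangement observes all 6 DoFs:

```python
import numpy as np

def idm_row(r, e):
    # One row of R_v (Equation 6): a uniaxial channel at position r = (r_x, r_y, r_z)
    # relative to the VP (in metres), measuring along the unit direction e.
    rx, ry, rz = r
    r_m = np.array([[1.0, 0.0, 0.0,  0.0,   rz,  -ry],
                    [0.0, 1.0, 0.0,  -rz,  0.0,   rx],
                    [0.0, 0.0, 1.0,   ry,  -rx,  0.0]])
    return np.asarray(e, dtype=float) @ r_m

# Hypothetical layout: three tri-axial sensors, i.e. x, y and z channels at each position
positions = [(0.05, 0.0, 0.01), (-0.03, 0.04, 0.01), (-0.03, -0.04, 0.01)]
axes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
R_v = np.array([idm_row(p, e) for p in positions for e in axes])   # 9 x 6

# Least-squares response transformation: T_v is the pseudo-inverse of R_v
T_v = np.linalg.pinv(R_v)
assert T_v.shape == (6, 9)

# Sanity check: a pure rigid VP motion is recovered exactly from the 9 channels
v = np.array([1.0, 0.0, 0.0, 0.0, 0.2, 0.0])   # x translation plus a beta rotation
assert np.allclose(T_v @ (R_v @ v), v)
```

Because the three sensor positions are non-collinear, \\(\\mathbf{R}_v\\) has full column rank and the over-determined (9 channel) arrangement averages out any residual flexibility in a least-squares sense. 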
For a single tri-axial sensor, Equation 6 then becomes, \\[\n \\left(\\begin{array}{c}\\hat{v}_{x,n}\\\\\n\\hat{v}_{y,n}\\\\\n\\hat{v}_{z,n}\n\\end{array}\\right) = \\overbrace{\\left[\\begin{array}{c c c} e_{x,x} & e_{x,y} & e_{x,z} \\\\\ne_{y,x} & e_{y,y} & e_{y,z}\\\\\ne_{z,x} & e_{z,y} & e_{z,z}\\end{array} \\right]}^{\\mathbf{E}_n}\n \\left[\\begin{array}{c c c c c c} 1 & 0 & 0 & 0 & r_z & -r_y \\\\\n 0 & 1 & 0 & -r_z & 0 & r_x\\\\\n0 & 0 & 1 & r_y & -r_x & 0\\end{array} \\right]\\left(\\begin{array}{c} v_x \\\\ v_y \\\\ v_z \\\\ v_{\\alpha} \\\\ v_{\\beta} \\\\ v_{\\gamma} \\end{array} \\right) +\n\\left(\\begin{array}{c}\\epsilon_{x,n}\\\\\n\\epsilon_{y,n}\\\\\n\\epsilon_{z,n}\n\\end{array}\\right)\n\\tag{9}\\] where \\(\\mathbf{E}_n\\) is now a matrix describing the orientation of the \\(n\\)th tri-axial sensor’s coordinate axes relative to that of the VP. For three tri-axial sensors (the minimum required to capture all 6 DoFs), Equation 7 becomes, \\[\n\\left(\\begin{array}{c}\n \\mathbf{\\hat{v}}_1 \\\\\n \\mathbf{\\hat{v}}_2 \\\\\n \\mathbf{\\hat{v}}_3\n\\end{array}\\right) = \\left[\\begin{array}{c c c}\n\\mathbf{E}_1 & &\\\\\n& \\mathbf{E}_2 &\\\\\n& & \\mathbf{E}_3\n\\end{array} \\right] \\left[\\begin{array}{c} \\mathbf{r}_1 \\\\ \\mathbf{r}_2 \\\\ \\mathbf{r}_3 \\end{array}\\right]\\left(\\begin{array}{c} v_x \\\\ v_y \\\\ v_z \\\\ v_{\\alpha} \\\\ v_{\\beta} \\\\ v_{\\gamma} \\end{array} \\right) + \\left(\\begin{array}{c}\n\\mathbf{\\epsilon}_1 \\\\\n\\mathbf{\\epsilon}_2 \\\\\n\\mathbf{\\epsilon}_3\n\\end{array}\\right)\n\\] where \\(\\mathbf{\\hat{v}}_n\\in\\mathbb{C}^3\\) is the vector of tri-axial responses for sensor \\(n\\).\nWe can now apply a similar idea to the external forces. Application of an arbitrary external force \\(f_n\\) (at coordinates \\(r_x\\), \\(r_y\\), \\(r_z\\) relative to the VP) generates a set of translational and rotational forces at the VP. 
These forces are related by the equation, \\[\n \\left(\\begin{array}{c} f_x \\\\ f_y \\\\ f_z \\\\ f_{\\alpha} \\\\ f_{\\beta} \\\\ f_{\\gamma} \\end{array} \\right) =\n \\overbrace{\\left[\\begin{array}{c c c c c c} 1 & 0 & 0 \\\\\n 0 & 1 & 0 \\\\\n 0 & 0& 1\\\\\n 0 & -r_z & r_y\\\\\n r_z&0&-r_x\\\\\n -r_y& r_x& 0\\end{array} \\right]}^{\\mathbf{r}_n^{\\rm T}} \\overbrace{\\left[\\begin{array}{c} e_x \\\\ e_y \\\\ e_z \\end{array} \\right]}^{\\mathbf{e}_n^{\\rm T}} \\hat{f}_n =\\mathbf{R}^{\\rm T}_{f_n} \\hat{f}_n.\n\\] where \\(\\mathbf{R}^{\\rm T}_{f_n}\\) again represents a pair of linear transformations that account for the position and orientation of the applied force. An equation of this form can be written for each excitation, leading to, \\[\n \\left(\\begin{array}{c} f_x \\\\ f_y \\\\ f_z \\\\ f_{\\alpha} \\\\ f_{\\beta} \\\\ f_{\\gamma} \\end{array} \\right) =\n \\left[\\begin{array}{c c c c c c} \\mathbf{r}_1^{\\rm T} & \\mathbf{r}_2^{\\rm T} & \\cdots & \\mathbf{r}_{N_f}^{\\rm T} \\end{array} \\right]\\left[\\begin{array}{cccc} \\mathbf{e}_1^{\\rm T} & & & \\\\ & \\mathbf{e}_2^{\\rm T} & &\\\\\n& & \\ddots & \\\\ & & & \\mathbf{e}_{N_f}^{\\rm T} \\end{array} \\right] \\left(\\begin{array}{c}\n \\hat{f}_1 \\\\\n \\hat{f}_2 \\\\\n \\vdots \\\\\n \\hat{f}_{N_f}\n\\end{array}\\right)\n\\] or simply \\[\n \\mathbf{f} = \\mathbf{R}^{\\rm T}_f\\mathbf{\\hat{f}}\n\\] where \\(\\mathbf{\\hat{f}}\\) is a vector of applied/measured translation forces, and \\(\\mathbf{f}\\) is the vector of resulting translational and moment forces at the VP. Note that whilst tri-axial accelerometers can be used to simplify the VP response measurement, no equivalent is available for excitations. 
These must be applied individually.\nThrough a least squares pseudo-inverse, the above equations yield a pair of transformation matrices \\(\\mathbf{T}_v\\) and \\(\\mathbf{T}_f\\), \\[\n \\mathbf{v} = \\left(\\mathbf{R}_v\\right)^+\\mathbf{\\hat{v}} = \\mathbf{T}_v\\mathbf{\\hat{v}}\n\\tag{10}\\] \\[\n \\mathbf{\\hat{f}} = \\left(\\mathbf{R}^{\\rm T}_f\\right)^+\\mathbf{f} = \\mathbf{T}_f^{\\rm T}\\mathbf{f}. \n\\tag{11}\\]\nTo apply the VP transformation to a measured FRF matrix, \\[\n \\mathbf{\\hat{v}} = \\mathbf{\\hat{Y}} \\mathbf{\\hat{f}}\n\\] we first substitute \\(\\mathbf{\\hat{f}}\\) from Equation 11, \\[\n \\mathbf{\\hat{v}} = \\mathbf{\\hat{Y}} \\mathbf{T}_f^{\\rm T}\\mathbf{f}\n\\tag{12}\\] before pre-multiplying both sides of Equation 12 by \\(\\mathbf{T}_v\\) to obtain, \\[\n \\mathbf{v} = \\mathbf{T}_v\\mathbf{\\hat{Y}} \\mathbf{T}_f^{\\rm T}\\mathbf{f}.\n\\tag{13}\\] Equation 13 takes the same form as Equation 4; it relates the VP transformed force \\(\\mathbf{f}\\) to the VP transformed response \\(\\mathbf{v}\\). As such, the matrix product must describe the VP transformed FRF matrix \\(\\mathbf{Y}\\), \\[\n \\mathbf{Y} = \\mathbf{T}_v\\mathbf{\\hat{Y}}\\mathbf{T}_f^{\\rm T}.\n\\] As described earlier, pre and post-multiplying by \\(\\mathbf{T}_{v}\\) and \\(\\mathbf{T}_f^{\\rm T}\\) acts to project the measured FRF matrix \\(\\mathbf{\\hat{Y}}\\) onto the (VP) interface model. The projected FRF matrix \\(\\mathbf{Y}\\) can now be used as part of any substructuring or source characterisation calculations.\nThe VP transformation has several advantages over the equivalent multi-point connection and finite difference approximation. Firstly, the translational and rotational DoFs of the VP are estimated based on the response of all sensors. This enables the over-determination of the problem, which can provide a more accurate estimate. A second advantage is that the resulting DoFs are exactly collocated at the VP. 
This is particularly useful when trying to combine measurements made on different components, where all DoFs must be correctly aligned. This is also useful for combining measurements with models. The VP transformation has been extended further to include flexible interface modes (described below) and rotational sensors.\nWhilst a powerful tool, the VP transformation has a notable disadvantage. It requires accurate information on both the position and orientation of each sensor and applied force (so as to construct the transformation matrices \\(\\mathbf{T}_v\\) and \\(\\mathbf{T}_f\\)). If a CAD model of the interface is available, this information may be readily obtained; without one, the VP transformation may be subject to large uncertainties or may simply not be applicable.\n\nPerformance indicators\nThe VP transformations described above are based on the assumption that the interface can be described entirely by its rigid body dynamics. This is typically a valid assumption in the low/mid frequency range, where the structural wavelength is much larger than the size of the interface connection. However, as frequency increases so does the deformation of the interface, and at higher frequencies there may be a need to assess the validity of the rigid body assumption.\nTransforming from a set of measured DoFs involves projecting an \\(N_{v,f}\\geq 6\\) dimensional space representing the measurement DoFs (applied forces and measured responses) onto an \\(N=6\\) dimensional space representing the transformed rigid body DoFs (the interface model). In most cases, \\(N_{v,f}> 6\\) and so this projection potentially involves the loss of information.\nIf the interface does in fact behave rigidly, then the rank of its measured FRF matrix \\(\\mathbf{\\hat{Y}}_{cc}\\in\\mathbb{C}^{N_v\\times N_f}\\) must be limited to 6. 
In this case, the interface transformations can be applied in reverse to reconstruct the original measured FRF matrix from the transformed matrix, \\[\n \\mathbf{\\hat{Y}}' = \\mathbf{T}_v^{+}\\,\\mathbf{{Y}}\\,\\mathbf{T}_f^{\\rm +T} = \\mathbf{T}_v^{+}\\, \\left(\\mathbf{T}_v\\mathbf{\\hat{Y}}\\mathbf{T}_f^{\\rm T}\\right)\\,\\mathbf{T}_f^{\\rm +T} = \\mathbf{\\hat{Y}} \\quad \\quad \\mbox{if} \\quad \\mbox{rank}\\left(\\mathbf{\\hat{Y}}\\right) \\leq 6\n\\] where we use the \\(\\square'\\) accent to denote a reconstruction of the original FRF matrix.\nIf however, the interface possesses some flexibility, the rank of \\(\\mathbf{\\hat{Y}}_{cc}\\in\\mathbb{C}^{N_v\\times N_f}\\) has the potential to be greater than 6. In this case, the measured FRF matrix can only be reconstructed approximately, \\[\n \\mathbf{\\hat{Y}}' = \\mathbf{T}_v^{+}\\,\\mathbf{{Y}}\\,\\mathbf{T}_f^{\\rm +T} = \\mathbf{T}_v^{+}\\, \\left(\\mathbf{T}_v\\mathbf{\\hat{Y}}\\mathbf{T}_f^{\\rm T}\\right)\\,\\mathbf{T}_f^{\\rm +T} \\approx \\mathbf{\\hat{Y}} \\quad \\quad \\mbox{if} \\quad \\mbox{rank}\\left(\\mathbf{\\hat{Y}}\\right) > 6\n\\] Note that the reconstructed FRF matrix \\(\\mathbf{\\hat{Y}}'\\) includes only the rigid body dynamics; flexible interface dynamics in \\(\\mathbf{\\hat{Y}}\\) have been filtered out by applying the transformation followed by its inverse.\nIt is important to note that this action of ‘filtering out flexible dynamics’ implicitly requires an over-determined transformation with \\(N_{v,f}>6\\). In the determined case the transformation matrix becomes square (\\(N_{v,f}=6\\)), and assuming its inverse exists, it is exactly reversible. 
In other words, there are not enough measurement DoFs to observe flexibility; the interface is ‘under sampled’.\nIn the over-determined case, the above notion can be used to establish sensor and force consistency criteria that indicate the validity of the rigid interface assumption, as well as the quality of the sensor arrangement and applied forces.\n\nSensor consistency\nTo assess the consistency of the sensor arrangement, we consider a unit force \\(f_h = 1\\) applied to the component in question at the arbitrary position \\(h\\). The directly measured velocity response satisfies the equation, \\[\n \\mathbf{\\hat{v}} = \\mathbf{\\hat{Y}}_h f_h = \\mathbf{\\hat{Y}}_h\n\\] where \\(\\mathbf{\\hat{Y}}_h\\) is a column vector of measured mobilities between force position \\(h\\) and the interface responses. Similarly, the transformed velocity response satisfies, \\[\n \\mathbf{v} = \\mathbf{Y}_h f_h = \\mathbf{Y}_h = \\mathbf{T}_v \\mathbf{\\hat{Y}}_h.\n\\] Applying the inverse transformation to the transformed velocity projects the response back onto the measured DoFs, \\[\n \\mathbf{\\hat{Y}}_h' = \\mathbf{T}_v^+ \\mathbf{T}_v \\mathbf{\\hat{Y}}_h\n\\] where \\(\\square'\\) is used to denote a rigid-body-filtered quantity.\nThe matrix product \\(\\mathbf{T}_v^+ \\mathbf{T}_v\\) has the effect of filtering out flexible interface modes, retaining only the rigid body dynamics.\nThe sensor consistency can be assessed by simply comparing the directly measured \\(\\mathbf{\\hat{v}}\\) and reconstructed (rigid body only) \\(\\mathbf{\\hat{v}}'\\) responses. This comparison can be done in a variety of ways. 
A popular approach is to use a correlation-based measure of similarity, \\[\n \\rho_{v,h} = \\mbox{Corr}\\left(\\mathbf{\\hat{Y}}_h,\\mathbf{\\hat{Y}}_h'\\right) \\qquad \\mbox{with} \\qquad\n \\mbox{Corr}\\left(\\mathbf{a},\\mathbf{b}\\right) = \\frac{ \\left|\\mathbf{a}^{\\rm H}\\mathbf{b} \\right|^2 }{ \\mathbf{a}^{\\rm H} \\mathbf{a} \\cdot \\mathbf{b}^{\\rm H}\\mathbf{b}}\n\\tag{14}\\] The sensor consistency \\(\\rho_{v,h}\\) is a frequency dependent scalar quantity that varies between 0 and 1. A value of 1 indicates rigid interface behaviour, i.e. the reconstructed (rigid body filtered) responses are identical to those measured directly. A value less than 1 indicates some degree of interface flexibility, which is lost through the applied transformation.\nNote that \\(\\rho_{v,h}\\) is evaluated for the force excitation at position \\(h\\), and that multiple force locations might be considered. In this case each force could be used to determine an independent \\(\\rho_{v,h}\\). Alternatively, the collection of measured and reconstructed responses can be vectorized (i.e. 
stacked into a single large column vector, denoted by \\(\\mbox{Vec}({\\square})\\)) and used to determine an overall sensor consistency, \\[\n\\rho_v = \\mbox{Corr}\\left(\\mbox{Vec}({\\mathbf{\\hat{Y}}}),\\mbox{Vec}({\\mathbf{\\hat{Y}}'})\\right).\n\\]\nWhilst the location of the applied forces is, in theory, arbitrary, it is recommended to avoid exciting at the interface itself, so as to ensure that a global response is generated, as opposed to a local response that might be highly sensitive to the impact position.\n\n\nForce consistency\nBased on the above sensor criterion, it is straightforward to develop an equivalent impact consistency criterion so as to assess the quality of the applied forces.\nIn this case, consider the response of a single interface sensor to all of the forces applied around the interface, \\[\n \\hat{v}_i = \\mathbf{\\hat{Y}}_i \\mathbf{\\hat{f}}\n\\] where \\(\\mathbf{\\hat{Y}}_i\\) is a row vector of directly measured mobilities between the applied forces and sensor \\(i\\).\nSimilarly, the response at sensor \\(i\\) due to the rigid body filtered forces is given by, \\[\n \\hat{v}_i' = \\mathbf{\\hat{Y}}_i \\mathbf{T}_f^{\\rm T} \\mathbf{f} = \\mathbf{\\hat{Y}}_i \\mathbf{T}_f^{\\rm T} \\mathbf{T}_f^{\\rm +T}\\mathbf{\\hat{f}} = \\mathbf{\\hat{Y}}_i' \\mathbf{\\hat{f}}\n\\] where \\(\\square'\\) is now used to denote filtering of rigid body forces. 
The impact consistency is then defined as, \\[\n \\rho_{f,i} = \\mbox{Corr}\\left(\\mathbf{\\hat{Y}}_i ,\\mathbf{\\hat{Y}}_i'\\right).\n\\] Like the sensor consistency, \\(\\rho_{f,i}\\) is bound between 0 and 1, with a value of 1 indicating perfect transmission of force between the applied impacts and the defined forces.\nTo obtain a single overall force consistency metric, we can again collect and vectorize the measured and reconstructed responses across all interface sensors, \\[\n\\rho_f = \\mbox{Corr}\\left(\\mbox{Vec}({\\mathbf{\\hat{Y}}}),\\mbox{Vec}({\\mathbf{\\hat{Y}}'})\\right).\n\\]\n\n\nCombined interface consistency\nHaving introduced the sensor and force consistency metrics above, it is perhaps obvious that one could formulate in the same manner a single overall consistency that combines the two. Indeed, applying both force and response transformations, alongside their respective inverses, one obtains the filtered FRF matrix, \\[\n\\mathbf{\\hat{Y}}' = \\mathbf{T}_v^{+}\\, \\left(\\mathbf{T}_v\\mathbf{\\hat{Y}}\\mathbf{T}_f^{\\rm T}\\right)\\,\\mathbf{T}_f^{\\rm +T}\n\\] which, by comparison against the original FRF matrix, forms an overall interface consistency metric, \\[\n\\rho = \\mbox{Corr}\\left(\\mbox{Vec}({\\mathbf{\\hat{Y}}}),\\mbox{Vec}({\\mathbf{\\hat{Y}}'})\\right).\n\\]",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Interface representations"
]
},
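The correlation metric \(\mbox{Corr}(\cdot,\cdot)\) used for the sensor, force, and combined consistencies above can be sketched in numpy as follows. This is an illustrative sketch only: the function name `corr` is ours, and the FRF matrices are randomly generated stand-ins for measured data.

```python
import numpy as np

def corr(a, b):
    """Normalised correlation |a^H b|^2 / ((a^H a)(b^H b)), bounded in [0, 1]."""
    a = np.asarray(a).ravel(order="F")   # Vec(.): column-wise vectorisation
    b = np.asarray(b).ravel(order="F")
    num = np.abs(np.vdot(a, b)) ** 2     # vdot conjugates its first argument
    den = np.vdot(a, a).real * np.vdot(b, b).real
    return float(num / den)

# Compare a 'measured' FRF matrix against a lightly perturbed copy, standing
# in for the rigid-body-filtered FRFs of the consistency checks above
rng = np.random.default_rng(0)
Y = rng.standard_normal((12, 6)) + 1j * rng.standard_normal((12, 6))
Y_filtered = Y + 0.05 * (rng.standard_normal((12, 6))
                         + 1j * rng.standard_normal((12, 6)))

rho_v = corr(Y, Y_filtered)   # overall consistency, close to 1 here
```

A value near 1 indicates that the rigid-body filtering has removed little of the measured response, i.e. a consistent sensor or force set.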
{
"objectID": "introductions/VAVPs/interfaces/interfaces.html#multi-point-interfaces",
"href": "introductions/VAVPs/interfaces/interfaces.html#multi-point-interfaces",
"title": "Interface representations",
"section": "Multi-point interfaces",
"text": "Multi-point interfaces\nIn the above we considered a single point interface, and introduced the Virtual Point transformation as an interface representation for point-like dynamics. In practice, it is rare that a structural component couples to another via a single connection point. Hence, it is necessary to extend this single-point treatment to the general multi-point case.\nFor the single-point interface, the full 6 DoF FRF matrix (see Equation 5) is obtained by, \\[\n\\mathbf{Y}=\\mathbf{T}_v\\mathbf{\\hat{Y}}\\mathbf{T}_f^{\\rm T}\n\\tag{15}\\] where \\(\\mathbf{\\hat{Y}}\\) is a matrix of measured translational mobilities, and \\(\\mathbf{T}_{v,f}\\) are a pair of transformation matrices. To extend this treatment to a multi-point case we simply note that a local interface model (i.e. a model pertaining to a single connection) depends only on the local measurement DoFs, i.e. those that surround the connection. Consequently, the transformed response and force DoFs of a multi-point interface are related to the measured response and force DoFs by, \\[\n \\left(\\begin{array}{c} \\mathbf{{v}}_{c_1} \\\\ \\vdots \\\\ \\mathbf{{v}}_{c_N}\\end{array}\\right) = \\left[\\begin{array}{ccc}\\mathbf{T}_{v,c_1} & & \\\\\n & \\ddots & \\\\\n & & \\mathbf{T}_{v,c_N} \\\\\n \\end{array}\\right]\\left(\\begin{array}{c} \\hat{\\mathbf{v}}_{c_1} \\\\ \\vdots \\\\ \\hat{\\mathbf{v}}_{c_N}\\end{array}\\right)\n\\] and \\[ \\left(\\begin{array}{c} \\hat{\\mathbf{f}}_{c_1} \\\\ \\vdots \\\\ \\hat{\\mathbf{f}}_{c_N}\\end{array}\\right)\n = \\left[\\begin{array}{ccc}\n \\mathbf{T}^{\\rm T}_{f,c_1} & & \\\\\n & \\ddots & \\\\\n & & \\mathbf{T}^{\\rm T}_{f,c_N} \\\\\n \\end{array}\\right]\\left(\\begin{array}{c} \\mathbf{{f}}_{c_1} \\\\ \\vdots \\\\ \\mathbf{{f}}_{c_N}\\end{array}\\right)\n\\]where \\(\\hat{\\square}\\) denotes the measured DoFs, and \\(\\mathbf{T}_{v,c_n}\\) and \\(\\mathbf{T}_{f,c_n}\\) are the response and force transformation matrices for the \\(n\\)th 
connection point. For the full multi-point interface mobility we then have, \\[\n \\mathbf{Y}_{cc} = \n \\left[\\begin{array}{ccc}\n \\mathbf{T}_{v,c_1} & & \\\\\n & \\ddots & \\\\\n & & \\mathbf{T}_{v,c_N} \\\\\n \\end{array}\\right]\n \\left[\\begin{array}{ccc}\n \\mathbf{\\hat{Y}}_{c_1c_1} & \\cdots & \\mathbf{\\hat{Y}}_{c_1c_N} \\\\\n \\vdots & \\ddots & \\vdots \\\\\n \\mathbf{\\hat{Y}}_{c_Nc_1} & \\cdots & \\mathbf{\\hat{Y}}_{c_Nc_N}\n \\end{array}\\right]\n \\left[\\begin{array}{ccc}\n \\mathbf{T}_{f,c_1} & & \\\\\n & \\ddots & \\\\\n & & \\mathbf{T}_{f,c_N}\n \\end{array}\\right]^{\\rm T}\n\\tag{16}\\] where it is noted that each transformation matrix can be constructed using the most appropriate method for that connection point (not necessarily the VPT).\nFinally, to include a set of additional remote DoFs \\(r\\) where no interface transformations are required, Equation 16 can be extended simply as, \\[\n \\left[\\begin{array}{cc}\n \\mathbf{Y}_{cc} & \\mathbf{Y}_{cr} \\\\\n \\mathbf{Y}_{rc} & \\mathbf{Y}_{rr}\n \\end{array}\\right] = \n \\left[\\begin{array}{cc}\n \\mathbf{T}_{v,c} & \\\\\n & \\mathbf{I}\n \\end{array}\\right]\n \\left[\\begin{array}{cc}\n \\mathbf{\\hat{Y}}_{cc} & \\mathbf{\\hat{Y}}_{cr} \\\\\n \\mathbf{\\hat{Y}}_{rc} & \\mathbf{\\hat{Y}}_{rr}\n \\end{array}\\right]\n \\left[\\begin{array}{cc}\n \\mathbf{T}_{f,c} & \\\\\n & \\mathbf{I}\n \\end{array}\\right]^{\\rm T}\n\\tag{17}\\] where \\(c = \\{c_1, c_2, \\cdots, c_N\\}\\).",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Interface representations"
]
},
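As a sketch of the block-diagonal assembly in Equation 16, the per-connection transformations can be placed on the diagonal and applied to the measured mobility matrix in numpy. Dimensions and data are illustrative (two connection points, each with 12 measured translational DoFs reduced to 6 virtual point DoFs); the helper `block_diag` is ours.

```python
import numpy as np

def block_diag(*mats):
    """Place matrices along the diagonal of a larger zero matrix."""
    rows = sum(m.shape[0] for m in mats)
    cols = sum(m.shape[1] for m in mats)
    out = np.zeros((rows, cols))
    r = c = 0
    for m in mats:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]
        c += m.shape[1]
    return out

rng = np.random.default_rng(1)
# Per-connection transformations: 6 virtual point DoFs from 12 measured DoFs
Tv_c1, Tv_c2 = rng.standard_normal((6, 12)), rng.standard_normal((6, 12))
Tf_c1, Tf_c2 = rng.standard_normal((6, 12)), rng.standard_normal((6, 12))
Y_meas = rng.standard_normal((24, 24))    # measured translational mobilities

Tv = block_diag(Tv_c1, Tv_c2)             # 12 x 24 response transformation
Tf = block_diag(Tf_c1, Tf_c2)             # 12 x 24 force transformation
Y_cc = Tv @ Y_meas @ Tf.T                 # 12 x 12 multi-point interface mobility
```

Note that the block-diagonal structure encodes the locality assumption of the text: each connection's transformation acts only on its own surrounding measurement DoFs.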
{
"objectID": "introductions/VAVPs/interfaces/interfaces.html#flexible-interfaces",
"href": "introductions/VAVPs/interfaces/interfaces.html#flexible-interfaces",
"title": "Interface representations",
"section": "Flexible interfaces",
"text": "Flexible interfaces\nThe notion of a point interface is typically valid in the low-mid frequency range. Fortunately, this is the range where most structural vibration problems are concerned, and so the point interface assumption has served the community well over the decades. However, new technologies (such as vehicle electrification) are ever extending the frequency of interest. With the corresponding wavelength decreasing, structural connections that might once have appeared `point-like’ are becoming comparable in size to wavelength. As a consequence, the assumption of locally rigid dynamics breaks down as local deformations occur. To deal with this, more complex representations of interface dynamics are required. Below we briefly detail a flexible extension of the virtual point transformation.\nThe virtual point transformation, as described above, relies on the construction of the so-called Interface Deformation Mode (IDM) matrices, \\(\\mathbf{R}_m\\), that describe how the measured DoFs would respond to the motion of a virtual point, assuming rigid body motion. The rows of \\(\\mathbf{R}_m\\) correspond to the \\(x, y\\) and \\(z\\) coordinates of the measurement DoF, whilst the columns relate to the permissible motions of the virtual point (the three translations and three rotations), \\[\n\\mathbf{R}_m = \\left[\\rule{0cm}{1.4cm}\\right.\\overbrace{\\begin{array}{ccc} 1 & 0 & 0 \\\\\n 0 & 1 & 0 \\\\\n0 & 0 & 1 \\end{array}}^{\\mbox{Translation}}\n\\left|\\rule{0cm}{1.4cm}\\right.\\,\\,\\,\n\\overbrace{\\begin{array}{ccc } 0 & r_z & -r_y \\\\\n -r_z & 0 & r_x \\\\\n r_y & -r_x & 0 \\end{array}}^{\\mbox{Rotation}} \\,\\,\\,\\left.\\rule{0cm}{1.4cm}\\right].\n\\] Importantly, any residual motion (i.e. flexibility) not captured by the 6 VP DoFs is discarded and not included in the interface representation. To accommodate the flexibility of the interface, additional modes must be added to the IDM matrix. 
A `bottom-up’ definition of flexible interface behaviour is recommended, which avoids the need for a priori knowledge of the interface dynamics, and is therefore more generally applicable.\nThe following example is given of an extended virtual point IDM that is capable of capturing an additional three extension (i.e. ‘strain’) and three torsion deformations, alongside the standard rigid modes, \\[\n \\mathbf{R}_m^{ext} = \\left[\\rule{0cm}{1.4cm}\\right.\\overbrace{\\begin{array}{ccc} 1 & 0 & 0 \\\\\n 0 & 1 & 0 \\\\\n0 & 0 & 1 \\end{array}}^{\\mbox{Translation}}\n\\left|\\rule{0cm}{1.4cm}\\right.\n\\,\\,\\,\n\\overbrace{\\begin{array}{ccc } 0 & r_z & -r_y \\\\\n -r_z & 0 & r_x \\\\\n r_y & -r_x & 0 \\end{array}}^{\\mbox{Rotation}}\n\\left|\\rule{0cm}{1.4cm}\\right.\n\\,\\,\\,\n\\overbrace{\\begin{array}{ccc } r_x & 0 & 0 \\\\\n 0 & r_y & 0 \\\\\n 0 & 0 & r_z \\end{array}}^{\\mbox{Extension}} \n\\left|\\rule{0cm}{1.4cm}\\right.\n\\,\\,\\,\n\\overbrace{\\begin{array}{ccc } 0 & r_y r_z & -r_zr_y \\\\\n -r_xr_z & 0 & r_zr_x\\\\\n r_x r_y & -r_y r_x & 0 \\end{array}}^{\\mbox{Torsion}}\n \\,\\,\\,\\left.\\rule{0cm}{1.4cm}\\right]\n\\] Based on this extended IDM matrix, a measured response is represented as, \\[\n \\hat{v}_n = \\left[\\begin{array}{c} e_x \\\\ e_y \\\\ e_z \\end{array} \\right]^{\\rm T} \\mathbf{R}_n^{ext}\n\\left[\\begin{array}{c} \\mathbf{v}_{tr.} \\\\ \\mathbf{v}_{rot.} \\\\ \\mathbf{v}_{ext.} \\\\ \\mathbf{v}_{tor.}\\end{array} \\right]\n\\tag{18}\\] where the flexible virtual point DoFs \\(\\mathbf{v}_{ext.} = [v_{ext.,x}, v_{ext.,y}, v_{ext.,z}]^{\\rm T}\\) and \\(\\mathbf{v}_{tor.} = [v_{tor.,x}, v_{tor.,y}, v_{tor.,z}]^{\\rm T}\\) have been added to the virtual point response vector \\(\\mathbf{v}\\). Note that a similar procedure can be applied to the excitation IDM matrix.\nHigher order flexible IDMs can be defined with similar ease, requiring only the product of measurement DoF coordinates. 
When including flexible IDMs, it is important to always ensure that the resultant matrix \\(\\mathbf{R}\\) is full-rank, i.e. to capture, say, 6 flexible modes, a minimum total of 12 independent measurement DoFs are required; 6 for the rigid IDMs, and 6 for the flexible IDMs.\nIn practice, it is necessary to ‘cherry-pick’ the correct set of IDM modes for the problem at hand, whilst avoiding any unnecessary IDMs in the interface representation.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Interface representations"
]
},
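The extended IDM row block above can be written out directly from the sensor position \((r_x, r_y, r_z)\). A minimal numpy sketch, assuming tri-axial sensors; the function name `idm_row_block` and the example positions are ours, and the column ordering follows the matrix in the text:

```python
import numpy as np

def idm_row_block(r):
    """3 x 12 extended IDM block [translation | rotation | extension | torsion]
    for a tri-axial sensor at position r = (rx, ry, rz) relative to the VP."""
    rx, ry, rz = r
    translation = np.eye(3)
    rotation = np.array([[0.0,  rz, -ry],
                         [-rz, 0.0,  rx],
                         [ ry, -rx, 0.0]])
    extension = np.diag([rx, ry, rz])
    torsion = np.array([[0.0,      ry * rz, -rz * ry],
                        [-rx * rz, 0.0,      rz * rx],
                        [ rx * ry, -ry * rx, 0.0]])
    return np.hstack([translation, rotation, extension, torsion])

# Stack the blocks for several sensors and inspect the rank of the result,
# since the text stresses that the assembled R must be full-rank
positions = [(0.1, 0.0, 0.0), (0.0, 0.1, 0.0), (-0.1, 0.0, 0.0),
             (0.0, -0.1, 0.0), (0.0, 0.0, 0.1), (0.1, 0.1, 0.1)]
R = np.vstack([idm_row_block(r) for r in positions])
rank = np.linalg.matrix_rank(R)
```

Checking `rank` against the number of retained IDM columns is a quick way to catch an under-instrumented interface before inverting the transformation.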
{
"objectID": "introductions/VAVPs/interfaces/interfaces.html#equivalent-multi-point-connection",
"href": "introductions/VAVPs/interfaces/interfaces.html#equivalent-multi-point-connection",
"title": "Interface representations",
"section": "Equivalent multi-point connection",
"text": "Equivalent multi-point connection\nOften when conducting a diagnostic test, such as in-situ TPA, interest does not lie with the contribution of the translational and rotational DoFs themselves, but rather with the contribution of the different connection points as a whole. In this case, there is no need to explicitly calculate the rotational DoFs, though their contribution must still be captured. This leads to the so-called equivalent multi-point (EMP) connection.\nThe equivalent multi-point connection is the most straightforward approach to capture rotational DoFs. The interface is instrumented/excited by 6 or more sensors/forces which, like the FD approximation, are attached/applied in spaced pairs. Each pair is orientated to point in a particular direction (\\(x\\), \\(y\\) or \\(z\\)). By measuring 6+ translational DoFs, all 6 DoFs (translations and rotations) are implicitly captured. Translational DoFs are encoded by the pair-wise average, whilst the rotational DoFs are encoded by the pair-wise difference, much like the FD approximation. Importantly however, the averaged translational and rotational components are not calculated explicitly; their influence is encoded within the 6+ individual translations. Hence we treat the connection as 6+ equivalent translational DoFs. Take for example the interface configuration in Figure \\(\\ref{FDexample2}\\) (right). Seven translational sensors are positioned around the interface. These are accompanied by 7 translational force excitations. According to the equivalent multi-point connection the mobility matrix \\(\\mathbf{Y}_{cc}\\) for this contact point would be of dimensions \\(7\\times 7\\). Rotational information about the \\(x\\), \\(y\\) and \\(z\\) axes is encoded by the sensors spaced about their respective axes.\nMore generally, the sensors/applied forces do not have to be included as spaced pairs. 
So long as no two sensors/forces are co-located, and those that are included sufficiently span \\(x\\), \\(y\\) and \\(z\\), all translational and rotational interface motions will be captured.\nThe main advantages of the equivalent multi-point connection approach are its straightforward implementation and ability to capture rotational DoFs. Furthermore, in the case that more than 6 measurement points are used, it has the ability to capture flexible interface dynamics. Its main disadvantage is that the true translational and rotational motions are not explicitly calculated. This can cause some issues when trying to combine data sets, for example measured blocked forces with transfer functions obtained from numerical models or other experimental set-ups.\nThough straightforward to implement, it is important to note that if more than 6 translations are used in the EMP representation, some regularisation may be required to avoid a rank deficient mobility matrix; if there are more sensors/forces present than DoFs to capture (i.e. more parameters than can be justified by the data) and the connection point indeed behaves rigidly, the inversion of \\(\\mathbf{Y}_{cc}\\) can become ill-conditioned and large errors can be returned. A simple solution is to perform a Singular Value Decomposition to invert the matrix, and retain only the first 6 singular values.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Interface representations"
]
},
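The truncated-SVD inversion suggested at the end of this section can be sketched as follows. This is a minimal numpy example: the 7×7 EMP mobility is synthetic, constructed to be effectively rank 6 (as for a rigidly behaving connection point), and the function name `truncated_pinv` is ours.

```python
import numpy as np

def truncated_pinv(Y, rank=6):
    """Pseudo-inverse of Y retaining only the first `rank` singular values."""
    U, s, Vh = np.linalg.svd(Y)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]        # discard the weakest singular values
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T

rng = np.random.default_rng(2)
# Synthetic 7x7 EMP mobility of a rigid connection: only 6 independent DoFs
A = rng.standard_normal((7, 6))
Y_cc = A @ rng.standard_normal((6, 6)) @ A.T   # rank 6, so 7x7 inverse is ill-posed

Z_cc = truncated_pinv(Y_cc, rank=6)            # regularised inversion
```

Because only 6 singular values are retained, the near-zero seventh value (pure measurement noise for a rigid connection) never gets amplified by the inversion.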
{
"objectID": "introductions/VAVPs/interfaces/interfaces.html#a-short-comment-on-interface-completeness",
"href": "introductions/VAVPs/interfaces/interfaces.html#a-short-comment-on-interface-completeness",
"title": "Interface representations",
"section": "A short comment on interface completeness",
"text": "A short comment on interface completeness\nThe importance of accurately representing all the important interactions taking place at the interface has been emphasised in the above. For components with discrete point-like interfaces, these interactions are represented by the translational and rotational DoFs at each coupling point, all of which have the potential to contribute towards the dynamics of the assembled VAVP. At mid-to-high frequencies, where interface deformations become significant, higher order ‘flexible’ DoFs are required to provide an adequate description. An interface representation that is missing important DoFs can be said to be incomplete.\nThe incomplete representation of an interface is perhaps one of the main sources of error when building a VAVP. Not only does it affect the substructuring of components (if DoFs are not included in the interface model, then they will not be coupled by the substructuring) but it can have a severe impact on the active characterisation of vibration sources (by neglecting DoFs, large bias errors can be introduced into the characterisation). It is therefore advantageous to develop tests for completeness, i.e. methods to identify whether the chosen interface representation has enough DoFs (and also that these DoFs are the ‘right ones’). This topic is closely related to that of Uncertainty, and so it will be addressed in more detail there.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Interface representations"
]
},
{
"objectID": "introductions/VAVPs/interfaces/interfaces.html#what-next",
"href": "introductions/VAVPs/interfaces/interfaces.html#what-next",
"title": "Interface representations",
"section": "What next",
"text": "What next\nNow that we have introduced the concept of an interface representation, and how to populate it with data from measurement, we can continue with our coverage of component characterisation. Having dealt with the passive characterisation of source and receiver-like components, next on our to-do list is Vibration isolators, i.e. the squidgy bits of rubber that we use to reduce vibration transmission from active vibration sources (which we will cover shortly afterward in Blocked forces) to receiver structures.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Interface representations"
]
},
{
"objectID": "introductions/VAVPs/uncertainty/uncertainty.html",
"href": "introductions/VAVPs/uncertainty/uncertainty.html",
"title": "Uncertainty",
"section": "",
"text": "Over the previous pages we have covered pretty much everything we need to build a working VAVP. Of course some more advanced/specific topics have been omitted for brevity (for example, we haven’t really discussed the particulars of how we go about solving the inverse problem for the blocked forces, including the use of regularisation). These topics are covered in much greater detail in the book Experimental Vibro-acoustics.\nThe last thing to consider is an evaluation of the uncertainties of our VAVP. Though we have left our treatment of uncertainty to last, this is not to say that uncertainty should be treated as an afterthought! The importance of appropriate uncertainty analysis cannot be overstated; it is fair to argue that the output of any model is meaningless, unless it is accompanied by a measure of its reliability. And so on this page, we introduce the main tools required to furnish our VAVP output with a measure of confidence, specifically in the form of a response prediction confidence bound.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Uncertainty"
]
},
{
"objectID": "introductions/VAVPs/uncertainty/uncertainty.html#categorisation-of-uncertainty-in-vavp",
"href": "introductions/VAVPs/uncertainty/uncertainty.html#categorisation-of-uncertainty-in-vavp",
"title": "Uncertainty",
"section": "Categorisation of uncertainty in VAVP",
"text": "Categorisation of uncertainty in VAVP\nIn order to minimise the unwanted effects of uncertainty, it is necessary to understand their underlying cause. In the development of a VAVP (or the application of blocked force and/or sub-structuring methods) the uncertainties encountered may be categorised as originating from the ‘Model’, the ‘Source’, or the ‘Experiment’.\n\nModel uncertainty\nModel uncertainties arise when an approximate model is used to describe the physical problem. Obvious examples include the assumption of linearity and time invariance. Other important examples include the assumptions that:\n\nthe interface of a structure can be treated as a series of point-like DoFs,\nthat an interface description includes a sufficient number of DoFs,\nthe passive properties of an assembly do not change whilst under operation, among others.\n\nStrictly speaking, this form of ‘model uncertainty’ introduces a systematic/bias error as opposed to random uncertainty. In what follows we will introduce what we believe is the main source of model uncertainty, interface completeness.\n\n\n\nTable 1: Types of uncertainties encountered in VAVPs.\n\n\n\n\n\n\n\n\n\n\nType\nDescription\nCategory\n\n\n\n\n1) Model\nUncertainty resulting from an approximate model being used to describe the physical problem.\nSystematic\n\n\n2) Source\nUncertainty inherent to the operational behaviour of the source.\nRandom\n\n\n3) Experimental:\nUncertainty due to experimentation.\n\n\n\n3a) Measurement\nDue to the cumulative effect of noise sources in the measurement signal path and computational post-processing.\nRandom\n\n\n3b) Operator\nDue to human error in the measurement procedure.\nRandom\n\n\n\n\n\n\n\n\nSource uncertainty\nSource uncertainty arises due to the inherent variation in the activity of any vibration source. This variation may be considered the temporal variation (i.e. 
fluctuating activity) exhibited by a single vibration source, or the variation in activity between a series of nominally identical vibration sources. Source uncertainty is typically random in nature.\n\n\nExperimental uncertainty\nExperimental uncertainty can be further subdivided into ‘Measurement’ and ‘Operator’ uncertainty. Measurement uncertainty describes the cumulative effect of noise sources within the measurement signal path and beyond. Typical examples include external disturbances, thermo-electrical noise, sampling error, finite precision, etc. Measurement uncertainty is typically random in nature. Operator uncertainty describes the effect of human error in the measurement procedure. An important example is the inconsistent location and/or orientation of applied forces during the measurement of FRFs. Operator uncertainty is typically random in nature, although bias errors can be introduced if excitations are not applied at the desired location. The presence of a bias error can often be absorbed by, or reinterpreted as, model uncertainty, e.g. a bias error in excitation position is often equivalent to redefining the position of a model DoF.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Uncertainty"
]
},
{
"objectID": "introductions/VAVPs/uncertainty/uncertainty.html#model-uncertainty-interface-completeness",
"href": "introductions/VAVPs/uncertainty/uncertainty.html#model-uncertainty-interface-completeness",
"title": "Uncertainty",
"section": "Model uncertainty (interface completeness)",
"text": "Model uncertainty (interface completeness)\nAs described above, model uncertainty emanates from the use of an approximate model to describe a physical problem. Its effect is to introduce a systematic error. Whilst its underlying cause may vary from application to application, there exist two notable sources worth discussing here. These are related to the notions of completeness and consistency, the former of which we consider here.\n\n\n\n\n\n\nNote\n\n\n\nCompleteness describes the degree to which the interface between two components has been correctly represented, for example, whether enough DoFs have been included. The completeness of an interface model is essential if independent blocked forces are to be obtained. The completeness of an interface description is also an essential requirement in the coupling and decoupling of substructures.\n\n\nInterface completeness is a general issue encountered whenever measurements are performed at the interface of one or more sub-structures. This includes not only the characterisation of blocked forces, but also the coupling and decoupling of substructures. To demonstrate the importance of completeness we will consider it from the perspective of a blocked force characterisation, though similar notions apply in other situations.\n\n\n\n\n\n\n\n\nUnconstrained assembly.\n\n\n\n\n\n\n\nPartially constrained assembly\n\n\n\n\n\n\nFigure 1: Unconstrained and partially constrained assemblies for use in deriving interface completeness criteria.\n\n\n\nShown in Figure 1 (left) is a vibration source \\(S\\) coupled via two sets of DoFs \\(c_i\\) and \\(c_j\\), to a receiver structure \\(R\\). Completeness may be described as follows: suppose all of the interface DoFs (\\(c_i\\) and \\(c_j\\)) are constrained such that their responses (displacement, velocity and acceleration) are zero, and an arbitrary internal or external force is applied to the source. 
If the interface has been described completely (i.e. all DoFs are constrained), then the resultant receiver response will be zero everywhere, since no vibration will be able to cross the blocked interface. We call this a complete interface representation (or interface model). Had only a portion of the interface been constrained, say the \\(c_i\\) DoFs, then some vibration will find a way through the interface and into the receiver; its response will no longer be zero. We call this an incomplete interface representation. In what follows we will show that an incomplete interface representation is generally bad news… We will then introduce the Interface Completeness Criterion as a means of quantifying the degree of incompleteness as a tool to minimise it.\nConsider first a complete interface representation, where all interface DoFs are contained within the set \\(c\\). We have seen in Blocked forces that the blocked force can be obtained using the in situ relation, \\[\n\\mathbf{{\\bar{f}}}_{{c}} = \\mathbf{Y}_{C{bc}}^{+}\\mathbf{{v}}_{{b}}\n\\] where \\(C\\) denotes the coupled assembly, and \\(b\\) and \\(c\\) are sets of remote (indicator) and interface DoFs, respectively. Once obtained, the blocked force can be used to identically reproduce the response in some new assembly \\(X\\) at the target DoFs \\(r\\) as, \\[\n\\mathbf{{{v}}}_{{r}} = \\mathbf{Y}_{X{rc}} \\mathbf{{\\bar{f}}}_{{c}}.\n\\] Let us now consider what happens if the interface model is incomplete. We do this by treating the interface \\(c\\) as two sets of DoFs: those that are known and measurable, \\(c_i\\); and those that are unknown, or known but unmeasurable, \\(c_j\\). 
The in situ relation (in its forward form) can then be rewritten as the sum of their respective contributions, \\[\n\\mathbf{v}_{{b}} = \\mathbf{Y}_{C{bc_i}}\\mathbf{\\bar{f}}_{{c_i}} +\\mathbf{Y}_{C{bc_j}}\\mathbf{\\bar{f}}_{{c_j}}.\n\\qquad(1)\\] As discussed in Blocked forces, the in situ characterisation of the blocked force requires the operational response to be pre-multiplied by the inverse of a measured FRF matrix. Recall that the \\(c_j\\) DoFs are inaccessible for measurement, or perhaps unknown, and only the \\(\\mathbf{Y}_{C{bc_i}}\\) FRF is available. The blocked force is then given by, \\[\n\\mathbf{Y}_{C{bc_i}}^{-1}\\mathbf{v}_{{b}} = \\mathbf{\\tilde{\\bar{f}}}_{{c_i}} = \\mathbf{\\bar{f}}_{{c_i}} +\\mathbf{Y}_{C{bc_i}}^{-1}\\mathbf{Y}_{C{bc_j}}\\mathbf{\\bar{f}}_{{c_j}}\n\\qquad(2)\\] where \\(\\mathbf{\\tilde{\\bar{f}}}_{{c_i}}\\) represents the acquired blocked force and \\(\\mathbf{\\bar{f}}_{{c_i}}\\) its true value. Note that the true blocked force is accompanied by an additional term, \\(\\mathbf{Y}_{C{bc_i}}^{-1}\\mathbf{Y}_{C{bc_j}}\\mathbf{\\bar{f}}_{{c_j}}\\). This has arisen due to the neglected DoFs \\(c_j\\). This term constitutes a bias error on the true blocked force. Also notice that this additional term is dependent upon the dynamics of the coupled assembly. Consequently, the acquired blocked force is no longer an independent property of the source and is no longer transferable between assemblies! 
This is bad news for a VAVP… After all, one of the principal advantages of the blocked force over other source characterisation methods is that the acquired result may be transferred from one assembly, say a test bench, to another, say a virtual prototype.\nIf we consider the transfer of this incomplete blocked force into a secondary assembly denoted \\(X\\) (say a VAVP), we obtain a response prediction of the form, \\[\n\\mathbf{\\tilde{v}}_{X{r}} = \\mathbf{Y}_{X{rc_i}} \\mathbf{{\\bar{f}}}_{{c_i}} +\\mathbf{Y}_{X{rc_i}} \\mathbf{Y}_{C{bc_i}}^{-1}\\mathbf{Y}_{C{bc_j}}\\mathbf{\\bar{f}}_{{c_j}},\n\\] whilst the true response (using the complete blocked force) takes the form, \\[\n\\mathbf{v}_{X{r}} = \\mathbf{Y}_{X{rc_i}}\\mathbf{\\bar{f}}_{{c_i}} +\\mathbf{Y}_{X{rc_j}}\\mathbf{\\bar{f}}_{{c_j}}.\n\\]\nIt is clear from the above that the bias error we see on the blocked force carries through onto any response prediction. Obviously, we want to minimise the influence of this bias term, ideally removing it altogether. The first step in doing so is coming up with a meaningful quantification of the severity of this error.\n\nInterface Completeness Criterion (ICC)\nWhen characterising a component, it is of course beneficial to minimise the influence of bias errors due to incompleteness. Knowing this, it may be tempting to ‘play safe’ and ensure that the full 6 DoFs are included at every point. However, this may not be necessary, indeed if some DoFs do not contribute significantly then to include them would be undesirable a) because of the extra experimental effort involved and b) because the unnecessary measurements will add noise into the system without contributing to an accurate description of interface dynamics. 
For example, with resilient coupling it is often found that some DoFs are significantly softer than others and do not influence the interface dynamics to a great extent; rigid coupling at the interface on the other hand typically requires a greater number of DoFs to achieve a sufficient level of completeness. Furthermore, in the mid-high frequency range, the point-like DoFs might not fully capture the interface dynamics, and a more robust interface model is required (i.e. one including some flexible interface dynamics). Clearly, there is great potential benefit in being able to quantify the completeness of an interface model prior to embarking on a full measurement campaign.\nTo quantify completeness, we are interested in obtaining a measure of the bias error in Equation 2 relative to the true blocked force. However, this bias error, defined in terms of the unknown \\(c_j\\) DoFs, cannot be measured directly. We need an indirect method to quantify it.\nComparison of a predicted response (using an estimated blocked force) with the measured response should provide a measure of incompleteness. However, this is not a particularly robust approach for two reasons:\n\nThe operational forces within the source are not guaranteed to excite all of the interface DoFs, which is required for a true assessment of the interface completeness.\nAn operational blocked force might also be subject to inconsistency, i.e. discrepancies might not be due to incompleteness alone.\n\nA solution is to replace the operational blocked force with those obtained using artificial excitations, i.e. external forces applied to the housing of the source, usually by an instrumented force hammer. Artificial excitations can be applied in any number, and in such a way as to ensure that all interface DoFs are excited, overcoming issue 1. 
Also, with the source remaining inactive, inconsistencies between active and passive quantities (issue 2) are also avoided.\nSo, the response produced by an artificial excitation at some remote location \\(a\\) on the source can be reproduced identically by the corresponding (artificial) blocked force according to, \\[\n \\mathbf{v}_{{c_i}} = \\mathbf{Y}_{C{c_ic_i}}\\mathbf{\\bar{f}}_{{c_i}} = \\mathbf{Y}_{C{c_ia}}\\mathbf{{f}}_{a}.\n\\qquad(3)\\] From the above we obtain a transmissibility-based relation between the artificial excitation force and the resulting blocked force, \\[\n \\mathbf{\\bar{f}}_{{c_i}} = \\mathbf{Y}_{C{c_ic_i}}^{-1}\\mathbf{Y}_{C{c_ia}}\\mathbf{{f}}_{a}.\n\\] Using the blocked force, we are able to predict the response at the remote receiver DoFs \\(b\\), \\[\n \\mathbf{v}_{{b}} = \\overbrace{\\mathbf{Y}_{C{bc_i}}\\mathbf{Y}_{C{c_ic_i}}^{-1}\\mathbf{Y}_{C{c_ia}}}^{\\mathbf{Y}_{C{ba}}^{(c_i)}}\\mathbf{{f}}_{a}.\n\\qquad(4)\\] Noting that Equation 4 simply relates the external force at \\(a\\) to the response at \\(b\\), its FRF product must be the transfer FRF from \\(a\\) to \\(b\\), \\(\\mathbf{Y}_{C{ba}}^{(c_i)}\\). Here we use the superscript \\(\\square^{(c_i)}\\) to denote that the FRF reconstruction is done only through the interface DoFs \\(c_i\\).\nWith a complete interface (\\(c=c_i\\)), the reconstructed FRF \\(\\mathbf{Y}_{C{ba}}^{(c_i)} = \\mathbf{Y}_{C{bc_i}}\\mathbf{Y}_{C{c_ic_i}}^{-1}\\mathbf{Y}_{C{c_ia}}\\) is identically equal to the direct FRF \\(\\mathbf{Y}_{C{ba}}\\) (in practice small discrepancies due to experimental error will be present). However, with an incomplete interface, the contribution of the neglected \\(c_j\\) DoFs will not be accounted for by \\(\\mathbf{Y}_{C{ba}}^{(c_i)}\\), though will be present in \\(\\mathbf{Y}_{C{ba}}\\). As a result an approximation is introduced. The level of agreement between the reconstructed and direct FRFs is proportional to the bias error in Equation 2. 
Therefore, to quantify completeness it is sufficient to assess the similarity of the direct and reconstructed FRFs, \\(\\mathbf{Y}_{C{ba}}\\) and \\(\\mathbf{Y}_{C{ba}}^{(c_i)}\\). To this end, a correlation-based measure of similarity yields the so-called Interface Completeness Criterion (ICC), \\[\n\\begin{align}\n &\\mbox{ICC} = \\mbox{Corr}\\left(\\mbox{Vec}\\left({\\mathbf{Y}}_{{C}{ba}}^{{(c)}}\\right),\\mbox{Vec}\\left({\\mathbf{Y}}_{{C}{ba}}^{{(c_i)}}\\right)\\right) \\\\\n & \\mbox{Corr}\\left(\\mathbf{a},\\mathbf{b}\\right) = \\frac{ \\left|\\mathbf{a}^{\\rm H}\\mathbf{b} \\right|^2 }{ \\mathbf{a}^{\\rm H} \\mathbf{a} \\cdot \\mathbf{b}^{\\rm H}\\mathbf{b}}\n\\end{align}\n\\qquad(5)\\] where \\(\\mbox{Vec}\\left({\\square}\\right)\\) denotes matrix column-wise vectorisation and \\(\\square^{\\rm H}\\) represents a conjugate transpose. Note that to form the necessary vectors, multiple excitation and/or response DoFs (\\(b\\) and \\(a\\)) should be considered when measuring the necessary FRFs. Ideally, the excitations \\(a\\) should be sufficient in number, and varied in direction, such that they are able to control all interface motions in the frequency range of interest. This requires the number of \\(a\\) points to be equal to or greater than the number of interface DoFs. Similarly, the response positions \\(b\\) should be such that they are sufficient in number and orientation to observe all interface motions. This requires the number of \\(b\\) points to also be equal to or greater than the number of interface DoFs.\nIf the chosen interface representation is complete, that is, \\(|c_i| = |c|\\), the ICC will be equal to one. If the interface description is incomplete, \\(|c_i| < |c|\\) the ICC will be less than 1. In practice, it is important to note that the FRFs used in Equation 5 are obtained from measurement and are therefore subject to some degree of uncertainty. 
Consequently, even in the case of a complete interface representation, an ICC equal to 1 is unlikely to be achieved (though the authors have found it usually straightforward to distinguish noise-induced discrepancies, which tend to be localised in frequency, from those due to neglected DoFs, which tend to cover a wide frequency range).\nThe ICC provides a measure of interface completeness irrespective of the operating nature of the connected components. For example, suppose a vibration source generates forces exclusively in the vertical \\(z\\) DoFs. Whilst a decent blocked force characterisation might be obtained using the \\(z\\) DoFs alone, this would not constitute a complete interface representation according to the ICC if other DoFs are important for the coupling. The ICC considers completeness more generally, specifically with respect to the applied forces at \\(a\\) and observed responses at \\(b\\).\nThe ICC can also be used to assess the completeness of interface representations for structural coupling and decoupling problems. For decoupling, tests are required on the assembled structure (target component coupled to residual test bench), and so the ICC can be applied as standard. For coupling problems, one usually only measures the component in its free-interface configuration, from which the ICC cannot be computed. The component must first be installed onto some test bench such that the ICC can be computed and its interface representation assessed.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Uncertainty"
]
},
{
"objectID": "introductions/VAVPs/uncertainty/uncertainty.html#random-uncertainty",
"href": "introductions/VAVPs/uncertainty/uncertainty.html#random-uncertainty",
"title": "Uncertainty",
"section": "Random uncertainty",
"text": "Random uncertainty\nIn the above we considered role of model uncertainty, specifically completeness and its introduction of a systematic error onto the blocked force. Here, we turn our attention towards the role of random uncertainty.\nRandom uncertainties, or random errors, are those which, over a series of repeated measurements, randomly fluctuate about the true value of the measurand. Whilst all measurements are subject to some degree of random uncertainty, it is often not the measurements themselves that we are interested in. Rather, these measurements are used to infer or derive some other quantity. For example, we measure operational responses and FRFs, not for their own sake, but to calculate a blocked force or to perform substructuring. The question is then, how do the random errors in our measurements affect the estimates of derived quantities? That is, how does the uncertainty propagate through our VAVP equations? Answering this question will be the focus of the remainder of this page.\n\nQuick stats. recap\n\nMean\nThe mean is a measure of central tendency. Given a discrete set of values, for example the repeated measurement of a quantity \\(x\\), it is calculated as, \\[\n\\mathbb{E}[{x}]= \\frac{1}{N} \\sum_n^N {x_n} = \\mu_x\n\\] where \\(x_n\\) is the \\(n\\)th sample of the quantity \\(x\\), and \\(N\\) is the total number of measured samples. For vector or matrix quantities we can similarly calculate the mean value as, \\[\n\\mathbb{E}[\\mathbf{x}] = \\frac{1}{N} \\sum_n^N \\mathbf{x}_n = \\mathbf{\\mu_x}\n\\] where \\(\\mathbf{x}_n\\) is the \\(n\\)th sample of the vector (or matrix) quantity \\(\\mathbf{x}\\).\nIf the mean value is calculated from a number of repeated measurements, then it is called the sample mean. This is not to be confused with the true mean. In an experimental context, the true mean corresponds to the value of the measureand, in the absence of any random/and or bias errors. 
According to the Law of Large Numbers, in the absence of any systematic bias errors the sample mean obtained from a large number of repeated experiments should be close to the true mean, and tend towards it as the number of experiments increases.\nThe true mean, or the population mean, of a quantity is equal to the calculated mean only when every member of the ‘population’ is included. In an experimental setting the population of experiments is infinite, and so the true mean can only be approximated using the sample mean obtained from a sufficient number of repeated measurements. It is important to note that if an insufficient number of measurements are taken, the sample mean may differ considerably from the true mean.\n\n\nVariance and covariance\nVariance is a measure of how far a set of values are spread out from the mean. Given a discrete set of values, for example the repeated measurement of a quantity \\(x\\), the variance is calculated as, \\[\n\\mbox{Var}(x) = \\frac{1}{N-1}\\sum_{n=1}^N (x_n-{\\mu_x})^2=\\sigma^2_x.\n\\qquad(6)\\] where \\(\\sigma_x\\) is termed the standard deviation. The variance and standard deviation are strictly positive quantities, which get larger as the data becomes more widely spread.\nStrictly speaking, Equation 6 is called the sample variance, as opposed to the population variance, because it is based on the sample mean, which is not necessarily equal to the population mean. The use of the sample mean in place of the population mean can introduce a bias error on the estimate of \\(\\sigma_x^2\\). The use of \\(N-1\\) as opposed to \\(N\\) ensures that the sample variance gives an unbiased estimate of the population variance. Clearly, this only has a noticeable effect when \\(N\\) is quite small.\nVariance describes the variability of a measured quantity, say \\(x\\). Covariance describes the joint variability between two measured quantities, say \\(x\\) and \\(y\\). 
The sample covariance is calculated as, \\[\n\\mbox{Cov}(x,y) = \\frac{1}{N-1}\\sum_{n=1}^N (x_n-\\mu_x)(y_n-\\mu_y) =\\sigma_{xy}^2.\n\\qquad(7)\\] Unlike the variance, covariance can be positive or negative depending on how \\(x\\) and \\(y\\) vary relative to one another. For example, suppose we have a set of measurements for \\(x\\) and \\(y\\). If the larger values of \\(x\\) correspond mostly with larger values of \\(y\\), and the same holds for small values - \\(x\\) and \\(y\\) have similar behaviour - then their covariance is positive. Conversely, if the larger values of \\(x\\) correspond mostly with smaller values of \\(y\\), and vice versa - \\(x\\) and \\(y\\) have opposite behaviour - then their covariance is negative. The sign of the covariance shows the linear tendency between \\(x\\) and \\(y\\).\nThe notion of covariance is particularly important when dealing with the uncertainty of FRF matrices, the elements of which are often highly correlated with each other.\nFor vector quantities, the notion of covariance can be extended to a covariance matrix. 
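These sample statistics can be checked directly in numpy (the sample values below are illustrative); note the use of ddof=1 to select the N-1 normalisation of Equation 6 and Equation 7,

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(500)
y = 0.8 * x + 0.2 * rng.standard_normal(500)   # y mostly follows x

# Sample variance (Equation 6) and sample covariance (Equation 7).
N = len(x)
var_x = np.sum((x - x.mean()) ** 2) / (N - 1)
cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / (N - 1)

assert np.isclose(var_x, np.var(x, ddof=1))    # ddof=1 gives the N-1 form
assert np.isclose(cov_xy, np.cov(x, y)[0, 1])  # off-diagonal of the 2x2 matrix
assert cov_xy > 0                              # similar behaviour: positive covariance
```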
Given a set of measurements for the vector quantity \\(\\mathbf{x}\\), its sample covariance matrix is calculated as, \\[\n\\boldsymbol{\\Sigma}_{\\mathbf{x}} = \\frac{1}{N-1}\\left[\\left(\\mathbf{\\hat{x}} - \\mathbf{\\mu_x}\\mathbf{1}^{\\rm T}\\right)\\left(\\mathbf{\\hat{x}} - \\mathbf{\\mu_x}\\mathbf{1}^{\\rm T}\\right)^{\\rm T}\\right]\n\\qquad(8)\\] where, to avoid summation notation, the repeated \\(N\\) measurements of the vector \\(\\mathbf{x}\\) are arranged into the columns of a measurement matrix \\(\\mathbf{\\hat{x}}\\), \\[\n \\mathbf{\\hat{x}} = \\left[\\begin{array}{cccc}\n \\mathbf{x}^{(1)} & \\mathbf{x}^{(2)} & \\cdots & \\mathbf{x}^{(N)}\n \\end{array}\\right]\n\\] from which the mean value can be subtracted by column stacking the mean vector \\(\\mathbf{\\mu_x}\\) using, \\[\n \\mathbf{\\mu_x}\\mathbf{1}^{\\rm T} = \\left[\\begin{array}{ccc}\n \\mathbf{\\mu_x} & \\cdots & \\mathbf{\\mu_x}\n \\end{array}\\right]\n\\] where \\(\\mathbf{1}\\) is simply an \\(N\\) dimensional vector of 1s.\nNote that the diagonal entries of \\(\\boldsymbol{\\Sigma}_{\\mathbf{x}}\\) correspond to the variances of each element of \\(\\mathbf{x}\\), whilst the off-diagonal entries (covariances) satisfy \\(\\left(\\boldsymbol{\\Sigma}_{\\mathbf{x}}\\right)_{ij} = \\left(\\boldsymbol{\\Sigma}_{\\mathbf{x}}\\right)_{ji}\\), i.e. 
the covariance matrix is symmetric.\nWe can similarly calculate a covariance matrix between two vector quantities \\(\\mathbf{x}\\) and \\(\\mathbf{y}\\), \\[\n\\boldsymbol{\\Sigma}_{\\mathbf{xy}} = \\frac{1}{N-1}\\left[\\left(\\mathbf{\\hat{x}} - \\mathbf{\\mu_x}\\mathbf{1}^{\\rm T}\\right)\\left(\\mathbf{\\hat{y}} - \\mathbf{\\mu_y}\\mathbf{1}^{\\rm T}\\right)^{T}\\right].\n\\qquad(9)\\] Note that the diagonal elements of \\(\\boldsymbol{\\Sigma}_{\\mathbf{xy}}\\) are not variances in this case, but covariances.\nFor matrix quantities \\(\\mathbf{X}\\) and \\(\\mathbf{Y}\\) we can calculate a covariance matrix \\(\\boldsymbol{\\Sigma}_{\\mathbf{XY}}\\) in the same way as for the vectors \\(\\mathbf{x}\\) and \\(\\mathbf{y}\\). To do so we have to restructure each matrix into a (tall) vector. This is done by taking each column of the matrix \\(\\mathbf{X}\\in\\mathbb{R}^{N\\times M}\\) (starting with the left-most column) and stacking them vertically on top of one another such that \\(\\mbox{Vec}(\\mathbf{X}) = \\mathbf{\\vec{X}}\\in\\mathbb{R}^{NM}\\). We call \\(\\mbox{Vec}(\\quad)\\) the vectorisation operator. 
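In numpy, this column-wise vectorisation is simply a column-major (Fortran-order) reshape. A minimal sketch with illustrative values, together with the matrix covariance that it enables:

```python
import numpy as np

def vec(X):
    # Column-wise vectorisation: stack the columns of X into one tall vector.
    return X.reshape(-1, order="F")            # "F" = column-major order

assert list(vec(np.array([[1, 4], [2, 5], [3, 6]]))) == [1, 2, 3, 4, 5, 6]

# Covariance matrix of a matrix quantity: vectorise each of the N repeated
# measurements and stack them as columns of a measurement matrix.
rng = np.random.default_rng(3)
N = 200
X0 = rng.standard_normal((3, 2))                            # nominal matrix
Xhat = np.column_stack([vec(X0 + 0.1 * rng.standard_normal((3, 2)))
                        for _ in range(N)])                 # 6 x N
mu = Xhat.mean(axis=1, keepdims=True)
Sigma_X = (Xhat - mu) @ (Xhat - mu).T / (N - 1)
assert np.allclose(Sigma_X, np.cov(Xhat))   # np.cov: rows=variables, cols=samples
```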
Here is an example, \\[\n\\mbox{Vec}\\left(\\left[\\begin{array}{c c c}\nA & B & C \\\\ D & E & F \\\\ H & I & J\\end{array}\\right]\\right) = \\left(\\begin{array}{c } A \\\\ D \\\\H \\\\B \\\\E\\\\I\\\\C\\\\F\\\\J \\end{array}\\right).\n\\] Utilising the vectorisation operator, the covariance matrix of \\(\\mathbf{X}\\) from \\(N\\) measurements is given by, \\[\n\\boldsymbol{\\Sigma}_{\\mathbf{X}} = \\frac{1}{N-1}\\left[\\left(\\mathbf{\\hat{X}} - \\mathbf{\\mu}_{\\vec{\\mathbf{X}}}\\mathbf{1}^{\\rm T}\\right)\\left(\\mathbf{\\hat{X}} - {\\mathbf{\\mu}}_{\\vec{\\mathbf{X}}}\\mathbf{1}^{\\rm T}\\right)^{\\rm T}\\right]\n\\qquad(10)\\] where the measurement matrix \\(\\mathbf{\\hat{X}}\\) is built from the vectorisation of each measured matrix, \\[\n \\mathbf{\\hat{X}} = \\left[\\begin{array}{cccc}\n \\vec{\\mathbf{X}}^{(1)} & \\vec{\\mathbf{X}}^{(2)} &\\cdots &\\vec{\\mathbf{X}}^{(N)}\n \\end{array}\\right]\n\\qquad(11)\\] and \\(\\mathbf{\\mu}_{\\vec{\\mathbf{X}}}\\) is the vectorisation of the matrix mean.\nFor the covariance between two matrices, say \\(\\mathbf{X}\\) and \\(\\mathbf{Y}\\), we have similarly, \\[\n\\boldsymbol{\\Sigma}_{\\mathbf{XY}} = \\frac{1}{N-1}\\left[\\left(\\mathbf{\\hat{X}} - \\mathbf{\\mu}_{\\vec{\\mathbf{X}}}\\mathbf{1}^{\\rm T}\\right)\\left(\\mathbf{\\hat{Y}} - {\\mathbf{\\mu}}_{\\vec{\\mathbf{Y}}}\\mathbf{1}^{\\rm T}\\right)^{\\rm T}\\right]\n\\qquad(12)\\]\nWe will use the above formulae for vector and matrix covariance matrices later to characterise the uncertainty in measured operational velocities and FRFs.\n\n\n\n\n\n\nNote\n\n\n\nRecall that all of our VAVP equations are set in the frequency domain. Consequently, all of our variables are complex quantities; they have magnitude and phase (or a real and imaginary part)! We will have to adapt the above definitions of variance and covariance to deal with this. 
It's an easy fix, but we will save it for later.\nIn addition to the above, there are some particulars related to obtaining robust covariance matrices which are beyond the scope of this introduction.\n\n\n\n\n\nLaw of propagation of uncertainty (LPU)\nPropagation of uncertainty (or error) is the procedure by which the uncertainties in one or more input variables (such as a set of measured quantities) are used to predict the uncertainty on the output of a function that depends on them. For example, in the context of a blocked force characterisation, we may wish to know the uncertainty in our estimate of the blocked force given the uncertainty in a measured FRF matrix.\nHere we will limit our discussion to probabilistic methods, in particular a local expansion-based method known as the Law of Propagation of Uncertainty (LPU). This method is well suited to relatively small levels of uncertainty and to functions that are not strongly non-linear. In the presence of large uncertainty or very non-linear functions we resort to sampling-based methods using Monte Carlo simulations.\nThe LPU is based on a first-order (linear) Taylor series expansion of the function through which uncertainty is propagated. In this sense, we can think of LPU as also meaning Linear Propagation of Uncertainty. As we will see shortly, after some manipulations an expression is obtained that relates the covariance matrix of the measured input variables to that of the output. These covariance matrices are related by a matrix of partial derivatives (termed a Jacobian matrix). An illustrative example of the LPU is shown in Figure 2 for a simple single-input single-output function. Below we derive the general LPU for multiple inputs and multiple outputs.\n\n\n\n\n\n\n\n\nLPU applied to linear function.\n\n\n\n\n\n\n\nLPU applied to non-linear function.\n\n\n\n\n\n\nFigure 2: Illustration of LPU propagation. The LPU approximates the function by its local derivative. 
It gives an exact propagation of uncertainty for linear problems (left), and an approximation for non-linear functions (right). For multi-input/output functions the same idea applies, just in a higher dimension (i.e. the function is approximated by a hyper-plane rather than a straight line).\n\n\n\nWe begin by considering \\(y_i\\) and \\(y_j\\), generally, as the outputs of some multi-variable function, \\[\ny_i = G_i(x_1,x_2,\\cdots,x_M) = G_i(\\mathbf{x})\n\\] \\[\ny_j = G_j(x_1,x_2,\\cdots,x_M)= G_j(\\mathbf{x}),\n\\] where \\(\\mathbf{x}=\\left(x_1,\\,x_2,\\,\\cdots,\\,x_M\\right)\\) is a vector of input variables.\nWe follow the same steps as before. Suppose that we repeatedly measure each element of the input variable vector \\(\\mathbf{x}\\). Due to experimental error the result is perturbed by \\(\\Delta\\mathbf{x}\\). This small change in the value of \\(\\mathbf{x}\\) will lead to a small change in the value of the output variables \\(y_i\\) and \\(y_j\\). Assuming that the input error \\(\\Delta\\mathbf{x}\\) remains small, the change in output \\(\\Delta y_i\\) and \\(\\Delta y_j\\) may be approximated by a first-order Taylor series expansion, \\[\n\\Delta y_{i} \\approx \\frac{\\partial G_i(\\mathbf{x})}{\\partial x_1}\\Delta x_{1} + \\frac{\\partial G_i(\\mathbf{x})}{\\partial x_2} \\Delta x_{2} + \\cdots+\\frac{\\partial G_i(\\mathbf{x})}{\\partial x_M}\\Delta x_{M}\n\\qquad(13)\\] \\[\n\\Delta y_{j} \\approx \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_1}\\Delta x_{1} + \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_2} \\Delta x_{2} + \\cdots+\\frac{\\partial G_j(\\mathbf{x})}{\\partial x_M}\\Delta x_{M}\n\\qquad(14)\\] where \\(\\frac{\\partial G_j(\\mathbf{x})}{\\partial x_M}\\) is the partial derivative of the propagation function \\(G_j(\\square)\\) with respect to the \\(M\\)th input variable. 
Using the subscript \\({\\square}_n\\) to denote the \\(n\\)th measurement, Equation 13 and Equation 14 may be written more conveniently using summation notation, \\[\n\\Delta y_{i_n} \\approx \\sum_m^M \\frac{\\partial G_i(\\mathbf{x})}{\\partial x_m}\\Delta x_{m_n}\n\\qquad(15)\\] \\[\n\\Delta y_{j_n} \\approx \\sum_m^M \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_m}\\Delta x_{m_n}.\n\\qquad(16)\\]\nNow that we have two output variables, \\(y_i\\) and \\(y_j\\), we are also interested in determining the covariance between them. To do so, we begin by multiplying together Equation 15 and Equation 16, \\[\n\\Delta y_{i_n}\\Delta y_{j_n} \\approx \\left(\\sum_m^M \\frac{\\partial G_i(\\mathbf{x})}{\\partial x_m}\\Delta x_{m_n}\\right)\\left( \\sum_m^M \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_m}\\Delta x_{m_n}\\right).\n\\] Summing over \\(N\\) measurements and dividing both sides by \\(N-1\\) yields, \\[\n\\frac{1}{N-1}\\sum_n^N\\Delta y_{i_n}\\Delta y_{j_n} \\approx \\frac{1}{N-1}\\sum_n^N \\left(\\sum_m^M \\frac{\\partial G_i(\\mathbf{x})}{\\partial x_m}\\Delta x_{m_n}\\right)\\left( \\sum_m^M \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_m}\\Delta x_{m_n}\\right).\n\\] By expanding the brackets, \\[\n\\begin{align}\n\\frac{1}{N-1}\\sum_n^N\\Delta y_{i_n}\\Delta y_{j_n} \\approx \\frac{1}{N-1}\\sum_n^N \\Bigg[ \\frac{\\partial G_i(\\mathbf{x})}{\\partial x_1}\\Delta x_{1_n}\\left( \\sum_m^M \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_m}\\Delta x_{m_n}\\right) + \\frac{\\partial G_i(\\mathbf{x})}{\\partial x_2}\\Delta x_{2_n} \\\\\\left( \\sum_m^M \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_m}\\Delta x_{m_n}\\right)+\\cdots+\n\\frac{\\partial G_i(\\mathbf{x})}{\\partial x_M}\\Delta x_{M_n}\\left( \\sum_m^M \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_m}\\Delta x_{m_n}\\right)\\Bigg]\n\\end{align}\n\\] and regrouping terms we get, \\[\n\\begin{align}\n\\frac{1}{N-1}\\sum_n^N\\Delta y_{i_n}\\Delta y_{j_n} \\approx \\sum_m^M \\frac{\\partial 
G_i(\\mathbf{x})}{\\partial x_m} \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_m} \\frac{1}{N-1}\\sum_n^N\\Delta x_{m_n}\\Delta x_{m_n} +\\\\ \\sum_{m\\neq l}\\frac{\\partial G_i(\\mathbf{x})}{\\partial x_l} \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_m}\\frac{1}{N-1}\\sum_n^N\\Delta x_{l_n}\\Delta x_{m_n}\n\\end{align}\n\\qquad(17)\\] Recalling the definitions of variance and covariance from Equation 6 and Equation 7, Equation 17 can be rewritten as, \\[\n\\sigma_{y_{i}y_j} \\approx \\sum_m^M \\frac{\\partial G_i(\\mathbf{x})}{\\partial x_m} \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_m}\\sigma_{x_m}^2 + \\sum_{m\\neq l}\\frac{\\partial G_i(\\mathbf{x})}{\\partial x_l} \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_m}\\sigma_{x_lx_m}^2\n\\qquad(18)\\] where \\(\\sigma_{y_{i}y_j}\\) is the covariance between the output variables \\(y_i\\) and \\(y_j\\), \\(\\sigma_{x_m}^2\\) is the variance of the input variable \\(x_m\\), and \\(\\sigma_{x_lx_m}^2\\) is the covariance between the input variables \\(x_l\\) and \\(x_m\\). For the case that \\(i=j\\) Equation 18 reduces to, \\[\n\\sigma_{y_{i}}^2 \\approx \\sum_m^M \\left(\\frac{\\partial G_i(\\mathbf{x})}{\\partial x_m} \\right)^2\\sigma_{x_m}^2 + \\sum_{m\\neq l}\\frac{\\partial G_i(\\mathbf{x})}{\\partial x_l} \\frac{\\partial G_i(\\mathbf{x})}{\\partial x_m}\\sigma_{x_lx_m}^2\n\\qquad(19)\\] where \\(\\sigma_{y_{i}}^2\\) is the variance of the output variable \\(y_i\\).\nThe first summation in Equation 18 and Equation 19 describes the propagation of input variance onto the output uncertainty. The second summation describes the influence of input covariance (i.e. correlations between the input variables) on the output uncertainty. Often when dealing with uncertainty propagation the simplifying assumption of uncorrelated input variables is made, that is \\(\\sigma^2_{x_lx_{m\\neq l}}=0\\). 
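For a linear function the first-order expansion is exact, which gives a convenient numerical check of Equation 19 (a minimal numpy sketch; the coefficients and sample sizes are illustrative, and the factor of two arises because the second summation counts both orderings of each input pair):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 300
x1 = rng.standard_normal(N)
x2 = 0.6 * x1 + 0.4 * rng.standard_normal(N)   # deliberately correlated inputs

S = np.cov(x1, x2)          # S[0,0], S[1,1]: variances; S[0,1]: covariance
y = 3.0 * x1 - 2.0 * x2     # linear G(x), so dG/dx1 = 3 and dG/dx2 = -2

# Equation 19: variance terms plus the covariance term (both orderings).
var_lpu = 3.0**2 * S[0, 0] + (-2.0)**2 * S[1, 1] + 2 * 3.0 * (-2.0) * S[0, 1]
assert np.isclose(var_lpu, np.var(y, ddof=1))   # exact, since G is linear

# Neglecting the covariance term would be wrong for these correlated inputs.
assert not np.isclose(3.0**2 * S[0, 0] + (-2.0)**2 * S[1, 1], np.var(y, ddof=1))
```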
The output variance then reduces to, \\[\n\\sigma_{y_{i}}^2 \\approx \\sum_m^M \\left(\\frac{\\partial G_i(\\mathbf{x})}{\\partial x_m} \\right)^2\\sigma_{x_m}^2.\n\\qquad(20)\\]\n\n\n\n\n\n\nNote\n\n\n\nUnfortunately, the experimental measurements associated with blocked forces and dynamic substructuring often yield highly correlated input variables. This is especially so for impact-based FRF measurements.\nThe impact-based measurement of an FRF matrix involves the operator exciting the component at each DoF of interest using an instrumented force hammer, whilst simultaneously measuring the response at all DoFs. Typically, to improve SNR and/or provide a measure of coherence, multiple impacts are applied at each DoF. Of course, there will always exist some variation in the position and orientation of these repeated impacts (nobody’s perfect!). Naturally, the FRFs for each impact will differ. With all responses measured simultaneously, they clearly share the same underlying operator uncertainty (i.e. varying impact position/orientation) and so they are likely to possess some correlation.\nFor this reason, our uncertainty propagation must be conducted using Equation 18. It has been shown that neglecting this inter-FRF uncertainty can lead to large overestimations of uncertainty in VAVP predictions.\n\n\n\nMatrix form of LPU\nWhen dealing with large numbers of input and output variables it is far more convenient to express Equation 18 in matrix form. 
This is done by using the equation, \\[\n\\sigma_{y_{i}y_j} \\approx \\mathbf{J}_{i} \\boldsymbol{\\Sigma}_{\\mathbf{x}}\\mathbf{J}_{j}^{\\rm T}\n\\] where, \\[\n\\mathbf{J}_i = \\left[\\begin{array}{c c c c}\\frac{\\partial G_i(\\mathbf{x})}{\\partial x_1} & \\frac{\\partial G_i(\\mathbf{x})}{\\partial x_2} & \\cdots & \\frac{\\partial G_i(\\mathbf{x})}{\\partial x_M} \\end{array}\\right]\\quad\n\\] and \\[\n\\mathbf{J}_j= \\left[\\begin{array}{c c c c}\\frac{\\partial G_j(\\mathbf{x})}{\\partial x_1} & \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_2} & \\cdots & \\frac{\\partial G_j(\\mathbf{x})}{\\partial x_M} \\end{array}\\right]\n\\] are the Jacobian matrices of the functions \\(y_i = G_i(\\mathbf{x})\\) and \\(y_j = G_j(\\mathbf{x})\\), respectively, and \\(\\boldsymbol{\\Sigma}_{\\mathbf{x}}\\) is the covariance matrix of the input vector \\(\\mathbf{x}\\), \\[\n\\boldsymbol{\\Sigma}_{\\mathbf{x}} = \\left[\\begin{array}{c c c c} \\sigma_{x_1}^2 & \\sigma_{x_{1}x_{2}}^2 & \\cdots & \\sigma_{x_{1}x_{M}}^2 \\\\\\sigma_{x_{2}x_{1}}^2 & \\sigma_{x_2}^2 & \\cdots & \\sigma_{x_{2}x_{M}}^2 \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ \\sigma_{x_{M}x_{1}}^2 & \\sigma_{x_{M}x_{2}}^2 & \\cdots & \\sigma_{x_M}^2\\end{array}\\right].\n\\qquad(21)\\]\nNow that we have a concise expression for \\(\\sigma_{y_{i}y_j}\\), we can go one step further by combining the Jacobian matrices for multiple output variables, \\[\n\\mathbf{J} = \\left[\\begin{array}{c c c c}\\frac{\\partial G_1(\\mathbf{x})}{\\partial x_1} & \\frac{\\partial G_1(\\mathbf{x})}{\\partial x_2} & \\cdots & \\frac{\\partial G_1(\\mathbf{x})}{\\partial x_M} \\\\[0.25cm]\n\\frac{\\partial G_2(\\mathbf{x})}{\\partial x_1} & \\frac{\\partial G_2(\\mathbf{x})}{\\partial x_2} & \\cdots & \\frac{\\partial G_2(\\mathbf{x})}{\\partial x_M} \\\\[0.25cm]\n\\vdots &\\vdots & \\ddots & \\vdots\\\\[0.25cm]\n\\frac{\\partial G_L(\\mathbf{x})}{\\partial x_1} & \\frac{\\partial G_L(\\mathbf{x})}{\\partial x_2} & \\cdots & 
\\frac{\\partial G_L(\\mathbf{x})}{\\partial x_M}\\end{array}\\right]\\quad.\n\\qquad(22)\\] The uncertainty propagation, \\[\n\\boldsymbol{\\Sigma}_{\\mathbf{y}} \\approx \\mathbf{J} \\boldsymbol{\\Sigma}_{\\mathbf{x}}\\mathbf{J}^{\\rm T}\n\\qquad(23)\\] then yields a covariance matrix for the output variable vector \\(\\mathbf{y} = \\left(y_1,\\,y_2,\\,\\cdots,\\,y_L\\right)\\), \\[\n\\boldsymbol{\\Sigma}_{\\mathbf{y}} = \\left[\\begin{array}{c c c c} \\sigma_{y_1}^2 & \\sigma_{y_{1}y_{2}}^2 & \\cdots & \\sigma_{y_{1}y_{L}}^2 \\\\\\sigma_{y_{2}y_{1}}^2 & \\sigma_{y_2}^2 & \\cdots & \\sigma_{y_{2}y_{L}}^2 \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\ \\sigma_{y_{L}y_{1}}^2 & \\sigma_{y_{L}y_{2}}^2 & \\cdots & \\sigma_{y_L}^2\\end{array}\\right].\n\\qquad(24)\\]\nEquation 21 - Equation 24 describe, generally, the linear propagation of uncertainty. It is important that the Jacobians are evaluated at the true value of \\(\\mathbf{x}\\), which we approximate by its mean value \\(\\mathbf{\\mu_x}\\). These equations will form the basis of the VAVP uncertainty propagation described below.\n\n\n\nLPU for VAVP calculations\n\nDealing with complex variables\nThe LPU, as detailed above, has been formulated to deal with real-valued functions of real input variables. However, the problems of interest, namely blocked force characterisation, dynamic substructuring and response predictions, are all formulated in the frequency domain. Consequently, all our variables are complex, having both a magnitude and phase. To apply the LPU to a complex-valued function of a complex input variable, some minor modifications must be made. There are two approaches to do this, differing based on how the complex values are split. One approach is to treat the complex value and its conjugate as independent variables. Alternatively, the real and imaginary parts of the complex number can be treated as separate real variables. 
The authors find this approach the most intuitive and so the uncertainty framework presented here is done in this way.\nConsider the complex variable \\(H=a+ib\\). Separating its real and imaginary parts, we can rewrite \\(H\\) instead as the \\(2\\times 1\\) column vector, \\[\nH \\rightarrow \\left(\\begin{array}{c}\n\\mathrm{Re}(H)\\\\\n\\mathrm{Im}(H)\\end{array}\\right).\n\\] Supposing \\(H\\) is a measured quantity, its mean value is given by, \\[\n\\mu_H= \\mu_{a+ib} = \\mu_a+i\\mu_b \\rightarrow \\left(\\begin{array}{c}\n\\mu_{\\mathrm{Re}(H)}\\\\\n\\mu_{\\mathrm{Im}(H)}\\end{array}\\right).\n\\] Having separated the real and imaginary parts of \\(H\\), we can no longer define a single variance term to describe its uncertainty. Rather, we use the covariance matrix, \\[\n{\\sigma}^2_H \\rightarrow \\boldsymbol{\\Sigma}_H = \\left[\\begin{array}{c c}\n\\sigma_{\\mathrm{Re}(H)}^2 & \\sigma_{\\mathrm{Re}(H)\\mathrm{Im}(H)}^2\\\\\n\\sigma_{\\mathrm{Im}(H)\\mathrm{Re}(H)}^2 & \\sigma_{\\mathrm{Im}(H)}^2\\end{array}\\right]\n\\] where \\(\\sigma_{\\mathrm{Re}(H)}^2\\) is the variance of the real part of \\(H\\), \\(\\sigma_{\\mathrm{Im}(H)}^2\\) is the variance of the imaginary part of \\(H\\), and \\(\\sigma_{\\mathrm{Re}(H)\\mathrm{Im}(H)}^2\\) is the covariance between the two. This bivariate covariance matrix describes the uncertainty in the complex variable \\(H\\). 
The same idea can be used to express the covariance between two different complex variables \\(H_1\\) and \\(H_2\\), \\[\n\\boldsymbol{\\Sigma}_{H_1H_2} = \\left[\\begin{array}{c c}\n\\sigma_{\\mathrm{Re}(H_1)\\mathrm{Re}{(H_2)}}^2 & \\sigma_{\\mathrm{Re}(H_1)\\mathrm{Im}{(H_2)}}^2 \\\\ \\sigma_{\\mathrm{Im}(H_1)\\mathrm{Re}{(H_2)}}^2 & \\sigma_{\\mathrm{Im}(H_1)\\mathrm{Im}{(H_2)}}^2\n\\end{array}\\right].\n\\] Hereafter, we will use the term ‘bivariate’ more generally to describe any covariance or Jacobian matrix that has had its complex values separated into real and imaginary parts.\nFurther extension to complex vector and matrix variables is straightforward. We can define the operator, \\[\nM_v(H) = \\left(\\begin{array}{c}\n\\mathrm{Re}(H)\\\\\n\\mathrm{Im}(H)\\end{array}\\right)\n\\] which separates and stacks vertically the real and imaginary parts of the complex variable \\(H\\). By applying \\(M_v(\\quad)\\) to each element of \\(\\mathbf{{x}}\\), \\(\\mu_\\mathbf{x}\\), \\(\\mathbf{{y}}\\), and \\(\\mu_\\mathbf{y}\\) Equation 8 and Equation 9 will compute bivariate forms of the covariance matrices \\(\\boldsymbol{\\Sigma}_{\\mathbf{x}}\\) and \\(\\boldsymbol{\\Sigma}_{\\mathbf{xy}}\\). For a complex matrix variable, the vectorisation operator \\(\\mbox{Vec}(\\quad)\\) is applied first, before \\(M_v(\\quad)\\) is applied element-wise. 
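A minimal numpy sketch of this real-imaginary stacking with illustrative data follows; for brevity the real parts of all elements are stacked above all imaginary parts, a simple permutation of the element-wise ordering used above that yields an equivalent covariance description.

```python
import numpy as np

def Mv(z):
    # Real-imaginary stacking of a complex vector: real parts first, then
    # imaginary parts (a permutation of the element-wise ordering in the text).
    return np.concatenate([z.real, z.imag])

rng = np.random.default_rng(5)
N = 400
z0 = np.array([1 + 2j, 3 - 1j])                 # illustrative complex quantity
Zhat = np.column_stack([
    Mv(z0 + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2)))
    for _ in range(N)
])                                              # 4 x N bivariate samples
Sigma = np.cov(Zhat)                            # 4 x 4 bivariate covariance

assert Sigma.shape == (4, 4)
assert np.allclose(Sigma, Sigma.T)              # symmetric, as required
```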
As an example, the bivariate form of Equation 21 is given by, \\[\n\\boldsymbol{\\Sigma}_{\\mathbf{x}} = \\left[\\begin{array}{c c c c }\n\\boldsymbol{\\Sigma}_{x_1x_1} & \\boldsymbol{\\Sigma}_{x_1x_2} & \\cdots & \\boldsymbol{\\Sigma}_{x_1x_M} \\\\\n\\boldsymbol{\\Sigma}_{x_2x_1} & \\boldsymbol{\\Sigma}_{x_2x_2} & \\cdots & \\boldsymbol{\\Sigma}_{x_2x_M} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\boldsymbol{\\Sigma}_{x_Mx_1} & \\boldsymbol{\\Sigma}_{x_Mx_2} & \\cdots & \\boldsymbol{\\Sigma}_{x_Mx_M} \\\\\n\\end{array}\\right]\n\\] where each univariate element has been replaced by a \\(2\\times 2\\) bivariate covariance matrix, \\[\n\\sigma_{x_ix_j}^2 \\rightarrow \\boldsymbol{\\Sigma}_{x_ix_j} = \\left[\\begin{array}{c c}\n\\sigma_{\\mathrm{Re}(x_i)\\mathrm{Re}(x_j)}^2 & \\sigma_{\\mathrm{Re}(x_i)\\mathrm{Im}(x_j)}^2\\\\\n\\sigma_{\\mathrm{Im}(x_i)\\mathrm{Re}(x_j)}^2 & \\sigma_{\\mathrm{Im}(x_i)\\mathrm{Im}(x_j)}^2\\end{array}\\right].\n\\]\nTo propagate the bivariate complex uncertainty it is necessary to redefine the Jacobian matrices accordingly. 
The bivariate form of the Jacobian is given by \\[\n\\mathbf{J} = \\left[\\begin{array}{c c c c}\\frac{\\partial \\mathbf{G}_1(\\mathbf{x})}{\\partial {x_1}} & \\frac{\\partial \\mathbf{G}_1(\\mathbf{x})}{\\partial {x_2}} & \\cdots & \\frac{\\partial \\mathbf{G}_1(\\mathbf{x})}{\\partial {x_M}} \\\\[0.25cm]\n\\frac{\\partial \\mathbf{G}_2(\\mathbf{x})}{\\partial{x_1}} & \\frac{\\partial \\mathbf{G}_2(\\mathbf{x})}{\\partial {x_2}} & \\cdots & \\frac{\\partial \\mathbf{G}_2(\\mathbf{x})}{\\partial {x_M}} \\\\[0.25cm]\n\\vdots &\\vdots & \\ddots & \\vdots\\\\[0.25cm]\n\\frac{\\partial \\mathbf{G}_L(\\mathbf{x})}{\\partial {x_1}} & \\frac{\\partial \\mathbf{G}_L(\\mathbf{x})}{\\partial {x_2}} & \\cdots & \\frac{\\partial \\mathbf{G}_L(\\mathbf{x})}{\\partial {x_M}}\\end{array}\\right]\n\\qquad(25)\\] where we have simply replaced each partial derivative in Equation 22, with a \\(2\\times2\\) matrix containing the partial derivatives of the real and imaginary parts of the output, with respect to the real and imaginary parts of the input, \\[\n\\frac{\\partial {G_i}(\\mathbf{x})}{\\partial {x_j}}\\rightarrow \\frac{\\partial \\mathbf{G}_i(\\mathbf{x})}{\\partial {x_j}} = \\left[\\begin{array}{c c }\\frac{\\partial \\mathrm{Re}(G_i(\\mathbf{x}))}{\\partial \\mathrm{Re}(x_j)} & \\frac{\\partial \\mathrm{Re}(G_i(\\mathbf{x}))}{\\partial \\mathrm{Im}(x_j)} \\\\[0.25cm]\n\\frac{\\partial \\mathrm{Im}(G_i(\\mathbf{x}))}{\\partial \\mathrm{Re}(x_j)} & \\frac{\\partial \\mathrm{Im}(G_i(\\mathbf{x}))}{\\partial \\mathrm{Im}(x_j)} \\end{array}\\right].\n\\qquad(26)\\]\nThe bivariate covariance and Jacobian matrices described above are used in place of their univariate counterparts in Equation 23 to propagate complex uncertainty. 
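For a complex-linear map the bivariate LPU is exact, which gives a compact check of Equation 26. In the sketch below (illustrative values), the function is y = h·x, whose bivariate Jacobian block contains the four real partial derivatives of Re(y) and Im(y) with respect to Re(x) and Im(x):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 1000
x = (2 + 1j) + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
h = 1.5 - 0.5j
y = h * x                      # complex-linear map: bivariate LPU is exact

biv = lambda z: np.vstack([z.real, z.imag])     # 2 x N real-imaginary samples
Sigma_x = np.cov(biv(x))
Sigma_y = np.cov(biv(y))

# Bivariate Jacobian block of y = h*x: Re y = Re h Re x - Im h Im x,
# Im y = Im h Re x + Re h Im x (cf. Equation 26).
J = np.array([[h.real, -h.imag],
              [h.imag,  h.real]])
assert np.allclose(J @ Sigma_x @ J.T, Sigma_y)  # Equation 23, exact here
```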
The resulting output uncertainty, characterised by the covariance matrix \\(\\boldsymbol{\\Sigma_\\mathbf{y}}\\), is also bivariate, with entries describing the variances and covariances of the real and imaginary parts of the outputs.\nThe theory underlying VAVP construction has been described in detail over the previous pages. In short, there are three equations that govern the majority of VAVP exercises. In the following sections we will briefly reintroduce these key equations, before summarising their LPU-based uncertainty propagation.\n\n\nInverse (blocked) force identification\nThe key equation for the inverse identification of blocked forces takes the form, \\[\n \\mathbf{\\bar{f}} = \\mathbf{Y}_C^+\\mathbf{v}\n\\qquad(27)\\] where \\(\\mathbf{Y}_C\\in\\mathbb{C}^{N_r\\times N_f}\\) is a complex \\(N_r\\times N_f\\) matrix of Frequency Response Functions (FRFs) that relate the measured operational responses \\(\\mathbf{v}\\in\\mathbb{C}^{N_r}\\) to the sought-after blocked forces \\(\\mathbf{\\bar{f}}\\in\\mathbb{C}^{N_f}\\) and \\(\\square^+\\) denotes a generalized matrix (pseudo-)inverse.\nGiven the above, the input and output (bivariate) covariance required by the LPU are given by, \\[\n % \\mu_{\\mathbf{x}} = \\left(\\begin{array}{c}\n % M(\\mu_{\\mathbf{\\vec{Y}}}) \\\\\n % M(\\mu_\\mathbf{{v}})\n % \\end{array} \\right)\n % \\qquad \n \\Sigma_\\mathbf{x} = \\left[\\begin{array}{ll} \\Sigma_\\mathbf{Y} & \\mathbf{0} \\\\\n \\mathbf{0}& \\Sigma_\\mathbf{v}\n \\end{array}\\right] \\qquad \\mbox{and}\\qquad\n % \\mu_{\\mathbf{y}} = M(\\mu_\\mathbf{\\bar{f}})\n \\Sigma_\\mathbf{y} = \\Sigma_\\mathbf{\\bar{f}}\n\\]\nThe Jacobian of the underlying function (Equation 27) is given by, \\[\n \\begin{align}\n \\mathbf{J} &= \\left[ \\begin{array}{c c}\n \\mathbf{J}_{\\mathbf{Y}} & \\mathbf{J}_\\mathbf{v}\n \\end{array}\\right]\\\\\n \\mathbf{J}_{\\mathbf{Y}} &=\\mathcal{M}\\left(-(\\mathbf{Y^+}\\mathbf{v})^{\\rm T}\\otimes\\mathbf{Y^+}, 
\\left[\\left(((\\mathbf{I}-\\mathbf{Y}\\mathbf{Y^+})\\mathbf{v})^{\\rm T}\\otimes\\mathbf{Y^+} \\mathbf{Y}^{\\rm +H}\\right)+ \\left((\\mathbf{Y}^{\\rm +H} \\mathbf{Y^{+}}\\mathbf{v})^{\\rm T}\\otimes(\\mathbf{I}-\\mathbf{Y^+}\\mathbf{Y}) \\right)\\right]\\mathbf{K}\\right) \\\\\n \\mathbf{J}_{\\mathbf{v}} &= \\mathcal{M} \\left(\\mathbf{Y}^+,\\mathbf{0}\\right)\n \\end{align}\n\\] where \\(\\otimes\\) denotes the Kronecker product, \\(\\mathbf{K}\\) is the so-called commutation matrix (the matrix relating the vectorisation of a matrix, to that of its transpose), and \\(\\mathcal{M}(A,B)\\) is an element-wise function defined as, \\[\n \\mathcal{M}(A,B) = \\left[\\begin{array}{c c}\n \\mathrm{Re}\\left(A+B\\right) & \\phantom{-}\\mathrm{Im}\\left(-A+B\\right) \\\\\n \\mathrm{Im}\\left(A+B\\right) & -\\mathrm{Re}\\left(-A+B\\right)\n \\end{array}\\right].\n\\] The function \\(\\mathcal{M}(A,B)\\) plays an essential role; it converts the complex derivatives to the real-imaginary form shown in Equation 26.\nNote that \\(\\mathbf{J}_\\mathbf{Y}\\) is defined above for the general case of an over- or underdetermined problem. In the case of a determined problem (i.e. 
same number of responses as forces to be identified) it reduces to the simpler form, \\[\n \\mathbf{J}_{\\mathbf{Y}} =\\mathcal{M}\\left(-(\\mathbf{Y}^{-1}\\mathbf{v})^{\\rm T}\\otimes\\mathbf{Y}^{-1}, \\mathbf{0}\\right)\n\\] Substituting \\(\\mathbf{J}\\) and \\(\\Sigma_\\mathbf{x}\\), as defined above, into Equation 23 provides a linear estimate of the blocked force covariance matrix \\(\\Sigma_\\mathbf{\\bar{f}}\\) based on the operational response and FRF uncertainty.\n\n\nSubstructuring\nThe dual formulation of substructuring is governed by the following equation, \\[\n \\mathbf{H}_C = \\mathbf{Y} - \\mathbf{Y}\\mathbf{B}^{\\rm T}\\left(\\mathbf{B} \\mathbf{Y} \\mathbf{B}^{\\rm T} + \\mathbf{Z}_J^{-1}\\right)^{-1} \\mathbf{B}\\mathbf{Y}\n\\qquad(28)\\] where \\(\\mathbf{H}_C\\in\\mathbb{C}^{N_C\\times N_C}\\) is the FRF matrix of the coupled assembly (note that we have used \\(\\mathbf{H}_C\\) rather than \\(\\mathbf{Y}_C\\) to avoid confusion with the FRF matrix used in Equation 27), \\(\\mathbf{Y}\\in\\mathbb{C}^{(N_S+N_R)\\times(N_S+N_R)}\\) is a block diagonal matrix containing the uncoupled FRF matrices of the source \\(S\\) and receiver \\(R\\) components, \\(\\mathbf{\\Gamma}=\\mathbf{Z}_J^{-1}\\in\\mathbb{C}^{N_c\\times N_c}\\) is a matrix of joint flexibilities that describes the dynamics of the connecting DoFs, and \\(\\mathbf{B}\\in\\mathbb{Z}^{N_c\\times(N_S+N_R)}\\) is a signed Boolean matrix that controls between which DoFs equilibrium is enforced.\nThe input and output covariance matrices are given by, \\[\n\\Sigma_\\mathbf{x} = \\left[\\begin{array}{ll} \\Sigma_\\mathbf{[{Y}]} & \\mathbf{0} \\\\\n\\mathbf{0}& \\Sigma_{{\\mathbf{Z}}_J}\n\\end{array}\\right] \\qquad \\mbox{and} \\qquad \\Sigma_{\\mathbf{y}} = \\Sigma_{{\\mathbf{H}}_C}\n\\] The Jacobian of the underlying function (Equation 28) is given by, \\[\n \\mathbf{J} = \\left[ \\begin{array}{c c}\n \\mathbf{J}_{[\\mathbf{Y}]} & \\mathbf{J}_{\\mathbf{Z}_J}\n \\end{array}\\right]\n\\qquad(29)\\] 
\\[\n\\begin{align}\n \\mathbf{J}_{[\\mathbf{Y}]}= \\mathcal{M}\\,\\Bigg(\\mathbf{I} - \\left[\\left(\\mathbf{B}^{\\rm T}\\left(\\mathbf{B} \\mathbf{Y} \\mathbf{B}^{\\rm T}\\right)^{-1} \\mathbf{B}\\mathbf{Y}\\right)^{\\rm T}\\otimes \\mathbf{I} \\right] &+ \\left[ \\left(\\mathbf{B}^{\\rm T}\\left(\\mathbf{B} \\mathbf{Y} \\mathbf{B}^{\\rm T}\\right)^{-1} \\mathbf{B}\\mathbf{Y}\\right)^{\\rm T} \\otimes \\mathbf{Y}\\mathbf{B}^{\\rm T}\\left(\\mathbf{B} \\mathbf{Y} \\mathbf{B}^{\\rm T}\\right)^{-1}\\mathbf{B} \\right] \\cdots \\\\ &-\n \\left[ \\mathbf{I} \\otimes \\mathbf{Y}\\mathbf{B}^{\\rm T}\\left(\\mathbf{B} \\mathbf{Y} \\mathbf{B}^{\\rm T}\\right)^{-1}\\mathbf{B} \\right],\\, \\mathbf{0}\\Bigg) \\nonumber\n\\end{align}\n\\] \\[\n \\mathbf{J}_{\\mathbf{Z}_J} = \\mathcal{M}\\,\\Bigg(\\left[\\left(\\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T} + \\mathbf{\\Gamma}\\right)^{-1}\\mathbf{B}\\mathbf{Y}\\right)^{\\rm T} \\otimes \\mathbf{Y}\\mathbf{B}^{\\rm T} \\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T} + \\mathbf{\\Gamma}\\right)^{-1}\\right]\\bigg[-\\mathbf{\\Gamma}^{\\rm T}\\otimes \\mathbf{\\Gamma}\\bigg],\\, \\mathbf{0}\\Bigg)\n\\qquad(30)\\]\nSubstituting \\(\\mathbf{J}\\) and \\(\\Sigma_\\mathbf{x}\\), as defined above, into Equation 23 provides a linear estimate of the coupled FRF covariance matrix \\(\\Sigma_{\\mathbf{H}_C}\\) based on the uncertainty of each individual component, and that of the joint dynamics. 
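Whichever of the three governing equations is considered, the propagation step itself is the same operation, \(\Sigma_\mathbf{y} = \mathbf{J}\Sigma_\mathbf{x}\mathbf{J}^{\rm T}\). As a minimal sketch (not the full substructuring Jacobian), the snippet below implements the \(\mathcal{M}(A,B)\) mapping and the determined blocked-force case \(\mathbf{\bar{f}}=\mathbf{Y}^{-1}\mathbf{v}\), then cross-checks the LPU covariance against a Monte-Carlo estimate; the system values and uncertainty level are illustrative assumptions.

```python
import numpy as np

def M(A, B):
    # Real-imaginary block map M(A, B) defined above
    return np.block([[np.real(A + B), np.imag(-A + B)],
                     [np.imag(A + B), -np.real(-A + B)]])

rng = np.random.default_rng(0)
N = 2                                   # illustrative 2x2 (determined) problem
Y = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
Yinv = np.linalg.inv(Y)

# Jacobian blocks for the determined case:
#   J_Y = M(-(Y^-1 v)^T (x) Y^-1, 0),   J_v = M(Y^-1, 0)
J_Y = M(-np.kron((Yinv @ v)[None, :], Yinv), np.zeros((N, N**2)))
J_v = M(Yinv, np.zeros((N, N)))
J = np.hstack([J_Y, J_v])

# Small, uncorrelated uncertainty on every real/imaginary input part
sig = 1e-3
Sigma_x = sig**2 * np.eye(2 * N**2 + 2 * N)
Sigma_f = J @ Sigma_x @ J.T             # LPU estimate of Sigma_fbar

# Monte-Carlo cross-check: perturb Y and v, solve, collect Re/Im samples
samples = []
for _ in range(5000):
    dY = sig * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
    dv = sig * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    f = np.linalg.solve(Y + dY, v + dv)
    samples.append(np.concatenate([f.real, f.imag]))
Sigma_mc = np.cov(np.array(samples).T)
print(np.max(np.abs(Sigma_f - Sigma_mc)) / np.max(np.abs(Sigma_f)))
```

For input uncertainty this small the two estimates agree to within the Monte-Carlo sampling error, as the theory predicts.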
Note that if the joint dynamics are not considered uncertain, then we can simply set \\(\\Sigma_{{\\mathbf{Z}}_J}=\\mathbf{0}\\).\nA similar set of equations is available for the primal formulation of substructuring, though it is omitted here for brevity.\n\n\nForward response prediction\nOnce the blocked force and coupled assembly FRF are known, an operational response prediction can be made using, \\[\n \\mathbf{p} = \\mathbf{H}_C\\mathbf{\\bar{f}}\n\\qquad(31)\\] where \\(\\mathbf{H}_C\\in\\mathbb{C}^{N_p\\times N_f}\\) is a matrix of (forward) FRFs that relates the identified blocked forces \\(\\mathbf{\\bar{f}}\\in\\mathbb{C}^{N_f}\\) to the target response variables \\(\\mathbf{p}\\in\\mathbb{C}^{N_p}\\). We have used the notation \\(\\mathbf{p}\\) and \\(\\mathbf{H}_C\\) for generality and to avoid confusion with \\(\\mathbf{v}\\) and \\(\\mathbf{Y}_C\\) used in Equation 27, though in general the response could be structural and/or acoustic.\nThe input and output covariance matrices are given by, \\[\n \\Sigma_\\mathbf{x} = \\left[\\begin{array}{ll} \\Sigma_\\mathbf{H} & \\mathbf{0}\\\\\n \\mathbf{0}& \\Sigma_\\mathbf{f}\n \\end{array}\\right] \\qquad \\mbox{and} \\qquad \\Sigma_{\\mathbf{y}} = \\Sigma_{{\\mathbf{p}}}\n\\] Note that the input covariance matrix is block diagonal. 
This is because the blocked force and substructured FRF are uncorrelated, as they are obtained from completely separate experiments.\nThe Jacobian of the underlying function (Equation 31) is given by, \\[\n \\begin{align}\n \\mathbf{J} = \\left[ \\begin{array}{c c}\n \\mathbf{J}_{\\mathbf{H}} & \\mathbf{J}_{\\mathbf{f}}\n \\end{array}\\right] \\qquad \\qquad \\mathbf{J_H} = \\, \\mathcal{M}\\left(\\mathbf{\\bar{f}^{\\rm T}}\\otimes \\mathbf{I}, \\, \\mathbf{0}\\right) \\qquad \\qquad\n \\mathbf{J_{f}} = \\, \\mathcal{M}\\left(\\mathbf{H}_C, \\, \\mathbf{0}\\right)\n \\end{align}\n\\] Substituting \\(\\mathbf{J}\\) and \\(\\Sigma_\\mathbf{x}\\), as defined above, into Equation 23 provides a linear estimate of the operational response covariance \\(\\Sigma_\\mathbf{p}\\) based on the uncertainty of the blocked force and forward FRF.\n\n\nEstimating confidence bounds\n\n\n\nMonte-Carlo simulations (for large uncertainty)\nThe LPU described above is able to provide a good estimate of the output uncertainty of a VAVP, provided the input uncertainty is small. In the presence of moderate-to-large levels of uncertainty the LPU can yield large errors, and so an alternative method is required. The simplest approach, capable of handling arbitrary levels of uncertainty, is Monte-Carlo simulation.\nUsing a Monte-Carlo approach, we estimate the probability density function (PDF) of the output of a function \\(\\mathbf{y} = G(\\mathbf{x})\\) by randomly sampling the distribution of the input \\(\\mathbf{x}\\). In the absence of an analytical solution for the PDF of the output, which is rarely available for problems of realistic complexity, the Monte-Carlo approach is the most robust approach for the propagation of uncertainty; it is frequently used to provide ground truth estimates for the evaluation of alternative methods. By evaluating the function \\(G(\\square)\\) directly, we implicitly capture the effects of non-linearity and their influence on the output distribution. 
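For the scalar forward prediction \(p = H_C\bar{f}\), this sampling idea takes only a few lines of NumPy; the nominal values and uncertainty level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
H, f = 0.3 - 0.4j, 2.0 + 1.0j      # nominal FRF and blocked force (illustrative)
sig = 0.05                          # std dev of each real/imaginary part

# Draw input samples and push each one through G: p_i = H_i * f_i
N_mc = 50_000
Hs = H + sig * (rng.standard_normal(N_mc) + 1j * rng.standard_normal(N_mc))
fs = f + sig * (rng.standard_normal(N_mc) + 1j * rng.standard_normal(N_mc))
ps = Hs * fs

# Bivariate (real/imaginary) output statistics
P = np.column_stack([ps.real, ps.imag])
mu_p = P.mean(axis=0)               # close to H*f = 1.0 - 0.5j
Sigma_p = np.cov(P.T)
print(mu_p, Sigma_p)
```

Because \(G\) is evaluated directly, the product term \(\delta H\,\delta f\) and its effect on the output distribution are captured automatically, with no linearisation involved.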
The main limitation of an MC simulation is simply the computational effort required, which can be great for high dimensional problems or expensive models. It is worth recalling that all TPA equations are represented in the frequency domain, so whilst \\(\\mathbf{x}\\) might not be considered very high dimensional, all calculations must be repeated potentially many thousands of times to cover the frequency range of interest.\nA typical Monte-Carlo procedure is as follows:\n\nA random sample \\(\\mathbf{x}_i\\) is drawn from the input distribution, for example the multivariate normal distribution with mean \\(\\mu_\\mathbf{x}\\) and covariance \\(\\Sigma_{\\mathbf{x}}\\), \\[\n\\mathbf{x}_i\\sim \\mathcal{N}(\\mu_\\mathbf{x},\\Sigma_\\mathbf{x})\n\\]\nThe drawn sample is used to evaluate the function \\(G(\\square)\\), resulting in the output sample \\(\\mathbf{y}_i\\). \\[\n\\mathbf{y}_i = G(\\mathbf{x}_i)\n\\]\nSteps 1 and 2 are repeated until \\(N_{MC}\\) output samples have been obtained. Collectively, these samples approximate the PDF of the output \\(\\mathbf{y}\\). They can be used to infer any statistics of interest, for example the mean and covariance, \\[\n\\mu_\\mathbf{y} = \\frac{1}{N_{MC}}\\sum_{i=1}^{N_{MC}} \\mathbf{y}_i\\qquad \\qquad\n \\Sigma_\\mathbf{y} = \\frac{1}{N_{MC}-1}\\sum_{i=1}^{N_{MC}} (\\mathbf{y}_i - \\mu_\\mathbf{y})(\\mathbf{y}_i - \\mu_\\mathbf{y})^{\\rm T}\n\\]\n\nTo avoid unnecessary function evaluations, stopping criteria can be incorporated into the above by using a recursive calculation of, for example, the mean and covariance.\nNote that the MC samples \\(\\mathbf{y}_i\\) can be used to infer more than just the covariance. Indeed, these samples approximate the actual output distribution and so can be used to infer any statistics of interest, e.g. confidence bounds on the magnitude.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Uncertainty"
]
},
{
"objectID": "introductions/VAVPs/uncertainty/uncertainty.html#what-next",
"href": "introductions/VAVPs/uncertainty/uncertainty.html#what-next",
"title": "Uncertainty",
"section": "What next?",
"text": "What next?\nWith our discussion of uncertainty now complete, the last thing to do is bring everything together and summarise the key steps of constructing a VAVP.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Uncertainty"
]
},
{
"objectID": "introductions/VAVPs/components/components.html",
"href": "introductions/VAVPs/components/components.html",
"title": "Components (passive characterisation)",
"section": "",
"text": "On this page we describe in greater detail the passive characterisation of components, specifically source and receiver-like components. The passive characterisation of coupling elements, such as vibration isolators, is detailed later in Vibration isolators.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Components (passive characterisation)"
]
},
{
"objectID": "introductions/VAVPs/components/components.html#component-representations",
"href": "introductions/VAVPs/components/components.html#component-representations",
"title": "Components (passive characterisation)",
"section": "Component representations",
"text": "Component representations\nAs discussed on the previous page, central to the development of a VAVP is the independent characterisation of a component’s passive properties. Independence is the key word here: it is what enables components to be assembled and interchanged within the VAVP framework.\nThe independent properties of a passive component can be described in various ‘domains’, each providing a different perspective on essentially the same dynamic behaviour. The Physical, State-space, Modal, Time, and Frequency domains are the most commonly encountered in vibro-acoustics and will be briefly reviewed below before we adopt the Frequency domain as the default for our VAVP.\nIn the physical domain, a component’s inertial and elastic properties are discretised (into \\(N\\) DoFs, for example by the Finite Element Method) and represented by the mass and stiffness matrices, \\(\\mathbf{M}\\in \\mathbb{R}^{N\\times N}\\) and \\(\\mathbf{K}\\in \\mathbb{R}^{N\\times N}\\), respectively. Together with an appropriate matrix representation of its damping, \\(\\mathbf{C}\\in \\mathbb{R}^{N\\times N}\\), these matrices characterise the passive properties of a component. 
They can be used to formulate a second order equation of motion that relates the external force \\(\\mathbf{f}(t)\\) applied to the component, to its resulting kinematics, \\[\n \\mathbf{M}\\mathbf{\\ddot{x}}(t)+\\mathbf{C}\\mathbf{\\dot{x}}(t)+\\mathbf{K}\\mathbf{x}(t) = \\mathbf{f}(t)\n\\tag{1}\\] where \\(\\mathbf{x}(t)\\), \\(\\mathbf{\\dot{x}}(t)\\) and \\(\\mathbf{\\ddot{x}}(t)\\) are vectors of displacement, velocity and acceleration, respectively, at each of the \\(N\\) discretised degrees of freedom.\nIn the state-space domain, a component is represented by a set of first order differential equations that relate the state vector \\(\\mathbf{y}(t) = [\\mathbf{x}^{\\rm T}(t) \\quad \\mathbf{\\dot{x}}^{\\rm T}(t)]^{\\rm T}\\) to its temporal derivative, \\(\\mathbf{\\dot{y}}(t) = [\\mathbf{\\dot{x}}^{\\rm T}(t) \\quad \\mathbf{\\ddot{x}}^{\\rm T}(t)]^{\\rm T}\\). With reference to Equation 1, we have, \\[\n \\mathbf{\\dot{y}}(t) = \\left[\\begin{array}{c}\\mathbf{\\dot{x}}(t) \\\\ \\mathbf{\\ddot{x}}(t)\\end{array}\\right] = \\left[\\begin{array}{cc} \\mathbf{0} & \\mathbf{I} \\\\ -\\mathbf{M}^{-1}\\mathbf{K} & -\\mathbf{M}^{-1}\\mathbf{C}\n \\end{array} \\right] \\left[\\begin{array}{c}\\mathbf{{x}}(t) \\\\ \\mathbf{\\dot{x}}(t)\\end{array}\\right] + \\left[\\begin{array}{c} \\mathbf{0} \\\\ \\mathbf{M}^{-1} \\end{array} \\right]\\mathbf{f}(t)\n\\] or equivalently, \\[\n \\mathbf{\\dot{y}}(t) = \\mathbf{A}\\mathbf{y}(t)+\\mathbf{B}\\mathbf{f}(t)\n\\] where \\(\\mathbf{A}\\) is the so-called state matrix, and \\(\\mathbf{B}\\) is the input matrix. 
Together, \\(\\mathbf{A}\\) and \\(\\mathbf{B}\\) characterise the passive properties of a component in state-space.\nIn the modal domain a component is represented by a set of mode shapes \\(\\boldsymbol{\\Phi} = [\\mathbf{\\phi}_1 \\quad \\mathbf{\\phi}_2 \\quad \\cdots \\quad \\mathbf{\\phi}_N]\\) and natural frequencies \\(\\boldsymbol{\\Lambda}=\\mbox{diag}[\\omega^2_1 \\quad \\omega^2_2 \\quad \\cdots \\quad \\omega^2_N]\\) which individually satisfy the homogeneous undamped equation of motion, \\[\n \\left(\\mathbf{K}-\\omega^2_i\\mathbf{M}\\right)\\mathbf{\\phi}_i = \\mathbf{0}.\n\\] Together \\(\\boldsymbol{\\Phi}\\) and \\(\\boldsymbol{\\Lambda}\\), with an appropriate representation of damping (e.g. modal loss factor), characterise the passive properties of the component.\nIn the time domain a component’s kinematic response \\(\\mathbf{x}(t)\\), \\(\\mathbf{\\dot{x}}(t)\\) or \\(\\mathbf{\\ddot{x}}(t)\\) to a time varying force \\(\\mathbf{f}(t)\\) satisfies the convolution integral (shown only for displacement), \\[\n \\mathbf{x}(t) = \\int_0^{t} \\mathbf{G}(t-\\tau)\\mathbf{f}(\\tau)\\, d\\tau\n\\] where \\(\\mathbf{G}(t)\\) is a matrix of displacement impulse response functions (IRFs) that describe the component response to a unit (Dirac) force impulse. The IRF matrix \\(\\mathbf{G}(t)\\) characterises the passive properties of a component.\nIn the FRF domain a component is characterised by its frequency response function (FRF) matrix, which can take several forms. Assuming time harmonic motion, Equation 1 can be Fourier transformed into the frequency domain, \\[\n \\left(i\\omega\\mathbf{M}+\\mathbf{C}+\\frac{\\mathbf{K}}{i\\omega}\\right)\\mathbf{v}(\\omega) = \\mathbf{Z}(\\omega)\\mathbf{v}(\\omega) = \\mathbf{f}(\\omega)\n\\] where \\(\\mathbf{Z}(\\omega)\\) is the component impedance FRF matrix. Equivalently we can write, \\[\n \\mathbf{v}(\\omega) = \\mathbf{Y}(\\omega)\\mathbf{f}(\\omega)\n\\tag{2}\\] where \\(\\mathbf{Y}(\\omega)=\\mathbf{Z}^{-1}(\\omega)\\) is the component mobility FRF matrix. Both \\(\\mathbf{Y}(\\omega)\\) and \\(\\mathbf{Z}(\\omega)\\) characterise the passive properties of a component. 
Similar FRF definitions exist for displacement and acceleration, as discussed below.\nThe choice of domain depends, amongst other things, on whether a physical structure is available for test. If no experimental characterisation is possible, the conventional starting point is a finite element model, i.e. in the physical domain. In this case, reliable estimates of the mass, stiffness and damping properties of the structure are required (not always easily found) for input to Equation 1. From here, state-space, modal, IRF, or FRF representations can be determined. If a physical component is available for testing, then representation by measured FRFs is often preferred. These do not require estimates of the component’s physical properties because the FRF is directly obtained from measured forces and velocities (Equation 2). From measured FRFs, IRF and modal representations can be determined with relative ease. However, physical and state-space domains are typically limited to numerical models; identifying system matrices (e.g. mass, stiffness, state, etc.) from experimental data is a difficult problem requiring advanced system identification methods. Being able to represent both experimentally and numerically sourced components, the frequency domain is the obvious choice for VAVPs.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Components (passive characterisation)"
]
},
{
"objectID": "introductions/VAVPs/components/components.html#frequency-response-functions",
"href": "introductions/VAVPs/components/components.html#frequency-response-functions",
"title": "Components (passive characterisation)",
"section": "Frequency response functions",
"text": "Frequency response functions\nFrequency response functions (FRFs) are clearly a central concept in the development of VAVPs, and so it is only fair that we devote some time to their definition and interpretation.\n\nTypes of FRF\nHaving chosen the FRF as our preferred representation, it still remains to choose the dynamic quantities involved: generally speaking, FRFs describe input-output relations and it is to be decided whether the inputs and outputs are forces, kinematic or acoustic quantities. If a kinematic quantity, is this in the form of acceleration, velocity or displacement? In addition, are rotational quantities such as moments or angular velocities involved?\nOur first choice will be to consider excitation in terms of applied force and response in terms of resultant velocity, giving a force-velocity type of FRF known as ‘mobility’, typically denoted by \\(Y(\\omega)\\in \\mathbb{C}\\). For a single input-output system, it is a complex frequency dependent scalar quantity that describes the velocity response to a unit force excitation at a point, defined mathematically as, \\[\n v(\\omega) = Y(\\omega)f(\\omega), \\quad \\quad Y(\\omega) = \\frac{v(\\omega)}{f(\\omega)}\n\\tag{3}\\] where \\(v(\\omega)\\) and \\(f(\\omega)\\) are the complex frequency domain spectra of the velocity and force, respectively.\nBeing complex, the mobility has a real and imaginary part. Physically, the real part is typically associated with damping mechanisms and losses, whilst the imaginary part is related to energy storage. When visualising or plotting a mobility FRF it is rare that we look directly at its real or imaginary parts. 
Rather, we consider its polar representation in terms of magnitude \\(|Y|\\) and phase \\(\\angle Y\\), \\[\\begin{align}\n v_0(\\omega,t )e^{i(\\omega t + \\angle Y)} &= \\left[\\mbox{Re}(Y(\\omega)) + i\\mbox{Im}(Y(\\omega))\\right]f_0(\\omega)e^{i \\omega t} \\\\\n &=|Y(\\omega)|e^{i\\angle Y}f_0(\\omega)e^{i \\omega t}\n \\end{align}\\] where the magnitude of the mobility (or any FRF) describes the change in amplitude between a harmonic input (in the case of mobility, force) and output signal (velocity), whilst the phase describes the time difference between the input and output introduced by the system at a particular frequency.\nFRFs in terms of displacement and acceleration, rather than velocity, are frequently found in the literature. It is straightforward to convert between the various kinematic quantities. By integration and differentiation with respect to time (assuming harmonic motion \\(v=v_0e^{i\\omega t}\\)), Equation 3 can be transformed into a pair of force-displacement and force-acceleration relations, respectively, \\[\nv = \\frac{dx}{dt} = i\\omega x = Yf, \\quad \\quad v = \\int a dt = \\frac{a}{i\\omega} = Y f\n\\] where for simplicity we have omitted explicit frequency dependence \\((\\omega)\\). From the above we have the FRF relations, \\[\n x = \\frac{Y}{i\\omega} f = R f,\\quad \\quad a = i\\omega Y f = Af\n\\] where \\(R\\in \\mathbb{C}\\) is called the (dynamic) compliance and \\(A\\in \\mathbb{C}\\) the accelerance. 
Thus in the frequency domain, multiplication or division by \\(i\\omega\\) is all that is required to convert from one form to another.\nAccompanying each of the FRFs above is an inverse FRF in which the excitation is an imposed motion and the response a reaction force, \\[\n f = D x,\\quad \\quad f = Z v, \\quad \\quad f = \\hat{M} a.\n\\] where, \\[\n D = \\frac{1}{R},\\quad \\quad Z = \\frac{1}{Y}, \\quad \\quad \\hat{M} = \\frac{1}{A}.\n\\] The FRFs \\(D\\in \\mathbb{C}\\), \\(Z\\in \\mathbb{C}\\) and \\(\\hat{M}\\in \\mathbb{C}\\) are called, respectively, the dynamic stiffness, impedance, and apparent mass.\n\n\n\n\n\n\nNote\n\n\n\nThese days, most of the literature related to VAVP and TPA uses acceleration, rather than velocity, as its base kinematic quantity. This is because acceleration is super easy to measure (thank you accelerometers!). That said, in the vast majority of cases the choice of kinematic variable is completely arbitrary; the same underlying equations can be applied to displacement, velocity and acceleration-based FRFs. In fact, we will rarely refer to a specific type of FRF (i.e. compliance, mobility, accelerance), adopting instead the agnostic term FRF. Similarly, when talking about the kinematic quantity directly (i.e. displacement, velocity, acceleration) we will use the generic term response. Note, when referring to the inverse FRF (i.e. dynamic stiffness, impedance and apparent mass) we will use impedance as a general term without specific reference to velocity.\nAn important exception to the above is when dealing with power-based methods, since power is defined specifically in terms of velocity and force. If using displacement or acceleration-based FRFs, appropriate frequency scaling must be included to yield correct power estimates.\n\n\nImportantly, the above definitions concern single-input single-output systems in which all FRFs are (complex) quantities. 
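These \(i\omega\) conversions are easily checked numerically. A short sketch, using an assumed single-DoF mass-spring-damper with illustrative parameter values:

```python
import numpy as np

# Assumed 1-DoF system: dynamic stiffness D = k - w^2 m + i w c (illustrative values)
m, c, k = 1.0, 0.5, 1.0e4
w = 2 * np.pi * 20.0                    # angular frequency at 20 Hz

R = 1.0 / (k - w**2 * m + 1j * w * c)   # compliance: force -> displacement
Y = 1j * w * R                          # mobility:   force -> velocity
A = 1j * w * Y                          # accelerance: force -> acceleration

# Inverse FRFs: dynamic stiffness, impedance, apparent mass
D, Z, M_hat = 1.0 / R, 1.0 / Y, 1.0 / A
print(abs(Y), np.angle(Y))
```

Note that the accelerance satisfies \(A = -\omega^2 R\), and each inverse FRF is simply the reciprocal of its forward counterpart at every frequency.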
When dealing with multi-input multi-output systems, as is necessary for virtual prototyping, a more general FRF definition is required that allows for simultaneous excitation and response at multiple points.\n\n\nMulti-DoF FRFs\nVery rarely are structural components excited at, or connected through, a single point. In almost all scenarios, the FRFs needed for component-level simulation are multi-point. In the following we will specifically consider multi-point mobility and impedance FRFs, though the definitions apply equally to other types of FRF. As is common practice, we will also omit specific reference to frequency dependence from now on. All formulations throughout the remainder of this book are assumed to be in the frequency domain unless stated otherwise.\nFor a component with \\(N\\) inputs and outputs, its FRF is defined by the vector-matrix equation, \\[\n\\mathbf{v} = \\mathbf{Y}\\mathbf{f}\n\\tag{4}\\] where \\(\\mathbf{v} \\in \\mathbb{C}^N\\) is an \\(N\\) dimensional vector of responses, \\(\\mathbf{f} \\in \\mathbb{C}^N\\) is an \\(N\\) dimensional vector of applied forces, and \\(\\mathbf{Y} \\in \\mathbb{C}^{N\\times N}\\) is the \\(N\\times N\\) dimensional FRF matrix that relates the two. Written out in full, Equation 4 reads, \\[\n \\left(\\begin{array}{c} v_1 \\\\ v_2 \\\\ \\vdots \\\\ v_N\n \\end{array}\\right) = \\left[\\begin{array}{cccc}\n Y_{11} & Y_{12} & \\cdots & Y_{1N} \\\\\n Y_{21} & Y_{22} & \\cdots & Y_{2N} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n Y_{N1} & Y_{N2} & \\cdots & Y_{NN} \\\\\n \\end{array}\\right]\\left(\\begin{array}{c} f_1 \\\\ f_2 \\\\ \\vdots \\\\ f_N\n \\end{array}\\right)\n\\tag{5}\\] where the first sub-scripted index \\(i\\) denotes the response position (the rows of \\(\\mathbf{Y}\\)), and the second index \\(j\\) the force excitation position (the columns of \\(\\mathbf{Y}\\)).\nWith reference to Equation 5, we can distinguish two types of FRF; point and transfer. 
Point FRFs describe the response at a point when the force excitation is at the same position, i.e. co-located, \\(v_i = Y_{ii}f_i\\). Transfer FRFs describe the velocity at a point when the excitation force is located elsewhere, \\(v_i=Y_{ij}f_j, j\\neq i\\).\nWhen excitations and responses are considered across the same set of positions, \\(\\mathbf{Y}\\in \\mathbb{C}^{N\\times N}\\) is a square symmetric matrix (\\(Y_{ij} = Y_{ji}\\) or \\(\\mathbf{Y} = \\mathbf{Y}^{\\rm T}\\)). Its diagonal elements contain the point FRFs at each position, whilst the off-diagonal elements contain the transfer FRFs between each pair of positions. The symmetry of \\(\\mathbf{Y}\\) arises due to an important feature of FRFs; they are reciprocal, meaning that when the input and output positions are interchanged the FRF remains the same, i.e. \\(Y_{ij} = Y_{ji}\\). The same is true for all FRFs described above.\nEquation 5 provides a component-level definition of a multi-DoF FRF. To obtain an element-wise definition for \\(Y_{ij}\\), we must isolate a single element of the FRF matrix. We do this by considering the response at an arbitrary position \\(i\\), due to a single applied force at position \\(j\\); that is, we set all forces other than that at \\(j\\) to zero, \\(f_{k\\neq j}=0\\). 
Equation 5 then reduces to the element-wise FRF definition, \\[\nY_{ij} = {v_i \\over f_j} \\Big|_{f_{k \\neq j} = 0}.\n\\tag{6}\\]\nBased on Equation 6, the measurement of an FRF matrix should be done in such a way that:\n\nforces are applied one at a time to each point of interest;\nexternal forces at all other points are zero (the structure is allowed to respond freely);\nthe individual elements of the matrix are then measured as the complex ratio of the response at \\(i\\) to the single excitation force at \\(j\\), as per Equation 6.\n\n\nMobility FRF constraints.\n\nImpedance FRF constraints.\n\nFigure 1: Illustration of the element-wise constraints on the definition of mobility and impedance FRFs.\n\nConsidering the same component with \\(N\\) co-located inputs and outputs, its impedance is defined by the vector-matrix equation, \\[\n\\mathbf{f} = \\mathbf{Z}\\mathbf{v}\n\\] where \\(\\mathbf{f} \\in \\mathbb{C}^N\\) is an \\(N\\) dimensional vector of resultant forces, \\(\\mathbf{v}\\in \\mathbb{C}^N\\) is an \\(N\\) dimensional vector of applied velocities and \\(\\mathbf{Z} \\in \\mathbb{C}^{N\\times N}\\) is the \\(N \\times N\\) dimensional impedance matrix that relates the two. 
Written out in full, \\[\n \\left(\\begin{array}{c} f_1 \\\\ f_2 \\\\ \\vdots \\\\ f_N\n \\end{array}\\right) = \\left[\\begin{array}{cccc}\n Z_{11} & Z_{12} & \\cdots & Z_{1N} \\\\\n Z_{21} & Z_{22} & \\cdots & Z_{2N} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n Z_{N1} & Z_{N2} & \\cdots & Z_{NN} \\\\\n \\end{array}\\right]\\left(\\begin{array}{c} v_1 \\\\ v_2 \\\\ \\vdots \\\\ v_N\n \\end{array}\\right).\n\\] As above for the FRF, the first sub-scripted index denotes the response position (now a resultant force), and the second index the excitation position (now an applied motion).\nLike the FRF matrix, when excitations and responses are considered across the same set of \\(N\\) positions, \\(\\mathbf{Z}\\in\\mathbb{C}^{N\\times N}\\) is a square symmetric (i.e. reciprocal) matrix (\\(Z_{ij} = Z_{ji}\\) or \\(\\mathbf{Z} = \\mathbf{Z}^{\\rm T}\\)), with the so-called point impedances along the diagonal, and transfer impedances elsewhere.\nTo obtain an element-wise definition of the impedance \\(Z_{ij}\\), we must isolate a single element of the matrix \\(\\mathbf{Z}\\). Following the same procedure as before, we consider an arbitrary force location \\(i\\) and set all motions, bar that applied to position \\(j\\), to zero, \\(v_{k\\neq j} = 0\\). 
We are then left with the element-wise definition, \\[\nZ_{ij} = {\\bar{f}_i \\over v_j} \\Big|_{v_{k \\neq j} = 0}\n\\tag{7}\\] where it is noted that in order to satisfy the constraint \\(v_{k \\neq j} = 0\\), an external blocking force \\(\\bar{f}\\) is required at all positions \\({k \\neq j}\\).\nBased on Equation 7, an impedance matrix would be measured in such a way that:\n\nprescribed motions are applied one at a time to each point of interest;\nthe motion of all other points is constrained to zero (the structure is not allowed to move);\nthe individual elements of the matrix are then measured as the complex ratio of the force response to the prescribed motion, as per Equation 7.\n\nIn contrast to the FRF, the measurement of impedance requires all points other than the excited point to be completely constrained, in other words they must be fixed. These different constraints for FRF and impedance have major practical implications for measurement. In short; impedance cannot be directly measured (at least not in a practical or reliable way), whilst mobility can (and with no great difficulty, usually…).\n\n\n\n\n\n\nNote\n\n\n\nAn important consequence of the element-wise constraint in Equation 7 is that the elements of the impedance matrix depend on the position and number of DoFs being considered. If a DoF is moved, so is the constraint that accompanies it. Clearly, this will lead to a different ‘boundary condition’ on the component, and so all elements of the impedance matrix will be modified. Similarly, if a new DoF is added, a new constraint is introduced and again, all elements of the impedance matrix will be modified. The same is not true, however, for the mobility FRF; moving or adding a DoF only alters the external force constraint, which has no effect on the component’s boundary conditions. Hence, the elements of the FRF matrix are unchanged.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Components (passive characterisation)"
]
},
{
"objectID": "introductions/VAVPs/components/components.html#degrees-of-freedom",
"href": "introductions/VAVPs/components/components.html#degrees-of-freedom",
"title": "Components (passive characterisation)",
"section": "Degrees of freedom",
"text": "Degrees of freedom\nHaving introduced the general concept of a multi-DoF FRF matrix, the building block of a VAVP, we are now ready to discuss what FRFs are required to characterise completely the passive properties of a component. There are two main talking points here: the first concerns the nature of the FRF’s excitation and response; the second concerns where on a component FRFs should be measured.\nIn the preceding discussion we made no specific mention of the type of forces or responses (their direction or whether they are translational or rotational) in our definition of the FRF. These are important considerations when representing a component by an FRF matrix. Failure to capture a correct set of FRFs will lead to a poorly characterised component, and put the entire VAVP at risk.\n\n\n\n\n\n\nFigure 2: Illustration of the DoFs for source and receiver components; the interface DoFs \\(c\\) (for source and receiver) and the target DoFs \\(r\\) (receiver only).\n\n\n\nThe FRFs required to characterise a component depend firstly on the type of component, whether it is a vibration source, a coupling element, or a receiver-like structure. Following the black-box approach described earlier, a component can be characterised entirely by a set of input-output (FRF) relations at and between its structural connections (interfaces), in addition to any extra remote positions that are of interest. For a vibration source, there is typically no interest in the remote response away from the interface; the source FRFs include only the interface DoFs \\(c\\). Coupling elements are designed to couple two (or more) components, and so they have at least two distinct interfaces, e.g. above and below a vibration isolator. There is generally no interest in the element’s response in between these interfaces and so, like the vibration source, the coupling element FRFs include only the interface DoFs \\(c_1, \\, c_2\\). Receiver-like components differ from the above. 
In addition to the interface \\(c\\), there are typically a series of additional target locations \\(r\\) that are of interest; these are usually the locations at which we would like to make a response prediction using the VAVP. They might be structural or acoustic.\n\nSource components\nLet’s begin by considering the source FRF matrix in the case of \\(N\\) discrete connection points. The motion of any ‘point’ on a structural component is in general described by 6 DoFs - 3 translations in the \\(x\\), \\(y\\) and \\(z\\) coordinates, and a rotation about each of these axes, respectively, \\(\\alpha\\), \\(\\beta\\) and \\(\\gamma\\). We term these DoFs ‘rigid body’ or ‘point-like’ DoFs. Of course, the notion of a point is an idealisation describing an infinitesimally small area, which a structural connection point is clearly not. Nevertheless, if the structural wavelength (which depends on frequency) is large compared to the size of the connection, then locally the dynamics behave point-like and can be described entirely by the rigid body DoFs. It is imperative when building a VAVP that all point-like DoFs are considered (at least initially); very rarely does structural forcing or coupling occur in a single translational direction with no rotations.\n\n\n\n\n\n\nNote\n\n\n\nNote that as frequency increases, and structural wavelength decreases, local deformations will begin to occur over the connection. At this point the interface dynamics are no longer behaving point-like and an alternative ‘flexible’ or ‘continuous’ representation is required. 
This topic will be discussed later in the context of the Virtual Point transformation in Interfaces.\n\n\nSo, a source with \\(N\\) interface connections is characterised by \\(6N\\) point-like DoFs such that its complete FRF matrix \\(\\mathbf{Y}_S\\in\\mathbb{C}^{6N\\times6N}\\) is given by, \\[\n \\mathbf{Y}_{Scc} = \\left[\\begin{array}{c c c c}\\mathbf{Y}_{Sc_1c_1} & \\mathbf{Y}_{Sc_1c_2} & \\cdots & \\mathbf{Y}_{Sc_1c_N}\\\\\n \\mathbf{Y}_{Sc_2c_1} & \\mathbf{Y}_{Sc_2c_2} & \\cdots & \\mathbf{Y}_{Sc_2c_N}\\\\\n \\vdots & \\vdots & \\ddots & \\vdots\\\\\n \\mathbf{Y}_{Sc_Nc_1} & \\mathbf{Y}_{Sc_Nc_2} & \\cdots & \\mathbf{Y}_{Sc_Nc_N}\n \\end{array}\\right]\n\\tag{8}\\] where the capitalised sub-script \\(S\\) clarifies that the FRF belongs to the source in isolation, and the lower case sub-scripts \\(cc\\) that the excitation and response positions are co-located at the interface DoFs \\(c\\). In general, \\(\\mathbf{Y}_{Sc_ic_j}\\in\\mathbb{C}^{6\\times 6}\\) is an FRF sub-matrix relating the translational and rotational forces at the \\(j\\)th interface connection to the corresponding response at the \\(i\\)th. 
Explicitly, \\[\n \\left(\\begin{array}{c}\n v_{x_i} \\\\ v_{y_i} \\\\ v_{z_i} \\\\ v_{\\alpha_i}\\\\ v_{\\beta_i}\\\\ v_{\\gamma_i}\n \\end{array}\\right)=\\left[\\begin{array}{cccccc} Y_{{x_ix_j}} & Y_{{x_iy_j}} & Y_{{x_iz_j}} & Y_{{x_i\\alpha_j}} & Y_{{x_i\\beta_j}} & Y_{{x_i\\gamma_j}} \\\\\n Y_{{y_ix_j}} & Y_{{y_iy_j}} & Y_{{y_iz_j}} & Y_{{y_i\\alpha_j}} & Y_{{y_i\\beta_j}} & Y_{{y_i\\gamma_j}} \\\\\n Y_{{z_ix_j}} & Y_{{z_iy_j}} & Y_{{z_iz_j}} & Y_{{z_i\\alpha_j}} & Y_{{z_i\\beta_j}} & Y_{{z_i\\gamma_j}} \\\\\n Y_{{\\alpha_i x_j}} & Y_{{\\alpha_i y_j}} & Y_{{\\alpha_i z_j}} & Y_{{\\alpha_i\\alpha_j}} & Y_{{\\alpha_i\\beta_j}} & Y_{{\\alpha_i\\gamma_j}} \\\\\n Y_{{\\beta_i x_j}} & Y_{{\\beta_i y_j}} & Y_{{\\beta_i z_j}} & Y_{{\\beta_i\\alpha_j}} & Y_{{\\beta_i\\beta_j}} & Y_{{\\beta_i\\gamma_j}}\\\\\n Y_{{\\gamma_i x_j}} & Y_{{\\gamma_i y_j}} & Y_{{\\gamma_i z_j}} & Y_{{\\gamma_i\\alpha_j}} & Y_{{\\gamma_i\\beta_j}} & Y_{{\\gamma_i\\gamma_j}}\n \\end{array}\\right]\\left(\\begin{array}{c}\n f_{x_j} \\\\ f_{y_j} \\\\ f_{z_j} \\\\ f_{\\alpha_j}\\\\ f_{\\beta_j}\\\\ f_{\\gamma_j}\n \\end{array}\\right)\n\\tag{9}\\] where, for example, \\(Y_{{\\alpha_i y_j}}\\) is the FRF relating a translational force in the \\(y\\) direction at position \\(j\\) to a rotational response about the \\(x\\)-axis at position \\(i\\).\n\n\n\n\n\n\nNote\n\n\n\nIn practice, not all DoFs are necessary at all points. In fact, in some scenarios including a DoF that does not contribute significantly to forcing or coupling can actually worsen the performance of a VAVP. For this reason, perhaps the greatest challenge when building a VAVP is correctly identifying which DoFs are required (which depends on frequency by the way!) 
to characterise a component sufficiently without introducing unnecessary error and uncertainty.\n\n\nAt this point it is important to recognise that whilst the full \\(6N\\) DoF matrix described above is theoretically important, FRF matrices are rarely measured in this form. The \\(6N\\) DoFs above describe a component’s motion in terms of the translations and rotations about a series of idealised point-like connections. In practice, it is not usually possible to directly measure translational and rotational motions for co-located excitation and response positions, nor is it straightforward to measure rotational motions directly. Rather, we measure a set of translational FRFs between \\(N\\) excitation positions surrounding point \\(j\\), and \\(M\\) response positions surrounding point \\(i\\), in the form, \\[\n \\left(\\begin{array}{c}\n v_{1} \\\\ v_{2} \\\\ \\vdots \\\\ v_{M}\n \\end{array}\\right)_i = \\left[\\begin{array}{c c c c}Y_{11} & Y_{12} & \\cdots & Y_{1N}\\\\\n Y_{21} & Y_{22} & \\cdots & Y_{2N}\\\\\n \\vdots & \\vdots & \\ddots & \\vdots\\\\\n Y_{M1} & Y_{M2} & \\cdots & Y_{MN}\\\\\n \\end{array}\\right]_{ij} \\left(\\begin{array}{c}\n f_{1} \\\\ f_{2} \\\\ \\vdots \\\\ f_{N}\n \\end{array}\\right)_j.\n\\tag{10}\\] An example is illustrated in Figure 3, where for clarity we consider co-located points, \\(i=j\\).\nWhilst directly measured FRF matrices in the form of Equation 10 can be used for some forms of analysis, due to their non-co-located excitation and response positions and lack of rotations, they are limited in terms of VAVP development.\nFrom Equation 10, provided the measured FRF matrix spans the vector space of the full point-like \\(6\\) DoF matrix (Equation 9), the latter can be determined by applying an appropriate rigid body transformation, \\[\n \\left[\\begin{array}{cccccc} Y_{{x_ix_j}} & Y_{{x_iy_j}} & Y_{{x_iz_j}} & Y_{{x_i\\alpha_j}} & Y_{{x_i\\beta_j}} & Y_{{x_i\\gamma_j}} \\\\\n Y_{{y_ix_j}} & Y_{{y_iy_j}} & Y_{{y_iz_j}} & 
Y_{{y_i\\alpha_j}} & Y_{{y_i\\beta_j}} & Y_{{y_i\\gamma_j}} \\\\\n Y_{{z_ix_j}} & Y_{{z_iy_j}} & Y_{{z_iz_j}} & Y_{{z_i\\alpha_j}} & Y_{{z_i\\beta_j}} & Y_{{z_i\\gamma_j}} \\\\\n Y_{{\\alpha_i x_j}} & Y_{{\\alpha_i y_j}} & Y_{{\\alpha_i z_j}} & Y_{{\\alpha_i\\alpha_j}} & Y_{{\\alpha_i\\beta_j}} & Y_{{\\alpha_i\\gamma_j}} \\\\\n Y_{{\\beta_i x_j}} & Y_{{\\beta_i y_j}} & Y_{{\\beta_i z_j}} & Y_{{\\beta_i\\alpha_j}} & Y_{{\\beta_i\\beta_j}} & Y_{{\\beta_i\\gamma_j}}\\\\\n Y_{{\\gamma_i x_j}} & Y_{{\\gamma_i y_j}} & Y_{{\\gamma_i z_j}} & Y_{{\\gamma_i\\alpha_j}} & Y_{{\\gamma_i\\beta_j}} & Y_{{\\gamma_i\\gamma_j}}\n \\end{array}\\right]= \\mathbf{T}_r\\left[\\begin{array}{c c c c}Y_{11} & Y_{12} & \\cdots & Y_{1N}\\\\\n Y_{21} & Y_{22} & \\cdots & Y_{2N}\\\\\n \\vdots & \\vdots & \\ddots & \\vdots\\\\\n Y_{M1} & Y_{M2} & \\cdots & Y_{MN}\\\\\n \\end{array}\\right]_{ij} \\mathbf{T}_f^{\\rm T}\n\\] where \\(\\mathbf{T}_{r,f}\\) are the transformation matrices for response and force positions, respectively. Further details on the construction of \\(\\mathbf{T}_{r,f}\\) are given in Interfaces. Hereafter, we will assume (unless otherwise stated) that any FRF matrices introduced have been transformed in such a way that they relate translational/rotational DoFs, as described above.\n\n\n\n\n\n\nFigure 3: Illustration of ‘transformed’ (left) vs. ‘measured’ (right) FRFs. Measured FRFs relate a set of discrete non-co-located forces and responses surrounding a connection point. Transformed FRFs relate the co-located forces and moments to the translational and rotational responses at a particular point.\n\n\n\n\n\nReceiver components\nLet us now consider the DoFs of a matching receiver structure (as in Figure 2). As a matching component, we have the same number of interface connections but now with an additional set of remote \\(r\\) DoFs. 
Consequently, the receiver FRF takes the form, \\[\n \\mathbf{Y}_{R} = \\left[\\begin{array}{c c}\\mathbf{Y}_{Rcc} & \\mathbf{Y}_{Rcr}\\\\\n \\mathbf{Y}_{Rrc} & \\mathbf{Y}_{Rrr}\n \\end{array}\\right]\n\\] where: the FRF submatrix \\(\\mathbf{Y}_{Rcc}\\) is identical in form to that of Equation 8 (i.e. including \\(N\\) sub-FRF matrices, one for each connection point); \\(\\mathbf{Y}_{Rrc}\\) and \\(\\mathbf{Y}_{Rcr}\\) are the transfer FRFs between the interface and remote DoFs; and \\(\\mathbf{Y}_{Rrr}\\) is the point FRF matrix for the remote DoFs. If the remote points are acoustic in nature (say if we wish to predict the sound pressure level within a vehicle cabin), we use instead vibro-acoustic FRF matrices as, \\[\n \\mathbf{Y}_{R} = \\left[\\begin{array}{c c}\\mathbf{Y}_{Rcc} & \\mathbf{H}_{Rcr}\\\\\n \\mathbf{H}_{Rrc} & \\mathbf{H}_{Rrr}\n \\end{array}\\right]\n\\] In this case \\(\\mathbf{Y}_{R}\\) isn’t strictly speaking a structural FRF matrix anymore; rather, it’s a mixed vibro-acoustic FRF matrix. Nevertheless we use the same notation.\n\n\n\n\n\n\nNote\n\n\n\nThe vibro-acoustic FRF matrix \\(\\mathbf{H}_{Rcr}\\) relates a volume velocity excitation at \\(r\\) to the structural response at \\(c\\). Similarly, \\(\\mathbf{H}_{Rrc}\\) relates a structural force excitation at \\(c\\) to an acoustic pressure response at \\(r\\). Finally, \\(\\mathbf{H}_{Rrr}\\) is a purely acoustic FRF matrix that relates a volume velocity excitation at \\(r\\) to its co-located pressure response.\n\n\nUnlike the interface connections, which generally need to include all 6 point-like DoFs (translations and rotations about each axis), the remote points \\(r\\) need only include the DoFs for which prediction is sought. Most often this is just the outward normal response at some set of locations, or perhaps a sound pressure level at some listener position. Moreover, if these remote points are solely for prediction, i.e. 
no external forcing or coupling is to be applied there, then only one of the transfer FRFs is required, and the remote point FRF can be discarded entirely. This reduces the receiver FRF to, \\[\n \\mathbf{Y}_{R} = \\left[\\begin{array}{c c}\\mathbf{Y}_{Rcc} \\\\\n \\mathbf{Y}_{Rrc}\n \\end{array}\\right] \\qquad \\mbox{or} \\qquad \\left[\\begin{array}{c c}\\mathbf{Y}_{Rcc} \\\\\n \\mathbf{H}_{Rrc}\n \\end{array}\\right]\n\\] This reduction can drastically simplify the measurement of \\(\\mathbf{Y}_{R}\\), especially for the vibro-acoustic case.\n\n\nCoupling elements\nFor a coupling element, such as a vibration isolator, we typically have two sets of interface DoFs; those above the element (\\(c_1\\)) and those below (\\(c_2\\)). As above, at each of these interfaces all point-like DoFs should be accounted for. This leads to an FRF matrix \\(\\mathbf{Y}_I\\in\\mathbb{C}^{12\\times 12}\\) of the form, \\[\n \\mathbf{Y}_{I} = \\left[\\begin{array}{c c}\\mathbf{Y}_{Ic_1c_1} & \\mathbf{Y}_{Ic_1c_2}\\\\\n \\mathbf{Y}_{Ic_2c_1} & \\mathbf{Y}_{Ic_2c_2}\n \\end{array}\\right]\n\\] where \\(\\mathbf{Y}_{Ic_1c_1}\\) and \\(\\mathbf{Y}_{Ic_2c_2}\\) are the point FRFs above and below the element, respectively, and \\(\\mathbf{Y}_{Ic_1c_2}=\\mathbf{Y}_{Ic_2c_1}^{\\rm T}\\) are the transfer FRFs from one side to the other.\n\n\nSummary\nIn summary, the passive characterisation of source, receiver and coupling components is achieved by the following FRF matrices, \\[\n \\mathbf{Y}_{S} = \\mathbf{Y}_{Scc}\\qquad \\mathbf{Y}_{R} = \\left[\\begin{array}{c c}\\mathbf{Y}_{Rcc} \\\\\n \\mathbf{Y}_{Rrc}\n \\end{array}\\right] \\qquad \\mbox{and} \\qquad \\mathbf{Y}_{I} = \\left[\\begin{array}{c c}\\mathbf{Y}_{Ic_1c_1} & \\mathbf{Y}_{Ic_1c_2}\\\\\n \\mathbf{Y}_{Ic_2c_1} & \\mathbf{Y}_{Ic_2c_2}\n \\end{array}\\right]\n\\] where \\(c\\) is set to include all point-like DoFs across the interface, \\(r\\) is a specific subset of DoFs that we wish to make predictions for, and 
\\(c_1\\) and \\(c_2\\) include all point-like DoFs above and below the coupling element, respectively.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Components (passive characterisation)"
]
},
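The rigid body transformation described in the section above can be sketched numerically. The following is a minimal NumPy sketch, not the site's own code: the sensor/excitation layout is hypothetical, and the pseudo-inverse construction of T_r and T_f is one common way to realise the transformation (the full construction is deferred to Interfaces). For a locally rigid interface, each translational channel with unit direction e at offset r from the chosen point contributes a row [e, r x e] to an interpolation matrix R, and a synthetic "measured" FRF built from a known 6-DoF point FRF is recovered exactly by the transformation.

```python
import numpy as np

def rigid_body_row(position, direction):
    """Row of the rigid-body interpolation matrix for one translational
    channel: u = e . (v + w x r)  ->  [ e^T, (r x e)^T ]."""
    r = np.asarray(position, float)
    e = np.asarray(direction, float)
    return np.hstack([e, np.cross(r, e)])

# Hypothetical layout: three triaxial sensors (and matching excitation
# triplets) at non-collinear positions around the connection point.
positions = [(0.05, 0.0, 0.0), (0.0, 0.05, 0.0), (0.0, 0.0, 0.05)]
R = np.array([rigid_body_row(p, e) for p in positions for e in np.eye(3)])

R_u = R_f = R                 # (9, 6): same layout for responses and forces
T_r = np.linalg.pinv(R_u)     # (6, 9) response transformation
T_f = np.linalg.pinv(R_f)     # (6, 9) force transformation

# Synthetic 9x9 'measured' FRF of a locally rigid interface
rng = np.random.default_rng(0)
Y_vp_true = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
Y_meas = R_u @ Y_vp_true @ R_f.T

# Transformed 6-DoF point FRF: Y_vp = T_r @ Y_meas @ T_f^T
Y_vp = T_r @ Y_meas @ T_f.T
print(np.allclose(Y_vp, Y_vp_true))   # True: the point-like DoFs are recovered
```

Three triaxial sensors at non-collinear positions give R full column rank (6), which is why the pseudo-inverses recover the full set of translations and rotations.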
{
"objectID": "introductions/VAVPs/components/components.html#boundary-conditions",
"href": "introductions/VAVPs/components/components.html#boundary-conditions",
"title": "Components (passive characterisation)",
"section": "Boundary conditions",
"text": "Boundary conditions\nCentral to the VAVP framework is notion of an independent characteristion of each component by its (multi-DoF) FRF matrix. In the above we introduced general concept of multi-DoF FRFs, discussed what DoFs are required for each component type, but as yet we have made no comment on how to ensure that these FRF matrices are independent, in partiuclar when obtained by experiment. This a comes down to boundary conditions.\nTo ensure that a measured FRF matrix is an invariant property it should be obtained with the component in ‘isolation’. To achieve this, the preferred test arrangement is with a ‘free-interface’ condition under either a grounded support or a free boundary condition, as discussed below and illustrated in. For this latter case, we are particularly interested in the design of an appropriate resilient support system and, where necessary, a method for removing its effect from measured FRFs.\n\n\n\n\n\n\n\n\nGrounded support.\n\n\n\n\n\n\n\nFree interface.\n\n\n\n\n\n\nFigure 4: Illustration of the free and grounded supports for independent characterisation of source and recevier-like components\n\n\n\n\nGrounded support\nReceiver-like components are often such that they will naturally be supported by the ground, for example the body of a vehicle or the fabric of a building. In this case, to obtain an independent FRF, it is sufficient to ensure that the component is supported in a manner representative of its intended application, and that the interface to which any components will be attached is left free. In a sense, we are simply considering the grounded support to be part of the component.\n\n\nFree suspension\nFor components that are to be entirely supported by other components (for example, most vibration sources), a ‘free’ boundary condition is required to obtain an indeopendent FRF. This clearly presents some practical issues and only an approximately free boundary can be achieved in practice. 
This is typically done by suspending the component from resilient cabling (e.g. elastic bungees) or by mounting it on a sufficiently resilient support. This support system should be designed such that its impedance is as small as possible (while providing adequate structural support for the component) so that the measured coupled FRF approximates that of the free component.\nThe added support will ultimately lead to low frequency mass-spring-like resonances where the entire component bounces around as a rigid body. A suitable support scheme should see these resonant frequencies positioned well below the lowest frequency of interest. Some practical guidance on achieving this is given in ISO 9611 and ISO 7626.\n\n\nIn situ\nIn some cases, a resilient support is simply not possible as the component must be mounted to a more supportive structure to operate, such as a test bench. In this scenario one must resort to a more advanced indirect method of obtaining the component’s free-interface FRF; this involves mathematically decoupling the component from a known test bench. This sort of decoupling procedure is fraught with difficulty, and should generally be avoided if possible. 
A similar problem arises when considering the characterisation of coupling elements, as described below.\nTo measure the free-interface FRF matrix of a coupling element one would have to freely suspend the element (say from elastic bungees), instrument its two interfaces and then apply the necessary force excitations… There are a number of practical issues with this:\n\ncoupling elements are generally quite lightweight, and so to have a minimal effect on the measured FRFs the suspension used would have to be incredibly compliant;\nthere is usually limited space to instrument and subsequently excite either end; even if this were possible, the compliant suspension would lead to very large rigid body motions;\ncoupling elements are often sensitive to pre-load and installation effects, which are not captured when freely suspended.\n\nClearly, we are unable to characterise coupling elements by the same methods as source and receiver components, i.e. by their free interface FRF matrices. Instead, measurements must be made with the element in situ, i.e. installed in some assembly. This installation provides not only a means of instrumenting and exciting the element, but also the necessary pre-load. The challenge then becomes how to extract independent component data from measurements made on an assembly. For coupling elements, this issue will be dealt with in detail in Vibration isolators.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Components (passive characterisation)"
]
},
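The free-suspension guidance above (rigid-body resonances well below the lowest frequency of interest) amounts to a simple mass-spring calculation. A minimal sketch, where the mass, stiffness and factor-of-5 margin are hypothetical illustration values rather than figures from ISO 9611 or ISO 7626:

```python
import math

def bounce_frequency_hz(total_stiffness, mass):
    """Rigid-body 'bounce' natural frequency (Hz) of a component of given
    mass [kg] on a suspension of given total vertical stiffness [N/m]."""
    return math.sqrt(total_stiffness / mass) / (2.0 * math.pi)

# Hypothetical example: a 12 kg source on bungees with a combined
# vertical stiffness of 2 kN/m.
f_bounce = bounce_frequency_hz(2e3, 12.0)
print(round(f_bounce, 2))        # 2.05 (Hz)

# Illustrative rule-of-thumb check: suspension modes at least a factor
# of ~5 below the lowest analysis frequency (here taken as 20 Hz).
print(f_bounce < 20.0 / 5.0)     # True
```

In practice all six rigid-body modes (bounce, rocking, etc.) would be checked, not just the vertical one sketched here.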
{
"objectID": "introductions/VAVPs/components/components.html#what-next",
"href": "introductions/VAVPs/components/components.html#what-next",
"title": "Components (passive characterisation)",
"section": "What next?",
"text": "What next?\nHaving introduced multi-DoF FRFs and discussed which DoFs are necessary for a component’s characterisation, we now have the means to independently characterise the passive properties of source and recevier-like structures. Coupling elements like vibration isolators require an alternative approach, which will be described shortly in Vibration isolators.. Once we have dealt with coupling elements, we will have everything we need to build a coupled assembly model by means of substructuring.\nHowever, before we get onto isolators we need to take a very important detour to discuss Interface representations. How we describe an interfaces in terms of a finite number of DoFs (e.g. point-like translations and rotations), and how we obtain these DoF from measurement, is central to any component-level characterisation, both passive and active. On the next page we will introduce the general notion of interface transformations, and the special case of the Virtual Point transformation.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Components (passive characterisation)"
]
},
{
"objectID": "introductions/VAVPs/substructuring/substructuring.html",
"href": "introductions/VAVPs/substructuring/substructuring.html",
"title": "Substructuring",
"section": "",
"text": "On this page we introduce substructuring; the process of mathematically coupling individual component models to build a model of an assembled system. Substructuring is a very broad subject, and can be performed in of the typical domains (physical, state space, modal, time and frequency). In Components we justified our choice of the frequency domain for the VAVP framework, and so representation of components by multi-DoF FRF matrices. For this reason, we will focus exclusively on frequency-based substructuring. Even then, there are various ways one can formulate the substructuring equations. We will breifly introduce the direct method before focusing on the primal and dual formulations, the latter of which tends to be the most commonly adopted in the literature.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Substructuring"
]
},
{
"objectID": "introductions/VAVPs/substructuring/substructuring.html#coupling-conditions",
"href": "introductions/VAVPs/substructuring/substructuring.html#coupling-conditions",
"title": "Substructuring",
"section": "Coupling conditions",
"text": "Coupling conditions\nThe rigid coupling of two components requires satisfying two conditions: equilibrium and continuity.\n\n\n\n\n\n\n\nFigure 1: Conditions of continuity and equilibrium.\n\n\n\n\nThe condition of equilibrium is simply a restatement of Newton’s 3rd law - when two objects are in contact they exert equal and opposite forces on one another. For two sub-structures in rigid contact at a single point it may be expressed in the form, \\[\ng_1 = -g_2\n\\] where \\(g_1\\) and \\(g_2\\) represent the interface coupling forces at the connecting DoFs of the two sub-structures.\nThe condition of continuity (also known as compatibility) states that the sub-structural DoFs in rigid contact must move in unison, i.e. with equal velocity (also displacement and acceleration), and may be expressed in the form, \\[\nv_1 = v_2 = v_C\n\\] where \\(v_1\\) and \\(v_2\\) represent the contact velocities of the two sub-structures, and \\(v_C\\) the velocity of the coupled structure.\nIn the presence of several sub-structures and/or connecting DoFs, the above conditions can be expressed more generally in vector form as, \\[\n\\mathbf{g}_1 = -\\mathbf{g}_2\n\\tag{1}\\] and \\[\n\\mathbf{v}_1 = \\mathbf{v}_2 = \\mathbf{v}_C\n\\tag{2}\\] where Equation 1 and Equation 2 are now generalised in the sense that they can also include moment forces and angular responses, respectively.\nDynamic substructuring, the process of assembling substructural properties to form an assembly, amounts to the application of the above conditions to the connecting interfaces between each substructure (or equivalently, component). Depending on how these substructures are represented (i.e. in the physical domain, modal domain, frequency domain, or state-space) the sub-structuring formulation may differ, though the underlying physics remains the same.\nIn this book we are principally focused on frequency domain methods, and so we will consider sub-structuring methods. 
That said, the sub-structuring methods presented below are not limited to experimental FRFs. FRF matrices determined by analytical or numerical means can also be used to represent individual sub-structures. In fact, the ability to combine experimental and numerical sub-structures to form `hybrid assemblies’ is one of the principal advantages of the component-based approach to simulation.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Substructuring"
]
},
{
"objectID": "introductions/VAVPs/substructuring/substructuring.html#direct-method-impedance-coupling",
"href": "introductions/VAVPs/substructuring/substructuring.html#direct-method-impedance-coupling",
"title": "Substructuring",
"section": "Direct method (impedance coupling)",
"text": "Direct method (impedance coupling)\nThe direct formulation considers the special case in which just two substructures are being coupled, typically a source \\(S\\) and receiver \\(R\\) as in Figure 2. Each substructure is characterised independently by their free interface mobility matrices, \\(\\mathbf{Y}_{Scc}\\in\\mathbb{C}^{N_c\\times N_c}\\) and \\(\\mathbf{Y}_{Rcc}\\in\\mathbb{C}^{N_c\\times N_c}\\). If remote DoFs are considered on the receiver structure the transfer mobility \\(\\mathbf{Y}_{Rrc}\\in\\mathbb{C}^{N_r\\times N_c}\\) (or the vibro-acoustic FRF \\(\\mathbf{H}_{Rrc}\\in\\mathbb{C}^{N_r\\times N_c}\\), if the sound pressure response is of interest) is also necessary. Our treatment of the direct method will be brief as the primal and dual methods provide far more flexible substructuring frameworks. Indeed, our focus here will simply be to prove the impedance coupling relation used in Isolators for the in situ characterisation of coupling element, i.e. \\[\n\\mathbf{Z}_{Ccc} = \\mathbf{Z}_{Acc} + \\mathbf{Z}_{Icc} + \\mathbf{Z}_{Bcc}\n\\tag{3}\\] which states that when two structures are in rigid contact, their impedance matrices sum together.\n\n\n\n\n\n\nFigure 2: Illustration of example source-receiver (\\(SR\\)) assembyly (shown uncoupled). 
Interface DoFs are denoted \\(c\\) and remote points \\(r\\).\n\n\n\nTo derive Equation 3 we begin with the interface responses of the (uncoupled) substructures \\(S\\) and \\(R\\) due to an interface forcing, \\[\n\\mathbf{v}_{Sc} = \\mathbf{Y}_{Scc} \\left(\\mathbf{f}_{Sc}+\\mathbf{g}_{Sc}\\right), \\quad \\mathbf{v}_{Rc} = \\mathbf{Y}_{Rcc} \\left(\\mathbf{f}_{Rc}+\\mathbf{g}_{Rc}\\right)\n\\] where \\(\\mathbf{f}\\) represents an externally applied force, and \\(\\mathbf{g}\\) a set of interface (coupling) forces.\nWe assume the inverse matrices \\(\\mathbf{Y}_{Scc}^{-1}\\) and \\(\\mathbf{Y}_{Rcc}^{-1}\\) exist, and rearrange the above equations to isolate the interface forces, \\[\n\\mathbf{Y}_{Scc}^{-1}\\mathbf{v}_{Sc} - \\mathbf{f}_{Sc} = \\mathbf{g}_{Sc}, \\quad \\mathbf{Y}_{Rcc}^{-1}\\mathbf{v}_{Rc} - \\mathbf{f}_{Rc} = \\mathbf{g}_{Rc}.\n\\] Equilibrium and continuity are then enforced as per Equation 1 and Equation 2, \\[\n\\mathbf{Y}_{Scc}^{-1}\\mathbf{v}_{c} - \\mathbf{f}_{Sc} = -\\mathbf{Y}_{Rcc}^{-1}\\mathbf{v}_{c} + \\mathbf{f}_{Rc}\n\\] where it is noted that the coupled response \\(\\mathbf{v}_{c}\\) has been substituted for both the source and receiver velocities.\nIsolating the external force terms and introducing the coupled external force \\(\\mathbf{f}_{c} = \\mathbf{f}_{Rc} + \\mathbf{f}_{Sc}\\) as the sum of substructure external forces yields, \\[\n\\left(\\mathbf{Y}_{Scc}^{-1}+\\mathbf{Y}_{Rcc}^{-1}\\right)\\mathbf{v}_{c} = \\mathbf{f}_{c}.\n\\] Finally, pre-multiplying both sides by the inverse of the bracketed term (assuming it exists), we obtain an expression for the coupled velocity, \\[\n\\mathbf{v}_{c} = \\left(\\mathbf{Y}_{Scc}^{-1}+\\mathbf{Y}_{Rcc}^{-1}\\right)^{-1} \\mathbf{f}_{c}.\n\\tag{4}\\] Equation 4 relates an external force applied to the interface DoFs \\(c\\) to the coupled velocity at said DoFs. 
Hence, the bracketed term must represent the point mobility matrix of the coupled assembly’s interface,\n\\[\n\\mathbf{Y}_{Ccc} = \\left(\\mathbf{Y}_{Scc}^{-1}+\\mathbf{Y}_{Rcc}^{-1}\\right)^{-1}.\n\\tag{5}\\] Recalling that the inverse of mobility is impedance, from Equation 5 we see that the coupled impedance may be expressed simply as the sum of sub-structure impedances, \\[\n\\mathbf{Z}_{Ccc} = \\mathbf{Z}_{Scc}+\\mathbf{Z}_{Rcc}.\n\\] This is a well known but important result; when two structural components are in rigid contact, the impedance at their connection point is equal to the matrix sum of their respective impedances. This result was used in Isolators to justify the in situ characterisation of resilient elements, but at the time was not proven.\nAlthough perfectly admissible, Equation 5 can be written in a more convenient (at least computationally) form using the mathematical relation, \\[\\begin{align}\n\\mathbf{Y}_{Scc}^{-1}+\\mathbf{Y}_{Rcc}^{-1} = \\mathbf{Y}_{Rcc}^{-1}\\left[\\mathbf{Y}_{Scc}+\\mathbf{Y}_{Rcc}\\right]\\mathbf{Y}_{Scc}^{-1}.\n\\end{align}\\] Substitution into Equation 5 then yields, \\[\\begin{align}\n\\mathbf{Y}_{Ccc} =\\mathbf{Y}_{Scc}\\left[\\mathbf{Y}_{Scc}+\\mathbf{Y}_{Rcc}\\right]^{{-1}}\\mathbf{Y}_{Rcc}.\n\\end{align}\\] In this modified form, the double matrix inversion is avoided. Matrix inversions are not only computationally expensive, but are known to cause an ‘amplification’ of errors and thus increase overall levels of uncertainty.\nSimilar direct relations can be derived for the transfer FRF \\(\\mathbf{Y}_{Crc}\\) for both rigidly and resiliently coupled source components. However, this direct approach quickly becomes inconvenient as soon as any more components are introduced. 
Hence, we move on to discuss the so-called three field formulation and the resulting primal and dual formulations.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Substructuring"
]
},
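The equivalence of the double-inversion and single-inversion forms of the direct coupling result can be checked numerically. A minimal NumPy sketch, using well-conditioned synthetic complex matrices as stand-ins for measured source and receiver mobilities (the diagonal shift is purely to guarantee invertibility):

```python
import numpy as np

rng = np.random.default_rng(1)

def synthetic_mobility(n):
    """Stand-in for a measured n x n complex mobility matrix; the
    diagonal shift keeps it comfortably invertible."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A + n * np.eye(n)

n = 6                                   # one 6-DoF interface
Y_S, Y_R = synthetic_mobility(n), synthetic_mobility(n)

# Double-inversion form: Y_C = (Y_S^-1 + Y_R^-1)^-1
Y_C_double = np.linalg.inv(np.linalg.inv(Y_S) + np.linalg.inv(Y_R))

# Single-inversion form: Y_C = Y_S (Y_S + Y_R)^-1 Y_R
# (np.linalg.solve avoids forming the inverse explicitly)
Y_C_single = Y_S @ np.linalg.solve(Y_S + Y_R, Y_R)

print(np.allclose(Y_C_double, Y_C_single))   # True: the two forms agree
```

With noisy measured FRFs the two forms are no longer numerically identical, which is precisely why the single-inversion form is preferred.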
{
"objectID": "introductions/VAVPs/substructuring/substructuring.html#three-field-formulation",
"href": "introductions/VAVPs/substructuring/substructuring.html#three-field-formulation",
"title": "Substructuring",
"section": "Three field formulation",
"text": "Three field formulation\nWe begin by considering the equations of motion of \\(P\\) sub-structures in a block diagonal form, as given by, \\[\n\\mathbf{Z}\\mathbf{v} = \\mathbf{f} + \\mathbf{g}\n\\tag{6}\\] where, \\(\\mathbf{Z}\\) is the block diagonal impedance matrix of the \\(P\\) uncoupled sub-structures and \\(\\mathbf{v}\\), \\(\\mathbf{f}\\) and \\(\\mathbf{g}\\) are the corresponding partitioned column vectors of velocities, applied forces, and coupling interface forces, respectively. \\[\n\\mathbf{Z} = \\left[\\begin{array}{c c c c}\n\\mathbf{Z}^{(1)} & & &\\\\\n& \\mathbf{Z}^{(2)} & &\\\\\n& & \\mathbf{\\ddots} &\\\\\n& & & \\mathbf{Z}^{(P)}\n\\end{array}\\right]\n\\]\n\\[\n\\mathbf{v} = \\left(\\begin{array}{c}\n\\mathbf{v}^{(1)} \\\\\n\\mathbf{v}^{(2)} \\\\\n\\vdots \\\\\n\\mathbf{v}^{(P)} \n\\end{array}\\right),\\quad \\mathbf{f} = \\left(\\begin{array}{c}\n\\mathbf{f}^{(1)} \\\\\n\\mathbf{f}^{(2)} \\\\\n\\vdots \\\\\n\\mathbf{f}^{(P)} \n\\end{array}\\right),\n\\quad \\mathbf{g} = \\left(\\begin{array}{c}\n\\mathbf{g}^{(1)} \\\\\n\\mathbf{g}^{(2)} \\\\\n\\vdots \\\\\n\\mathbf{g}^{(P)} \n\\end{array}\\right).\n\\] Note that the vibro-acoustic FRF matrix \\(\\mathbf{H}\\) may be included straightforwardly by augmenting the appropriate impedance matrix.\n\n\n\n\n\n\n\n\nUncoupled assembly.\n\n\n\n\n\n\n\nCoupled assembly\n\n\n\n\n\n\nFigure 3: Substructuring example with remote source (\\(a\\)) and receiver (\\(r\\)) DoFs. Coupling occurs at the interface DoFs, labelled \\(1, \\, 2, \\, 3\\) and \\(4\\).\n\n\n\nAs in the direct formulation, to assemble the individual substructures we must enforce the conditions of equilibrium and continuity. 
To do so we express Equation 1 and Equation 2 more generally in the form, \\[\n\\mathbf{B}\\mathbf{v} = \\mathbf{0},\n\\tag{7}\\] and \\[\n\\mathbf{L}^{\\rm T}\\mathbf{g} = \\mathbf{0},\n\\tag{8}\\] respectively, where \\(\\mathbf{B}\\) and \\(\\mathbf{L}\\) represent signed (containing only 0s, 1s and -1s) and unsigned (containing only 0s and 1s) Boolean matrices, respectively. Together, Equation 6 - Equation 8 are referred to as the three field formulation. They may be solved in a primal or dual manner, depending on whether Equation 7 or Equation 8 is satisfied a priori.\nFor the demonstrative example in Figure 3, \\(\\mathbf{B}\\) and \\(\\mathbf{L}^{\\rm T}\\) are shown in Figure 4.\n\n\n\n\n\n\n\n\n\n\n\n\n\nFigure 4: Boolean coupling matrices for example problem in Figure 3.\n\n\n\nNote that for both \\(\\mathbf{B}\\) and \\(\\mathbf{L}^{\\rm T}\\) the matrix columns correspond to the uncoupled DoFs of the assembly. For \\(\\mathbf{B}\\), the rows correspond to the interface DoFs where coupling occurs, and for \\(\\mathbf{L}^{\\rm T}\\) the coupled DoFs of the assembly. Their substitution into Equation 7 and Equation 8 yields, \\[\n\\left[\\begin{array}{c c c c c c}\n0 & 1 & 0 & -1 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & -1 & 0 \\\\\n\\end{array}\\right] \\left(\\begin{array}{c}\nv_a\\\\v_{1}\\\\v_{2} \\\\ v_{3} \\\\v_{4} \\\\ v_b\n\\end{array}\\right) = \\left(\\begin{array}{c}\n0 \\\\ 0\n\\end{array}\\right)\n\\] and \\[\n\\left[\\begin{array}{c c c c c c}\n1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1\n\\end{array}\\right]\\left(\\begin{array}{c}\n0\\\\ g_{1}\\\\g_{2} \\\\ g_{3} \\\\g_{4} \\\\ 0\n\\end{array}\\right) = \\left(\\begin{array}{c}\n0\\\\0 \\\\ 0 \\\\0\n\\end{array}\\right)\n\\] which clearly satisfy the required constraints for rigid coupling.\nFrom the above example we can see that to construct a signed Boolean matrix \\(\\mathbf{B}\\), for each connection (i.e. 
row of \\(\\mathbf{B}\\)), we assign a 1 and a -1 to the columns corresponding to the DoFs to be coupled. The size of \\(\\mathbf{B}\\) is such that the number of rows is equal to the number of connection points, and the number of columns to the number of uncoupled DoFs.\nTo construct the unsigned Boolean matrix \\(\\mathbf{L}^{\\rm T}\\), for each coupled interface DoF, a pair of 1s is placed in the columns corresponding to the uncoupled DoFs being coupled. If a DoF is to be left uncoupled, a single 1 is placed in the corresponding column. The size of \\(\\mathbf{L}^{\\rm T}\\) is such that the number of rows is equal to the number of coupled DoFs, and the number of columns to the number of uncoupled DoFs.\n\nPrimal\nThe primal formulation begins by defining a unique set of DoFs \\(\\mathbf{v}_C\\) representing the coupled assembly. Note that the number of coupled DoFs is always less than the number of uncoupled DoFs; for every pair of DoFs that we couple, the total number of coupled DoFs is reduced by one, see for example Figure 3. As per the condition of continuity, the coupled velocity \\(\\mathbf{v}_C\\) can be related to the uncoupled velocity \\(\\mathbf{v}\\) by an appropriate unsigned Boolean matrix, \\[\n\\mathbf{v} = \\mathbf{L}\\mathbf{v}_C.\n\\tag{9}\\] The condition of continuity is then given by, \\[\n\\mathbf{B} \\mathbf{L}\\mathbf{v}_C = \\mathbf{0}.\n\\] Note that the above is satisfied irrespective of \\(\\mathbf{v}_C\\), hence \\(\\mathbf{B}\\) and \\(\\mathbf{L}\\) represent the null space of one another. 
This can provide a convenient way of obtaining one Boolean matrix from the other.\nSubstituting Equation 9 into Equation 6 (representing the uncoupled equation of motion) enforces continuity and yields, \\[\n\\mathbf{Z}\\mathbf{L}\\mathbf{v}_C = \\mathbf{f} + \\mathbf{g}.\n\\tag{10}\\] Pre-multiplying both sides of Equation 10 by \\(\\mathbf{L}^{\\rm T}\\) we obtain, \\[\n\\mathbf{L}^{\\rm T}\\mathbf{Z}\\mathbf{L}\\mathbf{v}_C = \\mathbf{L}^{\\rm T}\\mathbf{f} + \\mathbf{L}^{\\rm T}\\mathbf{g}.\n\\tag{11}\\] Noting Equation 8 it is clear that the condition of equilibrium has now also been satisfied. Pre-multiplying both sides of Equation 11 by the inverse of the leftmost bracketed term yields, \\[\n\\mathbf{v}_C = \\left(\\mathbf{L}^{\\rm T}\\mathbf{Z}\\mathbf{L}\\right)^{-1}\\mathbf{f}_C\n\\tag{12}\\] where \\(\\mathbf{f}_C = \\mathbf{L}^{T} \\mathbf{f}\\) represents the external force applied to the coupled assembly, i.e. the sum of external forces acting on each sub-structure. Finally, substituting the block diagonal impedance matrix for the inverse mobility matrix, \\[\n\\mathbf{Z} = \\mathbf{Y}^{-1}, \\quad \\quad \\left[\\begin{array}{c c c c}\n\\mathbf{Z}^{(1)} & & &\\\\\n& \\mathbf{Z}^{(2)} & &\\\\\n& & \\mathbf{\\ddots} &\\\\\n& & & \\mathbf{Z}^{(P)}\n\\end{array}\\right] = \\left[\\begin{array}{c c c c}\n\\mathbf{Y}^{(1)} & & &\\\\\n& \\mathbf{Y}^{(2)} & &\\\\\n& & \\mathbf{\\ddots} &\\\\\n& & & \\mathbf{Y}^{(P)}\n\\end{array}\\right]^{-1}\n\\] where \\(\\mathbf{Y}^{(P)}\\) is the free mobility of the \\(P\\)th component, we arrive at, \\[\n\\mathbf{Y}_{C} = \\left(\\mathbf{L}^{\\rm T}\\mathbf{Y}^{-1}\\mathbf{L}\\right)^{-1}.\n\\tag{13}\\] where \\(\\mathbf{Y}_{C}\\) is the mobility of the coupled assembly.\nEquation 13 provides a general framework for determining the FRF matrix (in its entirety) of a coupled assembly. It can be interpreted as a generalisation of the impedance summation obtained by the direct method, as in Equation 5. 
In an experimental context Equation 13 has the downside of requiring several matrix inversions (each block matrix of \\(\\mathbf{Y}\\) must be inverted in full). These are not only computationally expensive, but prone to error amplification. For this reason, the primal formulation tends to be considered less reliable than its dual counterpart (discussed below). Nevertheless, for relatively small systems of equations, negligible differences are observed.\n\n\nDual\nThe dual formulation begins by satisfying the condition of equilibrium a priori. This is done by defining the coupling force \\(\\mathbf{g}\\) in the form, \\[\n\\mathbf{g} = -\\mathbf{B}^{\\rm T}\\mathbf{\\lambda}\n\\tag{14}\\] where \\(\\mathbf{B}\\) is the same signed Boolean matrix introduced earlier, and \\(\\lambda\\) a Lagrange multiplier representing physically the unknown coupling interface force intensities. \\(\\lambda\\) is termed a force intensity as it possesses no directional information. This is allocated by the Boolean matrix \\(\\mathbf{B}\\).\nThe signed nature of \\(\\mathbf{B}\\) ensures collocated force intensities are equal and opposite in direction. 
The condition of equilibrium is then given by, \\[\n\\mathbf{L}^{\\rm T}\\mathbf{g} = -\\mathbf{L}^{\\rm T}\\mathbf{B}^{\\rm T}\\mathbf{\\lambda} = \\mathbf{0}.\n\\] Recalling that \\(\\mathbf{L}^{\\rm T}\\), the Boolean matrix satisfying equilibrium as per Equation 8, is in the null space of \\(\\mathbf{B}^{\\rm T}\\), the condition of equilibrium is always satisfied.\nSubstituting Equation 14 into Equation 6 the equations of motion become, \\[\n\\mathbf{v} = \\mathbf{Y}\\left(\\mathbf{f} -\\mathbf{B}^{\\rm T}\\lambda \\right)\n\\tag{15}\\] where \\(\\mathbf{Y}\\) again represents the block diagonal matrix of component free mobilities, \\[\n\\mathbf{Y} = \\left[\\begin{array}{c c c c}\n\\mathbf{Y}^{(1)} & & &\\\\\n& \\mathbf{Y}^{(2)} & &\\\\\n& & \\mathbf{\\ddots} &\\\\\n& & & \\mathbf{Y}^{(P)}\n\\end{array}\\right].\n\\]\nTo determine the coupled assembly response, we must first find \\(\\lambda\\). To do so we begin by substituting Equation 15 into the continuity condition (see Equation 2), \\[\n\\mathbf{B}\\mathbf{Y}\\left(\\mathbf{f} -\\mathbf{B}^{\\rm T}\\lambda \\right) =\\mathbf{B}\\mathbf{Y}\\mathbf{f} - \\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\lambda = \\mathbf{0}.\n\\] In this sense, \\(\\lambda\\) can be interpreted as the force intensity required to ensure compatibility between connected DoFs. 
Rearranging the above to find \\(\\lambda\\), \\[\n\\lambda = \\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\right)^{-1}\\mathbf{B}\\mathbf{Y}\\mathbf{f}\n\\] and substituting this into Equation 15 yields the coupled response (since continuity has now been enforced), \\[\n\\mathbf{v}_C = \\mathbf{Y}\\left(\\mathbf{f} -\\mathbf{B}^{\\rm T}\\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\right)^{-1}\\mathbf{B}\\mathbf{Y}\\mathbf{f} \\right) = \\left(\\mathbf{Y} -\\mathbf{Y}\\mathbf{B}^{\\rm T}\\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\right)^{-1}\\mathbf{B}\\mathbf{Y} \\right)\\mathbf{f}.\n\\] From the above we can identify the coupled mobility \\(\\mathbf{Y}_C\\) as, \\[\n\\mathbf{Y}_C = \\mathbf{Y} -\\mathbf{Y}\\mathbf{B}^{\\rm T}\\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\right)^{-1}\\mathbf{B}\\mathbf{Y}. \n\\tag{16}\\]\nEquation 16 is a little less intuitive than Equation 13, so it is worth unpacking it a bit more. \\[\n\\mathbf{v}_C = \\underbrace{\\mathbf{Y}\\mathbf{f}}_{\\mathbf{v}}-\\overbrace{\\mathbf{Y}\\underbrace{\\mathbf{B}^{\\rm T}\\overbrace{\\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\right)^{-1}\\underbrace{\\mathbf{B}\\mathbf{Y}\\mathbf{f}}_{\\Delta \\mathbf{v}}}^{\\lambda}}_{\\mathbf{g}}}^{\\mathbf{v}_g}\n\\tag{17}\\]\nAs illustrated in Equation 17, the leftmost \\(\\mathbf{Y}\\mathbf{f}\\) term describes the response of each component to the applied force whilst uncoupled; being uncoupled, forces only generate responses within the component to which they are applied. The secondary term, \\(\\mathbf{Y}\\mathbf{B}^{\\rm T}\\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\right)^{-1}\\mathbf{B}\\mathbf{Y}\\mathbf{f}\\) describes the additional contribution \\(\\mathbf{v}_g\\) to this response that arises due to the coupling forces \\(\\mathbf{g}\\) generated by the connected components. It is through this term that forces applied to one component generate responses in others. 
Breaking down this second term in detail, starting from the right, \\(\\mathbf{Y}\\mathbf{f}\\) is again the free response of the uncoupled components due to the applied force \\(\\mathbf{f}\\), and so pre-multiplication by \\(\\mathbf{B}\\) yields the velocity between the coupling DoFs, \\(\\Delta \\mathbf{v}\\). The product \\(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\) when expanded simply yields the sum of the coupling interface FRFs, and so pre-multiplication by the inverse term \\(\\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\right)^{-1}\\) yields the coupling force intensity (see the expression for \\(\\lambda\\) above). This force is applied to the connecting components with equal magnitude but in opposite directions by \\(\\mathbf{B}^{\\rm T}\\), before being propagated through each component by \\(\\mathbf{Y}\\), whereupon its contribution is combined with that of the force applied directly to the component.\nLike Equation 13, Equation 16 provides a general method for determining the mobility matrix of a coupled assembly. Note that in contrast to the primal formulation, the dual formulation requires only a single matrix inversion. Furthermore, after pre- and post-multiplication by \\(\\mathbf{B}\\) (and \\(\\mathbf{B}^{\\rm T}\\)) the size of the required inversion is reduced to the number of interface DoFs alone. Hence, the dual formulation is a) computationally more efficient, and b) less prone to error amplification.\nIt is important to note that in deriving Equation 16 we did not define a unique set of assembly DoFs. Consequently, in the dual formulation \\(\\mathbf{Y}_C\\) retains all uncoupled DoFs (interface DoFs are present twice). 
Care should be taken to remove/avoid these duplicate rows and columns if necessary.\n\n\nIncluding resilient coupling\nThe inclusion of resilient elements, such as vibration isolators, in a substructuring procedure is generally straightforward; the coupling element impedance matrix \\(\\mathbf{Z}_I\\) is simply included in the primal formulation as an individual substructure, or for the dual formulation its mobility matrix \\(\\mathbf{Y}_I\\). However, in the case of ideal springs, the coupling element’s mobility matrix \\(\\mathbf{Y}_I\\) does not exist and so an alternative approach is required as described in the following.\nLet us consider the simplest resilient coupling, that of an ideal spring-damper. The impedance matrix of such an element is given by, \\[\n\\mathbf{Z}_I = \\left[\\begin{array}{c c}\n\\frac{K}{i\\omega}+C & -(\\frac{K}{i\\omega}+ C) \\\\-(\\frac{K}{i\\omega}+ C) & \\frac{K}{i\\omega}+ C\n\\end{array}\\right]\n\\] where \\(K\\) is the stiffness of the spring, and \\(C\\) is damping (assumed here to be of viscous type). An element of this form can be substituted directly into the primal formulation as the equilibrium and continuity constraints are applied to the impedance matrices of each sub-structure (see Equation 12). In contrast, the dual formulation requires the mobility matrix of each element (see Equation 16), and so the coupling impedance must be inverted. Note however, that \\(\\mathbf{Z}_I\\) is singular. Its columns are linearly related by a factor of -1. Physically, this is because a force applied to an unconstrained, mass-less system causes an infinite response. Hence, an element impedance of this form is not compatible with the dual formulation as described above.\nThere are a number of ways to get around this apparent issue, two of which we will discuss here. The simplest approach is to add a small ‘virtual mass’ \\(M\\) to either end of the ideal spring. 
By doing so the element’s impedance becomes, \\[\n\\mathbf{Z}_I = \\left[\\begin{array}{c c}\n\\frac{K}{i\\omega}+C +i\\omega M& -(\\frac{K}{i\\omega}+ C) \\\\-(\\frac{K}{i\\omega}+ C) & \\frac{K}{i\\omega}+C +i\\omega M\n\\end{array}\\right].\n\\] Note that \\(\\mathbf{Z}_I\\) is no longer singular. It can be inverted to obtain the mobility \\(\\mathbf{Y}_I\\) and used within the dual formulation, though care should be taken when adding this virtual mass. If it is made too large, noticeable effects will be introduced to the system’s dynamics. The mass should be chosen such that the local non-zero eigenfrequencies of the coupling are at least 10 times greater than the frequency range of interest. Alternatively, the effect of the additional mass can be cancelled by coupling a second negative virtual mass to the assembly. This way the virtual mass can be chosen arbitrarily.\nThe second approach is to consider the resilient coupling as a relaxation of the continuity condition. That is, the connecting DoFs are no longer constrained to have the same velocity; the presence of a resilient element allows for their relative movement. We can then redefine an appropriate continuity condition as, \\[\n\\mathbf{B}\\mathbf{v} =\\mathbf{\\delta} \\neq \\mathbf{0}\n\\] where \\(\\mathbf{\\delta}\\) represents the relative velocity of the two interface DoFs. In the presence of an ideal spring-like coupling, this relative velocity can be expressed in terms of the internal force generated by the spring-damper \\(\\lambda\\), \\[\n\\mathbf{B}\\mathbf{v} =\\mathbf{\\delta} = \\boldsymbol{\\Gamma}\\lambda\n\\tag{18}\\] where \\(\\boldsymbol{\\Gamma} = \\mbox{diag}\\left[\\frac{K_1}{i\\omega}+C_1,\\cdots, \\frac{K_N}{i\\omega}+C_N\\right]^{-1}\\) describes the inverse point impedance (i.e. mobility) of the \\(N\\) coupling elements. Note that the condition of equilibrium is unchanged. 
This is because of the massless assumption of the spring-damper coupling: the internal force is conserved across it, and so the coupling forces applied to the two connected components remain equal and opposite, \\(\\mathbf{g}_1 = -\\mathbf{g}_2\\). Substituting Equation 15 into Equation 18 we can rearrange to get \\(\\lambda\\), \\[\n\\lambda = \\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T} + \\boldsymbol{\\Gamma}\\right)^{-1}\\mathbf{B}\\mathbf{Y}\\mathbf{f}.\n\\tag{19}\\] Finally, substituting Equation 19 back into Equation 15, we arrive at, \\[\n\\mathbf{Y}_C = \\mathbf{Y} -\\mathbf{Y}\\mathbf{B}^{\\rm T}\\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T} + \\boldsymbol{\\Gamma}\\right)^{-1}\\mathbf{B}\\mathbf{Y}. \n\\tag{20}\\] Equation 20 describes the mobility of a resiliently (spring-damper-like) coupled assembly, where \\(\\boldsymbol{\\Gamma}\\) characterises the coupling element.\nNote that the now permitted relative motion between the two sides of the coupling means that the dual method no longer produces duplicate rows/columns.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Substructuring"
]
},
{
"objectID": "introductions/VAVPs/substructuring/substructuring.html#what-next",
"href": "introductions/VAVPs/substructuring/substructuring.html#what-next",
"title": "Substructuring",
"section": "What next?",
"text": "What next?\nWe now have everything we need to build a passive VAVP; a means of characterising passive components by their interface FRF matrices, and techniques to mathematically couple them to obtain assembly FRFs. Though, a passive VAVP isn't particularly useful… We want to be able to predict the response of an assembly whilst it is active (or operational). To do so, we need a method to characterise the active properties of vibration sources in such a way that is compatible with our FRF-based VAVP. To turn on a VAVP, we use the so-called blocked force.",
"crumbs": [
"A quick introduction to:",
"Vibro-Acoustic Virtual Prototyping",
"Substructuring"
]
},
{
"objectID": "introductions/summary.html",
"href": "introductions/summary.html",
"title": "A series of quick introductions",
"section": "",
"text": "On this page you will find a series of Introduction to pages covering a range of topics. These pages are intended to give the reader a gentle and relatively informal introduction to the topics presented. The content presented here is mostly pretty well established. More researchy stuff can be found here.\nAgain, the resources provided on this website are for educational and informational purposes only. While every effort has been made to ensure accuracy, the content may contain errors or omissions. The materials reflect my own personal views and experience and do not necessarily represent the views of any affiliated institutions or collaborators. Use of the information is at your own risk..!\n\n\n\n\n\n\nFigure 1: A classic meme, advocating the use of the blocked force over the contact force. Don’t get the meme? Check out the Blocked forces page for the details.",
"crumbs": [
"A quick introduction to:"
]
},
{
"objectID": "introductions/Measurement/Sampling_freq_domain/sampling_freq.html",
"href": "introductions/Measurement/Sampling_freq_domain/sampling_freq.html",
"title": "Signal sampling and frequency domain representation",
"section": "",
"text": "Something here?"
},
{
"objectID": "introductions/POV/TDOF/TDOF.html",
"href": "introductions/POV/TDOF/TDOF.html",
"title": "Two degree of freedom systems",
"section": "",
"text": "Before we move on to general \\(N\\) DoF systems, we will take an intermediate pit stop to consider the special case of the two DoF mass-spring system. Our treatment of the two DoF system won't be as detailed as our previous single DoF system, nor will it be as detailed as our subsequent \\(N\\) DoF system, but we include it here for two reasons: first, it will allow us to introduce and prove certain concepts (such as orthogonality of mode shapes) without resorting to tools from linear algebra and second, there are some notable two DoF systems that are quite interesting to study, specifically the tuned mass damper and the vented (bass reflex) loudspeaker.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Two degrees of freedom"
]
},
{
"objectID": "introductions/POV/TDOF/TDOF.html#synchronous-motion",
"href": "introductions/POV/TDOF/TDOF.html#synchronous-motion",
"title": "Two degree of freedom systems",
"section": "Synchronous motion",
"text": "Synchronous motion\nSince our system is linear, it is true that if \\(x_1\\) and \\(x_2\\) are a solution, then so are \\(\\alpha x_1\\) and \\(\\alpha x_2\\), i.e. a solution can only be determined up to a multiplicative constant (that is, without knowledge of any initial conditions). Though the individual solutions are not fixed, the ratio \\(x_1/x_2\\) is unique.\nWe are interested in a special type of solution to Equation 3 where \\(x_1\\) and \\(x_2\\) are synchronous, i.e. they move together in the same direction (though with different amplitudes). Such a motion requires that \\(x_1\\) and \\(x_2\\) must have the same time dependence, and that the ratio \\(x_1/x_2\\) must remain constant throughout motion; this describes the configuration or pattern of the response, i.e. how each mass is moving relative to the other.\nBy assuming synchronous motion, we are able to separate the time and positional parts of solution, i.e. represent our solution as the product of one variable that depends only on time and another that depends only on position, \\[\n x_1 = u_1 f(t) \\quad \\quad x_2 = u_2 f(t)\n\\tag{4}\\] where \\(u_1\\) and \\(u_2\\) are amplitude constants (positional part) and \\(f(t)\\) describes the shared time dependence. 
With the above representation, the velocity and acceleration take the form, \\[\n \\dot{x}_n = u_n \\dot{f}(t) \\quad \\mbox{and} \\quad \\ddot{x}_n = u_n \\ddot{f}(t)\n\\]\nUsing this separation of variables our EoMs become, \\[\n \\begin{align}\n 0& = {\\color{red} m_1{u}_1} \\ddot{f}(t)+ {\\color{red}\\left(k_{11} u_1 +k_{12} u_2\\right)} f(t)\\\\\n 0 &= {\\color{blue}m_2{u}_2} \\ddot{f}(t)+ {\\color{blue}\\left(k_{22} u_2+k_{21} u_1\\right)}f(t)\n \\end{align}\n\\] For the above equations to have a solution we require that, \\[\n \\frac{-\\ddot{f}(t)}{f(t)} = {\\color{red}\\frac{k_{11}u_1 + k_{12}u_2}{m_1u_1} } = {\\color{blue} \\frac{k_{22}u_2 + k_{21}u_1}{m_2u_2} } =\\lambda\n\\] Note that the left hand side depends on time, but the right hand side does not. Similarly, the RHS depends on position, but the LHS does not. The LHS and RHS can only be the same if they are equal to a constant, which we define as \\(\\lambda\\).\nFrom the above we conclude that synchronous motion is possible, provided that, \\[\n \\ddot{f}(t) + \\lambda f(t) = 0\n\\tag{5}\\] \\[\n (k_{11} - \\lambda m_1) u_1 + k_{12}u_2 = 0\n\\tag{6}\\] \\[\n (k_{22} - \\lambda m_2) u_2 + k_{21}u_1 = 0\n\\tag{7}\\] all have solutions.\nLet's start with Equation 5. Note that we have already seen an equation of this form in our discussion of SDoF systems, so we already know it has a solution of the form \\(f(t) = Ae^{st}\\), \\[\n s^2 Ae^{st}+ \\lambda Ae^{st}= 0\n\\] We divide through by \\(Ae^{st}\\) to obtain the characteristic equation, \\[\n s^2 + \\lambda = 0 \\quad \\rightarrow \\quad s_{1,2} = \\pm \\sqrt{-\\lambda}\n\\] The solution to Equation 5 then becomes, \\[\n f(t) = A_1 e^{\\sqrt{-\\lambda}t} + A_2 e^{-\\sqrt{-\\lambda}t}\n\\] where \\(A_1\\) and \\(A_2\\) are constants that depend on initial conditions. Clearly, if \\(\\lambda<0\\) then the exponents are real, and so one term grows to \\(\\infty\\) while the other decays to \\(0\\) as \\(t\\rightarrow \\infty\\). 
This is obviously against the notion of oscillatory motion. We therefore conclude that we must have \\(\\lambda >0\\). To guarantee this, we define \\(\\lambda=\\omega^2\\). Our solution then becomes, \\[\n f(t) = A_1 e^{i\\omega t} + A_2 e^{-i \\omega t}\n\\] or equivalently in \\(\\cos\\) form, \\[\n f(t) = |A| \\cos(\\omega t - \\phi)\n\\tag{8}\\]\nFrom the above we conclude that if the system is in synchronous motion, it is harmonic at frequency \\(\\omega\\)! Now let's deal with Equation 6 and Equation 7, \\[\n \\begin{align}\n (k_{11} - \\omega^2 m_1) u_1 + k_{12}u_2 &= 0 \\\\\n (k_{22} - \\omega^2 m_2) u_2 + k_{21}u_1 &= 0\n \\end{align}\n\\tag{9}\\] In Equation 9 we have two simultaneous equations with unknowns \\(u_1\\) and \\(u_2\\), with \\(\\omega^2\\) as a parameter. We want to find values of \\(\\omega^2\\) that give rise to non-trivial solutions \\(u_1\\) and \\(u_2\\). This is known as a characteristic-value or eigenvalue problem. In matrix/vector notation we write Equation 9 as, \\[\n \\left(\\left[\\begin{array}{c c}\n k_{11} & k_{12} \\\\ k_{21} & k_{22}\n \\end{array}\\right] -\\omega^2 \\left[\\begin{array}{c c}\n m_1 & 0 \\\\ 0 & m_2\n \\end{array}\\right] \\right) \\left(\\begin{array}{c}\n u_1 \\\\ u_2\n \\end{array}\\right) = \\left(\\begin{array}{c}\n 0 \\\\ 0\n \\end{array}\\right)\n\\tag{10}\\] For Equation 10 to have a non-trivial solution (i.e. \\(u_1\\neq0\\) and \\(u_2\\neq 0\\)), the determinant of \\(\\mathbf{K}-\\omega^2\\mathbf{M}\\) must be 0, \\(\\mbox{det}(\\mathbf{K}-\\omega^2\\mathbf{M}) = 0\\).\n\n\n\n\n\n\nWhy should \\(\\mbox{det}(\\mathbf{K}-\\omega^2\\mathbf{M}) = 0\\) ?\n\n\n\n\n\nRight, so we have just said that the non-trivial solutions of Equation 10 require \\(\\mbox{det}(\\mathbf{K}-\\omega^2\\mathbf{M}) = 0\\) to be true. A purely mathematical explanation of this fact would require some linear algebra concepts that we don’t have at our disposal. 
So instead, let's go for a more physically intuitive approach.\nWe start by noting that the inverse of a matrix can be written in the form, \\[\n \\mathbf{A}^{-1} = \\frac{\\mbox{adj}(\\mathbf{A})^{\\rm T}}{\\mbox{det}(\\mathbf{A})}\n\\tag{11}\\] where \\(\\mbox{adj}(\\mathbf{A})\\) is called the adjoint matrix and is of no real interest here, and \\(\\mbox{det}(\\mathbf{A})\\) is a scalar value called the determinant. For a matrix to be invertible, it is necessary that \\(\\mbox{det}(\\mathbf{A})\\neq 0\\), else the elements of the inverse matrix in Equation 11 go to infinity.\nNow, with the above in mind, let us briefly consider the forced response problem (we will cover this properly a bit later). Our EoM can be rearranged to obtain the response \\(\\mathbf{x}\\) in the form, \\[\n \\left(\\begin{array}{c}\n x_1 \\\\ x_2\n \\end{array}\\right) = \\left(\\left[\\begin{array}{c c}\n k_{11} & k_{12} \\\\ k_{21} & k_{22}\n \\end{array}\\right] -\\omega^2 \\left[\\begin{array}{c c}\n m_1 & 0 \\\\ 0 & m_2\n \\end{array}\\right] \\right)^{-1}\n \\left(\\begin{array}{c}\n f_1 \\\\ f_2\n \\end{array}\\right)\n\\] where we have pre-multiplied both sides by the inverse of the dynamic stiffness matrix.\nWe have seen from our SDOF analysis that an undamped system, when excited at resonance, has an unbounded response which goes to infinity. The same goes for our two DOF system. For the response \\(\\mathbf{x}\\) to go to infinity in this way, we must have that the determinant of the matrix goes to zero!\nHence, to find the frequencies of resonance, or natural frequencies, we can just solve the equation, \\[\n\\mbox{det}\\left(\\left[\\begin{array}{c c}\n k_{11} & k_{12} \\\\ k_{21} & k_{22}\n \\end{array}\\right] -\\omega_n^2 \\left[\\begin{array}{c c}\n m_1 & 0 \\\\ 0 & m_2\n \\end{array}\\right] \\right) = 0\n\\] For a two DOF system, there will be two values of \\(\\omega_n\\) that satisfy this equation. 
This equation is also termed the characteristic polynomial.\nIf you want a more geometrical interpretation of what it means for the determinant to be zero, check out this video from 3Blue1Brown:\n\n\n\n\nIn fact, whilst you're at it you should watch his entire linear algebra playlist, it's amazing!\n\n\n\nFor a \\(2\\times 2\\) matrix, the determinant is straightforward to write out in full, \\[\n \\mbox{det} \\left[\\begin{array}{c c}\n a & b \\\\ c & d\n \\end{array}\\right] = ad-cb\n\\tag{12}\\] Note that for larger matrices the determinant becomes quite a cumbersome equation…\nUsing Equation 12, the determinant of \\(\\mathbf{D}\\) can be written as, \\[\n \\mbox{det}(\\mathbf{D}) = m_1 m_2\\omega^4 - (m_1 k_{22} + m_2 k_{11})\\omega^2 + k_{11}k_{22} - k_{12}^2\n\\] Note that this is just a quadratic equation in \\(\\omega^2\\)! We can use the quadratic formula to solve for the two roots, \\[\n \\omega_{1,2}^2 = \\frac{1}{2} \\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} \\pm \\frac{1}{2}\\sqrt{\\left(\\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} \\right)^2 - 4 \\frac{k_{11}k_{22}-k_{12}^2}{m_1 m_2}}\n\\tag{13}\\] Since there are only two roots, this means there are only two possible modes, or configurations, in which synchronous motion is possible. One is at \\(\\omega_1\\) and the other at \\(\\omega_2\\). These are called the natural frequencies of the system.\nWe have now established at what frequencies the system exhibits synchronous motion. Next, we need to determine the constants \\(u_1\\) and \\(u_2\\). These depend on \\(\\omega_1\\) and \\(\\omega_2\\) and describe how the system oscillates at these frequencies.\nIt will be convenient for us to be able to differentiate between the configurations at \\(\\omega_1\\) and \\(\\omega_2\\). To do so, we denote \\(u_1\\) and \\(u_2\\) at frequency \\(\\omega_1\\) as \\(u_{11}\\) and \\(u_{21}\\). Similarly, we denote \\(u_1\\) and \\(u_2\\) at frequency \\(\\omega_2\\) as \\(u_{12}\\) and \\(u_{22}\\). 
In both cases the first sub-script denotes the mass, and the second the frequency.\nRecall that only the ratios of displacements are unique, i.e. \\(u_{21}/u_{11}\\) and \\(u_{22}/u_{12}\\). Using Equation 9 we can determine these ratios for each frequency, \\[\n \\frac{u_{2n}}{u_{1n}} = -\\frac{k_{11} - \\omega_n^2 m_1}{k_{12}} = \\frac{-k_{21}}{k_{22} - \\omega_n^2 m_2}\n\\tag{14}\\]\nThis ratio describes the shape of the motion at \\(\\omega_1\\) and \\(\\omega_2\\). If \\(u_{1n}\\) or \\(u_{2n}\\) is specified, then the other can be determined from the ratio.\nWe group together the amplitude constants at each frequency to form the so-called modal vectors, \\[\n \\mathbf{u}_1 =\\left( \\begin{array}{c}\n u_{11} \\\\ u_{21} \n \\end{array}\\right) \\quad \\quad \\mathbf{u}_2 =\\left( \\begin{array}{c}\n u_{12} \\\\ u_{22} \n \\end{array}\\right)\n\\] With the displacements being non-unique, modal vectors are usually normalised to render them unique. This can be done by setting the first value to \\(1\\), or by setting the norm to \\(1\\).\nRecalling from Equation 4 that \\(x_n= u_n f(t)\\), we can write the solution at \\(\\omega_n\\) as, \\[\n \\mathbf{x}_n(t) = \\left(\\begin{array}{c}\n x_1(t) \\\\ x_2(t)\n \\end{array}\\right)_n = \\left(\\begin{array}{c}\n u_{1n} \\\\ u_{2n}\n \\end{array}\\right)_n f_n(t) = \\mathbf{u}_n |A_n|\\cos (\\omega_n t-\\phi_n)\n\\] where we have used Equation 8 for the time varying part.\nWe will see shortly that the response of the system at any time can be obtained from the superposition of the two natural modes, \\[\n \\begin{align}\n \\mathbf{x}(t) &= \\mathbf{x}_1(t) + \\mathbf{x}_2(t) \\\\\n &=c_1 \\mathbf{u}_1 \\cos(\\omega_1 t - \\phi_1) + c_2 \\mathbf{u}_2 \\cos(\\omega_2 t - \\phi_2)\n \\end{align}\n\\] where the constants \\(c_1\\), \\(c_2\\), \\(\\phi_1\\) and \\(\\phi_2\\) are determined from initial conditions.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Two degrees of freedom"
]
},
{
"objectID": "introductions/POV/TDOF/TDOF.html#decoupling-eom",
"href": "introductions/POV/TDOF/TDOF.html#decoupling-eom",
"title": "Two degree of freedom systems",
"section": "Decoupling EoM",
"text": "Decoupling EoM\nIt turns out that the modal vectors \\(\\mathbf{u}_1\\) and \\(\\mathbf{u}_2\\) possess a unique, and very useful, property - orthogonality. As we will see shortly, this property will allow us to decouple our EoM, allowing us to solve them using the SDOF techniques we have already covered!\nSo what do we mean by ‘orthogonality’? Conventional orthogonality states that the inner product between two vectors is equal to zero, \\[\n\\mathbf{u}_2^{\\rm T}\\mathbf{u}_1 = 0\n\\] The orthogonality between modal vectors is slightly different. We say that the modal vectors are orthogonal with respect to the mass matrix. This property takes the form, \\[\n \\mathbf{u}_2^{\\rm T} \\mathbf{M} \\mathbf{u}_1 = 0\n\\tag{15}\\] We can prove this relation algebraically for our simple 2 DoF system, though it requires some laborious algebra! If you want to subject yourself to this, check out the box below.\n\n\n\n\n\n\nOrthogonality of mode shapes\n\n\n\n\n\nEquation 15 is crucially important in analysing multi-DoF systems. In what follows we will prove this algebraically for our 2 DoF system; it's a little tedious with lots of algebra, so feel free to skip! 
On the next page we will use some tricks from linear algebra to prove this for general MDOF systems.\nRecalling the displacement ratio from Equation 14, \\[\n \\frac{u_{2n}}{u_{1n}} = -\\frac{k_{11} - \\omega_n^2 m_1}{k_{12}}\n\\] we can write the normalised modal vectors as, \\[\n \\mathbf{u}_1 = u_{11} \\left(\\begin{array}{c}\n 1 \\\\ -\\frac{k_{11} - \\omega_1^2 m_1}{k_{12}}\n \\end{array}\\right) \\quad \\quad \\mathbf{u}_2 = u_{12} \\left(\\begin{array}{c}\n 1 \\\\ -\\frac{k_{11} - \\omega_2^2 m_1}{k_{12}}\n \\end{array}\\right)\n\\] Substituting these modal vectors into our orthogonality relation (Equation 15) and expanding terms we have, \\[\n \\begin{align}\n \\mathbf{u}_2^{\\rm T} \\mathbf{M} \\mathbf{u}_1 &= u_{11} u_{12} \\left(\\begin{array}{c c}\n 1 & -\\frac{k_{11} - \\omega_2^2 m_1}{k_{12}}\n \\end{array}\\right) \\left[\\begin{array}{c c}\n m_1 & 0 \\\\ 0 &m_2\n \\end{array}\\right]\n \\left(\\begin{array}{c}\n 1 \\\\ -\\frac{k_{11} - \\omega_1^2 m_1}{k_{12}}\n \\end{array}\\right) \\\\\n &= u_{11} u_{12} \\left[m_1 + \\frac{m_2}{k_{12}^2}\\left(k_{11}-\\omega_2^2 m_1\\right)\\left(k_{11}-\\omega_1^2 m_1\\right) \\right]\\\\\n &= u_{11} u_{12} \\left[m_1 + \\frac{m_2}{k_{12}^2}\\left(k_{11}k_{11} -k_{11}\\omega_2^2 m_1-k_{11}\\omega_1^2 m_1 + m_1^2\\omega_1^2 \\omega_2^2 \\right) \\right]\\\\\n &= u_{11} u_{12} \\left[m_1 + \\frac{m_2}{k_{12}^2}\\left(k_{11}k_{11} - k_{11}m_1(\\omega_2^2 +\\omega_1^2) + m_1^2\\omega_1^2 \\omega_2^2 \\right) \\right]\n \\end{align}\n\\tag{16}\\] Let's now determine \\((\\omega_2^2 +\\omega_1^2)\\) and \\(\\omega_1^2 \\omega_2^2\\) using Equation 13, \\[\n\\begin{align}\n (\\omega_1^2 +\\omega_2^2) =& \\frac{1}{2} \\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} + \\frac{1}{2}\\sqrt{\\left(\\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} \\right)^2 - 4 \\frac{k_{11}k_{22}-k_{12}^2}{m_1 m_2}} + \\cdots \\\\ &\\frac{1}{2} \\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} - \\frac{1}{2}\\sqrt{\\left(\\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} 
\\right)^2 - 4 \\frac{k_{11}k_{22}-k_{12}^2}{m_1 m_2}} = \\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2}\n\\end{align}\n\\] and \\[\n \\begin{align}\n \\omega_1^2 \\omega_2^2 =& \\left(\\frac{1}{2} \\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} + \\frac{1}{2}\\sqrt{\\left(\\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} \\right)^2 - 4 \\frac{k_{11}k_{22}-k_{12}^2}{m_1 m_2}} \\right) \\times \\cdots & \\\\ &\\left(\\frac{1}{2} \\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} - \\frac{1}{2}\\sqrt{\\left(\\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} \\right)^2 - 4 \\frac{k_{11}k_{22}-k_{12}^2}{m_1 m_2}} \\right) \\\\ =& \\left(\\frac{1}{2} \\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} \\right)^2 - \\left(\\frac{1}{2} \\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} \\right)^2 + \\frac{k_{11}k_{22}-k_{12}^2}{m_1 m_2} = \\frac{k_{11}k_{22}-k_{12}^2}{m_1 m_2}\n \\end{align}\n\\] Substituting these into the last line of Equation 16 we obtain, \\[\n \\begin{align}\n \\mathbf{u}_2^{\\rm T} \\mathbf{M} \\mathbf{u}_1 &= u_{11} u_{12} \\left[m_1 + \\frac{m_2}{k_{12}^2}\\left(k_{11}k_{11} - k_{11}m_1 \\frac{m_1 k_{22} + m_2 k_{11}}{m_1m_2} + m_1^2\\frac{k_{11}k_{22}-k_{12}^2}{m_1 m_2}\\right) \\right]\\\\\n &= \\frac{u_{11} u_{12}}{k_{12}^2} \\left[m_1k_{12}^2 + m_2\\left(k_{11}k_{11} - k_{11} \\frac{m_1 k_{22} + m_2 k_{11}}{m_2} + m_1\\frac{k_{11}k_{22}-k_{12}^2}{m_2}\\right) \\right]\\\\\n &= \\frac{u_{11} u_{12}}{k_{12}^2} \\left[{\\color{red}m_1k_{12}^2 }+ \\left({\\color{green} m_2 k_{11}k_{11} } - k_{11}\\left({\\color{blue}m_1 k_{22}} + {\\color{green}m_2 k_{11}}\\right) + m_1\\left({\\color{blue}k_{11}k_{22}}{\\color{red}-k_{12}^2}\\right)\\right) \\right] = 0\n \\end{align}\n\\] where we have matched terms by their colors to clearly show the cancellations that occur.\n\n\n\nHaving satisfied ourselves of the modal vector orthogonality relation, let's now use it to decouple our EoMs and see if we can solve them using our SDOF techniques.\nRecalling our EoM we have, \\[\n {\\left(-\\omega^2 \\left[\\begin{array}{c c}\n m_1 & 0 
\\\\ 0 & m_2\n \\end{array}\\right] + \\left[\\begin{array}{c c}\n k_{11} & k_{12} \\\\ k_{21} & k_{22}\n \\end{array}\\right] \\right)}\\left(\\begin{array}{c}\n u_1 \\\\ u_2\n \\end{array}\\right) = \\left(\\begin{array}{c}\n 0 \\\\ 0\n \\end{array}\\right)\n\\] or in compact matrix notation, \\[\n \\mathbf{K}\\mathbf{u} = \\omega^2 \\mathbf{M}\\mathbf{u}\n\\tag{17}\\] Both modes (\\(\\omega_1\\), \\(\\mathbf{u}_1\\)) and (\\(\\omega_2\\), \\(\\mathbf{u}_2\\)) must satisfy this equation, and so we can write, \\[\n \\mathbf{K} \\mathbf{u}_1 = \\omega_1^2\\mathbf{M}\\mathbf{u}_1\n\\tag{18}\\] \\[\n \\mathbf{K} \\mathbf{u}_2 = \\omega_2^2\\mathbf{M}\\mathbf{u}_2\n\\tag{19}\\] First thing, if we pre-multiply both sides of Equation 18 by \\(\\mathbf{u}_2^{\\rm T}\\) and use Equation 15, \\[\n \\begin{align}\n \\mathbf{u}_2^{\\rm T} \\mathbf{K} \\mathbf{u}_1 &= \\omega_1^2 \\mathbf{u}_2^{\\rm T}\\mathbf{M}\\mathbf{u}_1 = 0\n \\end{align}\n\\] we see that \\(\\mathbf{u}_1\\) and \\(\\mathbf{u}_2\\) are also orthogonal with respect to the stiffness matrix! If we pre-multiply by the same modal vector we get the following relation, \\[\n \\begin{align}\n \\mathbf{u}_n^{\\rm T} \\mathbf{K} \\mathbf{u}_n&= \\omega_n^2 \\mathbf{u}_n^{\\rm T}\\mathbf{M}\\mathbf{u}_n\n \\end{align}\n\\tag{20}\\] which will come in useful a little later.\nRight, lets have a go using the above orthogonality relations to uncoupled our undamped EoM, \\[\n \\mathbf{M} \\mathbf{\\ddot{x}} + \\mathbf{K}\\mathbf{x} = 0\n\\tag{21}\\] We are interested in obtaining the solution \\(\\mathbf{x}(t)\\) as a linear combination of the natural modes \\(\\mathbf{u}_1\\) and \\(\\mathbf{u}_2\\), each weighted by the so-called generalised coordinates \\(q_1(t)\\) and \\(q_2(t)\\), \\[\n \\mathbf{x}(t) = \\mathbf{u}_1 q_1(t) + \\mathbf{u}_2 q_2(t)\n\\tag{22}\\] This is possible because \\(\\mathbf{u}_1\\) and \\(\\mathbf{u}_2\\) are orthogonal, meaning they can be used as a basis to represent any vector. 
Substituting Equation 22 into Equation 21 we have, \\[\n \\mathbf{M}(\\mathbf{u}_1 \\ddot{q}_1(t) + \\mathbf{u}_2 \\ddot{q}_2(t))+ \\mathbf{K}( \\mathbf{u}_1 q_1(t) + \\mathbf{u}_2 q_2(t) )= 0\n\\tag{23}\\] Pre-multiplying Equation 23 by \\(\\mathbf{u}_1^{\\rm T}\\) and using the orthogonality relations above, \\[\n \\mathbf{u}_1^{\\rm T} \\mathbf{M} ( \\mathbf{u}_1 \\ddot{q}_1(t) {\\color{gray}\\,+\\, \\mathbf{u}_2 \\ddot{q}_2(t) })+ \\mathbf{u}_1^{\\rm T}\\mathbf{K}( \\mathbf{u}_1 q_1(t) {\\color{gray}\\,+\\, \\mathbf{u}_2 {q}_2(t) })= 0\n\\] \\[\n \\mathbf{u}_1^{\\rm T} \\mathbf{M} \\mathbf{u}_1 \\ddot{q}_1(t)+ \\mathbf{u}_1^{\\rm T}\\mathbf{K} \\mathbf{u}_1 q_1(t) = 0\n\\]\nRecalling Equation 20 the above reduces to, \\[\n \\begin{align}\n \\mathbf{u}_1^{\\rm T} \\mathbf{M} \\mathbf{u}_1 \\ddot{q}_1(t)+ \\mathbf{u}_1^{\\rm T}\\mathbf{K} \\mathbf{u}_1 q_1(t) = \\mathbf{u}_1^{\\rm T} \\mathbf{M} \\mathbf{u}_1 \\ddot{q}_1(t)+ \\omega_1^2 \\mathbf{u}_1^{\\rm T}\\mathbf{M}\\mathbf{u}_1 q_1(t)&= 0 \\\\\n \\mathbf{u}_1^{\\rm T} \\mathbf{M} \\mathbf{u}_1\\underbrace{( \\ddot{q}_1(t) + \\omega_1^2 q_1(t))}_{=0}&=0\n \\end{align}\n\\] We see from the above that \\(q_1(t)\\) must satisfy the equation, \\[\n \\ddot{q}_1(t) + \\omega_1^2 q_1(t) = 0\n\\] We can similarly show, by pre-multiplying Equation 19 by \\(\\mathbf{u}_1^{\\rm T}\\), that \\(q_2(t)\\) must satisfy, \\[\n \\ddot{q}_2(t) + \\omega_2^2 q_2(t) = 0\n\\] Note that these are two independent equations! Unlike \\(\\mathbf{M} \\ddot{\\mathbf{x}} + \\mathbf{K}\\mathbf{x} = 0\\) which is a pair of elastically coupled equations. Thanks to the orthogonality relations, the coordinate transformation \\(\\mathbf{x}(t) = \\mathbf{u}_1 q_1(t) + \\mathbf{u}_2 q_2(t)\\) uncoupled the EoMs.\nWe call \\(q_1(t)\\) and \\(q_2(t)\\) the natural, principal or generalised coordinates. 
The equations, \\[\n \\begin{align}\n \\ddot{q}_1(t) + \\omega_1^2 q_1(t) &= 0 \\\\\n \\ddot{q}_2(t) + \\omega_2^2 q_2(t) &= 0\n \\end{align}\n\\] describe a pair of independent, SDOF, harmonic oscillators. Based of our SDOF work on the previous page, we know that they have the solutions of the form, \\[\n \\begin{align}\n {q}_1(t) &= A_1 \\cos(\\omega_1 t-\\phi_1) \\\\\n {q}_2(t) &= A_2 \\cos(\\omega_2 t-\\phi_2)\n \\end{align}\n\\]\nUsing these generalised cooridnates, our original solution (Equation 22) becomes, \\[\n \\mathbf{x}(t) = A_1 \\mathbf{u}_1 \\cos(\\omega_1 t - \\phi_1) + A_2 \\mathbf{u}_2 \\cos(\\omega_2 t-\\phi_2)\n\\] where the constants \\(A_1\\), \\(A_2\\), \\(\\phi_1\\) and \\(\\phi_2\\) depend on the initial conditions \\(\\mathbf{x}_0\\) and \\(\\mathbf{\\dot{x}}_0\\) and can be obtained from the following set of equations (see the box below for where these equations come from), \\[\n \\left[ \\begin{array}{cc} \\mathbf{u}_1& \\mathbf{u}_2 \\end{array} \\right]^{-1} \\left[\\begin{array}{cc}\n \\mathbf{x}_0 &\n \\mathbf{\\dot{x}}_0\n \\end{array}\\right] = \\left[ \\begin{array}{c c}\n a& b\\\\ c& d\n \\end{array}\\right]\n\\] \\[\n A_1 = \\sqrt{a^2 + \\left(\\frac{b}{\\omega_1}\\right)^2 }\\quad \\quad A_2 = \\sqrt{c^2 + \\left(\\frac{d}{\\omega_2}\\right)^2 }\\quad \\quad\n \\phi_1 = \\tan^{-1}\\left( \\frac{b}{a\\omega_1} \\right)\\quad \\quad \\phi_2 = \\tan^{-1}\\left( \\frac{d}{c\\omega_2}\\right)\n\\]\n\n\n\n\n\n\nFinding the constants \\(A_1\\), \\(A_2\\), \\(\\phi_1\\) and \\(\\phi_2\\) from the initial conditions \\(\\mathbf{x}_0\\) and \\(\\mathbf{\\dot{x}}_0\\)\n\n\n\n\n\nWe have a solution in modal form as, \\[\n \\mathbf{x}(t) = A_1 \\mathbf{u}_1 \\cos(\\omega_1 t - \\phi_1) + A_2 \\mathbf{u}_2 \\cos(\\omega_2 t-\\phi_2)\n\\tag{24}\\] and we want to solve for \\(A_1\\), \\(A_2\\), \\(\\phi_1\\) and \\(\\phi_2\\) based on the initial conditions, \\[\n \\mathbf{x}(t=0) = \\mathbf{x}_0 \\quad \\quad \\mathbf{\\dot{x}}(t=0) = 
\\mathbf{\\dot{x}}_0\n\\] To do so we firs take the derivative of our modal solution to obtain the velocity, \\[\n \\mathbf{\\dot{x}}(t) = -\\omega_1 tA_1 \\mathbf{u}_1 \\sin(\\omega_1 t - \\phi_1) - \\omega_2 tA_2 \\mathbf{u}_2 \\sin(\\omega_2 t-\\phi_2)\n\\tag{25}\\] then by evaluating both Equation 24 and Equation 25 for \\(t=0\\), noting that \\(\\cos(-\\phi) = \\cos(\\phi)\\) and \\(\\sin(-\\phi) = -\\sin(\\phi)\\), we have, \\[\n \\begin{align}\n \\mathbf{x}_0 &= A_1 \\mathbf{u}_1 \\cos( \\phi_1) + A_2 \\mathbf{u}_2 \\cos(\\phi_2) \\\\\n \\mathbf{\\dot{x}}_0 &= A_1 \\omega_1 \\mathbf{u}_1 \\sin( \\phi_1) +A_2 \\omega_1 \\mathbf{u}_2 \\sin(\\phi_2)\n \\end{align}\n\\tag{26}\\] Rewriting Equation 26 in a matrix form, \\[\n \\left[\\begin{array}{cc}\n \\mathbf{x}_0 &\n \\mathbf{\\dot{x}}_0\n \\end{array}\\right] = \\left[ \\begin{array}{cc} \\mathbf{u}_1& \\mathbf{u}_2 \\end{array} \\right]\n \\left[ \\begin{array}{c c}\n A_1 \\cos(\\phi_1) & A_1\\omega_1 \\sin(\\phi_1) \\\\ A_2 \\cos(\\phi_2) & A_2\\omega_2 \\sin(\\phi_2)\n \\end{array}\\right]\n\\] and pre-multiplying both sides by the inverse of the modal matrix \\(\\left[ \\begin{array}{cc} \\mathbf{u}_1& \\mathbf{u}_2 \\end{array} \\right]\\) we obtain, \\[\n \\left[ \\begin{array}{cc} \\mathbf{u}_1& \\mathbf{u}_2 \\end{array} \\right]^{-1} \\left(\\begin{array}{cc}\n \\mathbf{x}_0 &\n \\mathbf{\\dot{x}}_0\n \\end{array}\\right) =\n \\left[ \\begin{array}{c c}\n A_1 \\cos(\\phi_1) & A_1\\omega_1 \\sin(\\phi_1) \\\\ A_2 \\cos(\\phi_2) & A_2\\omega_2 \\sin(\\phi_2)\n \\end{array}\\right] = \\left[ \\begin{array}{c c}\n a& b\\\\ c& d\n \\end{array}\\right]\n\\] Denoting the elements of the right hand side matrix as \\(a\\), \\(b\\), \\(c\\) and \\(d\\), we can solve for the phase angles by the inverse \\(\\tan\\) function, \\[\n \\frac{b}{a} = \\omega_1 \\tan{\\phi_1} \\rightarrow \\phi_1 = \\tan^{-1}\\left( \\frac{b}{a\\omega_1} \\right)\\quad \\quad \\frac{d}{c} = \\omega_2 \\tan{\\phi_2} \\rightarrow 
\\phi_2 = \\tan^{-1}\\left( \\frac{d}{c\\omega_2}\\right)\n\\] and for the magnitudes by using the relations, \\[\n a^2 + \\left(\\frac{b}{\\omega_1}\\right)^2 = A_1^2 \\cos^2(\\phi_1) + A_1^2 \\sin^2(\\phi_1) = A_1^2\\underbrace{\\left( \\cos^2(\\phi_1) + \\sin^2(\\phi_1) \\right)}_{=1}\n\\] \\[\n c^2 + \\left(\\frac{d}{\\omega_1}\\right)^2 = A_2^2 \\cos^2(\\phi_2) + A_2^2 \\sin^2(\\phi_2) = A_2^2\\underbrace{\\left( \\cos^2(\\phi_2) + \\sin^2(\\phi_2) \\right)}_{=1}\n\\] which yield for mode \\(\\mathbf{u}_1\\) and \\(\\mathbf{u}_2\\) the amplitudes, \\[\n A_1 = \\sqrt{a^2 + \\left(\\frac{b}{\\omega_1}\\right)^2 }\\quad \\quad A_2 = \\sqrt{c^2 + \\left(\\frac{d}{\\omega_2}\\right)^2 }\\\n\\]\n\n\n\nIn summary of the above, we have shown that the motion of a two DoF system at any time can be expressed as the superposition of the two natural modes of vibration, each multiplied by their corresponding natural coordinates (also known as modal amplitudes). On the next page we will generalise the above treatment to arbitary \\(N\\) DoF systems.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Two degrees of freedom"
]
},
{
"objectID": "introductions/POV/TDOF/TDOF.html#tuned-mass-damper",
"href": "introductions/POV/TDOF/TDOF.html#tuned-mass-damper",
"title": "Two degree of freedom systems",
"section": "Tuned mass damper",
"text": "Tuned mass damper\nThe tuned mass damper (TMD) is a passive mechanical (mass on a spring) device that can provide targeted attenuation for vibrating machinery. Take for example the system illustrated in Figure 3, an operating machine with mass \\(m_1\\), isolated from the ground by an isolator with stiffness \\(k_1\\), generates an internal force \\(F_1\\) when running. This force might be a result of an out-of-balance within the machine, for example. When rotating machinery like that depicted operates at a frequency close to resonance, the displacement amplitude can become unacceptably large. This might cause the machine to run sub-optimally, or even fail. How might we avoid this problem?\n\n\n\n\n\n\nFigure 3: Two DoF mass-spring system.\n\n\n\nAssuming the system behaves like a SDoF, one option might be to modify the machine mass or isolator stiffness to shift the resonance away from operating frequency. However, this is not always possible… especially if the machine is already installed. An alternative approach is to add a second DoF to the system in the form of a mass-spring system. It turns out that by carefully choosing the added mass \\(m_2\\) and stiffness \\(k_2\\) we can set ensure that the response of the machine at its operating frequency goes to zero, \\(x_1=0\\). We effectively position an anti-resonance (\\(x_1=0\\)) at the operating frequency. Lets look at the maths…\nIgnoring any damping, our EoM for our machine+TMD system takes the form, \\[\n \\left[\\begin{array}{c c}\n k_1+k_2 - \\omega^2m_1 & -k_2 \\\\\n -k_2 & k_2 - \\omega^2 m_2\n \\end{array}\\right] \\left(\\begin{array}{c}\n X_1 \\\\ X_2\n \\end{array}\\right) = \\left(\\begin{array}{c}\n F_1 \\\\ 0\n \\end{array}\\right)\n\\] where we have set \\(F_2=0\\) since the TMD is entirely passive (no forces are generated within). 
Defining our dynamic stiffness terms, \\[\n \\begin{align}\n D_{11} =-\\omega^2 m_1+k_{11} \\quad \\quad D_{12} = k_{12} = D_{21} \\quad \\quad D_{22} = -\\omega^2 m_2 + k_{22}\n \\end{align}\n\\] and using Equation 29, our response amplitudes for mass 1 and mass 2 become, \\[\n \\begin{align}\n X_1 &= \\frac{(k_{22}-\\omega^2 m_2)F_1}{(k_{11}-\\omega^2 m_1)(k_{22}-\\omega^2 m_2) - k_{12}^2}\\\\\n X_2 &= \\frac{-k_{12}F_1}{(k_{11}-\\omega^2 m_1)(k_{22}-\\omega^2 m_2) - k_{12}^2}\n \\end{align}\n\\tag{30}\\] Now, to simplify Equation 30, and help provide some physical insight, we introduce define the following variables, \\[\n \\begin{align}\n \\omega_n = \\sqrt{\\frac{k_1}{m_1}} \\quad \\quad \\omega_a = \\sqrt{\\frac{k_2}{m_2}} \\quad \\quad x_{st} = \\frac{F_1}{k_1} \\quad \\quad \\mu = \\frac{m_2}{m_1}\n \\end{align}\n\\] where \\(\\omega_n\\) is the undamped resonant frequency of the machine and its isolation without the attacdhed TMD, \\(\\omega_a\\) is the undamped resonant frequency of the TMD if it were attached to a rigid foundation, \\(x_{st}\\) is the equivalent static displacement fue to the internal force \\(F_1\\), and \\(\\mu\\) is the mass ratio.\nUsing these new definitions, after some algebra, we get for \\(X_1\\), \\[\n X_1 = \\frac{\\left(1-\\left(\\frac{\\omega}{\\omega_a}\\right)^2\\right)x_{st}}{\\left(1+\\mu\\left(\\frac{\\omega_a}{\\omega_n}\\right)^2 - \\left(\\frac{\\omega}{\\omega_n}\\right)^2\\right)\\left( 1- \\left(\\frac{\\omega}{\\omega_a}\\right)^2\\right) - \\mu \\left(\\frac{\\omega_1}{\\omega_n}\\right)^2}\n\\tag{31}\\] On inspection of Equation 31 we see from the numerator that if \\(\\omega=\\omega_a\\) then the response ampltiude goes to zero, \\(X_1 = 0\\). That is, by tuning the natural frequency of the TMD to equal operating frequency of machine, its response will be forced to 0! 
This is illustrated in Figure 4 where we plot the frequency response of mass 1 in terms of its displacement amplitude \\(X_1\\) and its magnitude squared \\(|X_1|^2\\).\n\n\n\n\n\n\nFigure 4: Frequency response of mass 1 with an attached tuned mass damper.\n\n\n\nOf course in reality, the response will never be exactly 0 at \\(\\omega_a\\), rather it will have some finite value proportional to the amount of damping present. But the general premise remains valid.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Two degrees of freedom"
]
},
{
"objectID": "introductions/POV/TDOF/TDOF.html#vented-loudspeaker",
"href": "introductions/POV/TDOF/TDOF.html#vented-loudspeaker",
"title": "Two degree of freedom systems",
"section": "Vented loudspeaker",
"text": "Vented loudspeaker\nA less obvious, though perhaps more frequently encountered, 2 DoF system is the vented loudspeaker (Figure 5 (a)), also known as a bass reflex loudspeaker.\n\n\n\n\n\n\n\n\n\n\n\n(a) Cross-section of vented loudspeaker\n\n\n\n\n\n\n\n\n\n\n\n(b) Equivalent mechanical system\n\n\n\n\n\n\n\nFigure 5: Vented loudspeaker as an equivalent 2 DoF mechanical system.\n\n\n\nLike a sealed cabinet loudspeaker, the vented loudspeaker comprises a diaphragm of mass \\(m_d\\) that is resiliently attached to some cabinetry/housing by a compliant suspension with stiffness \\(k_d\\) and damping \\(c_d\\). An electrical current \\(i\\) is passed to the driver and converted into a mechanical force according to \\(F_1 = BL i\\), where \\(BL\\) is termed the force factor and describes the loudspeakers’s magnet/voice coil efficiency. Behind the diaphragm is a volume of air which acts as an additional stiffness \\(k_a\\) and damping \\(c_a\\). For a sealed cabinet, this acoustic stiffness and damping placed in parallel with that of the mechanical suspension. For the vented enclosure, this acoustic stiffness/damping connects the diaphragm to a secondary mass \\(m_v\\) which represents air that resides within the vent. As the loudspeaker operates, both the diaphragm and vent mass oscillate with displacements \\(x_d\\) and \\(x_v\\), each contributing to the total sound radiated from the loudspeaker. 
Based on the above description, we can readily draw the mechanical system in Figure 5 (b) that equivalently represents a vented loudspeaker.\n\n\nCode\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\n\n# System properties\nk1 = 17000 # Stiffness of cavity\nk2 = 10000 # Stiffness of diaphragm suspension\nK = np.array([[k2, -k2],\n [-k2, k1 + k2]]) # Stiffness matrix\n\nc1_low = 20 # Damping of cavity\nc2_low = 2 # Damping of diaphragm suspension\nC_low = np.array([[c2_low, -c2_low],\n [-c2_low, c1_low + c2_low]]) # Damping matrix\n\nc1 = 200 # Damping of cavity\nc2 = 20 # Damping of diaphragm suspension\nC = np.array([[c2, -c2],\n [-c2, c1 + c2]]) # Damping matrix\n\nm1 = 0.6 # Weight of vent mass\nm2 = 1.0 # Weight of diaphragm mass\nM = np.array([[m1, 0],\n [0, m2]]) # Mass matrix\n\n# Eigenvalues and eigenvectors\nD_eigvals, V = sp.linalg.eig(K,M)\n\n# Natural frequencies in Hz\nfres = np.sqrt(np.real(D_eigvals)) / (2 * np.pi)\nprint(\"Natural frequencies (Hz):\", round(fres[0],3), round(fres[1],3))\nfhelm = np.sqrt(k1/m1) / (2 * np.pi)\nprint(\"Helmholtz frequency (Hz):\", round(fhelm,3))\n\n# Frequency range\nf = np.arange(1, 1000.1, 0.1)\nw = 2 * np.pi * f\n\n# Initialize arrays for dynamic stiffness, receptance, and accelerance\nA = np.zeros((2, 2, len(f)), dtype=complex)\nA_low = np.zeros((2, 2, len(f)), dtype=complex)\n\nfor i, wi in enumerate(w):\n D = -wi**2 * M + 1j * wi * C + K # Dynamic stiffness\n R = np.linalg.inv(D) # Receptance\n A[:, :, i] = -wi**2 * R # Accelerance\n\n D = -wi**2 * M + 1j * wi * C_low + K # Dynamic stiffness\n R = np.linalg.inv(D) # Receptance\n A_low[:, :, i] = -wi**2 * R # Accelerance\n\n\n# Arbitrary forcing level\nFe = 1\nav_low = A_low[0, 1, :] * Fe # Vent response\nad_low = A_low[1, 1, :] * Fe # Diaphragm response\nat_low = av_low - ad_low # Combined vent and diaphragm\n\nav = A[0, 1, :] * Fe # Vent response\nad = A[1, 1, :] * Fe # Diaphragm response\nat = av - ad # Combined vent and diaphragm\n\n# 
Plotting\nfig, ax = plt.subplots(1,2,figsize=(9, 4))\nax[0].loglog([fres[0], fres[0]], [1e-2, 1e2], linewidth=0.5, linestyle='dashed', label='ω_1',color='grey')\nax[0].loglog([fres[1], fres[1]], [1e-2, 1e2], linewidth=0.5, linestyle='dotted', label='ω_2',color='grey')\nax[0].loglog(f, np.abs(av_low), linewidth=2, label='Vent')\nax[0].loglog(f, np.abs(ad_low), linewidth=2, label='Driver')\nax[0].loglog(f, np.abs(at_low), 'k--', linewidth=2, label='Total')\nax[0].set_xlabel('Frequency (Hz)')\nax[0].set_ylabel('Sound pressure (Pa)')\nax[0].set_ylim([1e-2, 1e2])\nax[0].set_title('Low damping')\nax[0].legend()\nax[1].loglog([fres[0], fres[0]], [1e-2, 1e2], linewidth=0.5, linestyle='dashed',color='grey')\nax[1].loglog([fres[1], fres[1]], [1e-2, 1e2], linewidth=0.5, linestyle='dotted',color='grey')\nax[1].loglog(f, np.abs(av), linewidth=2, label='Vent')\nax[1].loglog(f, np.abs(ad), linewidth=2, label='Driver')\nax[1].loglog(f, np.abs(at), 'k--', linewidth=2, label='Total')\nax[1].set_xlabel('Frequency (Hz)')\nax[1].set_ylabel('Sound pressure (Pa)')\nax[1].set_ylim([1e-2, 1e2])\nax[1].set_title('Appropriate damping')\nplt.tight_layout()\nplt.show()\n\n\nNatural frequencies (Hz): 14.171 30.088\nHelmholtz frequency (Hz): 26.79\n\n\n\n\n\n\n\n\nFigure 6: Frequency response of a vented loudspeaker for different levels of damping\n\n\n\n\n\nShown in Figure 6 are the sound pressure responses of the diaphragm and vent due to an external force applied to the diaphragm alone (e.g. by the voice coil). Note that the sound radiation is proportional to the acceleration of each radiator, so these plots have the same general shap as the accelerance FRF. The two figures correspond to different levels of damping. In the left hand figure we purposefully reduce the level of damping to show clearly the system resonances. 
In the right hand figure we use a more appropriate level of damping that one might expect in a vented loudspeaker; this gives us a nice smooth response as expected of a loudspeaker. We also plot the total pressure output response is proportional to difference between the vent and diaphragm, \\[\n p(t) \\propto a_v(t) - a_d(t)\n\\]\nShown in Figure 7 are the relative motions of the diaphragm and vent mass for the two resonant frequencies of the system (\\(\\omega_1\\) and \\(\\omega_2\\)) and the so-called Helmholtz frequency \\(\\omega_h\\). The Helmholtz frequency corresponds to the resonant frequency of the vent and cavity alone, and is given by, \\[\n \\omega_h = \\sqrt{\\frac{k_v}{m_v}}\n\\]\nWe see that at \\(\\omega_1\\) in our equivalent mechanical system the two masses move in phase. For our vented loudspeaker, this corresponds to an out-of-phase motion between the diaphragm and vent, as shown. With regards to sound radiation, the two contributions are out of phase and so interfere destructively, largely canceling with one another and leading to a rapid roll-off in the frequency response below \\(\\omega_1\\) (as see in Figure 6) - this is one of the unique features of a vented loudspeaker; a 24dB per octave roll off compared to the 12 dB per octave roll off of a sealed cabinet loudspeaker.\n\n\n\n\n\n\n\n\nRelative motion at \\(\\omega_1\\)\n\n\n\n\n\n\n\nRelative motion at \\(\\omega_h\\)\n\n\n\n\n\n\n\nRelative motion at \\(\\omega_2\\)\n\n\n\n\n\n\nFigure 7: Animation of the diaphragm and vent motion at the two resonant frequencies and the so-called Helmholtz frequency.\n\n\n\nAt \\(\\omega_2\\) we see our mechanical system oscillating out-of-phase. For our vented loudspeaker, this corresponds to an in-phase motion between the diaphragm and vent, as shown. With regards to sound radiation, the two contributions interfere constructively and support one another, giving an efficient sound radiation. 
We can clearly see this constructive interference in the approproate damping case in Figure 6.\nAt \\(\\omega_h\\) we see that the diagphram mass remains almost stationary whilst the vent mass exhibits large motion. At this frequency, the entirely of the sound radiation comes from the vent. It is this vent contribution that gives vented loudspeakers their extended bass response.\nTo design a vented loudspeaker it is necessary to carefully tune the vent and cabinet geomtery so to position \\(\\omega_1\\), \\(\\omega_2\\) and \\(\\omega_h\\) just right so that we get the correct level of modal overlap and acheive a smooth frequency response.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Two degrees of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html",
"href": "introductions/POV/SDOF/SDOF.html",
"title": "Single degree of freedom oscillator",
"section": "",
"text": "Figure 1: The Mass-spring-damper system is a single DoF mechanical oscillator.\nThe single Degree of Freedom (DoF) mass-spring oscillator is perhaps one of the simplest, but most important, dynamical systems we can study. The reason being is that (spoiler alert..!) any linear system, regardless of its geometrical complexity, can be represented or modelled as a collection of independent SDoF oscillators. Anyway, thats jumping in the deep-end… Lets take a step back and consider SDoF oscillator from first principles, including the derivation of its equation of motion (EoM), and its unforced (homogenous) and forced solutions.\n(FYI, the term degree of freedom (DoF) is a general term used to denote the number of independent coordinates required to completely describe a system’s configuration. A mass-spring system with a single mass is described by just one coordinate (i.e. that of the mass), hence it is termed ‘single DoF’.)",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html#ideal-springs",
"href": "introductions/POV/SDOF/SDOF.html#ideal-springs",
"title": "Single degree of freedom oscillator",
"section": "Ideal springs",
"text": "Ideal springs\nAn ideal spring has no mass and generates a force proportion to its elongation \\(\\Delta x\\). The force generated is always in opposite direction to displacement and linearly related to the elongation of the spring.\nFor a single-sided spring we have, \\[\nF_k = -kx_1\n\\] where \\(k\\) is called the spring stiffness with units (N/m). For a two-sided spring, \\[\nF_k = k\\overbrace{(x_2-x_1)}^{\\Delta x}\n\\] where the force at each end is equal and opposite in direction.\nThe spring force always opposes displacement:\n\nIf \\(\\Delta x<0\\) then \\(F_k>0\\)\nIf \\(\\Delta x>0\\) then \\(F_k<0\\)\n\nIdeal springs are linear - the spring force and elongation remain linearly related indefinitely. Simply put, doubling the extension doubles the force generated, \\[\nF_k = -kx_1 \\quad \\quad 2F_k = -k\\, 2x_1\n\\]\nNote that real springs are non-linear when subject to large deformations.\n\n\n\n\n\n\n\n\n\n\n\n(a) Single sided spring.\n\n\n\n\n\n\n\n\n\n\n\n(b) Two sided spring.\n\n\n\n\n\n\n\nFigure 3: Diagram fo single and two sided spring. Force generate at each end is equal and opposite.\n\n\n\n\nParallel and series springs\nWhen two springs are in parallel, they will always have the same displacement at either end. The total force generated divides itself into \\(F_{k1}\\) and \\(F_{k2}\\), \\[\n\\begin{align}\n F_{k1} & = k_1(x_2-x_1) \\\\\n F_{k2} & = k_2(x_2-x_1)\n\\end{align}\n\\] The total force is given by the sum of the forces generated by the two springs, \\[\n\\begin{align}\n F_k & = F_{k1}+F_{k2} \\\\\n & =\\underbrace{(k_1+k_2)}_{k_t}(x_2-x_1)\n\\end{align}\n\\tag{1}\\]\nFrom Equation 1 we see that the total stiffness of the parallel springs is just the sum of individual spring stiffness’s. 
We can generalise this result to an arbitrary number of springs as, \\[\n\\begin{equation}\n k_t = \\sum_i^N k_i\n\\end{equation}\n\\tag{2}\\]\n\n\n\n\n\n\n\n\nParallel\n\n\n\n\n\n\n\nSeries\n\n\n\n\n\n\nFigure 4: Ideal springs arranged in parallel and series.\n\n\n\nWhen two springs are arranged in series they by definition must have the same force through their lengths. \\[ \nF_{k} = k_1(x_2-x_1)\n\\tag{3}\\] \\[\nF_{k} = k_2(x_3-x_2)\n\\tag{4}\\] From Equation 3 we get, \\[ \n\\begin{equation}\n x_2 = \\frac{F_k}{k_1} + x_1\n\\end{equation}\n\\] Substituting this into Equation 4, \\[\nF_k = k_2(x_3 -x_1 -\\frac{F_k}{k_1})\n\\] A little bit of algebra leads to, \\[\nF_k =\\underbrace{ \\left(\\frac{1}{k_1}+\\frac{1}{k_2}\\right)^{-1} }_{k}(x_3-x_1)\n\\tag{5}\\] Note that for a series configuration, the total stiffness is always less than individual stiffnesses. We can generalise Equation 5 to an arbitrary number of springs by, \\[\nk = \\left(\\sum_i^N \\frac{1}{k_i}\\right)^{-1}\n\\]",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html#ideal-viscous-damper",
"href": "introductions/POV/SDOF/SDOF.html#ideal-viscous-damper",
"title": "Single degree of freedom oscillator",
"section": "Ideal viscous damper",
"text": "Ideal viscous damper\nA viscous damper has no mass and generates a force proportion to the relative velocity \\(\\Delta \\dot{x}\\). The force generated is always in opposite direction and linearly related to the relative velocity of the dashpot.\nFor a single-sided dashpot we have, \\[\nF_c = -c\\dot{x}_1\n\\] where \\(c\\) is the viscous damping coefficient with units (Ns/m).\nFor a two-sided dashpot, \\[\nF_c = c\\overbrace{(\\dot{x}_2-\\dot{x}_1)}^{\\Delta \\dot{x}}\n\\]\nNote that the same equations apply for series and parallel dashpot arrangements as for springs.\n\n\n\n\n\n\n\n\n\n\n\n(a) Single sided dashpot.\n\n\n\n\n\n\n\n\n\n\n\n(b) Two sided dashpot.\n\n\n\n\n\n\n\nFigure 5: Diagram fo single and two sided dashpot. Force generate at each end is equal and opposite.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html#equation-of-motion-cont.",
"href": "introductions/POV/SDOF/SDOF.html#equation-of-motion-cont.",
"title": "Single degree of freedom oscillator",
"section": "Equation of motion (cont.)",
"text": "Equation of motion (cont.)\nSubstituting in the spring and dashpot forces, Newton’s second law states that, \\[\n\\begin{align}\n m \\ddot{x} & = F_e + F_k + F_c \\\\\n & = F_e -kx -c\\dot{x}\n\\end{align}\n\\tag{6}\\] To obtain the equation of motion we isolate external force, \\[\nF_e = m\\ddot{x} + kx + c\\dot{x}\n\\tag{7}\\] This is a second order linear differential equation with constant coefficients.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html#undamped",
"href": "introductions/POV/SDOF/SDOF.html#undamped",
"title": "Single degree of freedom oscillator",
"section": "Undamped",
"text": "Undamped\nWe start with the undamped case, where \\(c = 0\\). Our EoM takes the form, \\[ \n0 = m\\ddot{x} + kx \\quad \\quad (F_e = c= 0)\n\\] Dividing both sides by \\(m\\) we get, \\[\n0 = \\ddot{x} + \\frac{k}{m}x = \\ddot{x} + \\omega_n^2 x\n\\] Next, we consider a solution of the form \\(x(t) = Ae^{st}\\), \\[\n0 = s^2 Ae^{st} + \\omega_n^2 Ae^{st}\n\\] Dividing through by \\(Ae^{st}\\) we obtain the so-called characteristic equation, \\[\n0 = s^2 + \\omega_n^2 \\quad\\quad s^2 = -\\omega_n^2\n\\] Taking the square root of both sides, recalling that \\(i = \\sqrt{-1}\\), we find \\(s\\) to be, \\[\ns = \\pm \\sqrt{-\\omega_n^2} = \\pm i \\omega _n\n\\] Note that there are two roots; the solution is the sum of each, \\[\nx(t) = A_1 e^{-i\\omega_n t} + A_2 e^{i\\omega_n t}\n\\tag{8}\\] where \\(A_1\\) and \\(A_2\\) are constants that depend on initial conditions, \\(x_0\\) and \\(\\dot{x}_0\\).\nClearly, imaginary roots lead to complex exponential, i.e. harmonic motion at frequency \\(\\omega_n\\). We call \\(\\omega_n\\) the natural frequency of vibration, or the characteristic frequency. From the above we see that a force free undamped mass-spring system will always oscillate at \\(\\omega_n\\), regardless of IC.\n\nInitial conditions\nWe have obtained the general form of the solution in Equation 8. Next, we need to determine the constants \\(A_1\\) and \\(A_2\\) in terms of initial displacement \\(x_0\\) and velocity\\(\\dot{x}_0\\).\nWe begin by evaluating Equation 8 at \\(t=0\\), which yields, \\[\nx(0) = x_0 = A_1 + A_2\n\\] Next, we take the derivative of Equation 8 to obtain an equation for the velocity, before evaluating it also at \\(t=0\\). 
This yields, \\[\n\\dot{x}(0) = \\dot{x}_0 = -i\\omega_n A_1 +i\\omega_n A_2 \\quad\\rightarrow\\quad \\frac{\\dot{x}_0}{i\\omega_n} = -A_1+A_2\n\\] From the above, after some simple algebra, we obtain, \\[\nA_1 = \\frac{1}{2}\\left(x_0 -\\frac{\\dot{x}_0}{i\\omega}\\right) \\quad \\quad A_2 = \\frac{1}{2}\\left(x_0 +\\frac{\\dot{x}_0}{i \\omega}\\right)\n\\] Notice that \\(A_1\\) and \\(A_2\\) are complex conjugates… So we only have two independent constants since \\(A_1 = a+ib\\) and \\(A_2=a-ib\\). This is expected as system is second order.\nIf we substitute the constants into solution and use Euler’s formula, \\[\n\\begin{align}\n x(t) & = \\frac{1}{2}\\left(x_0 -\\frac{\\dot{x}}{i\\omega}\\right) e^{-i\\omega_n t} + \\frac{1}{2}\\left(x_0 +\\frac{\\dot{x}}{i\\omega}\\right) e^{i\\omega_n t} \\\\\n & = \\frac{1}{2}x_0\\underbrace{ \\left(e^{i\\omega_n t} + e^{-i\\omega_n t} \\right) }_{2\\cos\\omega_n t}+\\frac{1}{2}\\frac{\\dot{x}}{\\omega} \\underbrace{\\frac{1}{i}\\left(e^{i\\omega_n t}- e^{-i\\omega_n t} \\right)}_{2\\sin\\omega_n t}\n\\end{align}\n\\] From the above, the real solution is given by, \\[\nx(t) = x_0 \\cos\\omega_n t + \\frac{\\dot{x}_0}{\\omega_n} \\sin\\omega_n t\n\\] Finally, using the trigonometric identity \\(a\\cos\\theta + b\\sin\\theta = \\sqrt{a^2+b^2}\\cos(\\theta - \\phi)\\) where \\(\\phi = \\tan^{-1}(b/a)\\), we can write the solution in a simple \\(\\cos\\) form as, \\[\nx(t) = |A| \\cos(\\omega_nt-\\phi) \\quad \\mbox{with}\\quad |A| = \\sqrt{x_0^2 +\\frac{\\dot{x}_0^2}{\\omega^2}} \\quad \\mbox{and} \\quad \\phi = \\tan^{-1}\\left(\\frac{1}{\\omega_n}\\frac{\\dot{x}_0}{x_0}\\right)\n\\]\n\n\n\n\n\n\n\nFigure 6: Response of undamped mass-spring system due to initial conditions.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html#damped",
"href": "introductions/POV/SDOF/SDOF.html#damped",
"title": "Single degree of freedom oscillator",
"section": "Damped",
"text": "Damped\nLet us now consider the damped system. Reintroducing \\(c\\), our homogenous EoM becomes, \\[\n0 = m\\ddot{x} + c\\dot{x} + kx \\quad \\quad (F_e =0)\n\\] As for the undamped, we divide through by \\(m\\) to get, \\[\n0 = \\ddot{x} +2\\zeta\\omega_n \\dot{x} + \\omega_n^2 x\n\\] where we introduce the viscous damping factor, defined as, \\[\n\\zeta = \\frac{c}{2m\\omega_n}\n\\] Note that the reason for introducing \\(\\zeta\\) is largely for mathematical convenience, as we will see shortly. As before, we assume a solution of form \\(x(t) = Ae^{st}\\) and make the necessary substitutions, \\[\n0 = s^2 Ae^{st} +2\\zeta\\omega_n s Ae^{st} + \\omega_n^2 Ae^{st}\n\\] Finally, dividing through by \\(Ae^{st}\\) we obtain the system’s characteristic equation, \\[\n0 = s^2 +2\\zeta\\omega_n s + \\omega_n^2\n\\tag{9}\\] Note that Equation 9 is simply a quadratic equation in \\(s\\). We can solve this easily using the quadratic formula as, \\[\n\\begin{array}{c}\n s_1 \\\\\n s_2\\end{array} = \\left(-\\zeta \\pm \\sqrt{\\zeta^2-1}\\right)\\omega_n\n\\tag{10}\\] Perhaps it is clear now why we introduced \\(\\zeta\\); it leads to Equation 10 takeing a particularly simple form.\nPlugging our roots \\(s_{1,2}\\) back into our assumed solution \\(x(t) = Ae^{st}\\), \\[\nx(t) = A_1e^{ \\left(-\\zeta - \\sqrt{\\zeta^2-1}\\right)\\omega_n t} + A_2 e^{\\left(-\\zeta + \\sqrt{\\zeta^2-1}\\right)\\omega_n t}\n\\tag{11}\\] or by factoring out constant part of exponential \\(e^{-\\zeta \\omega_n t}\\), \\[\nx(t) = \\left(A_1e^{ -\\sqrt{\\zeta^2-1}\\omega_n t} + A_2 e^{\\sqrt{\\zeta^2-1}\\omega_n t}\\right)e^{-\\zeta\\omega_n t}\n\\tag{12}\\] Equation 11 and Equation 12 represent the general solution of our damped mass-spring system. As before, the constants \\(A_1\\) and \\(A_2\\) depend on the initial condition, and will be dealt with shortly. 
For now, it is more important to notice that the nature of this solution depends heavily on the value of \\(\\zeta\\)!\n\nIf \\(\\zeta=0\\) then \\(s_{1,2}\\) are purely imaginary and the solution is that of the undamped case we have already seen.\nIf \\(0<\\zeta<1\\) then \\(s_{1,2}\\) are complex (because of the negative value inside the square root) and the solution will become a decaying sinusoid.\nIf \\(\\zeta=1\\) then \\(s_{1,2}\\) are purely real and the solution decays exponentially at the fastest possible rate.\nIf \\(\\zeta>1\\) then \\(s_{1,2}\\) are purely real and the solution decays exponentially at an increasingly slow rate.\n\nThese different scenarios are shown in Figure 7, and described in more detail below.\n\nNo damping - \\(\\zeta=0\\)\n\nUnder-damped - \\(0<\\zeta<1\\)\n\nCritically-damped - \\(\\zeta=1\\)\n\nOver-damped - \\(\\zeta>1\\)\n\nFigure 7: Different types of solution given by Equation 11 depending on the level of damping (\\(\\zeta\\)) for IC \\(x_0=1\\) and \\(\\dot{x}_0=0\\).\n\nUndamped\nWhen \\(\\zeta=0\\), Equation 12 reduces to (noting that \\(\\sqrt{-1}=i\\)), \\[\nx(t) = \\left(A_1e^{ -i\\omega_n t} + A_2 e^{i\\omega_n t}\\right)\n\\] which, as expected, is exactly what we obtained for the undamped case.\n\nUnderdamped\nFor the under-damped case where \\(0<\\zeta<1\\), it is useful to write \\(\\sqrt{\\zeta^2-1}\\) instead as \\(i\\sqrt{1-\\zeta^2}\\) such that Equation 12 becomes, \\[\nx(t) = \\left(A_1e^{ -i\\sqrt{1-\\zeta^2}\\omega_n t} + A_2 e^{i\\sqrt{1-\\zeta^2}\\omega_n t}\\right)e^{-\\zeta\\omega_n t}\n\\] Taking the same form as the undamped case, the bracketed term represents a periodic oscillation at a frequency of \\(\\omega_d = \\sqrt{1-\\zeta^2}\\omega_n\\). This is termed the frequency of damped natural vibration. 
It is always slightly lower than the undamped natural frequency (\\(\\omega_d<\\omega_n\\)) on account of \\(\\sqrt{1-\\zeta^2}\\) always being less than one. Using \\(\\omega_d\\), Equation 12 takes the form, \\[\nx(t) = \\left(A_1e^{ -i\\omega_d t} + A_2 e^{i\\omega_d t}\\right)e^{-\\zeta\\omega_n t}\n\\] The exponential sitting outside of the bracket has a real power, and so represents an exponential decay term. Note that this decay is proportional to the undamped natural frequency \\(\\omega_n\\); higher frequencies decay more rapidly. The rate of decay is also proportional to \\(\\zeta\\); larger values of \\(\\zeta\\) lead to a quicker decay.\n\nInitial conditions\nFollowing the same steps as for the undamped case, our damped solution can also be expressed in a simpler \\(\\cos\\) form as, \\[\nx(t) = |A|\\cos(\\omega_d t - \\phi) e^{-\\zeta\\omega_n t}\n\\] with the initial conditions, \\[\nA = x_0 + \\frac{\\dot{x}_0 {\\color{orange} +\\zeta\\omega_n x_0}}{i \\omega_d} \\quad |A| = \\sqrt{\\Re(A)^2 + \\Im(A)^2} \\quad \\phi = \\tan^{-1}\\left(\\frac{\\Im(A)}{\\Re(A)}\\right)\n\\] It is this under-damped solution that we are most interested in, and will revisit later when considering multi-DoF systems.\n\nFigure 8: Response of under-damped mass-spring-damper system due to initial conditions.\n\nCritically damped\nWhen \\(\\zeta=1\\) we reach the transition point between under- and over-damped scenarios. We term this case critical damping. In reality, one never achieves critical damping exactly; \\(\\zeta\\) is always slightly larger or smaller than 1 in practice. Nevertheless, in theory it can be shown that the critically damped solution has the form, \\[\nx(t) = (A_1+tA_2)e^{-\\omega_n t}\n\\tag{13}\\] which occurs when, \\[\n\\zeta = 1 = \\frac{c}{2m\\omega_n} \\rightarrow c_{cr} = 2m\\omega_n = 2\\sqrt{km}\n\\] Admittedly, it is not immediately obvious how Equation 12 reduces to Equation 13 when \\(\\zeta=1\\). 
Substituting \\(\\zeta=1\\) directly into Equation 12 yields, \\[\nx(t) = \\left(A_1 + A_2 \\right)e^{-\\omega_n t} = A_1e^{-\\omega_n t}\n\\tag{14}\\] which is indeed a valid solution, but it is not the only one. It implies that given an initial displacement \\(x_0\\), the initial velocity must be \\(\\dot{x}_0 = -\\omega_n A_1\\). Physically however, there is nothing stopping us from setting the initial displacement and then applying whatever initial velocity we want! So, Equation 14 is clearly missing something…\nThe issue is that as \\(\\zeta\\rightarrow 1\\) the two roots \\(s_{1,2}\\) merge into a single root \\(s\\), meaning we no longer have two independent solutions and so cannot represent all possible solutions. To find the missing solution we define \\(\\alpha=\\sqrt{\\zeta^2-1}\\omega_n\\) such that our general solution takes the form, \\[\n x(t) = \\left(A_1e^{ -\\alpha t} + A_2 e^{\\alpha t}\\right)e^{-\\zeta\\omega_n t}\n\\] We then take \\(A_2 = -A_1\\) and consider the limit as we approach critical damping \\(\\alpha \\rightarrow 0\\) and \\(\\zeta\\rightarrow 1\\). For small \\(\\alpha\\) we can take \\(e^{\\pm\\alpha t}=1\\pm\\alpha t\\), which leads to our missing solution, \\[\n x(t) = \\left(A_1(1-\\alpha t) - A_1 (1+\\alpha t)\\right)e^{-\\zeta\\omega_n t} = -2A_1\\alpha t e^{-\\omega_n t}\n\\] Recalling that we can multiply a solution by a constant and it remains a solution, we can rewrite our new solution as, \\[\n x(t) = A_2 t e^{-\\omega_n t}\n\\] Finally, the general solution is the sum of our individual solutions, as in Equation 13.\n\nOver-damped\nLastly, in the over-damped case where \\(\\zeta>1\\), we see that our solution is just the sum of two exponential decays, \\[\nx(t) = A_1e^{ -\\left(\\zeta + \\sqrt{\\zeta^2-1}\\right)\\omega_n t} + A_2 e^{-\\left(\\zeta - \\sqrt{\\zeta^2-1}\\right)\\omega_n t}\n\\tag{15}\\] Note that \\(0 < \\zeta - \\sqrt{\\zeta^2-1} < 1\\), and so Equation 15 will always decay more slowly than Equation 14. 
This means that the critically damped response corresponds to the fastest possible decay, without oscillation. Any further reduction in the damping \\(\\zeta\\) will lead to complex roots \\(s_{1,2}\\) and some oscillation.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html#harmonic-forcing",
"href": "introductions/POV/SDOF/SDOF.html#harmonic-forcing",
"title": "Single degree of freedom oscillator",
"section": "Harmonic forcing",
"text": "Harmonic forcing\nHarmonic forcing at frequency \\(\\omega\\) can be expressed in complex exponential form as, \\[\nF_e(t) = F_0 e^{i\\omega t}\n\\] where \\(F_0\\) is a complex amplitude. We will assume that this force as been acting for all time, and so any transient behavior due to its initial application have decayed to 0.\nAs a consequence of our mass-spring-damper system being linear, its response must also be harmonic at the same frequency, \\[\nx(t) = X_0 e^{i\\omega t}\n\\tag{16}\\] where \\(X_0\\) is similarly a complex amplitude. Note that the magnitude and phase of \\(X_0\\) will differ from \\(F_0\\) on account of the system’s dynamics.\nSubstituting the harmonic terms above and performing the necessary derivatives we arrive at, \\[\nF_e = kx + c\\dot{x} + m\\ddot{x} \\quad \\quad F_0 e^{i\\omega t} = k X_0 e^{i\\omega t} + ci\\omega X_0 e^{i\\omega t} - m\\omega^2 X_0 e^{i\\omega t}\n\\] Now dividing through by \\(e^{i\\omega t}\\) leaves us with, \\[\n F_0 = \\overbrace{(k + i\\omega c - \\omega^2 m)}^{D(\\omega)}X_0 \\quad \\rightarrow \\quad X_0 = R(\\omega) F_o\n\\tag{17}\\] where \\(D(\\omega)\\) is a complex function termed the dynamic stiffness that depends on frequency and describes the system’s dynamics; \\(R(\\omega)\\) is the inverse of \\(D(\\omega)\\) and is termed the receptance. More generally, \\(D(\\omega)\\) and \\(R(\\omega)\\) are termed frequency response functions or FRFs, a concept we will return to shortly.\nEquation 17 provides a means of calculating the complex amplitude of the harmonic response given the forcing amplitude. Once \\(X_0\\) is obtained, the time series response can be obtained using Equation 16. 
Alternatively, the magnitude and phase of \\(R(\\omega)\\) can be examined and the response obtained using the standard \\(|A| \\cos(\\omega t - \\phi)\\) form.\nTo examine \\(R(\\omega)\\) further we separate the real and imaginary parts of the denominator, \\[\nR(\\omega) = \\frac{1}{k + i\\omega c - \\omega^2 m} = \\frac{1}{\\underbrace{(k - \\omega^2 m)}_{\\mbox{Real}}+ \\underbrace{i\\omega c}_{\\mbox{Imag}} }\n\\tag{18}\\] The magnitude is now obtained simply as, \\[\n|R(\\omega)| = \\frac{1}{|k - \\omega^2 m+ i\\omega c |} = \\frac{1}{\\sqrt{(k - \\omega^2 m)^2+ (\\omega c)^2}}\n\\] Recalling from our treatment of the homogeneous case \\(\\omega_n = \\sqrt{k/m}\\) and \\(c = \\zeta 2 k/ \\omega_n\\), the receptance magnitude becomes, \\[\n|R(\\omega)| = \\frac{1}{\\sqrt{k^2\\left(1 - \\omega^2 \\frac{m}{k}\\right)^2+ (\\omega c)^2}} = \\frac{1}{\\sqrt{k^2\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)^2+ \\left( \\zeta 2 k \\frac{\\omega}{\\omega_n}\\right)^2}} = \\frac{1}{k \\sqrt{\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)^2+ \\left( \\zeta 2 \\frac{\\omega}{\\omega_n}\\right)^2}}\n\\tag{19}\\] We see that the magnitude increases as \\(\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)\\) goes to zero, and its maximum is limited by \\(\\zeta 2 k \\frac{\\omega}{\\omega_n}\\).\nShown in Figure 9 are the receptance frequency response curves obtained for different levels of damping. The peak of the magnitude is marked and occurs at \\(\\omega_{peak} = \\omega_n\\sqrt{1-2\\zeta^2}\\). Note that this peak frequency is always less than the undamped natural frequency \\(\\omega_n\\). 
It is obtained by taking the derivative of Equation 19, setting this to zero, and solving for \\(\\omega\\).\n\nWant to see where \\(\\omega_{peak} = \\omega_n\\sqrt{1-2\\zeta^2}\\) came from?\n\nFactoring out constants and using the reciprocal rule \\([1/u(x)]' = - u(x)'/u(x)^2\\), \\[\n\\frac{\\partial |R|^2}{\\partial \\omega} = - \\frac{1}{k^2}\n\\frac{ \\frac{\\partial}{\\partial \\omega} \\left(\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)^2+ \\left( \\zeta 2 \\frac{\\omega}{\\omega_n}\\right)^2\\right)}{ \\left(\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)^2+ \\left( \\zeta 2 \\frac{\\omega}{\\omega_n}\\right)^2\\right)^2}\n\\] Using the summation rule \\((a\\cdot u(x) + b\\cdot v(x))' = a\\cdot u(x)' + b\\cdot v(x)'\\) \\[\n\\frac{\\partial |R|^2}{\\partial \\omega} = - \\frac{1}{k^2}\n\\frac{ \\frac{\\partial}{\\partial \\omega} \\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)^2+ \\left(\\frac{\\zeta 2}{\\omega_n}\\right)^2 \\frac{\\partial}{\\partial \\omega}\\left(\\omega\\right)^2}{ \\left(\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)^2+ \\left( \\zeta 2 \\frac{\\omega}{\\omega_n}\\right)^2\\right)^2}\n\\] Using the chain rule \\(u(v(x))' = u'(v(x))\\,v(x)'\\) and the power rule \\([u(x)^n]' = nu(x)^{n-1}u(x)'\\), \\[\n\\frac{\\partial |R|^2}{\\partial \\omega} = - \\frac{1}{k^2}\n\\frac{ 2\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right) \\frac{\\partial}{\\partial \\omega} \\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)+ \\left(\\frac{\\zeta 2}{\\omega_n}\\right)^2 2\\omega}{ \\left(\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)^2+ \\left( \\zeta 2 \\frac{\\omega}{\\omega_n}\\right)^2\\right)^2}\n\\] Using \\((1)' = 0\\) and reapplying the power rule, \\[\n\\frac{\\partial |R|^2}{\\partial \\omega} = - \\frac{1}{k^2}\n\\frac{ - 2\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right) \\left( \\frac{2 \\omega}{\\omega_n^2}\\right)+ \\left(\\frac{\\zeta 2}{\\omega_n}\\right)^2 2\\omega}{ \\left(\\left(1 - 
\\frac{\\omega^2}{\\omega_n^2}\\right)^2+ \\left( \\zeta 2 \\frac{\\omega}{\\omega_n}\\right)^2\\right)^2}\n\\] We now set the derivative to zero, and multiply both sides by the denominator, \\[\n0 = -\n 2\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right) \\left( \\frac{2 \\omega}{\\omega_n^2}\\right)+ \\left(\\frac{\\zeta 2}{\\omega_n}\\right)^2 2\\omega\n\\] Do a little algebra, \\[\n\\left(\\frac{2 \\omega}{\\omega_n^2} - \\frac{2 \\omega^3}{\\omega_n^4}\\right) = \\left(\\frac{\\zeta 2}{\\omega_n}\\right)^2 \\omega\n\\] …a little more algebra… \\[\n \\frac{2}{\\omega_n^2} - \\frac{2 \\omega^2}{\\omega_n^4} = \\left(\\frac{\\zeta 2}{\\omega_n}\\right)^2\n\\] …nearly there… \\[\n \\frac{2 \\omega^2}{\\omega_n^4} = -\\left(\\frac{\\zeta 2}{\\omega_n}\\right)^2+\\frac{2}{\\omega_n^2}\n\\] \\[\n \\omega^2 = -2\\left(\\frac{\\zeta \\omega_n^2}{\\omega_n}\\right)^2+\\frac{\\omega_n^4}{\\omega_n^2} = -2\\zeta^2 \\omega_n^2+\\omega_n^2 = \\omega_n^2(1 - 2\\zeta^2)\n\\] And finally, taking the square-root of both sides, \\[\n \\omega_{peak} = \\omega_n\\sqrt{1 - 2\\zeta^2}\n\\] Et voilà! Now, was that worth it? Probably not!\n\nNote that if \\(\\zeta>1/\\sqrt{2}\\) the response has no peak, and that if \\(\\zeta=0\\), it has a discontinuity at \\(\\omega/\\omega_n = 1\\).\n\nLinear frequency and amplitude axes.\n\nLogarithmic frequency and amplitude axes.\n\nFigure 9: Frequency response of the receptance FRF for different levels of damping.\n\nTo get the phase response we need to separate real and imaginary parts of the receptance. 
We do this by multiplying numerator and denominator by the complex conjugate of the denominator, \\[\n\\begin{align}\n R(\\omega)&= \\frac{1}{k\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right) + i \\zeta 2 k \\frac{\\omega}{\\omega_n} } {\\color{black!50!green}\\times \\frac{k\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right) - i \\zeta 2 k \\frac{\\omega}{\\omega_n} }{k(1 - \\frac{\\omega^2}{\\omega_n^2}) - i \\zeta 2 k \\frac{\\omega}{\\omega_n} }}\\\\\n &=\\underbrace{\\frac{k\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)}{k^2\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)^2 + \\left(\\zeta 2 k \\frac{\\omega}{\\omega_n}\\right)^2 } }_{\\mbox{real part}}- i\\underbrace{\\frac{ \\zeta 2 k \\frac{\\omega}{\\omega_n} }{k^2\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)^2 + \\left(\\zeta 2 k \\frac{\\omega}{\\omega_n}\\right)^2 }}_{\\mbox{imag part}}\n\\end{align}\n\\] From the above we obtain the phase response as, \\[\n\\angle R(\\omega) = \\tan^{-1}\\left( \\frac{\\mbox{imag}}{\\mbox{real}} \\right) =\\tan^{-1}\\left( \\frac{-\\zeta 2 k \\frac{\\omega}{\\omega_n} }{k\\left(1 - \\frac{\\omega^2}{\\omega_n^2}\\right)} \\right)= \\tan^{-1}\\left( \\frac{-c \\omega }{k - \\omega^2 m} \\right)\n\\] Shown in Figure 10 are the phase response curves for different levels of damping alongside the harmonic response above and below the natural frequency of vibration. We see that the motion of the mass becomes progressively delayed relative to the excitation force. In the limit as \\(\\omega\\rightarrow \\infty\\) they are \\(180^\\circ\\) out of phase. 
For all levels of damping, we have a phase of \\(-\\pi/2\\) at undamped resonance \\(\\omega_n\\).\n\nPhase response for different levels of damping.\n\nHarmonic response above and below the natural frequency.\n\nFigure 10: Phase response of a damped mass-spring system.\n\nWhat happens at \\(\\omega_n\\) if there is no damping…?\n\nIn the case of no damping we have a discontinuity at resonance - in reality this never happens, as there is always some damping to limit the response. Nevertheless, let’s look at what would happen anyway… Our EoM without damping, \\[\nm\\ddot{x} +k x = F_0 \\cos(\\omega_n t) \\quad \\rightarrow \\quad \\ddot{x} +\\omega_n^2 x = \\frac{F_0}{m} \\cos(\\omega_n t) = \\frac{\\omega_n^2 F_0}{k} \\cos(\\omega_n t)\n\\] How do we solve this? We assume a solution of \\(x(t) = \\frac{\\omega_n F_0 t}{2k} \\sin(\\omega_n t)\\) and then prove it…\nOur proof begins by getting the velocity and acceleration by differentiation (using the product rule), \\[\n\\begin{align}\n \\dot{x}(t) &= \\frac{\\omega_n F_0}{2k} \\sin(\\omega_n t) + \\frac{\\omega_n^2 F_0 t}{2k} \\cos(\\omega_n t)\\\\\n \\ddot{x}(t) & = \\frac{\\omega_n^2 F_0}{2k} \\cos(\\omega_n t) + \\frac{\\omega_n^2 F_0}{2k} \\cos(\\omega_n t) - \\frac{\\omega_n^3 F_0 t}{2k} \\sin(\\omega_n t)\n\\end{align}\n\\] Next, we plug everything into our EoM and check that the LHS and RHS are equal… \\[\n\\ddot{x} +\\omega_n^2 x = \\frac{\\omega_n^2 F_0}{k} \\cos(\\omega_n t)\n\\] \\[\n\\begin{align}\n\\overbrace{ \\frac{\\omega_n^2 F_0}{2k} \\cos(\\omega_n t) + \\frac{\\omega_n^2 F_0}{2k} \\cos(\\omega_n t) - \\frac{\\omega_n^3 F_0 t}{2k} \\sin(\\omega_n t)}^{ \\ddot{x}} +\\omega_n^2 \\overbrace{ \\frac{\\omega_n F_0 t}{2k} \\sin(\\omega_n t)}^{x} \\\\= \\frac{\\omega_n^2 F_0}{k} \\cos(\\omega_n t) \n\\end{align}\n\\] As we can see, it all checks out! \\(x(t) = \\frac{\\omega_n F_0 t}{2k} \\sin(\\omega_n t)\\) is a solution at \\(\\omega=\\omega_n\\). 
What this is saying is that the amplitude grows linearly with time (as in Figure 11) and tends to infinity! In the presence of damping, we obtain a similar response, except that it will plateau at some amplitude on account of the damping removing energy from the system.\n\nFigure 11: Response of undamped system when excited at natural frequency.\n\nIt should be obvious that uncontrolled resonances, which occur when damping is very low, have the potential to cause structural damage/failure due to the large displacements that result and should be avoided!\n\nFrequency Response Functions (FRFs)\nThe Frequency Response Function (FRF) is a very general and useful concept. In short, it provides a means of describing or characterising the dynamics of a system in the frequency domain. What’s particularly useful about FRFs is that they can be obtained from models by analytical means, or experimentally from measurement. This makes FRFs a very useful tool for analysing all sorts of dynamic systems.\n\nFigure 12: Representation of FRF as an input-output relation.\n\nIn general, an FRF \\(G(\\omega)\\) describes the complex (i.e. magnitude and phase) relation between a harmonic input signal and the corresponding output signal on a given system. \\[\nG(\\omega) = \\frac{Y(\\omega)}{X(\\omega)} \\quad \\quad Y(\\omega) = G(\\omega)X(\\omega)\n\\] We can think of the FRF as a black box representation of the system dynamics. Like any complex number, it can be represented in polar form, \\[\nG(\\omega) = |G(\\omega)| e^{-i\\angle G(\\omega)}\n\\] The magnitude \\(|G(\\omega)|\\) describes the change in amplitude between the input and output of a system. The phase \\(\\angle G(\\omega)\\) describes the time lag between the input and output.\nWe have already derived the force-displacement FRFs for our mass-spring system in Equation 17, i.e. the dynamic stiffness \\(D(\\omega)\\) and receptance \\(R(\\omega)\\). 
Similar FRFs also exist for velocity and acceleration-based responses.\nRecall that for a harmonic response, displacement, velocity and acceleration are related by, \\[\n\\ddot{X}_0 = i\\omega \\dot{X}_0 = -\\omega^2 X_0\n\\] We can therefore derive the velocity-based FRFs from those of the displacement using \\(\\dot{X}_0 = i\\omega X_0\\). We thus have the impedance \\(Z(\\omega)\\) and mobility \\(Y(\\omega)\\), \\[\nZ(\\omega) = \\frac{F_0(\\omega)}{\\dot{X}_0(\\omega)}= \\frac{F_0(\\omega)}{ i\\omega X_0(\\omega)}= \\frac{1}{i\\omega}D(\\omega)\\quad \\quad \\quad Y(\\omega) = \\frac{\\dot{X}_0(\\omega)}{F_0(\\omega)} = \\frac{i\\omega X_0(\\omega)}{F_0(\\omega)} =i\\omega R(\\omega)\n\\] Similarly, acceleration-based FRFs can be derived from those of displacement or velocity using \\(\\ddot{X}_0 = i\\omega \\dot{X}_0 = -\\omega^2 X_0\\). This leads us to the effective mass \\(M(\\omega)\\) and accelerance \\(A(\\omega)\\), \\[\nM(\\omega) = \\frac{F_0(\\omega)}{\\ddot{X}_0(\\omega)}= \\frac{F_0(\\omega)}{ i\\omega \\dot{X}_0(\\omega)}= \\frac{1}{i\\omega}Z(\\omega)\\quad \\quad \\quad A(\\omega) = \\frac{\\ddot{X}_0(\\omega)}{F_0(\\omega)} = \\frac{i\\omega \\dot{X}_0(\\omega)}{F_0(\\omega)} =i\\omega Y(\\omega)\n\\] Shown in Figure 13 are the receptance, mobility and accelerance FRFs for our single DoF mass-spring system with different levels of damping. Which of these FRFs is most useful very much depends on the problem at hand.\n\nFigure 13: Representation of FRF as an input-output relation.\n\nWith knowledge of a system’s FRF, say the receptance \\(R(\\omega)\\), its response due to a harmonic excitation at frequency \\(\\omega\\) is given by, \\[\nx(t) = |R(\\omega)| F_0 \\cos(\\omega t - \\angle R(\\omega)) \\quad \\mbox{or} \\quad x(t) = \\mbox{Real}\\left(R(\\omega) F_0 e^{-i \\omega t}\\right)\n\\]",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html#periodic-forcing",
"href": "introductions/POV/SDOF/SDOF.html#periodic-forcing",
"title": "Single degree of freedom oscillator",
"section": "Periodic forcing",
"text": "Periodic forcing\nHaving dealt with the case of simple harmonic excitation (periodic over time interval \\(T=2\\pi/\\omega\\)), we are now ready to consider excitation by more general periodic functions, an example of which is shown in Figure 14.\n\n\n\n\n\n\nFigure 14: Example of a periodic signal with interval \\(T\\).\n\n\n\nTo deal with periodic excitation signals we simply note that any periodic function can be represented by a convergent series of harmonic functions whose frequencies are integer multiples of a fundamental \\(\\omega_0 = 2\\pi/T\\). This is the so-called Fourier series, \\[\n F(t) = \\frac{1}{2}F_0 + \\mbox{Real}\\left[ \\sum_{n=1}^\\infty F_n e^{-in\\omega_0 t}\\right]\n\\] where \\(F_n\\) is a complex coefficient that describes amplitude and phase of the \\(n\\)th harmonic, with \\(F_0\\) representing the DC offset of the signal. Since our EoM is linear, we can treat each harmonic individually and recombine their individual solutions afterwards. \\[\nx(t) = \\frac{1}{2}F_0 R(0) + \\mbox{Real}\\left[\\sum_{n=1}^\\infty R(n\\omega_0) F_n e^{-in\\omega_0 t} \\right]\n\\] Importantly, each forcing harmonic is multiplied by the complex FRF at the corresponding frequency, \\(n\\omega_0\\).",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html#arbitrary-forcing",
"href": "introductions/POV/SDOF/SDOF.html#arbitrary-forcing",
"title": "Single degree of freedom oscillator",
"section": "Arbitrary forcing",
"text": "Arbitrary forcing\nTo consider an arbitrary, non-harmonic, forcing we can no longer assume that the system is operating at steady state. Rather, we must treat the entire response as transient. To deal with this problem, we will need to introduce the system’s until impulse response function (IRF).\n\nImpulse response function\nThe unit impulse response of a system is the response obtained by considering the external forcing \\(F(t)\\) to be that of a Dirac delta function, defined mathematically as \\[\n\\begin{align}\n \\delta(t-a) &= 0 \\quad \\quad \\mbox{for } t\\neq a\\\\\n \\int_{-\\infty}^\\infty \\delta(t-a){\\rm d}t &=1\n\\end{align}\n\\] Using \\(F_e = \\delta(t)\\), the system impulse response is denoted \\(g(t)\\).\nInserting \\(F_e=\\delta(t)\\) and \\(x(t) = g(t)\\), the EoM becomes, \\[\n\\delta(t) = kg(t) + c\\dot{g}(t) + m\\ddot{g}(t)\n\\] Taking \\(g(0)=\\dot{g}(0) = 0\\) by definition, we integrate our EoM over the time interval \\(\\Delta t=\\epsilon\\) and take the limit as this goes to \\(0\\), \\[\n\\underbrace{ \\lim_{\\epsilon\\rightarrow 0} \\int_0^\\epsilon \\delta(t) dt }_{=1}= \\lim_{\\epsilon\\rightarrow 0} \\int_0^\\epsilon \\left(kg(t) + c\\dot{g}(t) + m\\ddot{g}(t) \\right)dt\n\\] Note that integration of the displacement is just a constant, which goes to zero on evaluation of the limits. Integration of the velocity term yields a displacement, however since it takes some time for a displacement to develop this term also tends to zero in the limit. 
All we are left with is the acceleration term which evaluates to, \\[\n\\lim_{\\epsilon\\rightarrow 0} \\int_0^\\epsilon m\\ddot{g}(t)dt = \\lim_{\\epsilon\\rightarrow 0} m\\dot{g}(t)\\Big|_0^\\epsilon = m \\lim_{\\epsilon\\rightarrow 0} [\\dot{g}(\\epsilon) - \\dot{g}(0)] = m\\dot{g}(0+)\n\\] where the term \\(\\dot{g}(0+)\\) denotes a change in velocity at the end of the increment \\(\\epsilon\\).\nFrom the above we conclude that, \\[\n\\dot{g}(0+) = \\frac{1}{m}\n\\tag{20}\\] Physically, Equation 20 is telling us that the unit impulse produces an instantaneous change in velocity; an impulse applied at \\(t=0\\) is equivalent to the effect of an initial velocity \\(\\dot{x}_0=1/m\\). Recalling our solution to the unforced EoM, the system impulse response is given by, \\[\n\\begin{align}\n g(t) &= \\frac{1}{\\omega_d m} \\sin(\\omega_d t) e^{-\\zeta\\omega_n t} \\quad &t>0 \\\\\n &=0 &t<0\n\\end{align}\n\\] If excitation happens at \\(t=\\tau\\), \\(F_e = \\delta(t-\\tau)\\), then the impulse response \\(g(t)\\) is simply delayed by \\(\\tau\\), \\(g(t-\\tau)\\).\n\nImpulse response without the integrals…?\n\nConsider a constant external force \\(F_e\\) applied over the infinitesimal time \\(0<t<\\epsilon\\). During this time, the velocity of the mass increases from 0 to \\(\\dot{x}(\\epsilon)\\), after which its motion is governed by the system dynamics. From Newton’s second law \\(F=m\\ddot{x}\\), and simple kinematics \\(\\dot{x}(t) = t\\ddot{x}\\), we have that the velocity at \\(t=\\epsilon\\) is, \\[\n\\dot{x}(\\epsilon) = \\epsilon\\ddot{x} = \\frac{\\epsilon F}{m}\n\\] The product \\(\\epsilon F\\) is known as the impulse; it describes the change in momentum. We see that a unit impulse (\\(\\epsilon F = 1\\)) is equivalent to an initial velocity that is inversely proportional to the system mass, \\(\\dot{x}_0 = 1/m\\).\n\nConvolution integral\nConsider an arbitrary forcing function \\(F(t)\\), for example as in Figure 15. 
During a small time instant \\(\\Delta \\tau\\), we can regard \\(F(t)\\) as an impulse \\(F(\\tau)\\Delta \\tau\\), \\[\n\\Delta F(\\tau) = F(\\tau) \\Delta \\tau \\delta(t-\\tau)\n\\] Based on the impulse representation above, the complete forcing function can be approximated as, \\[\nF(t)\\approx \\sum F(\\tau) \\Delta \\tau \\delta(t-\\tau)\n\\] which becomes exact as \\(\\Delta \\tau \\rightarrow 0\\). Because our mass-spring system is linear, we can use the impulse response function \\(g(t)\\) to predict the response due to each individual excitation impulse, \\(\\Delta F(t,\\tau)\\), and then recombine them to realize the final response.\n\nFigure 15: Example of force signal being approximated by series of impulses.\n\nThe response at time \\(t\\) due to impulse \\(\\Delta F(\\tau)\\) is given by, \\[\n\\Delta x(t) =F(\\tau) \\Delta \\tau g(t-\\tau)\n\\] This is just the impulse response \\(g(t)\\) weighted by the forcing value \\(F(\\tau)\\) and delayed by \\(\\tau\\), as illustrated in Figure 16. The response due to the full \\(F(t)\\) is the sum of all individual impulses, \\[\nx(t) \\approx \\sum F(\\tau) \\Delta \\tau g(t-\\tau)\n\\tag{21}\\]\n\nSingle impulse\n\nTwo impulses\n\nFigure 16: Response as the summation of impulse response functions weighted by forcing function.\n\nEquation 21 becomes exact as \\(\\Delta \\tau \\rightarrow 0\\), \\[\nx(t) = \\int_0^t F(\\tau) g(t-\\tau) d\\tau\n\\tag{22}\\] This is called the convolution integral. It simply states that the response can be expressed as a superposition of the impulse response weighted by the arbitrary forcing function. In vibration analysis, this is also known as Duhamel’s integral.\nIt is worthwhile noting that convolution in the time domain is simply a multiplication in the frequency domain. 
Taking the Fourier transform of Equation 22 yields our harmonic forcing equation from before, \\[\nX_0(\\omega)=R(\\omega) F_0(\\omega)\n\\] From the above, we see that the FRF \\(R(\\omega)\\) is related to the IRF \\(g(t)\\) by Fourier transform, \\[\nR(\\omega) = \\mathcal{F}(g(t))\n\\]",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html#structural-damping",
"href": "introductions/POV/SDOF/SDOF.html#structural-damping",
"title": "Single degree of freedom oscillator",
"section": "Structural damping",
"text": "Structural damping\nIn reality, damping is very very complicated… We almost always rely on simplified models. So far we have been assuming a viscous damping model, i.e. the damping force is proportional to velocity. In this case our EoM and displacement FRF take the form, \\[\nF_e = kx + c\\dot{x} + m\\ddot{x} \\quad \\quad R(\\omega) = \\frac{1}{k + i\\omega c - \\omega^2 m}\n\\] The viscous damping model, though valid for viscous damping devices, is not always an appropriate representation for the intrinsic damping in structures, as will be discussed below.\nBecause of damping, our mass-spring system is non-conservative, meaning that energy is not conserved, but dissipated with time. The energy dissipated is equal to work done by the external force. For a single cycle over period \\(T\\) (noting \\({\\rm d}x = \\dot{x}{\\rm d}t\\)) the energy dissipated is given by, \\[\n\\Delta E = \\int_0^{T} F_e {\\rm d}x = \\int_0^{2\\pi/\\omega} F_e\\dot{x} {\\rm d}t\n\\] Since the dissipated energy must be real valued, we consider the real parts of \\(F_e=F_0 e^{-i \\omega t}\\) and \\(\\dot{x} = i\\omega|R|F_0 e^{-i(\\omega t-\\phi)}\\), \\[\n\\begin{align}\n \\mbox{Real}(\\dot{x}) &= |R| F_0 \\omega \\sin(\\omega t - \\phi)\\\\\n \\mbox{Real}(F_e) &= F_0 \\cos\\omega t\n\\end{align}\n\\] The energy dissipated then becomes, \\[\n\\begin{align}\n \\Delta E = \\int_0^{2\\pi/\\omega} F_e\\dot{x} {\\rm d}t &= \\int_0^{2\\pi/\\omega} F_0 \\cos(\\omega t)|R| F_0\\omega \\sin(\\omega t -\\phi) {\\rm d}t\\\\\n &= F_0^2 \\omega |R| \\underbrace{\\int_0^{2\\pi/\\omega} \\cos(\\omega t) \\sin(\\omega t - \\phi) {\\rm d}t}_{\\frac{\\pi}{\\omega} \\sin \\phi}\\\\\n &= F_0^2 |R| \\pi \\sin \\phi\n \\end{align}\n\\] Next, we need to determine \\(\\sin\\phi\\)… We know the phase angle is given by, \\[\n\\phi = \\tan^{-1}\\left( \\frac{\\mbox{imag}}{\\mbox{real}} \\right) = \\tan^{-1}\\left( \\frac{-\\zeta 2 k \\frac{\\omega}{\\omega_n} }{k\\left(1 - 
\\frac{\\omega^2}{\\omega_n^2}\\right)} \\right) =\\tan^{-1}\\left( \\frac{-c \\omega }{k - \\omega^2 m} \\right)\n\\] So by using the trig identity \\(\\sin(\\tan^{-1}(x)) = x/\\sqrt{1+x^2}\\), after a little algebra, we obtain, \\[\n\\sin\\phi = \\frac{-\\omega c}{\\sqrt{(k-\\omega^2m)^2 + (\\omega c)^2}} = -\\omega c |R|\n\\] The energy dissipated per cycle now becomes, \\[\n\\Delta E = F_0^2 |R| \\pi \\sin \\phi = \\underbrace{|R|^2 F_0^2}_{|X_0|^2} \\pi \\omega c\n\\tag{23}\\]\nWe see from Equation 23 that for a viscous damper, the energy dissipated per cycle is proportional to: the damping coefficient \\(c\\), the frequency \\(\\omega\\) and the amplitude squared \\(|X_0|^2\\).\nIn real systems, even without designated damping devices, energy is dissipated over time. For example, in real springs (i.e. coil springs) energy is dissipated as a result of internal friction within the material. This phenomenon occurs in all real structures. Experiments on a wide variety of materials indicate that the energy loss per cycle due to internal friction is roughly proportional to the amplitude squared, \\[\n\\Delta E = \\alpha |X_0|^2\n\\] where \\(\\alpha\\) is a constant independent of frequency. We call the above structural damping. 
It can be attributed to the hysteresis phenomenon associated with cyclic stresses in elastic materials.\nA system with structural damping can be treated as having viscous damping with an equivalent, frequency dependent, damping coefficient, \\[\n\\begin{align}\nc_{eq} = \\frac{\\alpha}{\\pi\\omega} \\quad \\rightarrow\\quad \\Delta E &= c_{eq}\\pi\\omega |X_0|^2 \\\\\n&=\\alpha|X_0|^2\n\\end{align}\n\\] Using this new equivalent damping factor our EoM becomes, \\[\n\\begin{align}\nF_e = kx + \\frac{\\alpha}{\\pi\\omega} \\dot{x} + m\\ddot{x} \\quad \\rightarrow \\quad F_0 &= kX_0 + \\frac{i\\omega \\alpha}{\\pi\\omega} X_0 -\\omega^2 m X_0\\\\\n&= {\\left(k + \\frac{i\\omega \\alpha}{\\pi\\omega} \\right)} X_0 -\\omega^2 m X_0\n\\end{align}\n\\] From the above it is convenient to define a complex stiffness \\(k(1+i\\gamma)\\), where \\(\\gamma = \\alpha/(\\pi k)\\) is termed the structural loss factor.\nIn the presence of structural damping, the system’s harmonic response becomes, \\[\nx(t) = \\mbox{Real}\\left(X_0 e^{i\\omega t} \\right) = |X_0| \\cos(\\omega t-\\angle X_0) \\quad \\quad X_0 = \\overbrace{\\frac{1}{k(1+i\\gamma)-\\omega^2m}}^{R(\\omega)}F_0\n\\]\nShown in Figure 17 are some system responses for different levels of viscous and structural damping. We see that with structural damping the peak amplitude doesn’t shift with increasing levels of damping and the response is no longer stiffness controlled at low frequency.\nHere are some loss factors for typical materials:\n\nGlass \\(\\sim 1\\times10^{-4}\\)\nSteel \\(\\sim 1\\times10^{-3}\\)\nPerspex \\(\\sim 0.1\\)\nRubber \\(\\sim >0.1\\)\n\nFigure 17: Response of mass-spring system with different values of viscous and structural damping.\n\nImportant: structural damping is only valid for harmonic excitation - we assumed \\(F_0e^{i\\omega t}\\) in the foregoing analysis. We can’t use it for transient analysis! It turns out that structural damping can give non-causal transient responses…",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html#estimating-damping",
"href": "introductions/POV/SDOF/SDOF.html#estimating-damping",
"title": "Single degree of freedom oscillator",
"section": "Estimating damping",
"text": "Estimating damping\nThe amount of damping in a system is often unknown, though it can be estimated from measurements in a number of ways. Below we discuss two simple approaches based on unforced and forced system responses.\n\nLog-decrement method\nAssuming a viscous damping force, we know that the system response decays exponentially according to, \\[\nx(t) = A\\cos(\\omega_d t - \\phi) e^{-\\zeta\\omega_n t}\n\\] Let us consider the amplitude ratio after a period \\(T\\), \\[\n\\frac{x_1}{x_2} = \\frac{A\\cos(\\omega_d t_1 - \\phi) e^{-\\zeta\\omega_n t_1} }{A\\cos(\\omega_d t_2 - \\phi) e^{-\\zeta\\omega_n t_2} }\n\\] Noting that the \\(\\cos(\\quad)\\) term does not change after period \\(T\\), the amplitude ratio reduces to, \\[\n\\frac{x_1}{x_2} = \\frac{e^{-\\zeta\\omega_n t_1} }{e^{-\\zeta\\omega_n (t_1+T)} } = e^{\\zeta\\omega_n T}\n\\] Taking the natural log \\(\\ln\\) of both sides yields, \\[\n\\ln\\left(\\frac{x_1}{x_2}\\right) = \\zeta\\omega_n T\n\\] where we define the so-called log-decrement as \\(\\delta = \\ln(x_1/x_2)\\). This is something that we can easily obtain from a measurement of the system’s decay response.\n\n\n\n\n\n\nFigure 18: Transient response of a viscously damped mass-spring system.\n\n\n\nSubstituting for the decay period \\(T=2\\pi/\\omega_d\\) and also the natural frequencies \\(\\omega_n\\) and \\(\\omega_d\\), the log-decrement becomes, \\[\n\\delta = \\ln\\left(\\frac{x_1}{x_2}\\right) = \\zeta\\omega_n T = \\zeta\\omega_n \\frac{2\\pi }{\\omega_d} = \\frac{\\zeta 2\\pi}{\\sqrt{1-\\zeta^2}}\n\\tag{24}\\] Equation 24 can be rearranged and used to solve for damping factor \\(\\zeta\\), \\[\n\\zeta = \\frac{\\delta}{\\sqrt{(2\\pi)^2 + \\delta^2}}\n\\] This estimate of \\(\\zeta\\) makes use of two sequential peaks in the decay response. This is fine, but can be quite sensitive to experimental uncertainty. 
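As a quick sanity check of this result (all values below are made up for illustration), we can pick a known damping factor, generate two successive peak amplitudes analytically, and recover \\(\\zeta\\) from the log-decrement:

```python
import numpy as np

# Assumed system: damping value we will try to recover
zeta_true = 0.05
wn = 2*np.pi*10                    # natural frequency (rad/s)
wd = wn*np.sqrt(1 - zeta_true**2)  # damped natural frequency
T = 2*np.pi/wd                     # damped period

# Successive peak amplitudes of x(t) = A cos(wd t - phi) e^(-zeta wn t)
x1 = np.exp(-zeta_true*wn*0.0)
x2 = np.exp(-zeta_true*wn*T)

# Log-decrement and recovered damping factor
delta = np.log(x1/x2)
zeta_est = delta/np.sqrt((2*np.pi)**2 + delta**2)
print(zeta_est)  # recovers 0.05
```

Note that the recovery is exact here because the peaks are noise free; with measured data the single-pair estimate inherits the noise directly, which motivates the multi-peak version.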
A better estimate can be made by utilising multiple peaks as described below.\nNote that the displacement ratio for the \\(1\\)st and \\(n\\)th peaks can be written as, \\[\n\\begin{align}\n\\frac{x_1}{x_n} = \\frac{x_1}{x_2}\\, \\frac{x_2}{x_3} \\, \\frac{x_3}{x_4} \\cdots \\frac{x_{n-1}}{x_n} = (e^{\\zeta\\omega_nT})^{n-1}= e^{(n-1)\\zeta\\omega_nT}\n\\end{align}\n\\] An improved (or at least more robust) estimate of the log-decrement (and so also \\(\\zeta\\)) can then be obtained using, \\[\n\\delta_n = \\frac{1}{n-1} \\ln\\left(\\frac{x_1}{x_n}\\right)\n\\]\n\n\nResonance bandwidth\nConsider now the forced response of a viscously damped mass-spring system, as in Figure 9.\nThe ‘peakiness’ or sharpness of a resonant response is an important characteristic of a vibrating system. We can characterise the sharpness of a resonance using the so-called Q-factor.\nThere are two slightly different definitions of Q-factor, one defined in terms of half power bandwidth (as detailed below) and another defined in terms of dissipated energy. The two definitions tend to the same value for lightly damped systems (i.e. \\(\\zeta<0.05\\) or so), though the former is more useful from a damping estimation perspective, as we will see shortly.\nThe half power bandwidth-based definition of Q-factor takes the form, \\[\nQ \\triangleq \\frac{\\omega_n}{\\Delta \\omega} = \\frac{\\omega_n}{\\omega_2 - \\omega_1}\n\\] where \\(\\omega_n\\) is the natural frequency of the system, and \\(\\Delta \\omega\\) is the half power bandwidth, i.e. the spacing between the upper and lower frequencies \\(\\omega_2\\) and \\(\\omega_1\\) where the output power of the system is half that of the maximum value (these are the -3dB points surrounding the resonant peak in mobility). Note that \\(\\omega_n\\) and \\(\\Delta \\omega\\) can be measured quite easily.\nIt is possible to reformulate the Q-factor in terms of the mass, stiffness and damping of a system. 
This form will be particularly useful as it will allow us to estimate the damping based on a measured Q-factor.\nTo derive this alternate form we begin by recalling the natural frequency as a function of system mass and stiffness, \\[\n\\omega_n = \\sqrt{\\frac{k}{m}}.\n\\tag{25}\\] Next we must derive an appropriate expression for the full half power bandwidth \\(\\Delta \\omega\\).\nThe complex power \\(P\\) of a vibrating system is given by, \\[\nP = \\dot{x}^*F\n\\] where \\(\\dot{x}^*\\) is the complex conjugate of the velocity \\(\\dot{x}\\). Note that we are interested in the real power. This is the power that actually does work on the system. \\[\n\\mbox{real}(P) = \\mbox{real}(\\dot{x}^*F)\n\\] Now we can substitute the force \\(F\\) for the product of velocity and impedance, \\[\n\\mbox{real}(P) = \\mbox{real}(\\dot{x}^* \\dot{x} Z).\n\\] Using the identity \\(z^*z = |z|^2\\), the above becomes \\[\n\\mbox{real}(P) = |\\dot{x}|^2 \\mbox{real}(Z)\n\\]\nwhich leaves us with \\[\n\\mbox{real}(P) = |\\dot{x}|^2 c.\n\\tag{26}\\] where \\(c\\) is the viscous damping coefficient. 
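As a quick numerical sanity check of Equation 26 (with arbitrary made-up values for \\(m\\), \\(k\\), \\(c\\), \\(\\omega\\) and \\(F\\)), we can compute the complex power directly and compare its real part with \\(|\\dot{x}|^2 c\\):

```python
import numpy as np

# Arbitrary illustrative values
m, k, c = 1.0, 1e4, 5.0       # mass, stiffness, damping
w = 80.0                      # excitation frequency (rad/s)
F = 2.0 + 0.0j                # complex force amplitude

Z = c + 1j*(w*m - k/w)        # mechanical impedance
xdot = F/Z                    # complex velocity amplitude

P_real = np.real(np.conj(xdot)*F)   # real part of complex power
print(P_real, np.abs(xdot)**2*c)    # the two agree
```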
Equation 26 states that the real power is proportional to the velocity squared and the damping of the system.\nClearly, the maximum achievable power occurs when the velocity is a maximum, \\[\nP_{max} = |\\dot{x}_{max} |^2 c\n\\] The half maximum power can then be written as, \\[\n\\frac{P_{max}}{2} = \\frac{|\\dot{x}_{max} |^2}{2} c = \\left|\\frac{\\dot{x}_{max}}{\\sqrt{2}}\\right|^2 c\n\\]\nWe are interested in when the real power \\(\\mbox{real}(P)\\) is equal to the half maximum power, \\[\n\\mbox{real}(P) = \\frac{P_{max}}{2}\\rightarrow |\\dot{x}|^2 c = \\frac{|{\\dot{x}_{max}}|^2}{2} c\n\\] which is equivalent to, \\[\n|\\dot{x}| = \\left|\\frac{\\dot{x}_{max}}{\\sqrt{2}}\\right|\n\\] Noting that \\(|\\dot{x}|=\\left|\\frac{F}{Z}\\right|\\), the above may be rewritten in the form, \\[\n\\left|\\frac{F}{Z}\\right| = \\left|\\frac{F}{c\\sqrt{2}}\\right| \\rightarrow |Z|=c\\sqrt{2}.\n\\] Squaring both sides of the above we get, \\[\n{c^2 + \\left(\\omega m - \\frac{k}{\\omega}\\right)^2 } = 2c^2\n\\] This equation has two solutions, \\(\\left(\\omega m - \\frac{k}{\\omega}\\right)=\\pm c\\). 
\\[\n\\begin{align}\n\\omega_1 m - \\frac{k}{\\omega_1 } = -c& &\\omega_2 m - \\frac{k}{\\omega_2} = \\phantom{-}c\\\\\n\\omega^2_1 m + \\omega_1 c=k& & \\omega^2_2 m - \\omega_2 c=\\phantom{-}k\n\\end{align}\n\\] we arrive at an equation for the full half power bandwidth \\(\\Delta \\omega = \\omega_2 -\\omega_1\\), \\[\n\\omega^2_1 m + \\omega_1 c = \\omega^2_2 m - \\omega_2 c \\rightarrow \\omega_2 -\\omega_1 = \\frac{c}{m}.\n\\tag{27}\\]\nSubstituting Equation 27 and Equation 25 into the definition of Q-factor we can derive alternate forms of the Q-factor, \\[\nQ = \\frac{\\sqrt{\\frac{k}{m}}}{\\frac{c}{m}} = \\frac{m}{c} \\sqrt{\\frac{k}{m}} = \\frac{m \\omega_n}{c} = \\frac{1}{c}\\sqrt{mk}.\n\\] Recalling our definition of the viscous damping factor \\(\\zeta = \\frac{c}{2m\\omega_n}\\), we can substitute \\(c = \\zeta 2m\\omega_n\\) into the above to obtain, \\[\nQ = \\frac{m \\omega_n}{c} = \\frac{m \\omega_n}{\\zeta 2m\\omega_n} = \\frac{1}{2 \\zeta}\n\\tag{28}\\] or more simply, \\[\n\\frac{\\omega_n}{\\Delta \\omega} = \\frac{1}{2 \\zeta}\n\\tag{29}\\] From Equation 29 we can easily obtain \\(\\zeta\\) given a measurement of \\(\\omega_n\\) and \\(\\Delta \\omega\\). Note, however, that in the above derivation we used the undamped natural frequency \\(\\omega_n\\), even though our system is damped… For lightly damped systems \\(\\omega_n \\approx \\omega_d\\) and Equation 29 holds. 
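For a lightly damped case, Equation 29 is easy to check numerically. The sketch below (with assumed values of \\(m\\), \\(k\\) and \\(c\\)) locates the half-power points of the mobility \\(|1/Z|\\) on a fine frequency grid and recovers \\(\\zeta\\) from the bandwidth:

```python
import numpy as np

# Assumed (illustrative) system: lightly damped
m, k, c = 1.0, 1e4, 4.0            # kg, N/m, N s/m
wn = np.sqrt(k/m)                  # undamped natural frequency
zeta_true = c/(2*m*wn)             # = 0.02

# Mobility magnitude |1/Z| on a fine grid around resonance
w = np.linspace(0.9*wn, 1.1*wn, 100001)
mob = 1.0/np.abs(c + 1j*(w*m - k/w))

# Half-power (-3 dB) points: where mobility falls to peak/sqrt(2)
band = w[mob >= mob.max()/np.sqrt(2)]
dw = band[-1] - band[0]            # half power bandwidth

zeta_est = dw/(2*wn)               # rearranged Equation 29
print(zeta_true, zeta_est)         # both ~0.02
```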
For highly damped systems it does not.\n\n\n\n\n\n\nWhat’s up with the other \\(Q\\)-factor definition?\n\n\n\n\n\nA more precise definition of Q-factor is given by the ratio of the mean stored energy at resonance (\\(\\omega_d\\)) to the energy dissipated per cycle at resonance, \\[\nQ = 2\\pi \\frac{\\mbox{Mean energy stored at resonance}}{\\mbox{Energy dissipated per cycle at resonance}}\n\\tag{30}\\] The potential and kinetic energy of an oscillating mass-spring system are given by, \\[\nU = \\frac{1}{2}k|X_0|^2 \\quad \\mbox{and} \\quad K = \\frac{1}{2}m|\\dot{X}_0|^2\n\\] When a system oscillates, energy is transformed backwards and forwards between potential and kinetic.\nAt resonance, the mean energy stored can be expressed entirely in terms of the potential energy, \\[\nE = \\frac{1}{2}k|X_0|^2\n\\tag{31}\\] Recalling Equation 23, the energy dissipated per cycle by a viscous damper is given by, \\[\n\\Delta E = |X_0|^2 \\pi \\omega c\n\\tag{32}\\] Substituting Equation 31 and Equation 32 into the energy-based definition of Q-factor (Equation 30), we obtain, \\[\nQ = 2\\pi \\frac{\\frac{1}{2}k|X_{0}|^2}{|X_{0}|^2 \\pi \\omega_d c } = \\frac{k}{\\omega_d c } = \\frac{k}{ 2 \\zeta m \\omega_d\\omega_n } = \\frac{\\omega_n^2}{ 2 \\zeta \\omega_d\\omega_n } \\approx \\frac{1}{2 \\zeta}\n\\] which, for lightly damped systems (\\(\\omega_d\\approx \\omega_n\\)), agrees with the bandwidth-based definition in Equation 28.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html#base-isolated-machinery-force-transmissibility",
"href": "introductions/POV/SDOF/SDOF.html#base-isolated-machinery-force-transmissibility",
"title": "Single degree of freedom oscillator",
"section": "Base isolated machinery (force transmissibility)",
"text": "Base isolated machinery (force transmissibility)\n\n\n\n\n\n\nFigure 19: Force transmitted to rigid foundation.\n\n\n\nA great example of a single DoF system is a piece of vibrating machinery (like a motor) that has been placed on some vibration isolators to try and minimise the transfer of energy into any connecting structures, e.g. the floor. A simple model of such a system is that of a mass on a spring. The machine is represented by the mass, with its internal force acting as the driving force, whilst the spring-damper represents the vibration isolator.\nWhen designing a vibration isolation system, of particular interest is the amount of force that is transferred into the foundation. This can be characterised by the so-called force transmissibility, which is defined as, \\[\nT_f = \\frac{\\bar{F}}{F_0}\n\\] where \\(F_0\\) is the internal driving force of the machine, and \\(\\bar{F}\\) is the force imparted on the rigid foundation.\nTo get \\(T_f\\) we derive an equation for each force, and substitute them in. Let’s start with the driving force \\(F_0\\), which we actually already have from Equation 17, \\[\n F_0 = (k + i\\omega c - \\omega^2 m)X_0\n\\]\nNow, what about \\(\\bar{F}\\)? The force input to the foundation will be due to the spring and dashpot. Their force contributions will be equal but opposite to what they impart on the mass (which recalling Equation 6 was \\(-kx -c\\dot{x}\\)), hence, \\[\n \\bar{F} = (k + i\\omega c)X_0\n\\] Our transmissibility \\(T_f\\) then becomes, \\[\nT_f = \\frac{(k + i\\omega c)}{(k- \\omega^2 m + i\\omega c)}\n\\tag{33}\\] We can tidy this up a little and get it in a more useful form… First recall that the undamped natural frequency is \\(\\omega_n = \\sqrt{k/m}\\), or equivalently the mass can be written as \\(m = k/\\omega_n^2\\). 
Also remember that the \\(Q\\) factor can be related to the damping coefficient \\(c\\) through the following equations, \\[\nQ = \\frac{m \\omega_n}{c} = \\frac{k}{\\omega_n c} \\rightarrow \\frac{c}{k} = \\frac{1}{Q \\omega_n}\n\\] Substituting these relations into Equation 33 and doing a bit of rearranging we get the following, \\[\nT_f = \\frac{1 + i\\omega \\frac{c}{k}}{1- \\omega^2 \\frac{m}{k} + i\\omega \\frac{c}{k}} = \\frac{1 + i\\omega \\frac{c}{k}}{1- \\frac{\\omega^2 }{\\omega_n^2} + i\\omega \\frac{c}{k}} = \\frac{1 + i \\frac{\\omega}{ \\omega_n}\\frac{1}{Q}}{1- \\frac{\\omega^2 }{\\omega_n^2} + i \\frac{\\omega}{\\omega_n}\\frac{1}{Q}}\n\\] Finally, considering the magnitude squared transmissibility we have, \\[\n|T_f|^2 = \\frac{1 + \\left(\\frac{\\omega}{ \\omega_n}\\frac{1}{Q}\\right)^2}{\\left(1- \\frac{\\omega^2 }{\\omega_n^2}\\right)^2 + \\left(\\frac{\\omega}{\\omega_n}\\frac{1}{Q}\\right)^2}\n\\tag{34}\\] Equation 34 is quite an important one in the context of vibration isolation. It tells us over what frequency range we would expect the isolation system to provide an attenuation of the transmitted force. Delving a little deeper, let’s consider what happens as \\(\\omega/\\omega_n\\rightarrow 0\\), \\[\n|T_f|^2_{\\omega \\rightarrow 0} = \\frac{1 + 0}{\\left(1- 0\\right)^2 + \\left(0 \\right)^2} = 1\n\\] which implies that at low frequencies (below \\(\\omega_n\\)) the isolation system does not attenuate the transmitted force. Considering \\(\\omega/\\omega_n\\rightarrow \\infty\\), \\[\n|T_f|^2_{\\omega \\rightarrow \\infty} = \\frac{1 + \\infty}{\\left(1- \\infty\\right)^2 + \\left(\\infty \\right)^2} \\rightarrow 0\n\\] which implies that at high frequencies (well above \\(\\omega_n\\)) the isolation system attenuates the transmitted force effectively. 
Finally, let’s consider what happens at \\(\\omega_n\\), \\[\n|T_f|^2_{\\omega \\rightarrow \\omega_n} = \\frac{1 + \\left(\\frac{1}{Q}\\right)^2}{\\left(1- 1\\right)^2 + \\left(\\frac{1}{Q}\\right)^2} = Q^2 + 1\n\\] which, since \\(Q>0\\), implies that the transmitted force is amplified! Not ideal for a vibration isolation system!\nThe full frequency response of \\(|T_f|^2\\) is shown in Figure 20 for a few different levels of damping. We can clearly see the above behaviours in each curve. In the context of vibration isolation, we would like to design the system so that the frequency of the driving force is well above \\(\\omega_n\\), which is where we get attenuation.\n\n\nCode\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# System properties\nωnω = np.linspace(0.1,10,1000) # Frequency ratio ω/ω_n\nQ1 = 0.5\nQ2 = 1.5\nQ3 = 2.5\nQ4 = 3.5\n\nTf1 = (1+(ωnω/Q1)**2)/((1-ωnω**2)**2 + (ωnω/Q1)**2)\nTf2 = (1+(ωnω/Q2)**2)/((1-ωnω**2)**2 + (ωnω/Q2)**2)\nTf3 = (1+(ωnω/Q3)**2)/((1-ωnω**2)**2 + (ωnω/Q3)**2)\nTf4 = (1+(ωnω/Q4)**2)/((1-ωnω**2)**2 + (ωnω/Q4)**2)\n\n# Plotting\nfig, ax = plt.subplots(1,1,figsize=(9, 4))\nax.semilogx(ωnω, 10*np.log10(Tf1), label='Q = 0.5')\nax.semilogx(ωnω, 10*np.log10(Tf2), label='Q = 1.5')\nax.semilogx(ωnω, 10*np.log10(Tf3), label='Q = 2.5')\nax.semilogx(ωnω, 10*np.log10(Tf4), label='Q = 3.5')\nax.set_xlabel('$ω/ω_n$ (-)')\nax.set_ylabel('$10 log_{10}(|T_f|^2)$')\nax.legend()\nplt.tight_layout()\nplt.show()\n\n\n\n\n\n\n\n\nFigure 20: Force transmissibility for different levels of damping. Above resonance we get attenuation of the transmitted force.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "introductions/POV/SDOF/SDOF.html#sealed-loudspeaker",
"href": "introductions/POV/SDOF/SDOF.html#sealed-loudspeaker",
"title": "Single degree of freedom oscillator",
"section": "Sealed loudspeaker",
"text": "Sealed loudspeaker\nAnother classic example of a single DoF system is the moving coil loudspeaker, as illustrated in Figure 21. A moving coil loudspeaker comprises a diaphragm of mass \\(m_d\\) (this mass includes the dustcap, former and voice coil mass too) connected to a rigid basket by an elastic surround around the top and ‘spider’ towards the base. The purpose of the surround is to provide a restoring force to return the diaphragm to equilibrium when it is displaced. The spider also contributes towards this restoring force, though its main purpose is to ensure that the diaphragm moves only in the translational direction; any rocking side to side would cause the voice coil to collide with the magnet structure. Together, the surround and spider can be modelled as an ideal spring-damper with stiffness \\(k_d\\) and damping \\(c_d\\).\nThe operation of a moving coil (also termed electro-dynamic) loudspeaker involves running an alternating electrical current (the signal to be reproduced) through the voice coil. As this current \\(I\\) runs through the voice coil, it generates a magnetic field that interacts with the field generated by the permanent magnet structure. The force generated is described by the Lorentz force law, and is given by \\(F_e = BLI\\) where \\(BL\\) is called the force factor and describes the efficiency of the magnet/voice coil assembly. This force is what drives the diaphragm.\nIn the low frequency range, we can lump together the mass of the diaphragm, former and voice coil. We can similarly disregard the continuous nature of the surround and spider and model them as a discrete spring-damper. This is called a lumped parameter approximation; it assumes that the only independent variable is time. 
At higher frequencies the continuous nature of these components begins to come into play and we start to get complex vibration patterns, which is a breakdown of the lumped parameter approximation.\n\n\n\n\n\n\n\nFigure 21: Moving coil loudspeaker diagram.\n\n\n\n\n\n\n\n\n\nFigure 22: Rigid piston assumption.\n\n\n\nTo model the sound radiated from a loudspeaker, we approximate the diaphragm geometry as a rigid piston. The complex pressure amplitude at distance \\(r\\) and angle \\(\\theta\\) is then given by, \\[\n P_0(r,\\theta) = {\\color{red}\\frac{j\\rho_0 \\omega U}{4\\pi r}}{\\color{blue} \\left[\\frac{2J_1(\\frac{\\omega}{c_0}a \\sin \\theta)}{\\frac{\\omega}{c_0}a \\sin \\theta}\\right]}\n\\] where \\(\\rho_0\\) is the density of air, \\(U=\\dot{X}_0 S\\) is the diaphragm volume velocity with \\(S\\) being the surface area of the diaphragm, \\(c_0\\) is the speed of sound, \\(a\\) is the radius of the diaphragm, and \\(J_1(\\quad)\\) is the first order Bessel function of the first kind. The terms highlighted in blue describe the directivity of the radiation. 
The terms highlighted in red effectively control the efficiency of conversion between mechanical motion and radiated pressure.\nRecalling that acceleration is related to velocity by a factor of \\(i\\omega\\), \\(\\ddot{X}_0 = i\\omega \\dot{X}_0\\), we can see that the radiated sound pressure is in fact proportional to the acceleration of the diaphragm, \\[\n P_0(r,0,t) = -\\frac{\\rho_0 S}{4\\pi r} \\, \\ddot{X}_0\n\\] where we have considered the on-axis response with \\(\\theta=0\\), in which case the directivity term tends to 1.\nRecalling the receptance FRF from Equation 18, the complex acceleration amplitude of a moving coil loudspeaker (with \\(F_0=BLI\\)) is given by, \\[\n\\ddot{X}_0 = \\frac{-\\omega^2 BLI}{k_d - \\omega^2 m_d+ i\\omega c_d } = \\frac{BL}{m_d-\\frac{k_d}{\\omega^2} + \\frac{1}{i\\omega} c_d } I\n\\]\nIn the above we effectively considered a loudspeaker embedded in an infinite baffle. A more common design is to place the loudspeaker within a closed or sealed cabinet, as illustrated in Figure 23. The volume of air enclosed behind the diaphragm acts like an air spring, altering the dynamics of the diaphragm.\n\n\n\n\n\n\nFigure 23: Sealed cabinet loudspeaker.\n\n\n\nThe effect of this added stiffness can be modelled by introducing a secondary spring in parallel with that of the suspension/spider. The total stiffness of the system is then \\(k_t = k_d+k_b\\) (recall Equation 2), where \\(k_b\\) is the acoustic stiffness of the box and is given by, \\[\n k_b = \\frac{\\rho_0 c_0^2 S^2}{V}\n\\] where \\(V\\) is the cabinet volume. If we were to add some absorption inside the cabinet, we could similarly introduce an additional damper such that \\(c_t = c_d+c_b\\).\nShown in Figure 24 are the frequency responses of an infinite baffle loudspeaker vs two sealed cabinets with different volumes. As expected, the addition of a stiffness pushes up the resonant frequency of the system. 
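As a rough numerical illustration of this resonance shift (using the same illustrative driver values as the plotting code on this page), we can compare the infinite-baffle resonance with that of a 50 litre sealed cabinet:

```python
import numpy as np

# Illustrative driver values (matching the plotting example on this page)
rho, c0 = 1.21, 343          # density of air (kg/m^3), speed of sound (m/s)
md, kd = 0.1, 1e4            # diaphragm mass (kg), suspension stiffness (N/m)
S = 0.07297                  # diaphragm area of a 12 inch driver (m^2)
V = 50/1000                  # 50 litre cabinet volume (m^3)

kb = rho*c0**2*S**2/V        # acoustic stiffness of the box
fn_baffle = np.sqrt(kd/md)/(2*np.pi)         # infinite baffle resonance (Hz)
fn_sealed = np.sqrt((kd + kb)/md)/(2*np.pi)  # sealed cabinet resonance (Hz)

print(round(fn_baffle), round(fn_sealed))  # roughly 50 Hz vs 80 Hz
```

With these values the resonance moves from about 50 Hz up to about 80 Hz, and since \\(k_b \\propto 1/V\\), halving the volume would push it higher still.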
For a loudspeaker, this has the effect of increasing the cut-off frequency (i.e. the point at which the response drops to -3dB of the passband). Note that the values used in this example are somewhat made up, but the effect shown is indeed what happens. This upwards shift of the cut-off frequency is the main limitation of the sealed cabinet. An alternative design that can outperform the sealed cabinet at lower frequencies is the so-called vented or bass-reflex loudspeaker, which will be discussed on the next page.\n\n\nCode\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# System properties\nc = 343 # Speed of sound\nrho = 1.21 # Density of air\nf = np.linspace(20,1000,2000) # Frequency\nω = 2*np.pi*f # Radian frequency\nmd = 0.1 # Diaphragm mass\nkd = 1e4 # Suspension/spider stiffness\ncd = 10 # Suspension/spider damping\nS = 0.07297 # Surface area of 12 inch driver (m^2)\nBL = 5 # Force factor\nI = 0.01 # Input current\n\n# Infinite baffle (inf volume)\np_vinf = (rho*S/(4*np.pi))*BL*I/(md - kd/ω**2 + cd/(1j*ω))\n\nV = 50/1000 # 50 liter volume in m^3\nkb = rho*(c**2)*(S**2)/V # Box stiffness\ncb = 0 # No added box damping\nkt = kd+kb # Total stiffness\nct = cd+cb # Total damping\np_v50 = (rho*S/(4*np.pi))*BL*I/(md - kt/ω**2 + ct/(1j*ω))\n\nV = 25/1000 # 25 liter volume in m^3\nkb = rho*(c**2)*(S**2)/V\nkt = kd+kb\nct = cd+cb\np_v25 = (rho*S/(4*np.pi))*BL*I/(md - kt/ω**2 + ct/(1j*ω))\n\n# Plotting\nfig, ax = plt.subplots(1,1,figsize=(9, 4))\nax.semilogx(f, 20*np.log10(np.abs(p_vinf)/(2e-5)), label='Inf. baffle')\nax.semilogx(f, 20*np.log10(np.abs(p_v50)/(2e-5)), label='V = 50 L')\nax.semilogx(f, 20*np.log10(np.abs(p_v25)/(2e-5)), label='V = 25 L')\nax.set_xlabel('Frequency (Hz)')\nax.set_ylabel('$20 log_{10}(|p|/p_{ref})$')\nax.legend()\nplt.tight_layout()\nplt.show()\n\n\n\n\n\n\n\n\nFigure 24: Frequency response of infinite baffle (blue) and sealed cabinet with 50L (orange) and 25L (green) cabinets with no added damping.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Single degree of freedom"
]
},
{
"objectID": "teaching/mlsd/mlsd.html",
"href": "teaching/mlsd/mlsd.html",
"title": "Microphone and Loudspeaker Design",
"section": "",
"text": "Microphone and Loudspeaker Design is a second year module on the BEng Acoustics and Audio Engineering degree at the University of Salford.\nLoudspeaker (and microphone) design is the perfect example of a multi-physics problem, requiring an understanding and bringing together of many branches of physics and engineering. In the quest to understand and develop suitable design methods this module covers a broad selection of topics, including: electronics and circuit theory, mechanical vibration and dynamics, magnetism, and acoustics.\nThis page is primarily to offer my current students a convenient way of accessing lecture material (lecture notes and slides) all in one place, but also as a resource for anybody who might find these materials useful. Their use of course comes with the usual caveats… Whilst I have tried to ensure these notes and slides are free from any errors, I cannot guarantee that this is the case. I can therefore not be held accountable for any poorly designed loudspeakers that have their origins in these notes!"
},
{
"objectID": "teaching/mlsd/mlsd.html#course-notes",
"href": "teaching/mlsd/mlsd.html#course-notes",
"title": "Microphone and Loudspeaker Design",
"section": "Course notes",
"text": "Course notes"
},
{
"objectID": "teaching/mlsd/mlsd.html#week-1-module-introduction-lumped-parameter-modelling-ac-circuit-theory",
"href": "teaching/mlsd/mlsd.html#week-1-module-introduction-lumped-parameter-modelling-ac-circuit-theory",
"title": "Microphone and Loudspeaker Design",
"section": "Week 1: Module introduction, lumped parameter modelling, AC circuit theory",
"text": "Week 1: Module introduction, lumped parameter modelling, AC circuit theory"
},
{
"objectID": "teaching/mlsd/mlsd.html#week-2-electronic-filters-the-mechanical-domain-and-equivalent-circuits",
"href": "teaching/mlsd/mlsd.html#week-2-electronic-filters-the-mechanical-domain-and-equivalent-circuits",
"title": "Microphone and Loudspeaker Design",
"section": "Week 2: Electronic filters, the mechanical domain, and equivalent circuits",
"text": "Week 2: Electronic filters, the mechanical domain, and equivalent circuits"
},
{
"objectID": "teaching/mlsd/mlsd.html#week-3-impedance-vs-mobility-analogues-q-factor",
"href": "teaching/mlsd/mlsd.html#week-3-impedance-vs-mobility-analogues-q-factor",
"title": "Microphone and Loudspeaker Design",
"section": "Week 3: Impedance vs mobility analogues, Q-factor",
"text": "Week 3: Impedance vs mobility analogues, Q-factor"
},
{
"objectID": "teaching/mlsd/mlsd.html#week-4-mass-spring-loudspeaker-model-transient-analysis-acoustic-domain",
"href": "teaching/mlsd/mlsd.html#week-4-mass-spring-loudspeaker-model-transient-analysis-acoustic-domain",
"title": "Microphone and Loudspeaker Design",
"section": "Week 4: Mass-spring loudspeaker model, transient analysis, acoustic domain",
"text": "Week 4: Mass-spring loudspeaker model, transient analysis, acoustic domain"
},
{
"objectID": "teaching/mlsd/mlsd.html#week-5-mass-spring-loudspeaker-model-acoustic-domain-domain-coupling",
"href": "teaching/mlsd/mlsd.html#week-5-mass-spring-loudspeaker-model-acoustic-domain-domain-coupling",
"title": "Microphone and Loudspeaker Design",
"section": "Week 5: Mass-spring loudspeaker model, acoustic domain, domain coupling",
"text": "Week 5: Mass-spring loudspeaker model, acoustic domain, domain coupling"
},
{
"objectID": "teaching/mlsd/mlsd.html#week-6-domain-coupling-removing-transformers-nortons-theorem",
"href": "teaching/mlsd/mlsd.html#week-6-domain-coupling-removing-transformers-nortons-theorem",
"title": "Microphone and Loudspeaker Design",
"section": "Week 6: Domain coupling, removing transformers, Norton’s theorem",
"text": "Week 6: Domain coupling, removing transformers, Norton’s theorem"
},
{
"objectID": "teaching/mlsd/mlsd.html#week-7-sound-radiation-acoustic-loading-directivity",
"href": "teaching/mlsd/mlsd.html#week-7-sound-radiation-acoustic-loading-directivity",
"title": "Microphone and Loudspeaker Design",
"section": "Week 7: Sound radiation, acoustic loading, directivity",
"text": "Week 7: Sound radiation, acoustic loading, directivity"
},
{
"objectID": "teaching/mlsd/mlsd.html#week-8-infinite-baffle-electrical-impedance-theile-small-parameters",
"href": "teaching/mlsd/mlsd.html#week-8-infinite-baffle-electrical-impedance-theile-small-parameters",
"title": "Microphone and Loudspeaker Design",
"section": "Week 8: Infinite baffle, electrical impedance, Theile-Small parameters",
"text": "Week 8: Infinite baffle, electrical impedance, Theile-Small parameters"
},
{
"objectID": "teaching/mlsd/mlsd.html#week-9-the-sealed-cabinet-loudspeaker",
"href": "teaching/mlsd/mlsd.html#week-9-the-sealed-cabinet-loudspeaker",
"title": "Microphone and Loudspeaker Design",
"section": "Week 9: The sealed cabinet loudspeaker",
"text": "Week 9: The sealed cabinet loudspeaker"
},
{
"objectID": "teaching/mlsd/mlsd.html#week-10-the-vented-cabinet-loudspeaker-part-1",
"href": "teaching/mlsd/mlsd.html#week-10-the-vented-cabinet-loudspeaker-part-1",
"title": "Microphone and Loudspeaker Design",
"section": "Week 10: The vented cabinet loudspeaker (part 1)",
"text": "Week 10: The vented cabinet loudspeaker (part 1)"
},
{
"objectID": "teaching/mlsd/mlsd.html#week-11-the-vented-cabinet-loudspeaker-part-2",
"href": "teaching/mlsd/mlsd.html#week-11-the-vented-cabinet-loudspeaker-part-2",
"title": "Microphone and Loudspeaker Design",
"section": "Week 11: The vented cabinet loudspeaker (part 2)",
"text": "Week 11: The vented cabinet loudspeaker (part 2)"
},
{
"objectID": "teaching/mlsd/mlsd.html#week-12-performance-parameters-and-magnetvoice-coil-design",
"href": "teaching/mlsd/mlsd.html#week-12-performance-parameters-and-magnetvoice-coil-design",
"title": "Microphone and Loudspeaker Design",
"section": "Week 12: Performance parameters and magnet/voice coil design",
"text": "Week 12: Performance parameters and magnet/voice coil design"
},
{
"objectID": "index.html",
"href": "index.html",
"title": "Welcome",
"section": "",
"text": "This website presents a collection of introductions, tutorials, research articles, and other content related to Vibro-Acoustic Virtual Prototyping, Transfer Path Analysis and other topics I am actively involved in. The resources provided on this website are for educational and informational purposes only. While every effort has been made to ensure accuracy, the content may contain errors or omissions. The materials reflect my own personal views and experience and do not necessarily represent the views of any affiliated institutions or collaborators. Use of the information is at your own risk."
},
{
"objectID": "index.html#section",
"href": "index.html#section",
"title": "Welcome",
"section": "2025",
"text": "2025\nNovember 18-20th: I will be presenting some of our recent work on uncertainty analysis in TPA and substructuring at the Siemens TPA Masterclass! See below for more information.\n\n\n10th July: I will give a seminar at TUM on the treatment of uncertainty within TPA. See below for more information."
},
{
"objectID": "research/GRVP/Main.html",
"href": "research/GRVP/Main.html",
"title": "Improved force identification using a global–residual interface representation",
"section": "",
"text": "To characterise a vibration source, such as an operating machine, it is standard practice to consider some set of equivalent forces at its connecting interface, as opposed to the complex generating forces that reside within the machine. This choice is largely one of convenience, as the internal forces are inaccessible and often not well defined in terms of their position or direction. It does however offer other advantages; it allows us to compare one source with another, to identify troublesome force contributions, and even build Vibro-Acoustic Virtual Prototypes of hypothetical assemblies [1].\nThe most common means of characterising a vibration source is by inverse methods; the operational response of a system (i.e. whilst the machine is active) is used to infer a set of equivalent forces that would yield the observed response when applied to the interface DoFs. This approach forms the basis of inverse force identification (IFI) [2] as used in classical (contact force) [3], [4] and blocked force [5] Transfer Path Analysis methods.\nThe first, and often most crucial, step in the application of IFI is the selection of an interface model, i.e. choosing the location and form of the equivalent forces (translations, rotations, etc.) that are to act across the interface. Mathematically, the choice of interface model is rather arbitrary, provided that it is sufficiently detailed to control the response within the attached receiver (i.e. enough DoFs are included). 
However, as a practical method, not all representations are created equal.\nAn interface representation appropriate for IFI should be:\n\nInterpretable — the identified forces should be easily understood by a test engineer.\n\nWell conditioned — the resulting inverse problem should be able to recover forces with low error.\n\nPractical — the representation should be accessible from a set of measured DoFs.\n\nA variety of interface representations have been proposed over the decades, including: equivalent multi-point connection [6], equivalent single-point connection [7], finite difference approximations [8], interface mobility [9], virtual point [10] (with flexible extension [11]), and polynomial basis [12], among others. At the time of writing, the virtual point appears to be the front runner in terms of popularity, owing to its straightforward implementation, enforcement of co-located DoFs, and ability to include flexible interface dynamics.\nWith a standard VP, each interface connection point is first characterised experimentally by a set of ‘measurement’ DoFs which are then projected onto a virtual point through a set of rigid body constraints. The result is an interface representation where each connection point is described by 6 DoFs; translations in \\(x\\), \\(y\\) and \\(z\\), along with a rotation about each axis (\\(\\alpha\\), \\(\\beta\\) and \\(\\gamma\\)). In the mid frequency range, where the connection points move independently whilst remaining locally rigid, this is a valid representation. At higher frequencies, where local flexure becomes important, the VP representation can be extended to include the so-called flexible interface deformation modes, such as extension, skew, etc. However, in the low frequency range, where the source dynamics are dominated by global rigid body motions (i.e. the entire machine moves as a single rigid body), the above representation leads to a severe redundancy/over-parametrisation of the interface. 
As an example, for a four-footed source, \\(24\\) VP DoFs would be used to describe just \\(6\\) dominant rigid body motions. This over-parametrisation can lead to ill-conditioning of the inverse problem and, as a result, increased levels of uncertainty. In this paper, we are interested in means of better representing interfaces, specifically in the low-mid frequency range, to reduce the above issues.\nOur approach is quite simple and is based on an augmentation of the standard VP approach. In short, we define a global VP onto which all interface measurements are projected. This captures the global rigid body motions of the source (more specifically, it captures the linearly correlated interface dynamics). These rigid body motions are then filtered out of the initial interface measurements, the residuals of which are projected onto a series of local VPs which describe the non-rigid body local flexure of the source (i.e. the linearly uncorrelated dynamics). If interest is strictly in the low-frequency range, these residual DoFs can be omitted. Alternatively, they can be retained in their entirety, or subject to a DoF selection procedure [13].\nHaving outlined the context of this paper, its remainder will be organised as follows. In Section 2 we first introduce the relevant background theory (namely, blocked force identification Section 2.1 and the virtual point transformation Section 2.2) before outlining our proposed global-residual interface representation in Section 2.3. In Section 3 we consider an experimental example consisting of a large heavyweight stamping press on both rigid and resilient footings. Finally, in Section 4 we provide some concluding remarks and thoughts for further development.",
"crumbs": [
"Research",
"Improved force identification using a global-residual interface representation"
]
},
{
"objectID": "research/GRVP/Main.html#sec-indirectBlkF",
"href": "research/GRVP/Main.html#sec-indirectBlkF",
"title": "Improved force identification using a global–residual interface representation",
"section": "2.1 Blocked force identification",
"text": "2.1 Blocked force identification\nThe blocked force describes the reaction force at the interface \\(c\\) of a vibration source whilst operating under a rigid constraint, as illustrated in Figure 1. Equivalently, it can be interpreted as the force which when applied externally to the interface, counteracts the motion of the interface and thus constrains its motion to zero.\nMathematically, we can define the blocked force as, \\[\n\\mathbf{\\bar{f}}_{c} = \\mathbf{{g}}_{c}\\Big|_{\\mathbf{v}_{c}=\\mathbf{0}}\n\\] where \\(\\mathbf{{g}}_{c}\\) denotes the contact force occurring at the interface \\(c\\), with the blocked force \\(\\mathbf{\\bar{{f}}}_{c}\\) simply the limiting force as the interface is placed under the rigid constraint \\(\\mathbf{v}_{c}=\\mathbf{0}\\).\nIt is clear that by providing an idealised rigid constraint at the boundary, the blocked force provides an independent source description free from the influence of any specific receiver. The blocked force is therefore also an independent source property.\nTo characterise the blocked force of a vibration source directly it is necessary to attach the source to an (approx.) infinitely rigid test bench, for example a large blocking mass, with force transducers placed between the source contacts and the test bench. Clearly, this ‘infinitely rigid’ requirement can only be satisfied approximately over a particular frequency range; even the most rigid test bench will eventually enter a modal/flexible regime. Achieving a sufficiently rigid condition can be further complicated by the mounting requirements of the source. Another notable challenge involves the determination of rotational and in-plane blocked forces. In short, the direct measurement of the blocked force is fraught with difficulties and generally not practical. Fortunately, an indirect measurement approach is available.\nThe direct method described above is focused on measuring force directly, by means of installed force transducers. 
The indirect approach involves the measurement of the induced response, from which we infer the unknown blocked force.\nThe key equation on which the indirect approach is based is given by [5], \\[\n\\mathbf{v}_{b} = \\mathbf{Y}_{Cbc}\\,\\mathbf{\\bar{f}}_{c} \n\\tag{1}\\] where \\(\\mathbf{v}_{b}\\) is the operational response (to be measured) of the structure at a set of indicator positions \\(b\\), \\(\\mathbf{Y}_{Cbc}\\) is the transfer FRF of the assembled structure (also to be measured), and \\(\\mathbf{\\bar{f}}_{c}\\) is the unknown blocked force. Equation 1 provides a relation between the blocked force of a vibration source and the response of the assembly in which it is installed.\nNote that the indicator DoFs \\(b\\) can be located at the interface \\(c\\), in which case Equation 1 becomes, \\[\n\\mathbf{v}_{c} = \\mathbf{Y}_{Ccc}\\,\\mathbf{\\bar{f}}_{c}. \n\\tag{2}\\]\nTo determine the blocked force we have to solve the inverse problem. Pre-multiplication of both sides of Equation 1 by the (pseudo-)inverse FRF matrix (assuming it exists) yields, \\[\n\\mathbf{\\bar{f}}_{c} = \\mathbf{Y}_{Cbc}^{-1}\\mathbf{v}_{b} \\quad \\left( \\text{or } \\quad\n\\mathbf{\\bar{f}}_{c} = \\mathbf{Y}_{Ccc}^{-1}\\mathbf{v}_{c} \\right).\n\\tag{3}\\]\nTo solve Equation 1, as per Equation 3, it is necessary that the number of measured responses (\\(M\\)) is equal to, or greater than, the number of blocked forces being determined. That is, to determine \\(N\\) blocked forces, \\(M\\geq N\\) responses should be recorded. 
The dimensions of the FRF matrix \\(\\mathbf{Y}_{Cbc}\\) should be consistent with those of the blocked force and velocity vectors; it should be \\(M\\times N\\).\nFor more detail on the specifics of Equation 2 and Equation 3, including their derivation, practical implementation, and uncertainty, the reader is referred to [1] and the references therein.\nImportantly, Equation 3 provides a means of determining the blocked force without requiring an infinitely rigid test bench. In fact, no restrictions are placed on the assembly \\(C\\) in which the source is installed. As such, Equation 3 enables an in-situ characterisation of the blocked force; the source need not be removed from its intended installation. Of such practical use is the indirect blocked force method that it has now been established as an international standard (ISO 20270:2019).\n\n2.1.1 Interface representations\nIn our description of the indirect blocked force method we glossed over one very important detail: the interface representation. In Equation 1 and Equation 2 we denoted the interface DoFs simply by the subscript \\(c\\). We did not, however, describe what these DoFs are. In fact, we have great flexibility over how we choose to define, or represent, the interface. For example, an interface with discrete connection points might be well represented by a set of co-located point-like DoFs (as in the VP method). What about a continuous weld-like interface? Such an interface might be better represented using a point-like discretisation [14], or perhaps a continuous Fourier-based representation [9]. In effect, the choice of interface representation depends on the problem.\nA precise understanding of interface dynamics is essential when analysing complex structural systems, particularly in experimental techniques such as source identification. 
Since interfaces take several physical forms, from bolted flanges to broader contact surfaces, developing a model from measured data and identifying their dynamic behaviour is not always straightforward. When the contact region is small in comparison to the relevant structural wavelengths, it is common practice to simplify the interface as a discrete point. This leads to single- or multi-point representations where the interface is modelled as a localised connection. The goal in such models is to represent the full dynamic behaviour at the interface, typically requiring six degrees of freedom: three translations along the Cartesian axes and three corresponding rotations. However, direct measurement of all six DoFs poses significant experimental challenges. Rotational motions are particularly difficult to capture, as sensors capable of measuring them are rare in standard practice. Even capturing translational FRFs at a single point requires a precise alignment of force and response measurements, which is often impractical.\nTo address these limitations, indirect methods are used to infer interface DoFs from surrounding measurements. One such approach is the finite difference (FD) method, where translation-only data from symmetrically spaced sensors around the point of interest are used to estimate both translational and rotational behaviours. Although conceptually simple, this method can be restrictive due to its reliance on symmetry and may lead to inaccuracies when perfect co-location is not achieved. A more flexible and robust method is the Virtual Point Transformation (VPT). This technique is based on rigid-body kinematics and enables the reconstruction of both translational and rotational DoFs at a virtual point by combining data from multiple arbitrarily positioned sensors. The VPT offers several key advantages. By using all available measurements, the system becomes overdetermined, allowing for a more accurate estimation of the virtual point DoFs. 
Additionally, translational and rotational responses are mathematically located at the same physical point, eliminating alignment inconsistencies when combining models or measurements from different sources. Unlike FD methods, the VPT does not require a symmetric sensor arrangement, making it highly adaptable to complex experimental setups.\nHigh frequencies - In the mid-high frequency range the structural wavelength becomes comparable in size to the interface contact region. As a result, local flexure begins to play an important role in the interface dynamics and the point-like assumption breaks down. At this stage, the VPT can be extended by including flexible deformation modes, to accompany those describing the rigid interface behaviour, thus extending its working frequency range. However, due to instrumentation and practicality requirements, there is a limit to how far this extension can take us. At some point, it will be necessary to adopt an alternative, perhaps wave or energy-based, paradigm better suited to high frequency dynamics. Though, at the time of writing, this is a relatively unexplored area of research.\nLow frequencies - What about low frequencies? Well, challenges lie here also. When the structural wavelength is comparable to the size of the source, the global rigid-body (RB) dynamics of the source can dominate the response of a system. Whilst these rigid-body DoFs are implicitly captured by the collection of VP DoFs defined at each interface connection, they are captured with massive redundancy. Recall from Equation 3 that to obtain the blocked force we must compute a matrix inverse. When the FRF matrix \\(\\mathbf{Y}_C\\), defined using the VP representation, is dominated by rigid-body motions, it becomes rank deficient, i.e. two or more of its rows/columns become linearly dependent. For example, if a source exhibits perfect RB vertical motion, the \\(z\\) DoF at each VP will be identical and so will the corresponding rows/columns of the FRF matrix. 
The result is a poorly conditioned matrix inverse that is incredibly sensitive to small measurement imperfections, which leads to unreliable force estimates. A common solution here is to apply a form of regularisation termed the truncated Singular Value Decomposition (TSVD) [15]. The rationale is that in the low frequency range, the rigid body motions of the source are likely to be described by the first 6 (or so) singular values (SVs) of the FRF matrix; the remaining SVs can then be discarded and a well-conditioned matrix inverse computed. This approach tends to work well, though its main shortcoming is a lack of interpretability in the resulting forces; the TSVD is simply a mathematical tool, it does not consider the underlying physics of the problem. In this paper we propose an alternative approach based on an augmented global-residual VP, as described below in Section 2.3.",
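The inverse step of Equation 3 and the TSVD regularisation described above can be sketched numerically. The following is a minimal NumPy illustration, not the paper's implementation; the FRF matrix, noise level, and near-dependent columns (mimicking redundant rigid-body DoFs) are all hypothetical.

```python
import numpy as np

def tsvd_pinv(Y, r):
    """Pseudo-inverse of Y keeping only the r largest singular values."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:r] = 1.0 / s[:r]
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T

rng = np.random.default_rng(0)

# Hypothetical 8 x 6 FRF matrix whose last two columns are nearly
# linearly dependent, mimicking redundant rigid-body interface DoFs.
Y = rng.standard_normal((8, 6))
Y[:, 5] = Y[:, 4] + 1e-10 * rng.standard_normal(8)

f_true = np.ones(6)
v = Y @ f_true + 1e-4 * rng.standard_normal(8)  # noisy operational response

f_naive = np.linalg.pinv(Y) @ v   # ordinary pseudo-inverse: noise is amplified
f_tsvd = tsvd_pinv(Y, 5) @ v      # truncated inverse: discards the weak direction
```

In this setup the ordinary pseudo-inverse amplifies the measurement noise along the near-singular direction, whereas the truncated solution should remain close to the underlying force vector while still reproducing the measured response.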
"crumbs": [
"Research",
"Improved force identification using a global-residual interface representation"
]
},
{
"objectID": "research/GRVP/Main.html#sec-virtualPoint",
"href": "research/GRVP/Main.html#sec-virtualPoint",
"title": "Improved force identification using a global–residual interface representation",
"section": "2.2 Virtual Point Transformation",
"text": "2.2 Virtual Point Transformation\nThe Virtual Point Transformation is a mathematical framework used to reconstruct both translational and rotational degrees of freedom at a single point on an interface of a complex structure, based on excitation forces and translational response measurements at multiple surrounding locations. The method allows for arbitrarily positioned sensors and forces, making it highly flexible in experimental applications. Founded on rigid body kinematics, it enables the transformation of distributed measurements into a six-DoF response at a single, collocated virtual point. It can also be extended to include flexible displacement modes, adding further DoFs to the interface representation.\n\nFigure 2: An interface with impact (left) and response (right) positions for virtual point transformation.\n\nTo perform the virtual point transformation, the relationship between the measured responses and the virtual point is expressed through a rigid-body transformation matrix \\(\\mathbf{T}_v\\). This matrix relates the translational responses measured at various sensor locations \\(\\mathbf{v}\\) to the translational and rotational degrees of freedom at the virtual point \\(\\mathbf{v}_{vp}\\).\n\\[\n\\mathbf{v}_{vp} = \\mathbf{T}_v \\,\\mathbf{v}\n\\]\nTo construct the transformation matrix \\(\\mathbf{T}_v\\), the spatial locations and orientations of the sensors with respect to the virtual point must be accurately defined. In practice, it is common to use three triaxial sensors arranged around each virtual point, providing a total of nine translational measurements. 
For each sensor (say, sensor number \\(n\\) shown in Figure 2) a local transformation matrix \\(\\mathbf{R}_v^n\\) that accounts for its position and orientation relative to the virtual point can be constructed as, \\[\n\\mathbf{R}_v^n =\n\\begin{bmatrix}\ne_{x,X}^n & e_{x,Y}^n & e_{x,Z}^n \\\\\ne_{y,X}^n & e_{y,Y}^n & e_{y,Z}^n \\\\\ne_{z,X}^n & e_{z,Y}^n & e_{z,Z}^n\n\\end{bmatrix}\n\\begin{bmatrix}\n1 & 0 & 0 & 0 & r_Z^n & -r_Y^n \\\\\n0 & 1 & 0 & -r_Z^n & 0 & r_X^n \\\\\n0 & 0 & 1 & r_Y^n & -r_X^n & 0\n\\end{bmatrix}\n\\]\nTo allow for interface flexibility, additional modes can be incorporated into the \\(\\mathbf{R}_v^n\\) matrices, specifically by adding more columns along with the existing rigid body modes. As an example, extension and torsion flexible modes will add six more columns: three extension (or strain) modes along the \\(X\\), \\(Y\\), and \\(Z\\) axes, and three torsion modes about these axes.\n\\[\n\\mathbf{R}_v^n =\n\\begin{bmatrix}\ne_{x,X}^n & e_{x,Y}^n & e_{x,Z}^n \\\\\ne_{y,X}^n & e_{y,Y}^n & e_{y,Z}^n \\\\\ne_{z,X}^n & e_{z,Y}^n & e_{z,Z}^n\n\\end{bmatrix}\n\\left[\n\\begin{array}{ccc|ccc|ccc|ccc}\n1 & 0 & 0 & 0 & r_Z^n & -r_Y^n & r_X^n & 0 & 0 & 0 & r_Y^n r_Z^n & -r_Z^n r_Y^n \\\\\n0 & 1 & 0 & -r_Z^n & 0 & r_X^n & 0 & r_Y^n & 0 & -r_X^n r_Z^n & 0 & r_Z^n r_X^n \\\\\n0 & 0 & 1 & r_Y^n & -r_X^n & 0 & 0 & 0 & r_Z^n & r_X^n r_Y^n & -r_Y^n r_X^n & 0\n\\end{array}\n\\right]\n\\tag{4}\\]\nThis method defines flexible interface behaviour without requiring prior knowledge of the component’s dynamics, making it broadly applicable. Other flexible modes can be added in the same manner.\nHaving \\(\\mathbf{R}_v^n\\) for all three sensors, we stack them vertically to construct the full transformation matrix \\(\\mathbf{R}_v\\), which has a size of \\(9 \\times 6\\) and relates the virtual point response to the measured translational responses,\n\\[\n\\mathbf{R}_v =\n\\begin{bmatrix}\n\\mathbf{R}_v^1 \\\\\n\\mathbf{R}_v^2 
\\\\\n\\mathbf{R}_v^3\n\\end{bmatrix}.\n\\]\nOnce the full transformation matrix \\(\\mathbf{R}_v\\) is assembled, the virtual point transformation matrix \\(\\mathbf{T}_v\\) can be obtained by computing the pseudo-inverse of \\(\\mathbf{R}_v\\),\n\\[\n\\mathbf{T}_v = \\mathbf{R}_v^{+} = (\\mathbf{R}_v^{\\rm T} \\mathbf{R}_v)^{-1} \\mathbf{R}_v^{\\rm T}.\n\\]\nIn addition to transforming the measured responses, it is also necessary to transform the excitation forces applied at several locations of the interface to their equivalent representation at the virtual point. The force transformation matrix \\(\\mathbf{T}_f\\) converts the forces at the virtual point \\(\\mathbf{f}_{vp}\\) to those physically applied to the interface \\(\\mathbf{f}\\) as follows,\n\\[\n\\mathbf{f} = \\mathbf{T}_f^{\\mathrm{T}}\\, \\mathbf{f}_{vp}.\n\\]\nConsidering a single impact force applied at an arbitrary location on the interface (denoted as \\(\\mathbf{f}^m\\) in Figure 2), the corresponding six-DoF force vector at the virtual point (three translational forces and three rotational moments) can be obtained using a transformation matrix \\(\\mathbf{R}_f^m\\). This matrix is defined based on the distance \\(\\mathbf{r}_f^m\\) and direction \\(\\mathbf{e}_f^m\\) of the impact force relative to the virtual point. 
The local transformation matrix \\(\\mathbf{R}_f^{m,\\rm T}\\) for the \\(m\\)th force is defined as,\n\\[\n\\mathbf{R}_f^{m,\\rm T} =\n\\begin{bmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1 \\\\\n0 & -r_Z & r_Y \\\\\nr_Z & 0 & -r_X \\\\\n-r_Y & r_X & 0\n\\end{bmatrix}\n\\begin{bmatrix}\ne_X^m \\\\\ne_Y^m \\\\\ne_Z^m\n\\end{bmatrix}.\n\\]\nOnce all \\(\\mathbf{R}_f^m\\) matrices are defined for the set of \\(9\\) independent impact points (\\(m = 1,\\dots,9\\)), they are stacked to form the \\(\\mathbf{R}_f\\) matrix,\n\\[\n\\mathbf{R}_f =\n\\begin{bmatrix}\n\\mathbf{R}_f^1 \\\\\n\\mathbf{R}_f^2 \\\\\n\\vdots \\\\\n\\mathbf{R}_f^9\n\\end{bmatrix}\n\\]\nThe force transformation matrix \\(\\mathbf{T}_f\\) is then computed as the pseudo-inverse of \\(\\mathbf{R}_f\\),\n\\[\n\\mathbf{T}_f^{\\mathrm{T}} = \\mathbf{R}_f (\\mathbf{R}_f^{\\rm T} \\mathbf{R}_f)^{-1}.\n\\]\nWith both transformation matrices \\(\\mathbf{T}_v\\) and \\(\\mathbf{T}_f\\) defined, we are now equipped to transform the measured frequency response functions (FRFs) from the surrounding sensor locations to the virtual point. By pre-multiplying the measured FRFs \\(\\mathbf{Y}\\) by \\(\\mathbf{T}_v\\) and post-multiplying by \\(\\mathbf{T}_f^{\\mathrm{T}}\\), \\[\n\\mathbf{Y}_{vp} = \\mathbf{T}_v\\, \\mathbf{Y}\\, \\mathbf{T}_f^{\\mathrm{T}}\n\\] the FRFs can be expressed in terms of translational and rotational degrees of freedom at the virtual point, \\(\\mathbf{Y}_{vp}\\).\n\n2.2.1 Force identification\nBased on the VP transformations described above, two options are available to us when estimating the blocked force, depending on whether or not we transform the response vector. 
Considering the VP transformation applied to both forces and responses, our blocked force estimation becomes,\n\\[\n\\mathbf{v}_{vp} = \\mathbf{T}_v \\mathbf{v} = \\big( \\mathbf{T}_v \\mathbf{Y}_{C} \\mathbf{T}_f^{\\mathrm{T}} \\big)\\,\\mathbf{\\bar{f}}_{vp}\n\\quad \\longrightarrow \\quad\n\\mathbf{\\bar{f}}_{vp} = \\big( \\mathbf{T}_v \\mathbf{Y}_{C} \\mathbf{T}_f^{\\mathrm{T}} \\big)^{+}\\, \\mathbf{T}_v\\, \\mathbf{v}\n\\]\nwhere it is noted that the elements of the response vector \\(\\mathbf{v}\\) situated at the interface \\(c\\) have been projected onto the VP by \\(\\mathbf{T}_v\\). Note that the transformation \\(\\mathbf{T}_v\\) occurs twice; together this has the effect of filtering out the flexible dynamics from \\(\\mathbf{v}\\) and inferring \\(\\mathbf{\\bar{f}}_{vp}\\) from the rigid body response only. It can be argued that this step is unnecessary and in fact weakens the blocking constraints imposed (only rigid interface motions will be constrained). Omitting the response transformation entirely we have,\n\\[\n\\mathbf{v} = \\big( \\mathbf{Y}_{C} \\mathbf{T}_f^{\\mathrm{T}} \\big)\\, \\mathbf{\\bar{f}}_{vp}\n\\quad \\longrightarrow \\quad\n\\mathbf{\\bar{f}}_{vp} = \\big( \\mathbf{Y}_{C} \\mathbf{T}_f^{\\mathrm{T}} \\big)^{+}\\, \\mathbf{v}\n\\tag{5}\\]\nwhere \\(\\mathbf{\\bar{f}}_{vp}\\) is now defined such that both rigid and flexible interface motions are constrained.",
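As a concrete illustration of the response-side transformation, the short NumPy sketch below builds \(\mathbf{R}_v\) for three triaxial sensors whose axes are assumed aligned with the virtual point frame (so the orientation matrix is the identity), computes \(\mathbf{T}_v = \mathbf{R}_v^{+}\), and checks that a pure rigid-body motion is recovered exactly. The sensor positions are invented for the example.

```python
import numpy as np

def sensor_block(r):
    """Rigid-body interpolation matrix for one triaxial sensor at position r,
    with sensor axes aligned to the virtual point frame (E = I)."""
    rX, rY, rZ = r
    return np.array([[1, 0, 0,   0,  rZ, -rY],
                     [0, 1, 0, -rZ,   0,  rX],
                     [0, 0, 1,  rY, -rX,   0]], float)

# Three hypothetical sensor positions around the virtual point (metres)
positions = [(0.05, 0.0, 0.0), (0.0, 0.05, 0.0), (0.0, 0.0, 0.05)]
R_v = np.vstack([sensor_block(r) for r in positions])   # 9 x 6
T_v = np.linalg.pinv(R_v)                               # 6 x 9

# A pure rigid rotation about Z (gamma) produces sensor responses R_v @ q;
# the transformation recovers the six virtual point DoFs from them.
q = np.array([0, 0, 0, 0, 0, 1.0])
v_meas = R_v @ q
q_rec = T_v @ v_meas
```

The same kind of blocks, stacked for impact positions and directions instead of sensors, gives the force-side matrix \(\mathbf{R}_f\) described above.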
"crumbs": [
"Research",
"Improved force identification using a global-residual interface representation"
]
},
{
"objectID": "research/GRVP/Main.html#sec-gloablresidual",
"href": "research/GRVP/Main.html#sec-gloablresidual",
"title": "Improved force identification using a global–residual interface representation",
"section": "2.3 Global–Residual Virtual Point",
"text": "2.3 Global–Residual Virtual Point\nIn this section we develop the VP theory into the proposed Global-Residual representation. We will focus on the case where interface transformations are applied only to the sought-after forces, i.e. measurement/indicator DoFs are left untransformed. We are then interested in the equation,\n\\[\n\\mathbf{v} = \\big( \\mathbf{Y}_{m,m} \\mathbf{T}_f^{\\mathrm{T}} \\big) \\mathbf{\\bar{f}}_{vp}\n\\]\nwhere \\(\\mathbf{\\bar{f}}_{vp}\\) is the VP blocked force, \\(\\mathbf{Y}_{m,m}\\) is the matrix of FRFs relating the measurement DoFs, and \\(\\mathbf{T}_f\\) is the transformation matrix that projects the measurement DoF forces onto the VP.\nWe seek a transformation matrix \\(\\mathbf{T}_f\\) that separates the global (rigid body) dynamics from the local connection dynamics such that the acquired blocked force takes the form, \\[\n\\mathbf{\\bar{f}}_{vp} =\n\\begin{bmatrix}\n\\mathbf{\\bar{f}}_{vp}^{g} \\\\\n\\mathbf{\\bar{f}}_{vp}^{\\Delta}\n\\end{bmatrix}\n\\] where \\(\\mathbf{\\bar{f}}_{vp}^{g}\\) is the force component describing the global (rigid body) source behaviour, and \\(\\mathbf{\\bar{f}}_{vp}^{\\Delta}\\) the residual forces that remain at the connections when the global forces are removed.\nRecalling Equation 5, the response vector \\(\\mathbf{v}\\) (indicator DoFs) can be expressed in terms of a VP at each interface connection in the standard form, \\[\n\\mathbf{v} = \\mathbf{Y}_{m,vp}\\,\\mathbf{f}_{vp}\n\\] where \\(\\mathbf{Y}_{m,vp} = \\mathbf{Y}_{m,m}\\mathbf{T}_f^{\\rm T}\\). In this representation, the force \\(\\mathbf{f}_{vp}\\) contains the net forces at each connection point of the source.\nThis net force can be decomposed into a contribution from the rigid body dynamics (\\(\\mathbf{f}_{vp}^{g}\\)), and the residuals (\\(\\mathbf{f}_{vp}^{\\Delta}\\)). 
The response at some set of measurement DoFs can then be given by, \\[\n\\mathbf{v} =\n\\mathbf{Y}_{m,vp}^{g}\\mathbf{f}_{vp}^{g} + \\mathbf{Y}_{m,vp}^{\\Delta}\\mathbf{f}_{vp}^{\\Delta}\n=\n\\begin{bmatrix}\n\\mathbf{Y}_{m,vp}^{g} & \\mathbf{Y}_{m,vp}^{\\Delta}\n\\end{bmatrix}\n\\begin{bmatrix}\n\\mathbf{f}_{vp}^{g} \\\\\n\\mathbf{f}_{vp}^{\\Delta}\n\\end{bmatrix}\n\\] where \\(\\mathbf{Y}_{m,vp}^g\\) and \\(\\mathbf{Y}_{m,vp}^{\\Delta}\\) are the FRFs that relate the global and residual forces to the set of indicator responses. To solve the above equation we need to determine the global and residual FRFs, which amounts to finding the transformation \\(\\mathbf{T}_f\\) that filters out the rigid body components from each connection point.\nConsider the force transformations for the global and local VP representations, \\[\n\\mathbf{f}_{vp}^{g} = \\mathbf{R}_{f}^{g,\\mathrm{T}}\\,\\mathbf{f} \\qquad \\mbox{and} \\qquad\n\\mathbf{f}_{vp} = \\mathbf{R}_{f}^{\\mathrm{T}}\\,\\mathbf{f}\n\\tag{6}\\] where \\(\\mathbf{f}_{vp}^g\\) and \\(\\mathbf{f}_{vp}\\) are the transformed global (RB) and local VP forces. Note that \\(\\mathbf{R}_f^{\\rm T}\\) is a matrix that projects the forces applied around each connection point to their corresponding VP, whilst \\(\\mathbf{R}_f^{g, {\\rm T}}\\) projects the same set of forces instead onto a single VP placed at some arbitrary position, most likely within the source.\nTo obtain a transformation for the residual VP forces, \\(\\mathbf{f}_{vp}^{\\Delta}\\), we need to remove the rigid body contributions from the local VP representation. This can be done by first projecting the global forces \\(\\mathbf{f}_{vp}^g\\) back onto the measurement set, \\[\n\\mathbf{f}' = \\underbrace{\\mathbf{R}_{f}^{+g,\\mathrm{T}}\\,\\mathbf{R}_{f}^{g,\\mathrm{T}}}_{\\mathbf{F}_{f}^{g}}\\,\\mathbf{f}\n\\] where \\(\\mathbf{F}^g_f\\) is the rigid body filtering matrix and \\(\\square'\\) denotes a rigid body filtered quantity. 
Next, we subtract the rigid body contributions from the initial forces before projecting them onto the local VP, \\[\n\\mathbf{f}_{vp}^{\\Delta} = \\mathbf{R}_{f}^{\\mathrm{T}} \\big( \\mathbf{f} - \\mathbf{f}' \\big)\n= \\mathbf{R}_{f}^{\\mathrm{T}} \\big( \\mathbf{I} - \\mathbf{F}_{f}^{g} \\big)\\, \\mathbf{f}\n\\tag{7}\\] The resulting force \\(\\mathbf{f}_{vp}^\\Delta\\) is the residual component of the local VP, with the rigid body/global component removed.\nRearranging Equation 6 and Equation 7 above, we obtain equations for \\(\\mathbf{f}\\), \\[\n\\mathbf{f} = \\mathbf{R}_f^{+g, {\\rm T}} \\mathbf{f}_{vp}^g = \\mathbf{T}_f^{g, {\\rm T} }\\mathbf{f}_{vp}^g\n\\] \\[\n\\mathbf{f} = \\left[\\mathbf{R}_f^{\\rm T}\\left(\\mathbf{I}-\\mathbf{F}^g_f\\right)\\right]^+ \\mathbf{f}_{vp}^\\Delta\n\\] from which we can identify the necessary transformations to separate the measured FRF matrix into its global and residual components, \\[\n\\mathbf{v} = \\mathbf{Y}_{m,m}\n\\begin{bmatrix}\n\\mathbf{T}_{f}^{g,\\mathrm{T}} &\n\\big[ \\mathbf{R}_{f}^{\\mathrm{T}} ( \\mathbf{I} - \\mathbf{F}_{f}^{g} ) \\big]^{+}\n\\end{bmatrix}\n\\begin{bmatrix}\n\\mathbf{f}_{vp}^{g} \\\\\n\\mathbf{f}_{vp}^{\\Delta}\n\\end{bmatrix}.\n\\tag{8}\\]",
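The rigid-body filtering of Equation 7 can be sanity-checked numerically. Below is a minimal NumPy sketch with an invented geometry (four feet, three impact points per foot, so 24 local VP DoFs and 6 global DoFs); it verifies that a force pattern lying entirely in the global rigid-body subspace produces zero residual VP forces, whilst a single localised force does not. None of the positions are taken from the paper.

```python
import numpy as np

def point_block(r):
    """Rigid-body interpolation from 6 VP DoFs (x,y,z,a,b,g) to the
    3 translations at position r, relative to that VP."""
    rX, rY, rZ = r
    return np.array([[1, 0, 0,   0,  rZ, -rY],
                     [0, 1, 0, -rZ,   0,  rX],
                     [0, 0, 1,  rY, -rX,   0]], float)

feet = np.array([[1, 1, 0], [1, -1, 0], [-1, 1, 0], [-1, -1, 0]], float)
offsets = np.array([[0.1, 0, 0], [0, 0.1, 0], [-0.1, 0, 0]], float)
points = np.array([f + o for f in feet for o in offsets])   # 12 x 3

# Local VPs: block-diagonal map, each foot's 3 points -> its own 6-DoF VP
R_loc = np.zeros((36, 24))
for k, foot in enumerate(feet):
    for j in range(3):
        r = points[3 * k + j] - foot
        R_loc[9 * k + 3 * j: 9 * k + 3 * j + 3, 6 * k: 6 * k + 6] = point_block(r)

# Global VP at the source centre (origin): all 12 points -> one 6-DoF VP
R_glob = np.vstack([point_block(p) for p in points])        # 36 x 6

# Rigid-body filtering matrix F_g and the residual projection of Eq. 7
F_g = np.linalg.pinv(R_glob.T) @ R_glob.T                   # 36 x 36

def residual_forces(f):
    return R_loc.T @ (np.eye(36) - F_g) @ f                 # 24 residual DoFs

f_rigid = R_glob @ np.array([0, 0, 1.0, 0, 0, 0])  # globally consistent pattern
f_point = np.zeros(36)
f_point[2] = 1.0                                   # one localised vertical force
```

Applying `residual_forces` to `f_rigid` returns (numerically) zero, since the global filter absorbs it entirely, while `f_point` leaves a non-zero residual carried by the local VPs.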
"crumbs": [
"Research",
"Improved force identification using a global-residual interface representation"
]
},
{
"objectID": "research/GRVP/Main.html#resilient-connection",
"href": "research/GRVP/Main.html#resilient-connection",
"title": "Improved force identification using a global–residual interface representation",
"section": "3.1 Resilient connection",
"text": "3.1 Resilient connection\nWe consider first the resiliently coupled case. Shown in Figure 4 is the average coherence across all FRFs (including both interface and validation DoFs) for the resiliently coupled system. Given the heavyweight nature of this assembly, we see poor coherence in the low frequency range, below approx. 10 Hz. For this reason, in what follows we will only present results between 10-1500 Hz.\n\nFigure 4: Average coherence across all measurements for resiliently mounted source.\n\nFigure 5 shows the time averaged blocked force maps for each representation; from left to right we have the I-VP, G-VP and GR-VP. In each subplot the DoFs 1-6 correspond to the G-VP defined at the machine centre, whilst the DoFs 7-24 are those defined locally at the centre of each foot. These local DoFs are ordered as, \\[\n\\left(\\begin{array}{cccccc|cccccc|cccccc}x_1& y_1 & z_1 & \\alpha_1& \\beta_1& \\gamma_1 & x_2& y_2& z_2& \\alpha_2& \\beta_2& \\gamma_2& x_3& y_3& z_3&\\alpha_3& \\beta_3& \\gamma_3\\end{array}\\right)\n\\] where the subscript denotes the foot.\nInspecting the force maps, we see considerable variation in the I-VP across the individual DoFs, in particular between the translational and rotational DoFs. This is expected on account of their different units. We see that in the GR-VP this variation is much reduced; it is suspected that the translational part of each I-VP is largely accounted for by the G-VP, and so the residual is of a much lower magnitude. The G-VP reveals the expected result that in the low frequency range (where the G-VP is considered appropriate), the vertical force (i.e. 
in the direction of the machine stamping) is of the greatest magnitude.\nIt should be noted that in the GR-VP case, a singular value truncation is used; 18 singular values are retained based on the notion that the I-VPs implicitly capture the G-VP dynamics, and so by separating the global and residual terms, no additional information is added. We therefore limit the number of singular values in the GR-VP representation to the same as the I-VP.\n\nFigure 5: Force maps for the time averaged blocked force obtained using; individual VPs (I-VP left), global VP (G-VP middle) and global-residual (GR-VP right).\n\nTo validate the blocked forces we use the standard onboard validation approach, whereby the blocked forces are used to make a response prediction within the attached receiver. Whilst the onboard validation is a somewhat limited test of the blocked force, modification or replacement of the receiver was not possible within this case study. Shown in Figure 6 and Figure 7 are the narrowband and 1/3rd octave-band onboard validation predictions for the two remote sensors. In all three cases we see a reasonable level of predictive accuracy. A surprising result is the apparent success of the G-VP representation in the mid-high frequency range. 
This supports the notion that the G-VP representation, though intended to capture the low frequency rigid-body dynamics, in fact captures the dominant dynamics across the entire frequency range.\n\nFigure 6: On-board validation results in narrow band for two remote sensors using each of the interface representations; individual VPs (I-VP top), global VP (G-VP middle) and global-residual (GR-VP bottom).\n\nFigure 7: On-board validation results in 1/3rd octave bands for two remote sensors using each of the interface representations; individual VPs (I-VP top), global VP (G-VP middle) and global-residual VPs (GR-VP bottom).\n\nTo better judge the performance of each representation, in Figure 8 we plot the 1/3rd octave-band errors compared against the directly measured response. We see that all three methods remain below 5 dB error across the majority of the frequency range. It appears that the G-VP outperforms the I-VP in the lowest part of the response (i.e. below approx. 30 Hz). This is likely because a) the source is behaving rigidly and so the G-VP completely captures its dynamics, and b) the G-VP yields a smaller matrix, less sensitive to ill-conditioning in the matrix inversion. In the mid frequency range the I-VP outperforms the G-VP, which begins to see its largest errors. This is likely due to the source exhibiting some flexibility not captured by the G-VP. Again, as previously noted, we see that the G-VP begins to outperform the I-VP in the upper end of the frequency range. We suspect the reason for this is as follows; at high frequencies the individual feet are uncorrelated with one another and as a result can readily be collapsed into a single equivalent rigid-body force. This is commensurate with the idea of using power-based methods at high frequencies, where increasing modal complexity in fact leads to simpler dynamical relations, e.g. 
as in Statistical Energy Analysis.\nConsidering now the GR-VP error, we see that it is comparable to the I-VP and G-VP, and tends to lie between the two. In Figure 8 we also plot the GR-VP error when no singular value truncation is used in the blocked force solution; the error is notably larger, likely due to the GR-VP acting as a DoF expansion, and the resulting matrix being rank deficient/ill-conditioned.\n\nFigure 8: 1/3rd octave band error plots for on-board validation. The GR-VP representation in blue uses a TSVD with 18 singular values. Shown in gray is the error plot for the GR-VP without TSVD.\n\nFigure 9: 1/3rd octave band error plots for on-board validation. The I-VP and GR-VP representations use a TSVD with 6 singular values.\n\nRecalling the rationale of the GR-VP – to factor out the rigid-body (or linearly correlated) dynamics with the aim of improving interpretability of the interface dynamics – it will prove interesting to consider the truncation of both the I-VP and GR-VP representations to just 6 singular values. By doing so, each representation is reduced to contain only the most important (in terms of explanation of variance) information. The resulting errors are presented in Figure 9. Comparing against the previous errors in Figure 8, we identify two key results; 1) the I-VP error has increased significantly across almost the entire frequency range, whilst 2) the GR-VP error has tended towards that of the G-VP. This latter result implies that the 6 most dominant singular values in the GR-VP are closely related to the G-VP DoFs, making them both more interpretable and expressive than those of the I-VP. Based on this result, later in Section 3.3 we will briefly investigate whether the GR-VP representation might be well suited to a best-subset selection procedure; the first 6 G-VP DoFs would always be retained, and the selection procedure applied to the remaining residuals.",
"crumbs": [
"Research",
"Improved force identification using a global-residual interface representation"
]
},
{
"objectID": "research/GRVP/Main.html#rigid-connection",
"href": "research/GRVP/Main.html#rigid-connection",
"title": "Improved force identification using a global–residual interface representation",
"section": "3.2 Rigid connection",
"text": "3.2 Rigid connection\nWe now consider the results obtained from the rigidly coupled assembly. Shown in Figure 10 is the average coherence across all FRFs (including both interface and validation FRFs) for the rigidly coupled case. As expected, compared to the resilient case the rigid FRFs are affected to a greater extent by noise, especially at low frequencies. We can see that only from approx. 50 Hz do we begin to get reasonable measurement quality. Therefore, hereafter we will only present results between 50 and 1500 Hz.\nFigures 11 to 15 are structured in the same way as for the resilient case, and largely lead to the same conclusions. Perhaps the main difference is that for the rigid case, the G-VP does not perform quite as well. This is likely because the rigid-body blocked forces occur mostly below the frequency range considered (i.e. within the noise floor). As before, in the high frequency range we see that the G-VP produces relatively low errors considering its simplicity, performing slightly better than the I-VP. The GR-VP error is seen to mostly follow that of the I-VP when 18 singular values are retained. Reducing this to 6, the GR-VP error begins to follow the G-VP error. This again supports the notion that the dominant singular values of the GR-VP representation are related to the rigid-body (linearly correlated) interface dynamics. In contrast, the I-VP error is seen to increase dramatically when only 6 singular values are retained.\n\n\n\n\n\n\nFigure 10: Average coherence across all measurements for rigidly mounted source. 
Useable frequency range: 50-1500 Hz.\n\n\n\n\n\n\n\n\n\nFigure 11: Force maps for the time averaged blocked force obtained using; individual VPs (I-VP left), global VP (G-VP middle) and global-residual (GR-VP right).\n\n\n\nPerhaps the main conclusion from the above results is that the G-VP (and by extension the first 6 DoFs of the GR-VP) appears to provide a better representation of the interface dynamics than the 6 most dominant singular values of the conventional I-VP representation. This raises the question of whether a more interpretable solution with lower error could be obtained from the GR-VP by utilising a subset selection scheme. The idea would be to always retain the G-VP DoFs within the GR-VP (which have been shown to outperform the I-VP at low and high frequencies), whilst using a selection procedure to determine which residual DoFs to include in the solution. We briefly consider this approach in the following section, though a more in-depth and general study of subset selection in the context of blocked force identification is considered beyond the scope of this paper.\n\n\n\n\n\n\nFigure 12: On-board validation results in narrow band for two remote sensors using each of the interface representations; individual VPs (I-VP top), global VP (G-VP middle) and global-residual (GR-VP bottom).\n\n\n\n\n\n\n\n\n\nFigure 13: On-board validation results in 1/3rd octave bands for two remote sensors using each of the interface representations; individual VPs (I-VP top), global VP (G-VP middle) and global-residual VPs (GR-VP bottom).\n\n\n\n\n\n\n\n\n\nFigure 14: 1/3rd octave band error plots for on-board validation. The GR-VP representation in blue uses a TSVD with 18 singular values. Shown in gray is the error plot for the GR-VP without TSVD.\n\n\n\n\n\n\n\n\n\nFigure 15: 1/3rd octave band error plots for on-board validation. The I-VP and GR-VP representations use a TSVD with 6 singular values.",
"crumbs": [
"Research",
"Improved force identification using a global-residual interface representation"
]
},
{
"objectID": "research/GRVP/Main.html#sec-subset",
"href": "research/GRVP/Main.html#sec-subset",
"title": "Improved force identification using a global–residual interface representation",
"section": "3.3 GR‑VP Subset Selection",
"text": "3.3 GR‑VP Subset Selection\nBest subset selection is a solution strategy for over-parametrised linear problems. Its aim is to find a reduced set of parameters that adequately satisfies the underlying system of equations with respect to some chosen error metric. Though well established in disciplines such as signal processing (e.g. compressive sensing), it has only recently been proposed for in-situ blocked force estimation [16]. An argument for its use in this context is that whilst an interface might be represented by a complete set of interface DoFs, at a given frequency only a reduced subset of these DoFs is active. Including non-contributing DoFs can lead to ill-conditioned, or over-parametrised, interface models and large errors on the identified blocked forces. Subset selection aims to identify a reduced set of interface DoFs that provides better predictive accuracy whilst also benefiting from improved interpretability.\nA variety of subset selection procedures exist and at the time of writing there is no consensus as to which is most appropriate for blocked force identification. In this paper we employ an orthogonal least squares approach [17]. Also termed sequential forward selection, this method has been applied to blocked force identification in [13], where it was termed X-DoF.\nThe procedure starts by defining an empty set \\(\\Gamma_0 = \\{\\}\\) as the set of indices corresponding to non-zero elements in \\(\\mathbf{f}\\) (i.e. all elements in \\(\\mathbf{f}\\) are initially zero). 
In the \\(k\\)th step of the procedure, a single index \\(i_{max}\\) is added to the set \\(\\Gamma_{k-1}\\), which meets the criterion, \\[\n i_{max} = \\underset{i}{\\mbox{argmin}} ||\\overbrace{\\mathbf{v} - \\mathbf{Y}_{\\Gamma_{k-1}^i} \\mathbf{Y}_{\\Gamma_{k-1}^i}^+ \\mathbf{v}}^{\\mathbf{r}_k}||_2^2 \\quad \\mbox{with} \\quad \\Gamma_{k-1}^i = \\Gamma_{k-1} \\cup i\n\\] where \\(\\mathbf{Y}_{\\Gamma_{k-1}^i}\\) is defined as the matrix containing the columns of \\(\\mathbf{Y}\\) corresponding to the non-zero indices in \\(\\Gamma_{k-1}^i\\). Effectively, we add the index that minimises the error on the current response reconstruction, i.e. the residual \\(\\mathbf{r}_k\\).\nIn contrast to an exhaustive BSS, orthogonal least squares reduces the total number of evaluations from \\(2^n\\) to \\(n+(n-1)+\\cdots+2+1=\\frac{n(n+1)}{2}\\). In each step of the algorithm a candidate solution is produced that contains an increasing number of DoFs. Once all iterations have been completed, the final step is to perform a model selection, i.e. pick from the set of candidate solutions based on an appropriate error metric. 
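To make the savings concrete, a quick sketch (our own illustration, not from the paper) counting the candidate models evaluated by the greedy search, \\(n(n+1)/2\\), versus the \\(2^n\\) of an exhaustive search:

```python
def n_greedy_evals(n):
    """Candidate-model evaluations performed by sequential forward selection
    over n candidate DoFs: n + (n-1) + ... + 1 = n(n+1)/2."""
    return sum(n - k for k in range(n))

# Greedy search scales quadratically; an exhaustive search exponentially.
assert n_greedy_evals(24) == 24 * 25 // 2   # 300 evaluations
assert 2**24 == 16777216                    # exhaustive count for n = 24
```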
Our implementation of this algorithm is given in the code block below.\n\n\nCode: Sequential forward selection function used for BSS approximation\ndef forwardSelection(start_dofs, select_dofs, Y_ind, Y_val, u_ind, u_val):\n '''\n Forward selection algorithm (same as Orthogonal Least Squares, OLS) - iteratively add one DoF at a time to the solution - added DoF is the one that gives the smallest error\n \n start_dofs - initial DoF array\n select_dofs - DoFs to be selected\n Y_ind - indicator-interface FRF matrix\n Y_val - interface-validation FRF matrix\n u_ind - indicator responses\n u_val - validation responses\n '''\n\n λ_range = select_dofs.shape[0] # Number of DoFs to iterate over\n blkF_final = np.zeros((Y_ind.shape[0],\n Y_ind.shape[2],\n u_ind.shape[2]), dtype=complex) # Final blocked force after model selection\n blkF_all = np.zeros((Y_ind.shape[0],\n λ_range+1,\n Y_ind.shape[2],\n u_ind.shape[2]), dtype=complex) # Blocked force for all models\n error_all = np.zeros((Y_ind.shape[0], λ_range+1)) # Error for each model - used to perform model selection\n\n for fi in tqdm(range(Y_ind.shape[0]), desc=\"Computing solutions\"): # Loop over frequencies\n Γ = start_dofs # Selection indices: initialised as starting DoFs [0,1,2,3,4,5]\n Γ_ = select_dofs # Remaining DoFs: [6,7,8,...]\n\n # ind = 0 # Model index\n for i in range(λ_range+1): # Loop over models; i.e. 
number of select_dofs\n error = [] \n if i == 0: # Compute blocked force using initial DoFs\n OLS_block_force = np.einsum('ij,jk->ik',\n invSVD(Y_ind[fi][:, Γ.astype(int)], 24),\n u_ind[fi, :, :])\n obVal = np.einsum('ij,jk->ik', \n Y_val[fi][:, Γ.astype(int)],\n OLS_block_force)\n error_all[fi, i] = np.linalg.norm(u_val[fi,:,:].flatten() - obVal.flatten(), ord=2)\n blkF_all[fi, i, Γ.astype(int), :] = OLS_block_force\n else:\n for N in Γ_: # Loop over remaining DoFs\n Γ_i = np.sort(np.append(Γ, N)) # Append current DoF to list of selected DoFs\n # Compute blocked force solution using previous 'i' iteration DoFs and current 'N' iteration DoF:\n OLS_block_force = np.einsum('ij,jk->ik',\n invSVD(Y_ind[fi][:, Γ_i.astype(int)], 24),\n u_ind[fi, :, :])\n # Compute onboard validation and error metric\n obVal = np.einsum('ij,jk->ik',\n Y_val[fi][:, Γ_i.astype(int)],\n OLS_block_force)\n error.append(np.linalg.norm(u_val[fi,:,:].flatten() - obVal.flatten(), ord=2)) \n\n Γ_min = np.argmin(np.asarray(error)) # Find which new DoF index gives smallest error\n Γ = np.sort(np.append(Γ, Γ_[Γ_min])) # Add corresponding DoF value (not index) to DoF set\n Γ_ = np.delete(Γ_, Γ_min) # Remove this DoF index from the remaining DoF set\n\n error_all[fi, i] = np.min(error) # Find index for lowest error from current models with M DoFs\n blkF_all[fi, i, Γ.astype(int), :] = np.einsum('ij,jk->ik', \n invSVD(Y_ind[fi][:, Γ.astype(int)],24), \n u_ind[fi, :, :]) # Compute blocked force for the lowest error model\n\n min_model_error = np.argmin(error_all[fi,:]) # Find the model with the lowest error (model selection)\n blkF_final[fi, Γ.astype(int), :] = blkF_all[fi, min_model_error, Γ.astype(int)] # Select the corresponding blocked force\n return blkF_final, error_all\n\n\nWith the purpose of subset selection being to identify sparse solutions, if two candidate models produce the same error, the one with fewer non-zero elements is typically chosen. 
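The helper `invSVD` called by the function above (there with 24 retained singular values) is not defined in the text; a minimal truncated-SVD pseudo-inverse consistent with that usage might look as follows (our own sketch, not necessarily the authors' routine):

```python
import numpy as np

def invSVD(A, n_sv):
    """Truncated-SVD (TSVD) pseudo-inverse keeping the n_sv largest
    singular values. Hypothetical stand-in for the undefined helper."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    k = min(n_sv, s.size)  # cannot retain more singular values than exist
    return (Vh[:k].conj().T / s[:k]) @ U[:, :k].conj().T
```

With all singular values of a full-rank matrix retained this reduces to the ordinary pseudo-inverse; truncating discards the smallest singular values, which otherwise amplify measurement noise in the inversion.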
To this end, it is common practice to use an error metric for model selection that takes into account the complexity of the solution (i.e. the number of non-zero elements), see for example [13]. In our brief investigation here, we omit this step and use the standard mean-squared-error to perform model selection.\nTo validate a model selection it is good practice to use an independent data set that was not used in the selection procedure itself. In our study, limited channel count has meant that this was not possible. Hence, the model selection was based on the onboard validation errors, with no further validation available. Whilst this is not the most robust approach to model selection, we believe it sufficient to demonstrate the potential application of subset selection applied to the GR-VP and to promote further research on this application.\nOne final remark regarding our implementation of the forward selection: rather than starting with the empty solution, as is done in a typical forward selection, for the GR-VP representation we start by including all G-VP DoFs; the selection procedure is applied to the residual DoFs only. For the I-VP we start in the usual way with an empty solution.\n\n3.3.1 Resilient connection (BSS)\nShown in Figure 16 are the blocked forces obtained from the resiliently coupled source for both the I-VP and GR-VP representations using the sequential forward selection described above. For both cases an onboard validation MSE is used for model selection.\n\n\n\n\n\n\n\n\n\n\n\n\n\nFigure 16: Blocked force map obtained from resilient assembly using sequential forward selection with MSE model selection using VP (left) and GR-VP (right).\n\n\n\nWe see that in both cases a sparse solution is obtained in preference to a ‘full’ solution, though the resulting sparsity patterns differ greatly. 
Inspecting the I-VP forces, we see a slight increase in sparsity in the low frequency range, likely on account of the rigid-body dynamics being dominant in this range. In the mid-high frequency range the forces, though remaining somewhat sparse, behave erratically, ‘turning on and off’ rapidly over frequency with little to no identifiable trend. Lastly, we see that across the majority of the frequency range, the \\(z\\) DoF of each VP is kept active. This is expected as the main excitation mechanism of the source acts in this direction.\nInspecting the GR-VP forces, we see that similarly in the low frequency range the solution is relatively sparse, with the majority of residual terms being set to 0. This is in agreement with the notion that at low frequencies the rigid body forces are largely captured by the retained G-VP. In contrast to the I-VP, the GR-VP produces a sparse solution also in the high frequency range. This is in keeping with the notion that at high frequencies the residual terms become uncorrelated and can therefore be described by an equivalent single point representation. In the mid frequency range, where the source is suspected to deform flexibly with low modal density, the solution is rather dense with many of the residual terms being active.\n\n\n\n\n\n\nFigure 17: 1/3rd octave band error plots of the on-board validation for the I-VP and GR-VP representations using forward selection.\n\n\n\nBased on the force maps shown in Figure 16, Figure 17 presents the onboard validation error for the sparse I-VP and GR-VP solutions. For both cases we see a significant reduction in the error across all frequencies compared to Figure 8. This illustrates the potential benefit of a sparse BSS solution. 
Of the two solutions we see that the GR-VP generally outperforms the I-VP.\n\n\n3.3.2 Rigid connection (BSS)\nShown in Figure 18 are the blocked forces obtained from the rigidly coupled source for both the I-VP and GR-VP representations using the sequential forward selection described above. As before, for both cases an onboard validation MSE is used for model selection.\n\n\n\n\n\n\n\n\n\n\n\n\n\nFigure 18: Blocked force map obtained from rigid assembly using sequential forward selection with MSE model selection using VP (left) and GR-VP (right).\n\n\n\nThe rigid force maps bear some similarities to the resilient case in Figure 16, especially for the GR-VP case. For the I-VP we see some increased sparsity in the high frequency range with reduced sparsity in the low frequency range. This reduction in sparsity at low frequencies is likely due to the rigid body dynamics occurring outside of the frequency range considered, i.e. within the noise floor.\nWith regards to the GR-VP we draw the same general conclusions as before; at low and high frequencies the interface is well described by just the G-VP DoFs, whilst in the mid frequency range the residual DoFs provide important contributions.\nInspecting the onboard validation errors in Figure 19 we see similar, though perhaps not as great, reductions in the error, again with the GR-VP generally outperforming the I-VP.\n\n\n\n\n\n\nFigure 19: 1/3rd octave band error plots for the on-board validation of the I-VP and GR-VP representations with forward selection.\n\n\n\nIn the above we limited our frequency range to >50 Hz, on account of the noise floor dominating below this (see Figure 10). Inspecting the error plots below 50 Hz, as in Figure 20, illustrates a further potential benefit of the GR-VP representation. As expected, we see that the I-VP error increases significantly below \\(50\\) Hz, up to 17.5 dB at 10 Hz. The GR-VP error, in contrast, remains below 3 dB down to 10 Hz. 
The suspected reasons for this large reduction in error are twofold: 1) the G-VP DoFs are estimated from all interface measurements, making them heavily over-determined and thus less sensitive to error; and 2) the reduced matrix size resulting from the forward selection (i.e. retaining mostly just the G-VPs) reduces the amplification of uncertainty when the inverse is performed.\n\n\n\n\n\n\nFigure 20: Extended frequency range: 1/3rd octave errors below 50 Hz for I‑VP vs GR‑VP with forward selection.",
"crumbs": [
"Research",
"Improved force identification using a global-residual interface representation"
]
},
{
"objectID": "research/summary.html",
"href": "research/summary.html",
"title": "My research",
"section": "",
"text": "On this page you will find a growing collection of articles based on peer reviewed journal papers that I have written and/or contributed to post August 2025. Where possible, the articles on this page will include code snippets for any numerical examples presented, allowing you to easily reproduce the results and play around with the variables.\nIf you would like to cite any of these articles, at the bottom of each page you will find a reference to the published journal paper (unless the article is still under review, in which case you may wish to cite the online article instead).\nPapers published prior to August 2025 are unlikely to be included here but can be found in .pdf form on my ResearchGate profile.\nIf you have any questions, comments or suggestions related to any of the articles presented here, please feel free to get in touch!",
"crumbs": [
"Research",
"**Summary of research**"
]
},
{
"objectID": "research/frf_tmm_substruct/frf_tmm_substruct.html",
"href": "research/frf_tmm_substruct/frf_tmm_substruct.html",
"title": "A mixed admittance/transfer matrix substructuring approach",
"section": "",
"text": "To analyse the vibro-acoustic performance of a complex built-up structure it is convenient to adopt a component-based approach [1]. The assembly is decomposed into a series of interconnecting components which are independently characterised, either by experiment or numerically. These component representations are then combined mathematically to yield a model of the assembled system, a process termed substructuring. The process of substructuring amounts to the enforcement of compatibility and equilibrium between the connecting degrees of freedom (DoFs) of each component. This can be achieved in a variety of ways, and in any of the domains typically used for structural dynamic analysis, including the physical, modal, state, time and frequency domains [2]. In recent years, frequency domain methods have gained popularity, given the ease with which they allow experimental and numerical component representations to be combined.\nIn the frequency domain, components are most often represented by their free-interface admittance matrices, which describe their kinematic response due to a set of applied forces. Based on component admittance, various substructuring schemes are available, including direct [1], primal/dual [2] and mixed [3] formulations. The advantage of an admittance-based approach is the relative ease with which the component admittance matrices can be measured. It is not, however, the only approach available. An alternative frequency domain representation is the so-called transfer matrix [4], which relates the state vector (stacked force and response) between two sets of DoFs, usually considered the input and output of the component. The advantage of a transfer matrix approach is that structural coupling of two or more components is reduced to a simple multiplication of their respective transfer matrices. 
This is especially convenient for modeling systems where the components form a cascaded chain, such as when dealing with layered materials [5],[6].\nThe transfer matrix method (TMM) has found greatest application in acoustics, where it is used extensively to model sound transmission through multi-layered structures [7], [8], [9], though applications also exist in structural dynamics and vibration. For example, in [10] and [11] the authors use the TMM to model the dynamics of beams with variable cross-sections. In [12] the TMM is used as an analytical tool to study the vibration of compressed helical springs. In [13] and [14] the TMM is applied to the study of rotor-bearing systems comprising coupled lateral and torsional vibration and local/global coupler offsets. In [15] TMs are used to analyse the vibrational power flow through a MIMO system.\nThe TMM has also been incorporated within other modeling schemes to provide an efficient inclusion of layered materials/structures. In [16] the hybridisation of the Finite Element method and the TMM is proposed for the efficient analysis of plate and shell structures. In [17] the TMM is combined with a hybrid FEA-SEA method for diffuse sound transmission through finite-sized thick and layered wall and floor systems. In [18] the coupling of the TMM and the Finite Element method is used for the efficient acoustic analysis of automotive hollow body networks. The TM representation also forms the basis of the so-called Wave-FE approach used for efficient modeling of periodic structures [19].\nIn this paper we present a mixed admittance/transfer matrix-based substructuring scheme. We consider the problem depicted in Figure 1 (though the proposed scheme can readily be generalised to more complex systems), where two admittance-based components are coupled by a series of connecting elements which together form a junction. 
We wish to represent the dynamics of this layered junction by the more convenient transfer matrix, whilst keeping the end components in their admittance form. The result is a generalisation of the dual formulation for admittance-based substructuring, where the conditions of equilibrium and compatibility between components are replaced by the junction transfer matrix equations, themselves built from an arbitrary number of sub-element transfer matrices. The principal advantage of the proposed approach is the ease with which complex layered systems can be incorporated within an admittance substructuring scheme. Adding, removing, or reordering junction components is achieved by directly including, removing, or reordering their respective transfer matrices, thus avoiding the need to redefine the Boolean coupling matrices required by the conventional dual formulation.\nHaving outlined the context of this paper, its remainder will be structured as follows. In Section 2 we introduce the admittance and transfer matrix methods for structural coupling. Following this, in Section 3, we present the mixed admittance-transfer matrix substructuring scheme, with a simple numerical example in Section 4. Finally, Section 5 draws some concluding remarks.",
"crumbs": [
"Research",
"A mixed admittance/transfer matrix substructuring approach"
]
},
{
"objectID": "research/frf_tmm_substruct/frf_tmm_substruct.html#sec-dualSS",
"href": "research/frf_tmm_substruct/frf_tmm_substruct.html#sec-dualSS",
"title": "A mixed admittance/transfer matrix substructuring approach",
"section": "2.1 Admittance-based (FRF) coupling",
"text": "2.1 Admittance-based (FRF) coupling\nWith admittance-based substructuring, each component is represented by a complex admittance matrix \\(\\mathbf{Y}^{(n)}\\) that satisfies the equation of motion (per frequency), \\[\n \\mathbf{u}^{(n)} = \\mathbf{Y}^{(n)} (\\mathbf{f}^{(n)} + \\mathbf{g}^{(n)})\n\\] where \\(\\mathbf{f}\\) and \\(\\mathbf{g}\\) are, respectively, the externally applied and interface coupling forces that act on the component, and \\(\\mathbf{u}\\) is its kinematic response (displacement, velocity or acceleration depending on the units of \\(\\mathbf{Y}\\)).\nTo model an assembly of \\(P\\) such components, we begin by writing their equations of motion in a block diagonal form, \\[\n \\mathbf{u} = \\mathbf{Y}\\left( \\mathbf{f} + \\mathbf{g}\\right)\n\\tag{1}\\] where, \\(\\mathbf{Y}\\) is the block diagonal admittance matrix of the \\(P\\) uncoupled components, \\(\\mathbf{u}\\) is the corresponding block vector of responses, \\(\\mathbf{f}\\) is the block vector of applied forces, and \\(\\mathbf{g}\\) the block vector of interface coupling forces, \\[\n\\begin{align}\n \\mathbf{Y} &= \\left[\\begin{array}{c c c c}\n \\mathbf{Y}^{(1)} & & &\\\\\n & \\mathbf{Y}^{(2)} & &\\\\\n & & \\mathbf{\\ddots} &\\\\\n & & & \\mathbf{Y}^{(P)}\n \\end{array}\\right], \\quad\n \\mathbf{u} = \\left(\\begin{array}{c}\n \\mathbf{u}^{(1)} \\\\\n \\mathbf{u}^{(2)} \\\\\n \\vdots \\\\\n \\mathbf{u}^{(P)} \n \\end{array}\\right),\\quad \\mathbf{f} = \\left(\\begin{array}{c}\n \\mathbf{f}^{(1)} \\\\\n \\mathbf{f}^{(2)} \\\\\n \\vdots \\\\\n \\mathbf{f}^{(P)} \n \\end{array}\\right),\n \\quad \\mathbf{g} = \\left(\\begin{array}{c}\n \\mathbf{g}^{(1)} \\\\\n \\mathbf{g}^{(2)} \\\\\n \\vdots \\\\\n \\mathbf{g}^{(P)} \n \\end{array}\\right).\n\\end{align}\n\\] To rigidly couple the connecting components, the conditions of compatibility and equilibrium must be satisfied across the set of co-located coupling DoFs. 
To aid flexible DoF selection, compatibility and equilibrium are expressed in the general matrix form, \\[\n \\mathbf{B}\\mathbf{u} = \\mathbf{0},\n\\tag{2}\\] and \\[\n \\mathbf{L}^{\\rm T}\\mathbf{g} = \\mathbf{0}\n\\tag{3}\\] where \\(\\mathbf{B}\\) and \\(\\mathbf{L}^{\\rm T}\\) are signed and unsigned Boolean matrices that specify which DoFs are to be coupled [2]. Equations 1, 2 and 3 are commonly referred to as the three field formulation.\nThe rigid coupling of two components implies the internal forces are of equal magnitude (or intensity) and opposite direction. Hence, we are able to define a priori the form of \\(\\mathbf{g}\\) as, \\[\n \\mathbf{g} = -\\mathbf{B}^{\\rm T}\\lambda\n\\] where \\(\\mathbf{B}\\) prescribes the equal and opposite signage, and \\(\\lambda\\) is a vector of interface force magnitudes. Substituting \\(\\mathbf{g}\\) into the block equations of motion we have, \\[\n \\mathbf{u} = \\mathbf{Y}\\left(\\mathbf{f} -\\mathbf{B}^{\\rm T}\\lambda \\right).\n\\tag{4}\\] To determine the coupled assembly response, we first find the interface force required to bring the system into alignment (i.e. satisfy Equation 2). To do so we substitute Equation 4 into the compatibility equation, \\[\n \\mathbf{B}\\mathbf{u} = \\mathbf{B}\\mathbf{Y}\\left(\\mathbf{f} -\\mathbf{B}^{\\rm T}\\lambda \\right) =\\mathbf{B}\\mathbf{Y}\\mathbf{f} - \\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\lambda = \\mathbf{0},\n\\] before rearranging to solve for \\(\\lambda\\), \\[\n \\lambda = \\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\right)^{-1}\\mathbf{B}\\mathbf{Y}\\mathbf{f}. 
\n\\] We can now substitute \\(\\lambda\\) back into the equations of motion to yield the coupled response, \\[\n \\mathbf{u} = \\mathbf{Y}\\left(\\mathbf{f} -\\mathbf{B}^{\\rm T}\\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\right)^{-1}\\mathbf{B}\\mathbf{Y}\\mathbf{f} \\right) = \\left(\\mathbf{Y} -\\mathbf{Y}\\mathbf{B}^{\\rm T}\\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\right)^{-1}\\mathbf{B}\\mathbf{Y} \\right)\\mathbf{f}.\n\\] From the above we can identify the coupled admittance as, \\[\n \\mathbf{Y}_C = \\mathbf{Y} -\\mathbf{Y}\\mathbf{B}^{\\rm T}\\left(\\mathbf{B}\\mathbf{Y}\\mathbf{B}^{\\rm T}\\right)^{-1}\\mathbf{B}\\mathbf{Y}. \n\\tag{5}\\] Equation 5 provides a convenient framework for general substructuring; it allows an arbitrary number of components to be assembled in any configuration, and avoids large matrix inversions typical of other admittance-based formulations.",
"crumbs": [
"Research",
"A mixed admittance/transfer matrix substructuring approach"
]
},
{
"objectID": "research/frf_tmm_substruct/frf_tmm_substruct.html#transfer-matrix-based-tm-coupling",
"href": "research/frf_tmm_substruct/frf_tmm_substruct.html#transfer-matrix-based-tm-coupling",
"title": "A mixed admittance/transfer matrix substructuring approach",
"section": "2.2 Transfer matrix-based (TM) coupling",
"text": "2.2 Transfer matrix-based (TM) coupling\nWith transfer matrix-based coupling, each component is represented by a complex transfer matrix \\(\\mathbf{T}\\) that relates the force and response between two sets of DoFs (say \\(a\\) and \\(b\\)), \\[\n \\left(\\begin{array}{c}\\mathbf{u}_{b} \\\\ \\mathbf{f}_b - \\mathbf{g}_{b}\\end{array}\\right) = \\overbrace{\\left[\\begin{array}{c c} \\mathbf{T}^u& \\mathbf{\\Gamma} \\\\ \\mathbf{\\Omega} & \\mathbf{T}^g\\end{array}\\right]}^{\\mathbf{T}}\\left( \\begin{array}{c}\\mathbf{u}_{a} \\\\ \\mathbf{f}_a + \\mathbf{g}_a \\end{array} \\right).\n\\] Within the transfer matrix, the submatrices \\(\\mathbf{T}^g\\) and \\(\\mathbf{T}^u\\) represent a pair of force and response transmissibilities. Whilst transmissibilities have many interesting properties and applications, they do not themselves fully characterise a component [20]. The off-diagonal terms \\(\\mathbf{\\Gamma}\\) and \\(\\mathbf{\\Omega}\\) provide the necessary force-response cross-terms required for the transfer matrix representation of a component to be complete (see Note 1).\nTo rigidly couple two components \\(A\\) and \\(B\\), whose transfer relations are, \\[\n \\left(\\begin{array}{c}\\mathbf{u}_{Aa} \\\\ -\\mathbf{g}_{Aa}\\end{array}\\right) = \\left[\\begin{array}{c c} \\mathbf{T}_{A}^u& \\mathbf{\\Gamma}_A \\\\ \\mathbf{\\Omega}_A & \\mathbf{T}_{A}^g\\end{array}\\right] \\left( \\begin{array}{c}\\mathbf{u}_{Ab} \\\\ \\mathbf{g}_{Ab}\\end{array} \\right) \\qquad \\left(\\begin{array}{c}\\mathbf{u}_{Bb} \\\\ -\\mathbf{g}_{Bb}\\end{array}\\right) = \\left[\\begin{array}{c c} \\mathbf{T}_{B}^u& \\mathbf{\\Gamma}_B \\\\ \\mathbf{\\Omega}_B & \\mathbf{T}_{B}^g\\end{array}\\right] \\left( \\begin{array}{c}\\mathbf{u}_{Bc} \\\\ \\mathbf{g}_{Bc}\\end{array} \\right),\n\\tag{6}\\] through the DoFs \\(b\\), we simply recall the conditions of compatibility and equilibrium, \\[\n \\left(\\begin{array}{c}\\mathbf{u}_{Ab} \\\\ \\mathbf{g}_{Ab}\\end{array}\\right) = 
\\left(\\begin{array}{c}\\mathbf{u}_{Bb} \\\\ -\\mathbf{g}_{Bb}\\end{array}\\right)\n\\] and combine Equation 6 as so, \\[\n\\begin{align}\\label{}\n \\left(\\begin{array}{c}\\mathbf{u}_{Aa} \\\\ -\\mathbf{g}_{Aa}\\end{array}\\right) = \\left[\\begin{array}{c c} \\mathbf{T}_{A}^u& \\mathbf{\\Gamma}_A \\\\ \\mathbf{\\Omega}_A & \\mathbf{T}_{A}^g\\end{array}\\right]\\left[\\begin{array}{c c} \\mathbf{T}_{B}^u& \\mathbf{\\Gamma}_B \\\\ \\mathbf{\\Omega}_B & \\mathbf{T}_{B}^g\\end{array}\\right] \\left( \\begin{array}{c}\\mathbf{u}_{Bc} \\\\ \\mathbf{g}_{Bc}\\end{array} \\right).\n\\end{align}\n\\] Clearly, this approach to coupling can be extended for an arbitrary number of components, \\[\n\\begin{align}\n \\left(\\begin{array}{c}\\mathbf{u}_{Aa} \\\\ -\\mathbf{g}_{Aa}\\end{array}\\right) &= \\left[\\begin{array}{c c} \\mathbf{T}_{A}^u& \\mathbf{\\Gamma}_A \\\\ \\mathbf{\\Omega}_A & \\mathbf{T}_{A}^g\\end{array}\\right]\\left[\\begin{array}{c c} \\mathbf{T}_{B}^u& \\mathbf{\\Gamma}_B \\\\ \\mathbf{\\Omega}_B & \\mathbf{T}_{B}^g\\end{array}\\right]\\cdots \\left[\\begin{array}{c c} \\mathbf{T}_{V}^u& \\mathbf{\\Gamma}_V \\\\ \\mathbf{\\Omega}_V & \\mathbf{T}_{V}^g\\end{array}\\right]\n \\left[\\begin{array}{c c} \\mathbf{T}_{X}^u& \\mathbf{\\Gamma}_X \\\\ \\mathbf{\\Omega}_X & \\mathbf{T}_{X}^g\\end{array}\\right]\n \\left( \\begin{array}{c}\\mathbf{u}_{Xy} \\\\ \\mathbf{g}_{Xy}\\end{array} \\right)\\\\\n &=\n \\left[\\begin{array}{c c} \\mathbf{T}_{A\\cdots X}^u& \\mathbf{\\Gamma}_{A\\cdots X} \\\\ \\mathbf{\\Omega}_{A\\cdots X} & \\mathbf{T}_{A\\cdots X}^g\\end{array}\\right]\n \\left( \\begin{array}{c}\\mathbf{u}_{Xy} \\\\ \\mathbf{g}_{Xy}\\end{array} \\right).\n\\end{align}\n\\tag{7}\\] This is especially useful for complex layered structures, where each layer is simply represented by its own transfer matrix. 
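As a minimal numerical check of the chain multiplication in Equation 7, consider single-DoF massless spring layers (an illustrative case of our own choosing, not one from the paper): a spring of stiffness \\(k\\) has the transfer matrix \\(\\left[\\begin{array}{cc}1 & -1/k \\\\ 0 & 1\\end{array}\\right]\\), and multiplying two such matrices reproduces the series-spring compliance \\(1/k_1 + 1/k_2\\):

```python
import numpy as np

def transfer_from_impedance(Zaa, Zab, Zba, Zbb):
    """Single-DoF transfer matrix built from scalar impedance blocks,
    following the sign convention of Eq. 10."""
    Tu = -Zaa / Zab                    # scalar version of -Zab^{-1} Zaa
    G = 1.0 / Zab                      # cross-term Gamma
    Om = -Zba + Zbb * Zaa / Zab        # cross-term Omega
    Tg = -Zbb / Zab                    # force transmissibility
    return np.array([[Tu, G], [Om, Tg]])

def spring_T(k):
    """Transfer matrix of a massless spring; its dynamic stiffness
    blocks are Zaa = Zbb = k, Zab = Zba = -k."""
    return transfer_from_impedance(k, -k, -k, k)

# Chaining (Eq. 7): two springs in series behave as one spring whose
# compliance is the sum of the individual compliances.
k1, k2 = 200.0, 300.0
T_chain = spring_T(k1) @ spring_T(k2)
k_series = 1.0 / (1.0 / k1 + 1.0 / k2)
assert np.allclose(T_chain, spring_T(k_series))
```

Adding a third layer is a further matrix product on the chain, with no bookkeeping of coupling conditions required.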
In addition to providing convenient structural coupling, the transfer matrix \\(\\mathbf{T}\\) can be used to derive and inspect the free wave characteristics of a component (or collection thereof), as done by the WFE method.\n\n\n\n\n\n\nNote 1: Relation between admittance and transfer matrix\n\n\n\n\n\nThe dynamics of a general linear time-invariant mechanical system can be described by the following equation of motion, \\[\n \\left(\\begin{array}{c}\\mathbf{f}_a +\\mathbf{g}_a \\\\ \\mathbf{f}_b + \\mathbf{g}_b\\end{array}\\right) = \\left[\\begin{array}{c c}\\mathbf{Z}_{aa} & \\mathbf{Z}_{ab} \\\\ \\mathbf{Z}_{ba} & \\mathbf{Z}_{bb}\\end{array}\\right] \\left( \\begin{array}{c}\\mathbf{u}_a \\\\ \\mathbf{u}_b\\end{array} \\right)\n\\tag{8}\\] where \\(\\mathbf{Z}_{\\square}\\) denotes the mechanical impedance at/between the subscripted DoFs.\nWe are interested in the transference of force and displacement across a component. Ignoring external force terms, Equation 8 can be rearranged to yield a transfer-style matrix as follows. 
Using the top line of Equation 8 find \\(\\mathbf{u}_b\\), \\[\n \\mathbf{g}_a = \\mathbf{Z}_{aa}\\mathbf{u}_a + \\mathbf{Z}_{ab}\\mathbf{u}_b\\rightarrow \\mathbf{u}_b = \\mathbf{Z}_{ab}^{-1}\\left(\\mathbf{g}_a - \\mathbf{Z}_{aa} \\mathbf{u}_a\\right).\n\\] Substitute \\(\\mathbf{u}_b\\) into \\(\\mathbf{g}_b\\) (second line of Equation 8), \\[\n \\mathbf{g}_b = \\mathbf{Z}_{ba}\\mathbf{u}_a + \\mathbf{Z}_{bb}\\mathbf{u}_b= \\mathbf{Z}_{ba}\\mathbf{u}_a + \\mathbf{Z}_{bb} \\left(\\mathbf{Z}_{ab}^{-1}\\left(\\mathbf{g}_a - \\mathbf{Z}_{aa} \\mathbf{u}_a\\right)\\right) =\n \\mathbf{Z}_{ba}\\mathbf{u}_a + \\mathbf{Z}_{bb} \\mathbf{Z}_{ab}^{-1}\\mathbf{g}_a - \\mathbf{Z}_{bb} \\mathbf{Z}_{ab}^{-1}\\mathbf{Z}_{aa} \\mathbf{u}_a.\n\\] After grouping terms we have the following pair of equations, \\[\n \\mathbf{u}_b = \\mathbf{Z}_{ab}^{-1}\\mathbf{g}_a - \\mathbf{Z}_{ab}^{-1}\\mathbf{Z}_{aa} \\mathbf{u}_a\n\\] \\[\n \\mathbf{g}_b = \n \\left(\\mathbf{Z}_{ba}-\\mathbf{Z}_{bb} \\mathbf{Z}_{ab}^{-1}\\mathbf{Z}_{aa}\\right)\\mathbf{u}_a + \\mathbf{Z}_{bb} \\mathbf{Z}_{ab}^{-1}\\mathbf{g}_a\n\\] or in matrix form, \\[\n \\left(\\begin{array}{c}\\mathbf{u}_b \\\\ \\mathbf{g}_b\\end{array}\\right) = \\left[\\begin{array}{c c} - \\mathbf{Z}_{ab}^{-1}\\mathbf{Z}_{aa} &\\mathbf{Z}_{ab}^{-1} \\\\ \\mathbf{Z}_{ba}-\\mathbf{Z}_{bb} \\mathbf{Z}_{ab}^{-1}\\mathbf{Z}_{aa} & \\mathbf{Z}_{bb} \\mathbf{Z}_{ab}^{-1}\\end{array}\\right] \\left( \\begin{array}{c}\\mathbf{u}_a \\\\ \\mathbf{g}_a\\end{array} \\right).\n\\tag{9}\\] Equation 9 describes the transmission of force and displacement across the component. Its terms can be calculated analytically for simple components (e.g. spring dash-pot joints, beam elements, etc.) or numerically for more complex cases. 
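The block expressions of Equation 9 map directly onto code. The numpy sketch below (not from the text; the `impedance_to_transfer` helper is illustrative) assembles the matrix from the four impedance blocks and checks it on the simplest analytic case, a massless spring of stiffness k, whose impedance blocks are Z_aa = Z_bb = k and Z_ab = Z_ba = -k.

```python
import numpy as np

def impedance_to_transfer(Zaa, Zab, Zba, Zbb):
    # Assemble the transfer-style matrix of Eq. 9 from impedance blocks,
    # mapping (u_a, g_a) to (u_b, g_b).
    Zab_inv = np.linalg.inv(Zab)
    return np.block([
        [-Zab_inv @ Zaa,            Zab_inv],
        [Zba - Zbb @ Zab_inv @ Zaa, Zbb @ Zab_inv],
    ])

# Massless spring of stiffness k between DoFs a and b:
# g_a = k (u_a - u_b), g_b = k (u_b - u_a).
k = 2.0
T = impedance_to_transfer(np.array([[k]]), np.array([[-k]]),
                          np.array([[-k]]), np.array([[k]]))
# Expect u_b = u_a - g_a / k and g_b = -g_a, i.e. T = [[1, -1/k], [0, -1]].
```

For more complex components the same function applies unchanged, with the impedance blocks obtained numerically (e.g. from an FE model) rather than analytically.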
The transfer matrix is often written in a slightly modified form, with the force \\(\\mathbf{g}_b\\) negated, \\[\n \\left(\\begin{array}{c}\\mathbf{u}_b \\\\ -\\mathbf{g}_b\\end{array}\\right) = \\left[\\begin{array}{c c} - \\mathbf{Z}_{ab}^{-1}\\mathbf{Z}_{aa} &\\mathbf{Z}_{ab}^{-1} \\\\ -\\mathbf{Z}_{ba}+\\mathbf{Z}_{bb} \\mathbf{Z}_{ab}^{-1}\\mathbf{Z}_{aa} & -\\mathbf{Z}_{bb} \\mathbf{Z}_{ab}^{-1}\\end{array}\\right] \\left( \\begin{array}{c}\\mathbf{u}_a \\\\ \\mathbf{g}_a\\end{array} \\right) = \\left[\\begin{array}{c c} \\mathbf{T}^{u}& \\mathbf{\\Gamma} \\\\ \\mathbf{\\Omega} & \\mathbf{T}^{g}\\end{array}\\right] \\left( \\begin{array}{c}\\mathbf{u}_a \\\\ \\mathbf{g}_a\\end{array} \\right),\n\\tag{10}\\] where we introduce the response and force transmissibilities, respectively, as \\(\\mathbf{T}^{u}\\) and \\(\\mathbf{T}^{g}\\), and the cross-terms as \\(\\mathbf{\\Gamma}\\) and \\(\\mathbf{\\Omega}\\). This negation reduces the coupling of components to a simple matrix multiplication.\nIt is worth noting that the transfer matrix can similarly be described in terms of a component’s admittance, rather than impedance, by following a similar procedure as above, though starting from the equation of motion, \\[\n \\left( \\begin{array}{c}\\mathbf{u}_a \\\\ \\mathbf{u}_b\\end{array} \\right) = \\left[\\begin{array}{c c}\\mathbf{Y}_{aa} & \\mathbf{Y}_{ab} \\\\ \\mathbf{Y}_{ba} & \\mathbf{Y}_{bb}\\end{array}\\right] \\left(\\begin{array}{c}\\mathbf{f}_a + \\mathbf{g}_a \\\\ \\mathbf{f}_b + \\mathbf{g}_b\\end{array}\\right).\n\\] The result is, \\[\n \\left(\\begin{array}{c}\\mathbf{u}_b \\\\ -\\mathbf{g}_b\\end{array}\\right) = \\left[\\begin{array}{c c} \\mathbf{T}^{u}& \\mathbf{\\Gamma} \\\\ \\mathbf{\\Omega} & \\mathbf{T}^{g}\\end{array}\\right] \\left( \\begin{array}{c}\\mathbf{u}_a \\\\ \\mathbf{g}_a\\end{array} \\right) = \\left[\\begin{array}{c c} \\mathbf{Y}_{bb}\\mathbf{Y}_{ab}^{-1} & \\mathbf{Y}_{ba} - 
\\mathbf{Y}_{bb}\\mathbf{Y}_{ab}^{-1}\\mathbf{Y}_{aa} \\\\ -\\mathbf{Y}_{ab}^{-1} & \\mathbf{Y}_{ab}^{-1}\\mathbf{Y}_{aa} \\end{array}\\right] \\left( \\begin{array}{c}\\mathbf{u}_a \\\\ \\mathbf{g}_a\\end{array} \\right).\n\\]",
"crumbs": [
"Research",
"A mixed admittance/transfer matrix substructuring approach"
]
},
{
"objectID": "teaching/teaching.html",
"href": "teaching/teaching.html",
"title": "Microphone and Loudspeaker Design",
"section": "",
"text": "Figure 1: Elephant"
},
{
"objectID": "book/book.html",
"href": "book/book.html",
"title": "Experimental Vibro-Acoustics: In-situ and blocked force methods for component-based simulation and virtual prototyping",
"section": "",
"text": "JWR Meggitt & AT Moorhouse\nExperimental Vibro-acoustics is the first comprehensive practical engineering guide for the effective use of measured vibro-acoustic data in a component-based approach to the analysis, simulation, virtual prototyping and ‘digital twinning’ of machines and mechanical systems.\n\n\n\n\n\n\n\nFigure 1: Experimental Vibro-acoustics - cover.\n\n\n\n\nThe book provides practical techniques which cover measurement, data processing and uncertainties and includes many ‘tricks of the trade.’ It also includes a range of case studies and a detailed walk-through example in a tutorial style. Further, it focuses on the in situ blocked force method, now a full international standard, through which many of the developments in the component-based approach have been based.\nThis book is essential for design engineers in vibration, acoustics and structural dynamics diagnosing and troubleshooting vibro-acoustic problems in machines and mechanical systems, as well as simulation of existing and virtual assemblies. It extends beyond the core of the automotive industries, to new applications in air, rail and marine transport, as well as for domestic and industrial equipment and buildings and is relevant to both researchers and industrial engineers.\nIf you would like an idea of the content/style of the book, a series of ‘Introduction to’ pages can be found here. These introductions are largely reproduced/adapted from the content of the book, albeit with considerably less detail."
},
{
"objectID": "introductions/POV/MDOF/MDOF.html",
"href": "introductions/POV/MDOF/MDOF.html",
"title": "Multi-degree of freedom systems",
"section": "",
"text": "So far we have covered, in reasonable detail, the dynamics of single and two DoF mass-spring systems. Whilst some physical systems accept this simple representation, for example isolated machinery or the vented loudspeaker, most require models of much greater complexity, including many DoFs. Indeed, real systems with distributed physical properties (mass/stiffness) are said to have \\(\\infty\\) DoFs. Using techniques such as the Finite Element Method we are able to approximate such contiuous systems by lumping together mass and stiffness behaviour to form finite number of DoFs. An example of this is shown in Figure 1, where a continuous rod with varying geometry is approximated by a discrete mass-spring system.\nThe long and short of it is that the modelling of real world systems inevitably involves the analysis of complex, but discrete, multi-DoF systems. And so on this page, we look to generalise some of the concepts and ideas previously covered to arbitrary multi-DoF systems.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Multi-degrees of freedom"
]
},
{
"objectID": "introductions/POV/MDOF/MDOF.html#building-system-matrices",
"href": "introductions/POV/MDOF/MDOF.html#building-system-matrices",
"title": "Multi-degree of freedom systems",
"section": "Building system matrices",
"text": "Building system matrices\n\n\n\n\n\n\nFigure 2: Forces imparted on mass 1.\n\n\n\nWhilst is is beyond the scope of this page to introduce the Finite Element method in any real detail, it is useful to introduce a means of quickly building the system matrices for mass spring systems without having to go through every mass, draw a free-body diagram and start writing out all the forces. We want a quick, programmatic, way of building these EoM. To do that we are going to return to a two DoF system to build a little intuition before generalising to any system we like.\nThe trickiest part of writing Newton’s 2nd law is making sure we ge the spring and damper forces all in the right direction. We do this by displacing each mass in turn, and imagining in what direction the restoring force acts. For example, considering mass 1 in Figure 2. We first displace mass 1 upwards by some amount \\(x_1\\) whilst keeping mass 2 fixed. Clearly, the spring \\(k_1\\) will act to pull the mass back down, hence the spring force is \\(-k_1 x_1\\). Let us now keep mass 1 fixed and instead displace mass 2 upwards by some amount \\(x_2\\). The spring \\(k_1\\) is now in compression, and as a result with push on mass 1 with an upwards force, \\(k_1 x_2\\). With the above in mind, the Newton’s 2nd law for mass 1 takes the form, \\[\n\\begin{align}\n m_1\\ddot{x} _1&= \\sum_i F_i\\\\\n &= F_{e_1} -k_1x_1 + k_1x_2\n\\end{align}\n\\] Rearranging to isolate the external force yields our EoM as, \\[\nF_{e_1} = m_1\\ddot{x}_1 + k_1 (x_1-x_2)\n\\]\n\n\n\n\n\n\nFigure 3: Forces imparted on mass 2.\n\n\n\nWe can now repeat this process for mass 2 using Figure 3. We first displace mass 2 and observed that both \\(k_1\\) and \\(k_2\\) develop forces that push/pull mass 2 downwards. Each spring contirbutes a force, and so the total force is \\(-k_1 x_2 - k_2 x_2\\). Next we displace mass 1 upwards, which generates an upwards pulling force on mass 2, \\(k_1x_1\\). 
Now Newton’s 2nd law for mass 2 is, \\[\n\\begin{align}\nm_2\\ddot{x}_2&= \\sum_i F_i\\\\\n&= F_{e_2} -k_1x_2 - k_2 x_2 + k_1 x_1\n\\end{align}\n\\] and its EoM is given by, \\[\n\\begin{align}\nF_{e_2} &= m_2\\ddot{x}_2 + k_1 (x_2-x_1) + k_2 x_2\\\\\n& = m_2\\ddot{x}_2 + (k_1+k_2)x_2 -k_1x_1\n\\end{align}\n\\]\nNotice from the above that when a mass is displaced its connected springs impart a negative restoring force (i.e. to bring it back to equilibrium), whilst when a neighboring mass is displaced a positive force is developed. This asymmetry can be seen in the EoM, \\[\n \\begin{align}\n F_{e_1} &= m_1\\ddot{x}_1 + k_1 (x_1{\\color{orange}-x_2})\\\\\n F_{e_2} & = m_2\\ddot{x}_2 + (k_1+k_2)x_2 {\\color{orange}-k_1x_1}\n \\end{align}\n\\] where the coupling terms are always negative. In matrix form, the off-diagonal stiffness terms are always negative whilst the diagonal terms are always positive, \\[\n \\left(\\begin{array}{c}\n F_{e_1} \\\\ F_{e_2}\n \\end{array}\\right) = \\left[\\begin{array}{cc}\n m_1 & 0 \\\\ 0 & m_2\n \\end{array}\\right] \\left(\\begin{array}{c}\n\\ddot{x}_1 \\\\ \\ddot{x}_2\n\\end{array}\\right) + \\left[\\begin{array}{cc}\n k_1 & -k_1 \\\\ -k_1 & k_1+k_2\n\\end{array}\\right]\\left(\\begin{array}{c}\n{x}_1 \\\\ {x}_2\n\\end{array}\\right)\n\\] Now, with the above in mind we can readily construct stiffness matrices for arbitrary mass-spring systems. All we need to do is remember that:\n\nThe \\(n\\)th diagonal of \\(\\mathbf{K}\\) is simply the sum of all spring stiffnesses connected to mass \\(n\\)\nIf mass \\(m\\) is connected to mass \\(n\\) via a spring, its negative stiffness goes into the \\(mn\\) and \\(nm\\) entries of \\(\\mathbf{K}\\).\n\nThe exact same notion can be applied to the damping terms and used to build \\(\\mathbf{C}\\). As said before, for a discrete mass-spring system the mass matrix is just diagonal with the mass of mass \\(n\\) in the \\(n\\)th diagonal of \\(\\mathbf{M}\\). 
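The two assembly rules above translate almost line-for-line into code. The numpy sketch below (illustrative; the `build_K` helper and its spring-list format are my own, not from the text) assembles the stiffness matrix of a ground–spring–mass chain directly from those rules:

```python
import numpy as np

def build_K(n_dof, springs):
    # Assemble a stiffness matrix from the two rules above.
    # springs: list of (i, j, k) with 0-based mass indices;
    # j=None means the spring connects mass i to ground.
    K = np.zeros((n_dof, n_dof))
    for i, j, k in springs:
        K[i, i] += k              # rule 1: stiffnesses sum on the diagonal
        if j is not None:
            K[j, j] += k
            K[i, j] -= k          # rule 2: negative coupling off-diagonals
            K[j, i] -= k
    return K

# A 3-DoF chain: ground-k1-m1-k2-m2-k3-m3-k4-ground (stiffness values illustrative)
k1, k2, k3, k4 = 1.0, 2.0, 3.0, 4.0
K = build_K(3, [(0, None, k1), (0, 1, k2), (1, 2, k3), (2, None, k4)])
```

For this chain the result matches the pattern [[k1+k2, -k2, 0], [-k2, k2+k3, -k3], [0, -k3, k3+k4]], and the same loop builds the damping matrix from a damper list.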
In case you need a little more convincing, shown in Figure 4 are three different 3-DOF systems, the stiffness matrices for which are given by, \\[\n\\mathbf{K} = \\left[\\begin{array}{c c c}\nk_1+k_2 & -k_2 & 0 \\\\\n-k_2 & k_2+k_3 & -k_3 \\\\\n0& -k_3 & k_3+k_4 \\\\\n\\end{array}\\right], \\quad\n\\left[\\begin{array}{c c c}\n k_1+k_2+k_5 & -k_2 & -k_5 \\\\\n -k_2 & k_2+k_3 & -k_3 \\\\\n -k_5& -k_3 & k_3+k_4 +k_5 \\\\\n\\end{array}\\right], \\quad\n\\left[\\begin{array}{c c c}\n k_1+k_2+k_5 & -k_2 & -k_5 \\\\\n -k_2 & k_2 & 0 \\\\\n -k_5& 0 & k_5 \\\\\n\\end{array}\\right]\n\\tag{2}\\]\n\n\n\n\n\n\nFigure 4: Some example 3-DOF systems with different spring connectivities. The stiffness matrix for each is given in Equation 2, from left to right.\n\n\n\nNow that we have a quick and programmatic way of building our system matrices, we want to start thinking about how we can go about solving Equation 1. However, there are some important concepts we might like to introduce first, particularly that of potential and kinetic energy (quadratic forms) and the notion of positive definiteness. These concepts are not really essential for what follows, so I have chucked them into the drop-down box below in case you are interested.\n\n\n\n\n\n\nPotential energy, kinetic energy and positive definiteness\n\n\n\n\n\nThe potential energy of an ideal spring given its final displacement \\(x_f\\) takes the form, \\[\n\\begin{align}\n V = -\\int_0^{x_f} F_s dx &= \\int_0^{x_f} kx dx \\\\ &=\\frac{1}{2} kx^2_f\n\\end{align}\n\\] where we have used \\(F_s = -k x\\). We see that the potential energy \\(V\\) is quadratic in \\(x_f\\), with \\(k\\) playing the role of the constant. 
For a multi-DoF system, the total potential energy can be written in a similar quadratic form.\nNoting that the potential energy depends only on displacement, for each mass we can write, \\[\nV_i = \\frac{1}{2} F_i x_i\n\\] For a system with \\(N\\) masses, the total potential energy is just the sum of each mass’s potential energy, \\[\nV = \\sum_i^N V_i = \\frac{1}{2} \\sum_i^N F_i x_i\n\\tag{3}\\] Now, in the absence of any external forces, the total force acting on \\(m_i\\) is due to the displacement of all directly connected masses and can be written generally as, \\[\nF_i = \\sum_j^N k_{ij} x_j\n\\tag{4}\\] Combining Equation 4 and Equation 3 above we have for the total potential energy of the system, \\[\nV = \\frac{1}{2} \\sum_i^N \\left( \\sum_j^N k_{ij} x_j \\right) x_i = \\frac{1}{2} \\sum_i^N \\sum_j^N k_{ij} x_j x_i\n\\] where \\(k_{ij}\\) are simply the elements of the stiffness matrix \\(\\mathbf{K}\\). We can write this in a more convenient matrix form, \\[\nV = \\frac{1}{2} \\mathbf{x}^{\\rm T} \\mathbf{K} \\mathbf{x}\n\\tag{5}\\] where it is noted that \\(V\\) remains a scalar term and that the form of Equation 5 is known as a quadratic form: the matrix equivalent of a quadratic equation.\n\n\n\n\n\n\nFigure 5: An unconstrained system can exhibit rigid body motion.\n\n\n\nMoving on to the kinetic energy now. The kinetic energy of a mass depends on its mass and velocity, \\[\nT_i = \\frac{1}{2} m_i\\dot{x}_i^2\n\\] Like the potential energy, the total kinetic energy (also a scalar quantity) is simply the sum of all the individual mass energies, \\[\nT = \\frac{1}{2} \\sum_i^N m_i\\dot{x}_i^2\n\\] which can also be written in quadratic form, \\[\nT = \\frac{1}{2} \\mathbf{\\dot{x}}^{\\rm T} \\mathbf{M}\\mathbf{\\dot{x}}\n\\]\nOn physical grounds, the kinetic energy \\(T\\) should always be positive. Hence, for any velocity vector \\(\\mathbf{\\dot{x}}\\), we should have that \\(\\mathbf{\\dot{x}}^{\\rm T} \\mathbf{M}\\mathbf{\\dot{x}}>0\\). 
A matrix that satisfies this inequality is said to be positive-definite.\nSimilarly, \\(V\\) should always be positive or zero for a given displacement \\(\\mathbf{x}\\). That is, for any displacement vector \\(\\mathbf{x}\\), we should have that \\(\\mathbf{x}^{\\rm T} \\mathbf{K} \\mathbf{x}\\geq 0\\). A matrix that satisfies this inequality is said to be positive-semi-definite. A positive value indicates a deformation of the system which incurs some potential energy. A zero value suggests that there is some set of displacements \\(\\mathbf{x}\\) that does not cause any deformation of the system, hence zero potential energy. This occurs when the system is not rigidly constrained and is able to move entirely as a single rigid body. An example of this is illustrated in Figure 5.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Multi-degrees of freedom"
]
},
{
"objectID": "introductions/POV/MDOF/MDOF.html#synchronous-motion",
"href": "introductions/POV/MDOF/MDOF.html#synchronous-motion",
"title": "Multi-degree of freedom systems",
"section": "Synchronous motion",
"text": "Synchronous motion\nWe are claiming above that, in the absence of damping, the EoM can be decoupled by using the modal matrix \\(\\mathbf{U}\\) as a coordinate transformation. To prove this we must first determine the modal matrix. This is done by considering the force free problem, \\[\n\\mathbf{M}\\ddot{\\mathbf{x}}(t) + \\mathbf{K}\\mathbf{x}(t) = \\mathbf{0}\n\\tag{8}\\] As for the TDoF case on the previous page, we are interested in synchronous motion. That is, solutions to Equation 8 where all displacements have the same time dependence. Mathematically, \\[\n\\mathbf{x}(t) = \\mathbf{u} g(t) \\quad \\quad x_i(t) = u_i g(t)\n\\tag{9}\\] where \\(\\mathbf{u}\\) describes the configuration of the system, and \\(g(t)\\) its time dependence.\nAssuming synchronous motion, i.e. substituting Equation 9 into our EoM, we have that \\[\n\\mathbf{M} \\mathbf{u} \\ddot{g}(t)+ \\mathbf{K} \\mathbf{u} g(t)= \\mathbf{0} \\quad \\quad\\mbox{or} \\quad \\quad \\ddot{g}(t) \\sum_j^N M_{ij} u_j + g(t) \\sum_j^N K_{ij} u_j = 0 \\quad \\mbox{for}\\quad i=1,\\, 2, \\, \\cdots, \\, N\n\\tag{10}\\] We can rewrite the element-wise form of Equation 10 for each of the \\(N\\) DoFs as, \\[\n-\\frac{\\ddot{g}(t)}{g(t)} = \\frac{\\sum_j^N K_{ij} u_j }{ \\sum_j^N M_{ij} u_j } \\quad \\quad \\mbox{for } i=1,\\, 2, \\, \\cdots, \\, N\n\\tag{11}\\] Note that the left hand side of Equation 11 does not depend on cooridnste index \\(i\\), nor does the right hand side depend on time \\(t\\). 
From this we conclude that the two ratios can only be equal if they are themselves equal to a constant, defined here as \\(\\lambda_i\\).\nFrom the above we can establish the following equations, \\[ \n\\ddot{g}(t) + \\lambda_i g(t) = 0\n\\tag{12}\\] \\[\n \\sum_j^N (K_{ij} - \\lambda M_{ij})u_j = 0 \\quad \\quad \\mbox{for } i=1,\\, 2, \\, \\cdots, \\, N\n\\] which are very similar to what we had previously for our TDOF system.\nLet us solve Equation 12 first, which is trivial at this point as we have already dealt with this form of SDOF equation several times. The solution has the form \\(g(t) = Ae^{st}\\), from which we obtain from Equation 12, \\[\ns^2+\\lambda_i = 0\\quad \\quad s = \\pm \\sqrt{-\\lambda_i}\n\\] If \\(\\lambda_i<0\\) then \\(s\\) is real and the solution is made up of one decaying and one growing exponential. This is not a synchronous oscillation. So, we must consider the case \\(\\lambda_i>0\\), which is ensured by defining \\(\\lambda_i =\\omega_i^2\\). Our solution then takes the form, \\[\ng(t) = A_1 e^{-i \\omega_i t} + A_2 e^{i\\omega_i t}\n\\quad \\mbox{or} \\quad g(t) = C \\cos(\\omega_i t - \\phi)\n\\tag{13}\\] from which we conclude that if synchronous motion is possible, its time dependence will be harmonic at frequency \\(\\omega_i\\).\nWith Equation 13 in hand, our synchronous EoM becomes, \\[\n\\mathbf{M} \\mathbf{u} \\ddot{g}(t)+ \\mathbf{K} \\mathbf{u} g(t)= \\mathbf{0} \\quad \\quad \\rightarrow \\quad \\quad -\\omega_i^2 \\mathbf{M} \\mathbf{u}+ \\mathbf{K} \\mathbf{u} = \\mathbf{0}\n\\] Written in a slightly different form we have that, \\[\n\\left( \\mathbf{K}-\\omega_i^2 \\mathbf{M}\\right) \\mathbf{u} = \\mathbf{0} \\quad \\quad \\mbox{or} \\quad \\quad \\omega_i^2 \\mathbf{M} \\mathbf{u} =\\mathbf{K} \\mathbf{u} \n\\tag{14}\\] This equation does not permit a solution for any value of \\(\\omega_i\\); only particular values are valid, of which there are \\(N\\). 
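In practice, Equation 14 is a generalised eigenvalue problem and is solved numerically rather than via the determinant. A short sketch (scipy is assumed available; the 2-DoF matrices are illustrative, corresponding to a ground-spring-mass-spring-mass-spring-ground chain with unit masses and stiffnesses):

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative 2-DoF system with unit masses and unit spring stiffnesses.
M = np.eye(2)
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# Solve K u = omega^2 M u (Eq. 14); eigh returns eigenvalues in
# ascending order, here omega^2 = 1 and 3.
w2, U = eigh(K, M)
omega = np.sqrt(w2)  # the natural frequencies omega_r
```

Each column of `U` is the modal vector associated with the corresponding entry of `omega`, i.e. a numerical solution of Equation 16.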
Finding the values of \\(\\omega_i\\) is called the characteristic/eigenvalue problem.\nTo find the characteristic values of Equation 14 we note that \\(\\left(\\mathbf{K}-\\omega^2 \\mathbf{M}\\right) \\mathbf{u} = \\mathbf{0}\\) only has non-trivial solutions if the determinant of the bracketed matrix goes to 0 (we justified this requirement on the previous page!), \\[\n\\mbox{det}\\left( \\mathbf{K}-\\omega^2 \\mathbf{M} \\right) = 0\n\\tag{15}\\] Equation 15 is a polynomial of degree \\(N\\) in \\(\\omega^2\\), and has \\(N\\) distinct roots. These roots are the characteristic/eigenvalues \\(\\omega_r, \\quad r = 1,\\, 2, \\, \\cdots ,\\, N\\). Associated with each \\(\\omega_r\\) (previously denoted \\(\\omega_i\\)) is a unique vector \\(\\mathbf{u}_r\\) whose elements define the relative motion of the coordinates - these are the so-called modal vectors, \\[\n\\left( \\mathbf{K}-\\omega_r^2 \\mathbf{M}\\right) \\mathbf{u}_r = \\mathbf{0}\n\\tag{16}\\]",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Multi-degrees of freedom"
]
},
{
"objectID": "introductions/POV/MDOF/MDOF.html#orthogonality-and-mass-normalisation-of-mode-shapes",
"href": "introductions/POV/MDOF/MDOF.html#orthogonality-and-mass-normalisation-of-mode-shapes",
"title": "Multi-degree of freedom systems",
"section": "Orthogonality and mass normalisation of mode shapes",
"text": "Orthogonality and mass normalisation of mode shapes\nThe natural modes possess a very important and useful property - orthogonality. It is the orthogonality of the mode shapes that leads to the decoupling of our EoM.\nConventional orthogonality means that the inner product of two vectors is equal to zero, \\(\\mathbf{x}^{\\rm T} \\mathbf{y} = 0\\). The orthogonality of the natural modes is slightly different. We say that they are orthogonality with respect to the mass and stiffness matrices, meaning, \\[\n \\mathbf{u}_s^{\\rm T} \\mathbf{M} \\mathbf{u}_r = 0 \\quad \\mbox{and} \\quad \\mathbf{u}_s^{\\rm T}\\mathbf{K}\\mathbf{u}_r = \\mathbf{0} \\quad \\quad \\mbox{for} \\quad r\\neq s\n\\tag{17}\\] We proved this fact on the previous page for a two DOF system. If you recall, it involved quite a lot of algebra… A more general `proof’, which requires some tricks from linear algebra, is detailed in the box below. I say ‘proof’ as we will use some linear algebra relations without explicitly proving them. 
If you are happy to accept Equation 17 on face value, feel free to skip the box entirely!\n\n\n\n\n\n\nProof of orthogonality (kind of…)\n\n\n\n\n\nTo prove that the mode shapes are indeed orthogonal with respect to the mass and stiffness matrices let us consider 2 distinct solutions (\\(\\omega_r^2, \\, \\mathbf{u}_r\\)) and (\\(\\omega_s^2, \\, \\mathbf{u}_s\\)), each of which satisfies Equation 16, \\[\n\\mathbf{K}\\mathbf{u}_r = \\omega_r^2\\mathbf{M} \\mathbf{u}_r\n\\tag{18}\\] \\[\n\\mathbf{K}\\mathbf{u}_s = \\omega_s^2\\mathbf{M} \\mathbf{u}_s \n\\tag{19}\\] We begin by pre-multiplying Equation 18 by \\(\\mathbf{u}_s^{\\rm T}\\) and Equation 19 by \\(\\mathbf{u}_r^{\\rm T}\\), \\[\n\\mathbf{u}_s^{\\rm T}\\mathbf{K}\\mathbf{u}_r = \\omega_r^2 \\mathbf{u}_s^{\\rm T}\\mathbf{M} \\mathbf{u}_r\n\\tag{20}\\] \\[\n\\mathbf{u}_r^{\\rm T}\\mathbf{K}\\mathbf{u}_s = \\omega_s^2\\mathbf{u}_r^{\\rm T}\\mathbf{M} \\mathbf{u}_s \n\\tag{21}\\] Next, we transpose Equation 20 and subtract it from Equation 21 (noting that the stiffness and mass matrices are symmetric, i.e. \\(\\mathbf{K}=\\mathbf{K}^{\\rm T}\\) and \\(\\mathbf{M}=\\mathbf{M}^{\\rm T}\\)), \\[\n0 = (\\omega_s^2 - \\omega_r^2) \\mathbf{u}_s^{\\rm T} \\mathbf{M} \\mathbf{u}_r\n\\] Now, it’s time to call on a useful fact from linear algebra. For almost all \\(N\\times N\\) matrices, it is the case that their \\(N\\) eigenvalues are distinct, i.e. \\(\\omega_s^2 \\neq \\omega_r^2\\). We won’t prove this here, but you can have a google of it if you want to know more. 
Using this fact, we conclude that, \\[\n \\mathbf{u}_s^{\\rm T} \\mathbf{M} \\mathbf{u}_r = 0 \\quad \\quad \\mbox{for} \\quad r\\neq s\n\\] This is our orthogonality with respect to the mass matrix \\(\\mathbf{M}\\)!\nNow, recalling Equation 20, \\[\n \\mathbf{u}_s^{\\rm T}\\mathbf{K}\\mathbf{u}_r = \\omega_r^2 \\overbrace{\\mathbf{u}_s^{\\rm T}\\mathbf{M} \\mathbf{u}_r}^{=0}\n\\] we see immediately that \\(\\mathbf{u}_r\\) and \\(\\mathbf{u}_s\\) are also orthogonal with respect to the stiffness matrix \\(\\mathbf{K}\\), \\[\n\\mathbf{u}_s^{\\rm T}\\mathbf{K}\\mathbf{u}_r = 0 \\quad \\quad \\mbox{for} \\quad r\\neq s\n\\]\n\n\n\nTaking instead the inner product of mode shape \\(\\mathbf{u}_r\\) with itself we have, \\[\n\\mathbf{u}_r^{\\rm T} \\mathbf{M} \\mathbf{u}_r \\neq 0 \\quad \\quad \\mathbf{u}_r^{\\rm T} \\mathbf{K} \\mathbf{u}_r \\neq 0 \\quad \\quad \\mbox{for } r=1,\\, 2, \\, \\cdots, \\, N\n\\] where, on account of the modal vectors \\(\\mathbf{u}_r\\) only being unique up to a scalar multiple (meaning that if \\(\\mathbf{u}_r\\) is a solution, so is \\(\\alpha \\mathbf{u}_r\\)), the right hand sides are not well defined. It is therefore convenient to make \\(\\mathbf{u}_r\\) unique by normalising it in some way. 
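Both orthogonality relations are easy to verify numerically. In the sketch below (scipy is assumed available; the 3-DoF matrices are illustrative), scipy's generalised eigensolver conveniently returns modes normalised so that u_r^T M u_r = 1, hence U^T M U should come out as the identity and U^T K U as a diagonal matrix of the eigenvalues:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative 3-DoF mass and stiffness matrices (symmetric, positive definite).
M = np.diag([1.0, 2.0, 3.0])
K = np.array([[3.0, -2.0, 0.0],
              [-2.0, 5.0, -3.0],
              [0.0, -3.0, 7.0]])

w2, U = eigh(K, M)  # columns of U satisfy u_r^T M u_r = 1

I_check = U.T @ M @ U       # identity, by orthogonality + normalisation
Omega_check = U.T @ K @ U   # diagonal matrix of the omega_r^2 values
```

Off-diagonal entries of both products vanish (to machine precision), which is exactly the orthogonality of Equation 17 at work.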
As we will see, a particularly useful normalisation is so-called mass normalisation, where the elements of \\(\\mathbf{u}_r\\) are normalised such that, \\[\n\\mathbf{u}_r^{\\rm T} \\mathbf{M} \\mathbf{u}_r = 1 \\quad \\quad \\mbox{for } r=1,\\, 2, \\, \\cdots, \\, N\n\\] where \\(\\mathbf{M}\\) is the mass matrix, i.e. each mode is scaled to have unit modal mass.\nThe benefit of this particular normalisation is clear if we pre-multiply Equation 16 by \\(\\mathbf{u}_r^{\\rm T}\\), \\[\n \\mathbf{u}_r^{\\rm T} \\left( \\mathbf{K}-\\omega_r^2 \\mathbf{M}\\right) \\mathbf{u}_r = \\mathbf{u}_r^{\\rm T}\\mathbf{K}\\mathbf{u}_r - \\omega_r^2 = {0}\n\\] as we see immediately that, \\[\n \\mathbf{u}_r^{\\rm T}\\mathbf{K}\\mathbf{u}_r = \\omega_r^2\n\\]\nBy combining all the mass normalised natural modes to build the modal matrix, \\[\n\\mathbf{U} = \\left[\\begin{array}{c c c c}\n \\mathbf{u}_1 & \\mathbf{u}_2 & \\cdots & \\mathbf{u}_N\n\\end{array}\\right]\n\\] we are able to diagonalise the mass and stiffness matrices \\(\\mathbf{M}\\) and \\(\\mathbf{K}\\), \\[\n\\mathbf{U}^{\\rm T} \\mathbf{M} \\mathbf{U} = \\mathbf{I} \\quad \\quad\n\\mathbf{U}^{\\rm T} \\mathbf{K} \\mathbf{U} = \\mathbf{\\Omega} \\quad \\quad \\mbox{with} \\quad \\quad \\mathbf{\\Omega} = \\left[\\begin{array}{cccc}\n \\omega_1^2 & && \\\\\n & \\omega_2^2 && \\\\\n & &\\ddots &\\\\\n &&& \\omega_N^2\n \\end{array}\\right]\n\\] By diagonalising the mass and stiffness matrices, we are effectively decoupling our EoM! Recalling Equation 6 and its surrounding discussion, this is exactly what we have been looking for: a transformation matrix \\(\\mathbf{U}\\) that simultaneously diagonalises the mass and stiffness matrices. As we have just seen, the mode shape matrix does precisely this.",
"crumbs": [
"A quick introduction to:",
"Some Basic Vibration Stuff",
"Multi-degrees of freedom"
]
},
{
"objectID": "introductions/POV/MDOF/MDOF.html#wait-what-just-happened",
"href": "introductions/POV/MDOF/MDOF.html#wait-what-just-happened",
"title": "Multi-degree of freedom systems",
"section": "Wait, what just happened?",
"text": "Wait, what just happened?\nIn summary, by using the mode shape matrix \\(\\mathbf{U}\\) as a coordinate transformation as per Equation 6, \\[\n\\mathbf{x}(t) = \\mathbf{U}\\mathbf{q}(t)\n\\tag{22}\\] we are able to diagonalise our mass and stiffness matrices, \\[\n\\underbrace{\\mathbf{U}^{\\rm T}\\mathbf{M}\\mathbf{U}}_{\\mathbf{\\tilde{M}}} \\ddot{\\mathbf{q}}(t)+ \\underbrace{\\mathbf{U}^{\\rm T}\\mathbf{K}\\mathbf{U}}_{\\mathbf{\\tilde{K}}}\\mathbf{q}(t)= \\underbrace{\\mathbf{U}^{\\rm T}\\mathbf{F}(t)}_{\\mathbf{Q}(t)}\n\\] which effectively uncouples our EoM giving us \\(N\\) independent SDOF EoMs, \\[\n\\mathbf{\\tilde{M}}\\ddot{\\mathbf{q}}(t) + \\mathbf{\\tilde{K}}\\mathbf{q}(t) = \\mathbf{Q}(t)\\quad \\quad\\mbox{or} \\quad \\quad\n\\tilde{M}_j \\ddot{q}_j(t) + \\tilde{K}_{j}q_j(t) = Q_j(t)\n\\] which we can solve for the modal coordinates \\(q_j(t)\\) using our SDOF techniques from earlier. After solving each modal EoM, we can reconstruct the solution in the physical coordinates by substituting our modal solution \\(\\mathbf{q}(t)\\) into our initial coordinate transformation in Equation 22. With this approach we have completely avoided having to solve any coupled EoMs!",
"crumbs": [