We propose the Generalized Forward-Inverse (GFI) framework based on two assumptions. First, according to the manifold assumption, we assume that the velocity maps v ∈ 𝒱 and seismic waveforms p ∈ 𝒫 can be projected to their corresponding latent space representations, ṽ and p̃, respectively, which can be mapped back to their reconstructions in the original space, v̂ and p̂.
Note that the sizes of the latent spaces can be smaller or larger than the original spaces. Further, the size of ṽ may not match the size of p̃. Second, according to the latent space translation assumption, we assume that the problem of learning forward and inverse mappings in the original spaces of velocity and waveforms can be reformulated as learning translations in their latent spaces.
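The two assumptions above can be sketched in a toy form. The sketch below uses hypothetical linear encoders/decoders (`E_v`, `D_v`, `E_p`, `D_p`) and latent translators (`T_fwd`, `T_inv`) as stand-ins for learned networks; the dimensions are illustrative and do not match real OpenFWI shapes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only; real velocity maps and waveforms are larger).
DIM_V, DIM_P = 16, 24      # flattened sizes of v and p in the original spaces
DIM_VL, DIM_PL = 8, 12     # latent sizes; note dim(v~) need not equal dim(p~)

# Hypothetical linear encoders/decoders standing in for learned networks.
E_v = rng.normal(size=(DIM_VL, DIM_V)); D_v = np.linalg.pinv(E_v)
E_p = rng.normal(size=(DIM_PL, DIM_P)); D_p = np.linalg.pinv(E_p)

# Latent-space translators replacing the original-space forward/inverse maps.
T_fwd = rng.normal(size=(DIM_PL, DIM_VL))  # v~ -> p~ (forward problem)
T_inv = rng.normal(size=(DIM_VL, DIM_PL))  # p~ -> v~ (inverse problem)

v = rng.normal(size=DIM_V)
v_lat = E_v @ v                  # project onto latent space (manifold assumption)
p_hat = D_p @ (T_fwd @ v_lat)    # forward: translate in latent space, then decode
v_rec = D_v @ v_lat              # reconstruction v^ back in the original space

print(p_hat.shape, v_rec.shape)  # (24,) (16,)
```

In a trained GFI model the encoders, decoders, and translators would be neural networks fit to data; the point here is only the data flow: original space → latent space → latent translation → decode.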
<b>Invertible X-Net Architecture: </b> We propose another novel architecture within the GFI framework, termed Invertible X-Net, to simultaneously solve both the forward and inverse problems. The Invertible X-Net architecture employs an invertible U-Net (IU-Net) in the latent spaces of velocity and waveforms to learn a bijective translation between them. The architecture also offers several
key advantages over baselines. First, it simultaneously addresses both the forward and inverse problems within a single model architecture, whereas other baselines typically require training separate
models for each task (e.g., Latent U-Net and Auto-Linear), leading to greater parameter efficiency. Second, the use of IU-Net ensures that the mappings between the latent spaces of velocity maps
and seismic waveforms are bijective, guaranteeing a one-to-one mapping between these representations, a property not necessarily true for other models such as Latent U-Nets and Auto-Linear. Third, the bi-directional training of the forward and inverse problems introduces a strong regularization effect, as the gradients of both the forward and inverse losses affect the parameters of f<sub>IU-Net</sub>, thereby
affecting both forward and inverse performance.
Fourth, the architecture can be trained with unpaired examples using a cycle-consistency loss. We consider both variants of Invertible X-Net (with and without cycle loss) in our experiments to
demonstrate its effect on generalization performance.
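The bijectivity property can be illustrated with a standard invertible building block. The sketch below uses a RealNVP-style additive coupling layer, a common component of invertible networks; it is not the paper's actual IU-Net internals, and the `shift` function is a hypothetical stand-in for a learned subnetwork. The same layer computes both directions exactly, which is what makes a single model usable for both forward and inverse translation.

```python
import numpy as np

def coupling_forward(z, shift_net):
    # Additive coupling: split the vector, transform one half conditioned
    # on the other. Invertible by construction, whatever shift_net is.
    z1, z2 = np.split(z, 2)
    return np.concatenate([z1, z2 + shift_net(z1)])

def coupling_inverse(y, shift_net):
    # Exact inverse: subtract the same conditional shift.
    y1, y2 = np.split(y, 2)
    return np.concatenate([y1, y2 - shift_net(y1)])

# Hypothetical "subnetwork": any function of the first half works.
shift = lambda h: np.tanh(h) * 2.0

rng = np.random.default_rng(1)
z = rng.normal(size=8)
y = coupling_forward(z, shift)       # latent translation, e.g. v~ -> p~
z_back = coupling_inverse(y, shift)  # exact inverse, p~ -> v~

print(np.allclose(z, z_back))  # True: the mapping is bijective
```

Stacking such layers (with permutations between them) yields an expressive map that remains exactly invertible, so no information is lost between the two latent spaces.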
We consider the OpenFWI collection of datasets, comprising multi-structural benchmark datasets for DL4SI grouped into the Vel, Fault, and Style families. We compare Latent U-Net and Invertible X-Net on these datasets against several baseline methods for both forward and inverse problems.
For quantitative comparisons, we use Mean Absolute Error (MAE), Mean Squared Error (MSE), and Structural Similarity (SSIM) as evaluation metrics, since no single metric alone is fully comprehensive. MAE captures pixel-level accuracy while SSIM highlights structural similarity.
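These metrics can be sketched in a few lines. MAE and MSE are standard; the `ssim_global` below is a simplified, non-windowed form of SSIM shown only to make the formula concrete — the usual metric averages this quantity over local windows (e.g., `skimage.metrics.structural_similarity`), and the input arrays here are made-up toy data.

```python
import numpy as np

def mae(a, b):
    # Mean Absolute Error: average pixel-level deviation.
    return np.mean(np.abs(a - b))

def mse(a, b):
    # Mean Squared Error: penalizes large deviations more heavily.
    return np.mean((a - b) ** 2)

def ssim_global(a, b, c1=1e-4, c2=9e-4):
    # Global (non-windowed) SSIM for data in [0, 1]; the standard metric
    # averages this over local windows. c1, c2 stabilize the division.
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (a.var() + b.var() + c2)
    )

pred = np.array([[0.1, 0.2], [0.3, 0.4]])
true = np.array([[0.1, 0.2], [0.3, 0.5]])
print(round(mae(pred, true), 4), round(mse(pred, true), 4))  # 0.025 0.0025
print(0.0 < ssim_global(pred, true) <= 1.0)                  # True
```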
Figure 4: Comparison of Latent U-Nets (Small and Large), Invertible X-Net, and Invertible X-Net (Cycle) with different baseline methods across different OpenFWI datasets.