In subsurface imaging, learning the mapping from velocity maps to seismic waveforms (the forward problem) and from seismic waveforms to velocity maps (the inverse problem) is important for several applications. While traditional techniques for solving forward and inverse problems are computationally prohibitive, there is growing interest in leveraging recent advances in deep learning to learn the mapping between velocity maps and seismic waveform images directly from data.
Despite the variety of architectures explored in previous works, several open questions remain unanswered, such as the effect of latent space sizes, the importance of manifold learning, the complexity of translation models, and the value of jointly solving forward and inverse problems.
We propose a unified framework, termed the Generalized Forward-Inverse (GFI) framework, to systematically characterize prior research in this area, building on the assumptions of manifolds and latent space translations.
<p style="text-align:justify;">
The GFI framework is based on two assumptions. First, according to the manifold assumption, we assume that the velocity maps v ∈ 𝒱 and seismic waveforms p ∈ 𝒫 can be projected to their corresponding latent space representations, ṽ and p̃, respectively, which can be mapped back to their reconstructions in the original space, v̂ and p̂. Note that the sizes of the latent spaces can be smaller or larger than those of the original spaces; further, the size of ṽ may not match the size of p̃. Second, according to the latent space translation assumption, we assume that the problem of learning forward and inverse mappings in the original spaces of velocity and waveforms can be reformulated as learning translations in their latent spaces.
</p>
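
For illustration, the following is a minimal PyTorch-style sketch of this decomposition. It is a sketch only, not our released implementation: the module names <code>E_v</code>, <code>D_v</code>, <code>E_p</code>, <code>D_p</code>, <code>L_v2p</code>, and <code>L_p2v</code> are illustrative placeholders for whatever encoders, decoders, and translation models a concrete instantiation chooses.

```python
import torch.nn as nn

class GFI(nn.Module):
    """Generalized Forward-Inverse decomposition (illustrative sketch).

    (E_v, D_v) and (E_p, D_p) are the velocity and waveform encoder-decoder
    pairs (manifold assumption); L_v2p and L_p2v are the latent space
    translation models (latent space translation assumption).
    """

    def __init__(self, E_v, D_v, E_p, D_p, L_v2p, L_p2v):
        super().__init__()
        self.E_v, self.D_v = E_v, D_v
        self.E_p, self.D_p = E_p, D_p
        self.L_v2p, self.L_p2v = L_v2p, L_p2v

    def forward_problem(self, v):
        # v -> v_tilde -> p_tilde -> p_hat
        return self.D_p(self.L_v2p(self.E_v(v)))

    def inverse_problem(self, p):
        # p -> p_tilde -> v_tilde -> v_hat
        return self.D_v(self.L_p2v(self.E_p(p)))

    def reconstruct(self, v, p):
        # Autoencoding paths used to learn the two manifolds.
        return self.D_v(self.E_v(v)), self.D_p(self.E_p(p))

# Example usage (with concrete modules plugged in):
#   gfi = GFI(E_v, D_v, E_p, D_p, L_v2p, L_p2v)
#   p_hat = gfi.forward_problem(v); v_hat = gfi.inverse_problem(p)
```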
<ol>
<li>
<p style="text-align:justify;">
<b>Latent U-Net Architecture: </b> We propose a novel architecture, termed Latent U-Net, that solves the forward and inverse problems using two latent space translation models implemented as U-Nets. Latent U-Net uses ConvNet backbones for both encoder-decoder pairs, (<code>E</code><sub>v</sub>, <code>D</code><sub>v</sub>) and (<code>E</code><sub>p</sub>, <code>D</code><sub>p</sub>), to project v and p to lower-dimensional representations. We also constrain the latent spaces of ṽ and p̃ to be of identical size, i.e., dim(ṽ) = dim(p̃), so that we can train two separate U-Net models to implement the latent space mappings L<sub>ṽ → p̃</sub> and L<sub>p̃ → ṽ</sub>, as in the sketch below.
</p>
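A minimal one-level U-Net sketch of such a latent translator is shown below. The channel width and the assumption that latents are spatial feature maps of matching shape are illustrative, not the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    """One-level U-Net translating a latent tensor into another of the same
    shape, e.g. v_tilde -> p_tilde (illustrative channel width)."""

    def __init__(self, channels=64):
        super().__init__()
        self.enc = conv_block(channels, channels)
        self.down = nn.Sequential(nn.MaxPool2d(2), conv_block(channels, 2 * channels))
        self.up = nn.ConvTranspose2d(2 * channels, channels, 2, stride=2)
        self.dec = conv_block(2 * channels, channels)  # skip connection doubles channels

    def forward(self, z):
        e = self.enc(z)                      # encoder features at full resolution
        b = self.down(e)                     # bottleneck at half resolution
        u = self.up(b)                       # upsample back to full resolution
        return self.dec(torch.cat([u, e], dim=1))

# Two independent translators, one per direction, as in Latent U-Net:
#   L_v2p, L_p2v = TinyUNet(), TinyUNet()
```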
</li>
<li>
<p style="text-align:justify;">
<b>Invertible X-Net Architecture: </b> We propose another novel architecture within the GFI framework, termed Invertible X-Net, to answer
the question: “can we learn a single latent space translation model that can simultaneously solve both forward and inverse problems?” We employ an invertible U-Net in the latent spaces of velocity and waveforms, which can be constrained to be of the same size (just like Latent U-Net), i.e., dim(ṽ) = dim(p̃).
</p>
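To illustrate the invertibility idea, the sketch below shows a generic additive coupling block (RealNVP-style). It is a simplified stand-in, not the invertible U-Net actually used in Invertible X-Net: a single set of parameters defines both the forward map and its exact inverse, so one translation model can serve both problems.

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Generic invertible coupling block: the same parameters define the
    forward map (v_tilde -> p_tilde) and its exact inverse
    (p_tilde -> v_tilde). Channel count is illustrative."""

    def __init__(self, channels=64):
        super().__init__()
        half = channels // 2
        self.net = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1), nn.ReLU(),
            nn.Conv2d(half, half, 3, padding=1),
        )

    def forward(self, z):
        # Split channels; transform one half conditioned on the other.
        z1, z2 = z.chunk(2, dim=1)
        return torch.cat([z1, z2 + self.net(z1)], dim=1)

    def inverse(self, y):
        # Exact inverse: subtract the same conditioned transform.
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.net(y1)], dim=1)

# inverse(forward(z)) recovers z exactly, so a single latent translation
# model covers both the forward and the inverse problem.
```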