Commit 75eff01

Update index.html
1 parent 56c0cb2 commit 75eff01

File tree: 1 file changed (+19 −14 lines)


projects/GFI-framework/index.html

Lines changed: 19 additions & 14 deletions
@@ -79,7 +79,7 @@ <h2 style="text-align: center;">ICLR 2025</h2>
 
 <!-- Abstract -->
 <h2 class="banded"> Abstract </h2>
-<p>
+<p style="text-align:justify;">
 In subsurface imaging, learning the mapping from velocity maps to seismic waveforms (forward problem) and waveforms to velocity (inverse problem) is important for several applications. While traditional techniques for solving forward and inverse problems are computationally prohibitive, there is a growing interest in leveraging recent advances in deep learning to learn the mapping between velocity maps and seismic waveform images directly from data.
 Despite the variety of architectures explored in previous works, several open questions remain unanswered such as the effect of latent space sizes, the importance of manifold learning, the complexity of translation models, and the value of jointly solving forward and inverse problems.
 We propose a unified framework to systematically characterize prior research in this area termed the Generalized Forward-Inverse (GFI) framework, building on the assumption of manifolds and latent space translations.
@@ -89,36 +89,41 @@ <h2 class="banded"> Abstract </h2>
 </p>
 
 <h2 class="banded">Method Overview</h2>
-<p>
-We propose Generalized Forward-Inverse (GFI) framework based on two assumptions. First, according to the manifold assumption, we assume that the velocity maps v ∈ v&#119985; and seismic
-waveforms p ∈ p&#119985; can be projected to their corresponding latent space representations, v&#771; and p&#771;, respectively, which can be mapped back to their reconstructions in the original space, v&#770; and p&#770;.
-Note that the sizes of the latent spaces can be smaller or larger than the original spaces. Further, the size of v&#771; may not match with the size of p&#771;. Second, according to the latent space
-translation assumption, we assume that the problem of learning forward and inverse mappings in the original spaces of velocity and waveforms can be reformulated as learning translations in their
-latent spaces.
+<p style="text-align:justify;">
+We propose Generalized Forward-Inverse (GFI) framework based on two assumptions. First, according to the manifold assumption, we assume that the velocity maps v ∈ v&#119985; and seismic
+waveforms p ∈ p&#119985; can be projected to their corresponding latent space representations, v&#771; and p&#771;, respectively, which can be mapped back to their reconstructions in the original space, v&#770; and p&#770;.
+Note that the sizes of the latent spaces can be smaller or larger than the original spaces. Further, the size of v&#771; may not match with the size of p&#771;. Second, according to the latent space
+translation assumption, we assume that the problem of learning forward and inverse mappings in the original spaces of velocity and waveforms can be reformulated as learning translations in their
+latent spaces.
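The two assumptions in the paragraph above (manifold projection, then translation in latent space) can be sketched in a few lines. This is a toy NumPy illustration only: the dimensions are made up, and random linear maps stand in for the learned encoders, decoders, and translators; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: velocity maps and waveforms live in different
# original spaces, and their latent sizes need not match (assumption 1).
DIM_V, DIM_P = 70 * 70, 1000        # flattened original spaces (illustrative)
LAT_V, LAT_P = 128, 256             # latent sizes may differ

# Random linear stand-ins for the learned encoder/decoder pairs.
E_v = rng.normal(size=(LAT_V, DIM_V)) / np.sqrt(DIM_V)   # v -> v_tilde
D_v = rng.normal(size=(DIM_V, LAT_V)) / np.sqrt(LAT_V)   # v_tilde -> v_hat
E_p = rng.normal(size=(LAT_P, DIM_P)) / np.sqrt(DIM_P)   # p -> p_tilde
D_p = rng.normal(size=(DIM_P, LAT_P)) / np.sqrt(LAT_P)   # p_tilde -> p_hat

# Latent space translations (assumption 2): forward and inverse
# mappings are learned between latents, not between original spaces.
L_v2p = rng.normal(size=(LAT_P, LAT_V)) / np.sqrt(LAT_V)
L_p2v = rng.normal(size=(LAT_V, LAT_P)) / np.sqrt(LAT_P)

def forward_problem(v):
    """Velocity map -> predicted waveform, via the latent translation."""
    return D_p @ (L_v2p @ (E_v @ v))

def inverse_problem(p):
    """Waveform -> predicted velocity map, via the latent translation."""
    return D_v @ (L_p2v @ (E_p @ p))

v = rng.normal(size=DIM_V)
p_hat = forward_problem(v)          # lives in the waveform space
v_hat = inverse_problem(p_hat)      # lives in the velocity space
```

Note the framework deliberately leaves the latent sizes free (here 128 vs. 256); specific architectures below add constraints on top of this skeleton.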
 </p>
 <ol>
 <li>
-<b>Latent U-Net Architecture: </b> We propose a novel architecture to solve forward and inverse problems using two latent space translation models implemented using U-Nets, termed Latent U-Net. Latent
-U-Net uses ConvNet backbones for both encoder-decoder pairs: (<code>E</code><sub>v</sub>, <code>D</code><sub>v</sub>) and (<code>E</code><sub>p</sub>, <code>D</code><sub>p</sub>), to project
-v and p to lower-dimensional representations. We also constrain the sizes of the latent spaces of
-v&#771; and p&#771; to be identical, i.e., dim(v&#771;) = dim(p&#771;), so that we can train two separate U-Net models to implement the latent space mappings L<sub>v&#771; &rarr; p&#771;</sub> and L<sub>p&#771; &rarr; v&#771;</sub>.
+<p style="text-align:justify;">
+<b>Latent U-Net Architecture: </b> We propose a novel architecture to solve forward and inverse problems using two latent space translation models implemented using U-Nets, termed Latent U-Net. Latent
+U-Net uses ConvNet backbones for both encoder-decoder pairs: (<code>E</code><sub>v</sub>, <code>D</code><sub>v</sub>) and (<code>E</code><sub>p</sub>, <code>D</code><sub>p</sub>), to project
+v and p to lower-dimensional representations. We also constrain the sizes of the latent spaces of
+v&#771; and p&#771; to be identical, i.e., dim(v&#771;) = dim(p&#771;), so that we can train two separate U-Net models to implement the latent space mappings L<sub>v&#771; &rarr; p&#771;</sub> and L<sub>p&#771; &rarr; v&#771;</sub>.
+</p>
 
 <div class="latent_unet">
 <figure>
-<img src="./static/images/LatentU-Net.png" alt="Latent U-Net architecture" loading="lazy" width=45%>
+<img src="./static/images/LatentU-Net.png" alt="Latent U-Net architecture" loading="lazy" width=30%>
 <figcaption> Latent U-Net architecture </figcaption>
 </figure>
 </div>
 
 </li>
 <li>
-<b>Invertible X-Net Architecture: </b> We propose another novel architecture within the GFI framework termed Invertible X-Net to answer
+<p style="text-align:justify;">
+<b>Invertible X-Net Architecture: </b> We propose another novel architecture within the GFI framework termed Invertible X-Net to answer
 the question: “can we learn a single latent space translation model that can simultaneously solve
 both forward and inverse problems?” We employ invertible U-Net in the latent spaces of velocity and waveforms, which can be constrained to be of the same size (just like Latent-UNets), i.e.,
 dim(v&#771;) = dim(p&#771;).
+</p>
+
 <div class="inv_xnet">
 <figure>
-<img src="./static/images/InvertibleX-Net.png" alt="Invertible X-Net architecture" loading="lazy" width=45%>
+<img src="./static/images/InvertibleX-Net.png" alt="Invertible X-Net architecture" loading="lazy" width=30%>
 <figcaption> Invertible X-Net architecture </figcaption>
 </figure>
 </div>
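The key Invertible X-Net property, one bijective latent translation serving both directions, can be sketched with an orthogonal matrix standing in for the invertible U-Net (its inverse is exactly its transpose). This is a toy stand-in for illustration; the actual model uses invertible U-Net coupling layers, not a linear map.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invertible X-Net idea: ONE invertible latent translation solves both
# the forward and the inverse problem, with dim(v_tilde) = dim(p_tilde).
LAT = 128
Q, _ = np.linalg.qr(rng.normal(size=(LAT, LAT)))  # orthogonal => invertible

def translate_forward(v_tilde):
    """v latent -> p latent (forward problem, in latent space)."""
    return Q @ v_tilde

def translate_inverse(p_tilde):
    """p latent -> v latent: the SAME model run in reverse."""
    return Q.T @ p_tilde

v_tilde = rng.normal(size=LAT)
recovered = translate_inverse(translate_forward(v_tilde))
print(np.allclose(recovered, v_tilde))   # True: the round trip is exact
```

By construction the inverse direction undoes the forward one, which is the consistency the two unconstrained Latent U-Net translators cannot guarantee.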
