Commit 5d53897

Update index.html
1 parent da44882 commit 5d53897

1 file changed: +8 −2 lines changed

projects/GFI-framework/index.html

Lines changed: 8 additions & 2 deletions
@@ -79,6 +79,9 @@ <h2 class="subtitle is-4 has-text-weight-bold">ICLR 2025</h2>
 <img src="./static/images/GFI_Framework.png" alt="GFI Framework" loading="lazy" width=25%>
 <figcaption> Generalized Forward-Inverse Framework </figcaption>
 </figure>
+<p>
+Figure 1: A unified framework for solving forward and inverse problems in subsurface imaging.
+</p>
 </div>
 <hr style="width: 60%; margin: 2rem auto;">
 </section>
@@ -104,14 +107,17 @@ <h2 class="title is-3"> Abstract </h2>
 <h2 class="title is-3">Method Overview</h2>
 <p>
 We propose Generalized Forward-Inverse (GFI) framework based on two assumptions. First, according to the manifold assumption, we assume that the velocity maps v ∈ \(\mathcal{V}\) and seismic
-waveforms p ∈ \(\mathcal{P}\) can be projected to their corresponding latent space representations, \(\tilde{v}\) and \(\tilde{p}\), respectively, which can be mapped back to their reconstructions in the original space, \(\hat{v}\) and \(\hat{p}\).
+waveforms p ∈ \(\mathcal{P}\) can be projected to their corresponding latent space representations, v&#771; and p&#771;, respectively, which can be mapped back to their reconstructions in the original space, \(\hat{v}\) and \(\hat{p}\).
 Note that the sizes of the latent spaces can be smaller or larger than the original spaces. Further, the size of \(\tilde{v}\) may not match with the size of \(\tilde{p}\). Second, according to the latent space
 translation assumption, we assume that the problem of learning forward and inverse mappings in the original spaces of velocity and waveforms can be reformulated as learning translations in their
 latent spaces.
 </p>
 <ol>
 <li>
-<b>Latent U-Net Architecture: </b>
+<b>Latent U-Net Architecture: </b> We propose a novel architecture to solve forward and inverse problems using two latent space translation models implemented using U-Nets, termed Latent U-Net. As shown in Figure below, Latent
+U-Net uses ConvNet backbones for both encoder-decoder pairs: (Ev, Dv) and (Ep, Dp), to project
+v and p to lower-dimensional representations. We also constrain the sizes of the latent spaces of
+\(\tilde{v}\) and \(\tilde{p}\) to be identical, i.e., dim(\(\tilde{v}\)) = dim(\(\tilde{p}\)), so that we can train two separate U-Net models to implement the latent space mappings L<sub>v&#771; &rarr; p&#771;</sub> and L<sub>p&#771; &rarr; v&#771;</sub>.
 
 <div class="latent_unet">
 <figure>
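The latent-space translation assumption described in the added text can be sketched numerically. This is a minimal illustration, not the paper's implementation: the linear maps `Ev`, `Dp`, and `L_v2p` below are hypothetical stand-ins for the ConvNet encoder, decoder, and U-Net translation model, and all dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the key constraint is dim(v-tilde) = dim(p-tilde).
dim_v, dim_p, dim_latent = 64, 128, 16

# Random linear projections standing in for the learned networks.
Ev = rng.normal(size=(dim_latent, dim_v))          # velocity encoder E_v
Dp = rng.normal(size=(dim_p, dim_latent))          # waveform decoder D_p
L_v2p = rng.normal(size=(dim_latent, dim_latent))  # latent translation v-tilde -> p-tilde

def forward_map(v):
    """Forward problem via latent translation: p_hat = D_p(L(E_v(v)))."""
    v_lat = Ev @ v          # project v into its latent space
    p_lat = L_v2p @ v_lat   # translate between the two latent spaces
    return Dp @ p_lat       # decode back to waveform space

v = rng.normal(size=dim_v)
p_hat = forward_map(v)
assert p_hat.shape == (dim_p,)
```

The inverse map would be built the same way from the opposite triple (Ep, L for p-tilde to v-tilde, Dv); equal latent dimensions are what make both translation models well-defined.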
