
Commit 56c0cb2

Update index.html
1 parent 516fd98 commit 56c0cb2

1 file changed: 52 additions, 66 deletions


projects/GFI-framework/index.html

Lines changed: 52 additions & 66 deletions
@@ -70,74 +70,60 @@ <h2 style="text-align: center;">ICLR 2025</h2>
 </a>
 </p>
 
+<div class="gfi_framework">
+<figure>
+<img src="./static/images/GFI_Framework.png" alt="GFI Framework" loading="lazy" style="width: 25%;">
+<figcaption> Figure 1: A unified framework for solving forward and inverse problems in subsurface imaging. </figcaption>
+</figure>
+</div>
 
-<section class="section">
-<div class="gfi_framework">
-<figure>
-<img src="./static/images/GFI_Framework.png" alt="GFI Framework" loading="lazy" width=25%>
-<figcaption> Figure 1: A unified framework for solving forward and inverse problems in subsurface imaging. </figcaption>
-</figure>
-</div>
-<hr style="width: 60%; margin: 2rem auto;">
-</section>
-
 <!-- Abstract -->
-<section class="section">
-<div class="container is-max-desktop has-text-justified">
-<h2 class="title is-3"> Abstract </h2>
-<p>
-In subsurface imaging, learning the mapping from velocity maps to seismic waveforms (forward problem) and waveforms to velocity (inverse problem) is important for several applications. While traditional techniques for solving forward and inverse problems are computationally prohibitive, there is a growing interest in leveraging recent advances in deep learning to learn the mapping between velocity maps and seismic waveform images directly from data.
-Despite the variety of architectures explored in previous works, several open questions remain unanswered such as the effect of latent space sizes, the importance of manifold learning, the complexity of translation models, and the value of jointly solving forward and inverse problems.
-We propose a unified framework to systematically characterize prior research in this area termed the Generalized Forward-Inverse (GFI) framework, building on the assumption of manifolds and latent space translations.
-We show that GFI encompasses previous works in deep learning for subsurface imaging, which can be viewed as specific instantiations of GFI.
-We also propose two new model architectures within the framework of GFI: Latent U-Net and Invertible X-Net, leveraging the power of U-Nets for domain translation and the ability of IU-Nets to simultaneously learn forward and inverse translations, respectively.
-We show that our proposed models achieve state-of-the-art performance for forward and inverse problems on a wide range of synthetic datasets and also investigate their zero-shot effectiveness on two real-world-like datasets.
-</p>
-</div>
-<hr style="width: 60%; margin: 2rem auto;">
-</section>
-
-<section class="section">
-<div class="container is-max-desktop has-text-justified">
-<h2 class="title is-3">Method Overview</h2>
-<p>
-We propose Generalized Forward-Inverse (GFI) framework based on two assumptions. First, according to the manifold assumption, we assume that the velocity maps v ∈ v&#119985; and seismic
-waveforms p ∈ p&#119985; can be projected to their corresponding latent space representations, v&#771; and p&#771;, respectively, which can be mapped back to their reconstructions in the original space, v&#770; and p&#770;.
-Note that the sizes of the latent spaces can be smaller or larger than the original spaces. Further, the size of v&#771; may not match with the size of p&#771;. Second, according to the latent space
-translation assumption, we assume that the problem of learning forward and inverse mappings in the original spaces of velocity and waveforms can be reformulated as learning translations in their
-latent spaces.
-</p>
-<ol>
-<li>
-<b>Latent U-Net Architecture: </b> We propose a novel architecture to solve forward and inverse problems using two latent space translation models implemented using U-Nets, termed Latent U-Net. Latent
-U-Net uses ConvNet backbones for both encoder-decoder pairs: <code>E</code><sub>v</sub>, <code>D</code><sub>v</sub> and <code>E</code><sub>p</sub>, <code>D</code><sub>p</sub>, to project
-v and p to lower-dimensional representations. We also constrain the sizes of the latent spaces of
-v&#771; and p&#771;to be identical, i.e., dim(v&#771;) = dim(p&#771;), so that we can train two separate U-Net models to implement the latent space mappings L<sub>v&#771; &rarr; p&#771;</sub> and L<sub>p&#771; &rarr; v&#771;</sub>.
-
-<div class="latent_unet">
-<figure>
-<img src="./static/images/LatentU-Net.png" alt="Latent U-Net architecture" loading="lazy" width=45%>
-<figcaption> Latent U-Net architecture </figcaption>
-</figure>
-</div>
-
-</li>
-<li>
-<b>Invertible X-Net Architecture: </b> We propose another novel architecture within the GFI framework termed Invertible X-Net to answer
-the question: “can we learn a single latent space translation model that can simultaneously solve
-both forward and inverse problems?” We employ invertible U-Net in the latent spaces of velocity and waveforms, which can be constrained to be of the same size (just like Latent-UNets), i.e.,
-dim(v&#771;) = dim(p&#771;).
-<div class="inv_xnet">
-<figure>
-<img src="./static/images/InvertibleX-Net.png" alt="Invertible X-Net architecture" loading="lazy" width=45%>
-<figcaption> Invertible X-Net architecture </figcaption>
-</figure>
-</div>
-</li>
-</ol>
-</div>
-<hr style="width: 60%; margin: 2rem auto;">
-</section>
+<h2 class="banded"> Abstract </h2>
+<p>
+In subsurface imaging, learning the mapping from velocity maps to seismic waveforms (forward problem) and from waveforms to velocity maps (inverse problem) is important for several applications. While traditional techniques for solving forward and inverse problems are computationally prohibitive, there is growing interest in leveraging recent advances in deep learning to learn the mapping between velocity maps and seismic waveform images directly from data.
+Despite the variety of architectures explored in previous works, several open questions remain unanswered, such as the effect of latent space sizes, the importance of manifold learning, the complexity of translation models, and the value of jointly solving forward and inverse problems.
+We propose a unified framework, termed the Generalized Forward-Inverse (GFI) framework, to systematically characterize prior research in this area, building on the assumptions of manifolds and latent space translations.
+We show that GFI encompasses previous works in deep learning for subsurface imaging, which can be viewed as specific instantiations of GFI.
+We also propose two new model architectures within the framework of GFI: Latent U-Net and Invertible X-Net, leveraging the power of U-Nets for domain translation and the ability of IU-Nets to simultaneously learn forward and inverse translations, respectively.
+We show that our proposed models achieve state-of-the-art performance for forward and inverse problems on a wide range of synthetic datasets and also investigate their zero-shot effectiveness on two real-world-like datasets.
+</p>
+
+<h2 class="banded">Method Overview</h2>
+<p>
+We propose the Generalized Forward-Inverse (GFI) framework based on two assumptions. First, according to the manifold assumption, we assume that the velocity maps v ∈ v&#119985; and seismic
+waveforms p ∈ p&#119985; can be projected to their corresponding latent space representations, v&#771; and p&#771;, respectively, which can be mapped back to their reconstructions in the original space, v&#770; and p&#770;.
+Note that the sizes of the latent spaces can be smaller or larger than the original spaces. Further, the size of v&#771; may not match the size of p&#771;. Second, according to the latent space
+translation assumption, we assume that the problem of learning forward and inverse mappings in the original spaces of velocity and waveforms can be reformulated as learning translations in their
+latent spaces.
+</p>
+<ol>
+<li>
+<b>Latent U-Net Architecture: </b> We propose a novel architecture, termed Latent U-Net, that solves the forward and inverse problems using two latent space translation models implemented as U-Nets. Latent
+U-Net uses ConvNet backbones for both encoder-decoder pairs, (<code>E</code><sub>v</sub>, <code>D</code><sub>v</sub>) and (<code>E</code><sub>p</sub>, <code>D</code><sub>p</sub>), to project
+v and p to lower-dimensional representations. We also constrain the sizes of the latent spaces of
+v&#771; and p&#771; to be identical, i.e., dim(v&#771;) = dim(p&#771;), so that we can train two separate U-Net models to implement the latent space mappings L<sub>v&#771; &rarr; p&#771;</sub> and L<sub>p&#771; &rarr; v&#771;</sub>.
+
+<div class="latent_unet">
+<figure>
+<img src="./static/images/LatentU-Net.png" alt="Latent U-Net architecture" loading="lazy" style="width: 45%;">
+<figcaption> Latent U-Net architecture </figcaption>
+</figure>
+</div>
+
+</li>
+<li>
+<b>Invertible X-Net Architecture: </b> We propose another novel architecture within the GFI framework, termed Invertible X-Net, to answer
+the question: “can we learn a single latent space translation model that can simultaneously solve
+both forward and inverse problems?” We employ an invertible U-Net in the latent spaces of velocity and waveforms, which can be constrained to be of the same size (just like Latent U-Nets), i.e.,
+dim(v&#771;) = dim(p&#771;).
+<div class="inv_xnet">
+<figure>
+<img src="./static/images/InvertibleX-Net.png" alt="Invertible X-Net architecture" loading="lazy" style="width: 45%;">
+<figcaption> Invertible X-Net architecture </figcaption>
+</figure>
+</div>
+</li>
+</ol>
 
 <!-- Results Table Placeholder -->
 <section class="section">
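The Latent U-Net described in the page text above maps velocity maps and waveforms into latent spaces and trains two separate latent-space translators. A minimal sketch of that data flow, using random linear maps as stand-ins for the learned ConvNet encoder-decoder pairs and U-Net translators (all names and dimensions here are hypothetical, not the paper's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: velocity space, waveform space, shared latent space.
DIM_V, DIM_P, DIM_LATENT = 64, 128, 16

# Toy stand-ins for the learned components (random linear maps here;
# the paper uses ConvNet encoders/decoders and U-Net translators).
E_v = rng.normal(size=(DIM_LATENT, DIM_V))   # encoder for velocity maps
D_v = np.linalg.pinv(E_v)                    # decoder back to velocity space
E_p = rng.normal(size=(DIM_LATENT, DIM_P))   # encoder for waveforms
D_p = np.linalg.pinv(E_p)                    # decoder back to waveform space

# Two separate translators, as in Latent U-Net: one per direction.
L_v2p = rng.normal(size=(DIM_LATENT, DIM_LATENT))  # latent forward map
L_p2v = rng.normal(size=(DIM_LATENT, DIM_LATENT))  # latent inverse map

def forward(v):
    """Forward problem: velocity -> latent -> translated latent -> waveform."""
    return D_p @ (L_v2p @ (E_v @ v))

def inverse(p):
    """Inverse problem: waveform -> latent -> translated latent -> velocity."""
    return D_v @ (L_p2v @ (E_p @ p))

v = rng.normal(size=DIM_V)
p_hat = forward(v)
v_hat = inverse(p_hat)
print(p_hat.shape, v_hat.shape)  # (128,) (64,)
```

The point of the sketch is the decoupling the framework assumes: representation learning (encoders/decoders) and domain translation (latent maps) are separate components, which is why latent sizes can be chosen independently of the original spaces.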

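Invertible X-Net replaces the two separate translators with a single invertible latent translation model, so one set of weights serves both the forward and inverse problems. A toy illustration of that invertibility property, with an orthogonal matrix standing in for the paper's invertible U-Net (IU-Net):

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 16  # shared latent size, dim(v~) = dim(p~), chosen arbitrarily here

# A single invertible latent translator. An orthogonal matrix is used as a
# stand-in because its inverse (the transpose) is exact and cheap; the paper
# uses an invertible U-Net instead.
Q, _ = np.linalg.qr(rng.normal(size=(DIM, DIM)))

def translate_forward(z_v):
    """v-latent -> p-latent (forward direction)."""
    return Q @ z_v

def translate_inverse(z_p):
    """p-latent -> v-latent, the exact inverse of translate_forward."""
    return Q.T @ z_p

z = rng.normal(size=DIM)
z_back = translate_inverse(translate_forward(z))
print(np.allclose(z, z_back))  # True
```

Because the translator is invertible by construction, training it in one direction constrains the other direction for free, which is the motivation for solving both problems jointly.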