
Commit c070036

Update index.html
Experiments and Results
1 parent fa2ca13 commit c070036

1 file changed: +41 −8 lines changed


projects/GFI-framework/index.html

Lines changed: 41 additions & 8 deletions
@@ -90,8 +90,8 @@ <h2 class="banded"> Abstract </h2>
       <!-- Method Overview -->
       <h2 class="banded">Method Overview</h2>
       <p style="text-align:justify;">
-        We propose Generalized Forward-Inverse (GFI) framework based on two assumptions. First, according to the manifold assumption, we assume that the velocity maps v ∈ v&#119985; and seismic
-        waveforms p ∈ p&#119985; can be projected to their corresponding latent space representations, v&#771; and p&#771;, respectively, which can be mapped back to their reconstructions in the original space, v&#770; and p&#770;.
+        We propose the Generalized Forward-Inverse (GFI) framework based on two assumptions. First, according to the manifold assumption, we assume that the velocity maps v ∈ &#119985; and seismic
+        waveforms p ∈ &#119979; can be projected to their corresponding latent space representations, v&#771; and p&#771;, respectively, which can be mapped back to their reconstructions in the original space, v&#770; and p&#770;.
         Note that the sizes of the latent spaces can be smaller or larger than the original spaces. Further, the size of v&#771; may not match with the size of p&#771;. Second, according to the latent space
         translation assumption, we assume that the problem of learning forward and inverse mappings in the original spaces of velocity and waveforms can be reformulated as learning translations in their
         latent spaces.
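The two assumptions above amount to a compositional view of both problems: encode, translate in latent space, decode. The sketch below is a toy illustration only; random linear maps stand in for the learned deep encoders, decoders, and latent translators, and all dimensions are hypothetical. It shows how the latent sizes of v&#771; and p&#771; may differ while the forward and inverse mappings still compose cleanly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, for illustration only.
DIM_V, DIM_P = 64, 128          # original spaces of velocity maps / waveforms
DIM_V_LAT, DIM_P_LAT = 16, 24   # latent sizes may differ from each other

# Random linear stand-ins for the learned encoder/decoder pairs
# (manifold assumption).
E_v = rng.normal(size=(DIM_V_LAT, DIM_V))   # v -> v_tilde
D_v = rng.normal(size=(DIM_V, DIM_V_LAT))   # v_tilde -> v_hat
E_p = rng.normal(size=(DIM_P_LAT, DIM_P))   # p -> p_tilde
D_p = rng.normal(size=(DIM_P, DIM_P_LAT))   # p_tilde -> p_hat

# Random linear stand-ins for the learned latent translators
# (latent space translation assumption).
T_vp = rng.normal(size=(DIM_P_LAT, DIM_V_LAT))  # v_tilde -> p_tilde
T_pv = rng.normal(size=(DIM_V_LAT, DIM_P_LAT))  # p_tilde -> v_tilde

def forward(v):
    """Forward problem: velocity map -> predicted waveform."""
    return D_p @ (T_vp @ (E_v @ v))

def inverse(p):
    """Inverse problem: waveform -> predicted velocity map."""
    return D_v @ (T_pv @ (E_p @ p))

v = rng.normal(size=DIM_V)
p_hat = forward(v)
v_hat = inverse(p_hat)
print(p_hat.shape, v_hat.shape)  # (128,) (64,)
```

Note that nothing forces DIM_V_LAT to equal DIM_P_LAT here, which mirrors the observation that the sizes of v&#771; and p&#771; need not match.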
@@ -108,30 +108,63 @@ <h2 class="banded">Method Overview</h2>
       <div class="latent_unet">
         <figure>
           <img src="./static/images/LatentU-Net.png" alt="Latent U-Net architecture" loading="lazy" width=30%>
-          <figcaption> Latent U-Net architecture </figcaption>
+          <figcaption> Figure 2: Latent U-Net architecture </figcaption>
         </figure>
       </div>
 
       </li>
       <li>
       <p style="text-align:justify;">
-        <b>Invertible X-Net Architecture: </b> We propose another novel architecture within the GFI framework termed Invertible X-Net to answer
-        the question: “can we learn a single latent space translation model that can simultaneously solve
-        both forward and inverse problems?” We employ invertible U-Net in the latent spaces of velocity and waveforms, which can be constrained to be of the same size (just like Latent-UNets), i.e.,
-        dim(v&#771;) = dim(p&#771;).
+        <b>Invertible X-Net Architecture: </b> We propose another novel architecture within the GFI framework, termed Invertible X-Net, to simultaneously solve both forward and inverse problems. The Invertible X-Net architecture employs an invertible U-Net (IU-Net) in the latent spaces of velocity and waveforms to learn a bijective translation. The architecture also offers several
+        key advantages over baselines. First, it simultaneously addresses both the forward and inverse problems within a single model architecture, whereas other baselines typically require training separate
+        models for each task (e.g., Latent U-Net and Auto-Linear), leading to greater parameter efficiency. Second, the use of IU-Net ensures that the mappings between the latent spaces of velocity maps
+        and seismic waveforms are bijective, guaranteeing a one-to-one mapping between these representations – a property not necessarily true for other models such as Latent U-Nets and Auto-Linear. Third, the bi-directional training of the forward and inverse problems introduces a strong regularization effect, as the gradients of both the forward and inverse losses affect the parameters of f<sub>IU-Net</sub>, thereby
+        influencing both forward and inverse performance.
+        Fourth, the architecture can be trained with unpaired examples using a cycle-consistency loss. We consider both variants of Invertible X-Net (with and without cycle loss) in our experiments to
+        demonstrate its effect on generalization performance.
       </p>
 
       <div class="inv_xnet">
         <figure>
           <img src="./static/images/InvertibleX-Net.png" alt="Invertible X-Net architecture" loading="lazy" width=30%>
-          <figcaption> Invertible X-Net architecture </figcaption>
+          <figcaption> Figure 3: Invertible X-Net architecture </figcaption>
         </figure>
       </div>
       </li>
       </ol>
 
       <!-- Experiments -->
       <h2 class="banded"> Experiments </h2>
+      <p style="text-align:justify;">
+        We consider the OpenFWI collection of datasets, comprising multi-structural benchmark datasets for DL4SI grouped into the Vel, Fault, and Style families. We compare Latent U-Net and Invertible X-Net on these datasets against several baseline methods for both forward and inverse problems.
+        For quantitative comparisons, we use Mean Absolute Error (MAE), Mean Squared Error (MSE), and Structural Similarity (SSIM) as evaluation metrics, since no single metric
+        alone is fully comprehensive: MAE captures pixel-level accuracy, while SSIM highlights structural similarity.
+      </p>
+      <ol>
+
+        <li> <p><b>Quantitative Comparison</b></p>
+          <figure>
+            <div style="display: flex; justify-content: space-around; align-items: flex-start;">
+              <figure style="margin: 0 10px;">
+                <img src="./static/images/Supervised_Inverse_SSIM_flipped.png" alt="supervised_inverse_ssim" loading="lazy" width=30%>
+                <figcaption> (a) Inverse problem </figcaption>
+              </figure>
+
+              <figure style="margin: 0 10px;">
+                <img src="./static/images/Supervised_Forward_SSIM_flipped.png" alt="supervised_forward_ssim" loading="lazy" width=30%>
+                <figcaption> (b) Forward problem </figcaption>
+              </figure>
+            </div>
+            <figcaption style="margin-top: 10px;">
+              Figure 4: Comparison of Latent U-Net (Small and Large), Invertible X-Net, and Invertible X-Net (Cycle) with different baseline methods across different OpenFWI datasets.
+            </figcaption>
+          </figure>
+        </li>
+        <li><p><b>Qualitative Comparison</b></p></li>
+      </ol>
+
+
 
 
       <!-- BibTeX -->
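The bijectivity that the Invertible X-Net's IU-Net relies on can be illustrated with a single additive coupling layer, the standard building block of invertible networks (NICE/RealNVP style). This is a sketch, not the paper's architecture: the coupling "network" is a fixed random function and the latent dimension is arbitrary. Because the inverse is exact by construction, a round trip through the forward and inverse translations reconstructs the input, which is the property that makes cycle-consistency training with unpaired examples possible.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                                  # arbitrary latent dimension
W1 = rng.normal(size=(DIM // 2, DIM // 2))

def net(x):
    # Any function of one half of the input; here a fixed random layer.
    return np.tanh(W1 @ x)

def couple_forward(z):
    a, b = z[:DIM // 2], z[DIM // 2:]
    return np.concatenate([a, b + net(a)])  # invertible by construction

def couple_inverse(z):
    a, b = z[:DIM // 2], z[DIM // 2:]
    return np.concatenate([a, b - net(a)])  # exact algebraic inverse

v_lat = rng.normal(size=DIM)           # a latent code for a velocity map
p_lat = couple_forward(v_lat)          # translate to the waveform latent
v_lat_cycle = couple_inverse(p_lat)    # translate back (cycle)

print(np.allclose(v_lat, v_lat_cycle))  # True
```

Stacking such layers (with permutations between them) yields an expressive yet exactly invertible latent translator, so the one-to-one mapping between v&#771; and p&#771; holds by construction rather than being learned approximately.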

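For reference, the evaluation metrics named in the Experiments section can be sketched as below. The SSIM shown is a simplified single-window (global) variant of the standard formula; benchmark evaluations typically use a sliding-window implementation such as skimage.metrics.structural_similarity, so treat this as illustrative only.

```python
import numpy as np

def mae(x, y):
    """Mean Absolute Error: pixel-level accuracy."""
    return np.mean(np.abs(x - y))

def mse(x, y):
    """Mean Squared Error."""
    return np.mean((x - y) ** 2)

def ssim_global(x, y, data_range=1.0):
    """Simplified global SSIM (one window over the whole image)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(0)
truth = rng.random((70, 70))   # stand-in for a ground-truth velocity map
pred = truth + 0.01            # a near-perfect prediction

print(mae(truth, pred))              # ~0.01: small pixel-level error
print(ssim_global(truth, truth))     # ~1.0: identical structure
```

The constant-offset example shows why both metrics are reported: MAE registers the 0.01 bias directly, while SSIM, which compares structure, barely penalizes it.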