In subsurface imaging, learning the mapping from velocity maps to seismic waveforms (forward problem) and from waveforms to velocity maps (inverse problem) is important for several applications. Since traditional techniques for solving the forward and inverse problems are computationally prohibitive, there is growing interest in leveraging recent advances in deep learning to learn the mapping between velocity maps and seismic waveform images directly from data.
We propose the Generalized Forward-Inverse (GFI) framework based on two assumptions. First, according to the manifold assumption, we assume that the velocity maps v ∈ V and seismic waveforms p ∈ P can be projected to their corresponding latent space representations, ṽ and p̃, respectively, which can be mapped back to their reconstructions in the original space, v̂ and p̂. Note that the sizes of the latent spaces can be smaller or larger than those of the original spaces; further, the size of ṽ may not match the size of p̃. Second, according to the latent space translation assumption, we assume that the problem of learning forward and inverse mappings in the original spaces of velocity and waveforms can be reformulated as learning translations in their latent spaces.
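To make the two assumptions concrete, the following is a minimal, hypothetical PyTorch sketch of the idea: each modality gets its own encoder/decoder pair (manifold assumption), and the forward and inverse problems are learned as translations between the two latent spaces (latent space translation assumption). All class names, layer sizes, channel counts, and tensor shapes here are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of the GFI framework (not the paper's implementation).
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    """Projects an input image to a latent grid and reconstructs it back."""

    def __init__(self, in_channels: int, latent_channels: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_channels, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_channels, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent representation (ṽ or p̃)
        return self.decoder(z), z    # reconstruction (v̂ or p̂) and latent


class LatentTranslator(nn.Module):
    """Maps one latent space to the other; latent sizes need not match."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_channels, 3, padding=1),
        )

    def forward(self, z):
        return self.net(z)


# Illustrative usage: 1-channel velocity maps, 5-channel (5-source) waveforms.
vel_ae = ConvAutoencoder(in_channels=1, latent_channels=16)
wav_ae = ConvAutoencoder(in_channels=5, latent_channels=16)
forward_translator = LatentTranslator(16, 16)   # ṽ -> p̃ (forward problem)
inverse_translator = LatentTranslator(16, 16)   # p̃ -> ṽ (inverse problem)

v = torch.randn(4, 1, 64, 64)                   # batch of velocity maps
v_hat, v_latent = vel_ae(v)                     # reconstruction v̂ and latent ṽ
p_latent_pred = forward_translator(v_latent)    # translate ṽ to waveform latent
p_pred = wav_ae.decoder(p_latent_pred)          # decode to predicted waveform p̂
```

Under this framing, the autoencoders can be trained with reconstruction losses on each modality, while the translators are trained to match latents (or decoded outputs) across modalities, so the forward and inverse mappings are learned in the latent spaces rather than directly between the original image spaces.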