
Commit 402d5e6

Updated image width for mpp and astroclip.
1 parent 762f899

File tree

2 files changed (+11 -11 lines)


_posts/2023-10-09-astroclip.md

Lines changed: 6 additions & 6 deletions
@@ -24,7 +24,7 @@ Our system, AstroCLIP, takes inspiration from CLIP (Contrastive Language Image P
 In the process, we also introduce the first transformer-based model for galaxy spectra, along with an effective pre-training strategy for this model.
 
 <p align="center">
-<img src="/images/blog/im_embedding.png" alt="AstroCLIP Method" width="85%" style="mix-blend-mode: darken;">
+<img src="/images/blog/im_embedding.png" alt="AstroCLIP Method" width="770px" style="max-width:100%" style="mix-blend-mode: darken;">
 </p>
 
 #### Method
@@ -41,19 +41,19 @@ The figure above shows on the left how the contrastive loss naturally will tend
 We show that our embedding scheme is able to align representations of galaxies both in-modality and cross-modality around meaningful shared semantics. Specifically, we query our embedding space with either the image or spectrum representation of a galaxy, and show that the galaxies retrieved by cosine similarity of their embeddings are extremely close to the original one. Below, we present all four retrieval types (spectrum-spectrum, image-image, spectrum-image, and image-spectrum, from left to right) for four randomly chosen query galaxies in our testing set (highlighted in red on the left).
 
 <p align="center">
-<img src="/images/blog/query-retrieval.png" alt="Query and Retrieval" width="85%" style="mix-blend-mode: darken;">
+<img src="/images/blog/query-retrieval.png" alt="Query and Retrieval" width="770px" style="max-width:100%" style="mix-blend-mode: darken;">
 </p>
 
 As one can see, the retrieved examples are galaxies of similar types, both for in-modality retrieval (b and c) and cross-modal retrieval (d and e).
 
 We also present a couple of examples of the retrieved spectra, for both spectrum queries (in-modality) and image queries (cross-modality), below:
 
 <p align="center">
-<img src="/images/blog/spectra_retrieval_spectrum.png" alt="Spectrum-Spectrum Retrieval" width="85%" style="mix-blend-mode: darken;">
+<img src="/images/blog/spectra_retrieval_spectrum.png" alt="Spectrum-Spectrum Retrieval" width="770px" style="max-width:100%" style="mix-blend-mode: darken;">
 </p>
 
 <p align="center">
-<img src="/images/blog/spectra_retrieval_im_cross.png" alt="Image-Spectrum Retrieval" width="85%" style="mix-blend-mode: darken;">
+<img src="/images/blog/spectra_retrieval_im_cross.png" alt="Image-Spectrum Retrieval" width="770px" style="max-width:100%" style="mix-blend-mode: darken;">
 </p>
 
 These results demonstrate a strong correlation between the semantic content of the query, such as a red quiescent galaxy or a blue star-forming galaxy, and the semantic content of the retrieved images or spectra.
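
The cosine-similarity retrieval described at the top of this hunk is straightforward to sketch. Below is a minimal illustration, not the AstroCLIP code itself, assuming pre-computed embedding matrices with one galaxy per row; the file names and the choice of k=4 are hypothetical:

```python
import numpy as np

# Hypothetical pre-computed embeddings, one row per galaxy:
# image_emb and spectrum_emb are (n_galaxies, d) arrays from the two encoders.
image_emb = np.load("image_embeddings.npy")
spectrum_emb = np.load("spectrum_embeddings.npy")

def normalize(x):
    """L2-normalize rows so that dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def retrieve(query_emb, gallery_emb, query_idx, k=4):
    """Indices of the k gallery galaxies most similar to one query galaxy."""
    q = normalize(query_emb)[query_idx]   # (d,)
    sims = normalize(gallery_emb) @ q     # cosine similarity to every galaxy
    sims[query_idx] = -np.inf             # exclude the query galaxy itself
    return np.argsort(-sims)[:k]

# Cross-modal retrieval: images closest to a spectrum query. In-modality
# retrieval works the same way with both arguments from one encoder.
print(retrieve(spectrum_emb, image_emb, query_idx=0))
```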
@@ -66,11 +66,11 @@ In particular, we use simple k-Nearest Neighbour (k-NN) regression of our embedd
 Additionally, in-modality similarity appears to outperform cross-modality similarity as an input for the k-NN regression, indicating that, although our contrastive training aims to connect embeddings between modalities, it has the emergent property of helping to structure the embedding space within each modality. This is particularly evident for redshift prediction by similarity between spectra (c, top panel), which is near-perfect, even though redshift information is not perfectly contained in images. This suggests that redshift has naturally emerged as a fundamental property that helps the spectral encoder structure its embedding space.
 
 <p align="center">
-<img src="/images/blog/redshift.png" alt="Redshift Prediction" width="85%" style="mix-blend-mode: darken;">
+<img src="/images/blog/redshift.png" alt="Redshift Prediction" width="770px" style="max-width:100%" style="mix-blend-mode: darken;">
 </p>
 
 <p align="center">
-<img src="/images/blog/stellar-mass.png" alt="Stellar Mass Prediction" width="85%" style="mix-blend-mode: darken;">
+<img src="/images/blog/stellar-mass.png" alt="Stellar Mass Prediction" width="770px" style="max-width:100%" style="mix-blend-mode: darken;">
 </p>
 
 #### Conclusions
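
The k-NN property regression behind the redshift and stellar-mass panels above is equally compact once the encoders are frozen: the prediction for a test galaxy is the average label of its nearest training neighbours in embedding space. A rough sketch with assumed file names and an assumed k, not the paper's evaluation pipeline:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Illustrative stand-ins: frozen embeddings and their known redshifts.
train_emb = np.load("train_embeddings.npy")   # (n_train, d)
train_z = np.load("train_redshifts.npy")      # (n_train,)
test_emb = np.load("test_embeddings.npy")     # (n_test, d)

# Cosine distance on the embeddings; the prediction is the mean redshift
# of the k nearest training galaxies. The same recipe applies to stellar mass.
knn = KNeighborsRegressor(n_neighbors=16, metric="cosine")
knn.fit(train_emb, train_z)
pred_z = knn.predict(test_emb)
```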

_posts/2023-10-09-mpp.md

Lines changed: 5 additions & 5 deletions
@@ -37,7 +37,7 @@ Our pretraining approach can be described in two steps:
 2. Train a single scalable transformer model to predict the next step of a spatiotemporal series based on a small number of snapshots describing the history.
 
 <p align="center">
-<img src="/images/blog/mpp_arch_v5.png" alt="Multiphysics Pretraining" width="85%">
+<img src="/images/blog/mpp_arch_v5.png" alt="Multiphysics Pretraining" width="770px" style="max-width:100%">
 </p>
 
 For step one, we first use a recent method from the time-series forecasting literature called [Reversible Instance Normalization](https://openreview.net/forum?id=cGDAkQo1C0p). This method unifies the scales of different datasets for ingestion into the network, then re-injects the scale information back into the output. The normalized state variables are individually projected into a shared space with field-specific weights (right side of the figure above).
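
Reversible Instance Normalization, linked in the context line above, is simple to state in code: normalize each input instance by its own statistics on the way into the network, then invert the transform on the output. A bare-bones sketch of the idea, omitting the learnable affine parameters of the published method and assuming (fields, height, width) snapshots:

```python
import numpy as np

def revin_forward(x, eps=1e-5):
    """Normalize an instance by its own per-field mean/std; keep the stats."""
    mean = x.mean(axis=(-2, -1), keepdims=True)
    std = x.std(axis=(-2, -1), keepdims=True) + eps
    return (x - mean) / std, (mean, std)

def revin_inverse(y, stats):
    """Re-inject the saved scale information into the model's output."""
    mean, std = stats
    return y * std + mean

# Usage: x is one (fields, height, width) snapshot from some dataset.
x = np.random.rand(4, 64, 64)
x_norm, stats = revin_forward(x)
y_pred = x_norm                    # stand-in for the transformer's prediction
y_next = revin_inverse(y_pred, stats)
```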
@@ -61,14 +61,14 @@ After pretraining, our models are able to compete with or beat modern baselines
 While this parity is impressive, we still expect fine-tuned, dedicated models to outperform general ones in most cases. The real question we would like to answer is whether this pretraining process actually improves the ability of the model to learn new physics. PDEBench has a natural division in the included fluid data between incompressible flow (Incompressible Navier-Stokes, Shallow Water) and compressible flow (Compressible Navier-Stokes). To explore the question, we pretrain new models without including compressible flow at all, then choose two distinct fine-tuning datasets. We call one “near” and the other “far”.
 
 <p align="center" style="margin-bottom: 10px;">
-<img src="/images/blog/multiphysics_ke.png" alt="Visualizing the physics gap." width="85%">
+<img src="/images/blog/multiphysics_ke.png" alt="Visualizing the physics gap." width="770px" style="max-width:100%">
 <!-- <figcaption style="padding-left:32px; padding-right:20px; line-height:1.3"> Looking at individual fields (density, in this case), the incompressible flow included in the training set (left) has a strong resemblance to the compressible simulation with low Mach number (center) with similar diffusion levels, but the high Mach number flow (right) develops significantly more complex, small-scale features as a result of both lower diffusion and more compressible behavior. </figcaption> -->
 </p>
 
 Both datasets are generated by a compressible flow solver, but while "near" (center) is selected to be physically very similar to the incompressible Navier-Stokes data in the training set (left), "far" (right) is generated in a different flow regime that exhibits wildly different behavior across scales. In both cases, there are still significant differences in the solver, resolution, and boundary conditions, making both challenging transfer tasks.
 
 <p align="center" style="margin-bottom: 10px;">
-<img src="/images/blog/CNS_Xfer_Both.png" alt="Results of fine-tuning experiments." width="85%">
+<img src="/images/blog/CNS_Xfer_Both.png" alt="Results of fine-tuning experiments." width="770px" style="max-width:100%">
 <!-- <figcaption style="padding-left:32px; padding-right:20px; line-height:1.3"> Normalized RMSE comparing fine-tuned and "from scratch" models over a range of available samples. </figcaption> -->
 </p>
 
@@ -78,8 +78,8 @@ Here's an example of the long-term rollout after fine-tuning on only one-step-ah
 
 <!-- [![Compressible Navier-Stokes](http://img.youtube.com/vi/ndyFDhs62Bo/0.jpg)](http://www.youtube.com/watch?v=ndyFDhs62Bo "Compressible Navier-Stokes Rollout") -->
 
-<div style="display:flex;align-items: center;flex-direction: column; margin-bottom:30px; margin-top:30px;">
-<div style="width:650px;max-width:95%;"><div style="position:relative;padding-bottom:50%;">
+<div style="display:flex;align-items: center;flex-direction: column; margin-bottom:20px;">
+<div style="width:650px;max-width:95%;"><div style="position:relative;padding-bottom:56.25%;">
 <iframe style="width:100%;height:100%;position:absolute;left:0px;top:0px;" frameborder="0" width="100%" height="100%" allowfullscreen="" allow="autoplay" src="https://www.youtube.com/embed/NsBWGEXZC4U">
 </iframe>
 </div>
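
In the last hunk, the new padding-bottom of 56.25% matches the 16:9 aspect ratio of the embedded YouTube player (9 / 16 = 0.5625), so the padding-based wrapper reserves exactly the player's height for the absolutely positioned iframe; the previous 50% value produced a 2:1 box, slightly too short for the video.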
