Galaxy morphology analysis involves studying galaxies based on their shapes and structures. For such studies, fundamental tasks include identifying and classifying galaxies in astronomical images, as well as retrieving visually or structurally similar galaxies through similarity search. Existing methods either directly train domain-specific foundation models on large, annotated datasets or fine-tune vision foundation models on a smaller set of images. The former is effective but costly, while the latter is more resource-efficient but often yields lower accuracy. To address these challenges, we introduce GalaxAlign, a multimodal approach inspired by how citizen scientists identify galaxies in astronomical images by following textual descriptions and matching schematic symbols. Specifically, GalaxAlign employs a tri-modal alignment framework to align three types of data during fine-tuning: (1) schematic symbols representing galaxy shapes and structures, (2) textual labels for these symbols, and (3) galaxy images. By incorporating multimodal instructions, GalaxAlign eliminates the need for expensive pretraining and enhances the effectiveness of fine-tuning. Extensive experiments on galaxy classification and similarity search demonstrate that our method effectively fine-tunes general pre-trained models for astronomical tasks by incorporating domain-specific multimodal knowledge.
Set up the conda environment using the requirements.txt file.
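For example (the environment name galaxalign and the Python version are assumptions, not fixed by the repo):

```shell
# Create and activate a fresh conda environment (name is hypothetical),
# then install the pinned dependencies from the repository.
conda create -n galaxalign python=3.10 -y
conda activate galaxalign
pip install -r requirements.txt
```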
The publicly available datasets can be found at https://astronn.readthedocs.io/en/latest/galaxy10.html and https://github.com/mwalmsley/galaxy-datasets.
Stage 1:
Modify the code so that it uses code/src/open_clip/loss_stage1.py and code/src/open_clip/model_stage1.py.
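One way to do this is to swap the files in place (a sketch; it assumes the training code imports loss.py and model.py, and the backup filenames are hypothetical):

```shell
# Back up the default implementations, then substitute the Stage 1 variants
# so the unchanged import paths pick up the Stage 1 loss and model.
cd code/src/open_clip
cp loss.py loss_default.bak
cp model.py model_default.bak
cp loss_stage1.py loss.py
cp model_stage1.py model.py
```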
Run the train_model.sh script:
torchrun --nproc_per_node 2 -m open_clip_train.main \
--batch-size 256 \
--precision amp \
--workers 4 \
--report-to tensorboard \
--logs="" \
--dataset-type csv \
--csv-separator="," \
--train-data \
--csv-img-key filepath \
--csv-caption-key caption \
--csv-mask-key mask \
--warmup 1000 \
--lr=5e-6 \
--wd=0.0002 \
--epochs=50 \
--model convnext_base \
--save-frequency 50 \
--pretrained laion400m_s13b_b51k
Stage 2:
Modify the code to use files code/src/open_clip/loss.py and code/src/open_clip/model.py.
Run the train_model.sh script again, setting --pretrained to the path of the fine-tuned checkpoint from Stage 1.
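For example, the Stage 2 invocation might look like the following (the checkpoint path under logs/ is hypothetical; substitute the path your Stage 1 run actually produced):

```shell
# Same flags as in Stage 1, except --pretrained now points at the Stage 1
# checkpoint instead of the laion400m weights.
torchrun --nproc_per_node 2 -m open_clip_train.main \
  --batch-size 256 \
  --precision amp \
  --model convnext_base \
  --pretrained logs/<stage1-run-name>/checkpoints/epoch_50.pt
```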
Set the model path and data path parameters in code/src/test_setting.py.
Run code/src/test_setting.py.
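Assuming a standard Python entry point, this is simply:

```shell
# Evaluate the fine-tuned model with the paths configured above.
python code/src/test_setting.py
```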