Official repository for "PulseMind: A Multi-Modal Medical Model for Real-World Clinical Diagnosis", accepted as an Oral paper at AAAI 2026.
Datasets, models, and benchmarks for PulseMind.
This repository provides the official codebase and evaluation scripts for the PulseMind project, together with:
- 🧪 MediScope: a large-scale multimodal medical dataset. In this release, we provide a curated subset of ~1,000 cases (JSON + images); the full dataset is larger and will be released gradually.
- 🧠 Models:
  - PulseMind-72B
- 📊 Benchmarks:
  - MedDiagnose — 237-sample test set (JSON + images)
  - CMtMedQA-test — 1,000-sample test set (JSON)
  - MedDiagnose-plus — 937-sample extended test set (JSON + images)
⚠️ Due to size and privacy considerations, all datasets and model checkpoints are hosted externally and are not stored in this GitHub repository; this repo mainly contains evaluation code.
- MediScope (curated ~1k subset)
- MedDiagnose (237 samples)
- CMtMedQA-test (1,000 samples)
- MedDiagnose-plus (937 samples)
- PulseMind-72B checkpoint: Download link
After downloading, please follow the recommended directory layout
(e.g., place raw data under `data/`, benchmark test sets under `Benchmark/`,
and model checkpoints under `model/`) so that the provided evaluation scripts run out of the box.
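The recommended layout can be created up front with a short shell snippet (directory names are taken from this README; no dataset files are downloaded here):

```shell
# Create the directory layout the evaluation scripts expect.
mkdir -p data
mkdir -p Benchmark/CMtMedQA-test Benchmark/MedDiagnose Benchmark/MedDiagnose-plus Benchmark/Eval
mkdir -p model

# Sanity check: list the top-level folders.
ls -d data Benchmark model
```

After running it, unpack each downloaded archive into its matching folder.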
The GitHub repository mainly contains evaluation code and auxiliary configs:
.
├── data/                 # (empty by default) place downloaded datasets here
│
├── Benchmark/
│   ├── CMtMedQA-test/    # Folder for CMtMedQA-test data (JSON, etc.)
│   ├── MedDiagnose/      # Folder for MedDiagnose data (JSON + images)
│   ├── MedDiagnose-plus/ # Folder for MedDiagnose-plus data (JSON + images)
│   └── Eval/             # Optional: extra evaluation utilities / configs
│
├── model/                # Place downloaded model checkpoints here
│
└── README.md
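To illustrate how an evaluation script might consume a benchmark folder under this layout, here is a minimal, self-contained Python sketch. The record schema (`id`, `question`, `answer`, `image` fields) and the file name `test.json` are assumptions for illustration, not the released format:

```python
import json
from pathlib import Path

# Hypothetical record schema -- the released JSON format may differ.
sample = {
    "id": "case_0001",
    "question": "What is the likely diagnosis?",
    "answer": "Example answer text.",
    "image": "case_0001.png",
}

# Write a dummy test file the way MedDiagnose data might be laid out.
bench_dir = Path("Benchmark/MedDiagnose")
bench_dir.mkdir(parents=True, exist_ok=True)
(bench_dir / "test.json").write_text(json.dumps([sample]), encoding="utf-8")

# Load and iterate the test set, resolving image paths relative to the folder.
records = json.loads((bench_dir / "test.json").read_text(encoding="utf-8"))
for rec in records:
    image_path = bench_dir / rec["image"]
    print(rec["id"], image_path)
```

Swap in the real file names once the benchmark JSON is downloaded; only the folder structure is taken from this README.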