Commit f04556a: "add" (1 parent: 74cfc41)

2 files changed (+1, -0 lines changed)

README.md

Lines changed: 1 addition & 0 deletions
@@ -269,6 +269,7 @@ Self-supervised learning is a machine learning method where a model learns gener
|:-----------------:|:------------|
| <img src="images/img51.png" width="900"> |<li> Title: <a href="https://www.nature.com/articles/s41467-024-44824-z">Segment anything in medical images</a></li> <li>Publication: Nature Communications 2024 </li> <li>Summary: Present MedSAM, a foundation model designed to bridge this gap by enabling universal medical image segmentation. The model is developed on a large-scale medical image dataset with 1,570,263 image-mask pairs, covering 10 imaging modalities and over 30 cancer types. </li> <li>Code: <a href="https://github.com/bowang-lab/MedSAM">https://github.com/bowang-lab/MedSAM</a>|
| <img src="images/img54.png" width="900"> |<li> Title: <a href="https://link.springer.com/chapter/10.1007/978-3-031-73661-2_12">ScribblePrompt: Fast and Flexible Interactive Segmentation for Any Biomedical Image</a></li> <li>Publication: ECCV 2024 </li> <li>Summary: Present ScribblePrompt, a flexible neural-network-based interactive segmentation tool for biomedical imaging that enables human annotators to segment previously unseen structures using scribbles, clicks, and bounding boxes. ScribblePrompt’s success rests on a set of careful design decisions. These include a training strategy that incorporates both a highly diverse set of images and tasks, novel algorithms for simulated user interactions and labels, and a network that enables fast inference. </li> <li>Code: <a href="https://scribbleprompt.csail.mit.edu">https://scribbleprompt.csail.mit.edu</a>|
+| <img src="images/img61.png" width="900"> |<li> Title: <a href="https://openaccess.thecvf.com/content/CVPR2024/html/Ding_Clustering_Propagation_for_Universal_Medical_Image_Segmentation_CVPR_2024_paper.html">Clustering Propagation for Universal Medical Image Segmentation</a></li> <li>Publication: CVPR 2024 </li> <li>Summary: Introduce S2VNet, a universal framework that leverages Slice-to-Volume propagation to unify automatic/interactive segmentation within a single model and one training session. S2VNet makes full use of the slice-wise structure of volumetric data by initializing cluster centers from the cluster results of the previous slice. This enables knowledge acquired from prior slices to assist in segmenting the current slice, efficiently bridging the communication between remote slices using merely 2D networks. Moreover, such a framework readily accommodates interactive segmentation with no architectural change, simply by initializing centroids from user inputs. </li> <li>Code: <a href="https://github.com/dyh127/S2VNet">https://github.com/dyh127/S2VNet</a>|
| <img src="images/img60.png" width="900"> |<li> Title: <a href="https://link.springer.com/chapter/10.1007/978-3-031-72111-3_70">TP-DRSeg: Improving Diabetic Retinopathy Lesion Segmentation with Explicit Text-Prompts Assisted SAM</a></li> <li>Publication: MICCAI 2024 </li> <li>Summary: Propose a framework that customizes SAM for text-prompted Diabetic Retinopathy lesion segmentation, termed TP-DRSeg, which exploits language cues to inject medical prior knowledge into the vision-only segmentation network, thereby combining the advantages of different foundation models and enhancing the credibility of segmentation. To unleash the potential of vision-language models in the recognition of medical concepts, it utilizes an explicit prior encoder that transfers implicit medical concepts into explicit prior knowledge, providing explainable clues to excavate low-level features associated with lesions. Furthermore, a prior-aligned injector is designed to inject explicit priors into the segmentation process, which can facilitate knowledge sharing across multi-modality features and allow the framework to be trained in a parameter-efficient fashion. </li> <li>Code: <a href="https://github.com/wxliii/TP-DRSeg">https://github.com/wxliii/TP-DRSeg</a>|

#### Few-shot/One-shot
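The S2VNet summary above describes propagating cluster centers from one slice to the next so that 2D clustering carries volumetric context. A minimal NumPy sketch of that warm-started propagation idea (all function and variable names are hypothetical illustrations, not the authors' code):

```python
import numpy as np

def cluster_slice(features, centroids, iters=5):
    """A few k-means steps on one slice, warm-started from `centroids`.

    features: (N, D) per-pixel features for the slice; centroids: (K, D).
    Returns the updated centroids and the per-pixel cluster labels.
    """
    labels = np.zeros(features.shape[0], dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid.
        dists = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the previous one if a cluster is empty.
        for k in range(centroids.shape[0]):
            mask = labels == k
            if mask.any():
                centroids[k] = features[mask].mean(axis=0)
    return centroids, labels

def segment_volume(volume_features, init_centroids):
    """Propagate centroids slice to slice through a volume.

    volume_features: (S, N, D). init_centroids: (K, D), which for the
    interactive setting could come from user inputs such as clicks.
    """
    centroids = init_centroids.astype(float).copy()
    out = []
    for s in range(volume_features.shape[0]):
        # Each slice starts from the previous slice's converged centroids.
        centroids, labels = cluster_slice(volume_features[s], centroids)
        out.append(labels)
    return np.stack(out)
```

This is only the propagation skeleton; the paper's actual pipeline operates on learned deep features with a dedicated 2D network rather than raw k-means.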

images/img61.png (625 KB)

0 commit comments
