Download your foundation model into `./base_models/`, e.g.: [KEEP](https://huggingface.co/Astaxanthin/KEEP)[[1]](https://arxiv.org/abs/2412.13126), [CONCH](https://huggingface.co/MahmoodLab/conch)[[2]](https://www.nature.com/articles/s41591-024-02856-4), [MUSK](https://huggingface.co/xiangjx/musk)[[3]](https://www.nature.com/articles/s41586-024-08378-w), [PLIP](https://huggingface.co/vinid/plip)[[4]](https://www.nature.com/articles/s41591-023-02504-3).
> **💡 Important**: Only vision-language models with patch encoders are supported!
```python
from transformers import AutoModel
from torchvision import transforms
from PIL import Image

# Load your base model
model = AutoModel.from_pretrained("Astaxanthin/KEEP", trust_remote_code=True)
```
This project builds upon amazing work from the community such as [CLAM](https://github.com/mahmoodlab/CLAM), [CoOp](https://github.com/KaiyangZhou/CoOp), [TransMIL](https://github.com/szc19990412/TransMIL). Big thanks to all the authors and developers! 👏
---
If you find our work useful, please consider citing our paper:
[](https://star-history.com/#your-username/PathPT&Date)