Commit 08c83d3

upload zhengli publication

1 parent 835acbb

File tree

5 files changed (+65, -18 lines)

index.html

Lines changed: 52 additions & 18 deletions
@@ -960,6 +960,23 @@ <h2 id="experience">News</h2>
 <h2 id="publications">Selected Publications</h2>
 (* indicates equal contribution, # corresponding author)
 
+<div class="paper"><img class="paper" src="./resources/paper_icon/ICCV_2025_ATPrompt.png" title="Advancing Textual Prompt Learning with Anchored Attributes.">
+<div><strong>Advancing Textual Prompt Learning with Anchored Attributes.</strong>
+<br>Zheng Li, Yibing Song, Ming-Ming Cheng, Xiang Li#, Jian Yang# <br>in ICCV, 2025<br>
+<a href="https://arxiv.org/abs/2412.09442">[Paper]</a>
+<a href="./resources/bibtex/ICCV_2025_ATPrompt.txt">[BibTex]</a>
+<a href="https://github.com/zhengli97/ATPrompt">[Code]</a><img src="https://img.shields.io/github/stars/zhengli97/ATPrompt?style=social"/>
+<a href="https://zhuanlan.zhihu.com/p/11787739769">[中文解读]</a>
+<a href="https://github.com/zhengli97/ATPrompt/blob/main/docs/ATPrompt_chinese_version.pdf">[中文版]</a>
+<br>
+<alert>
+ATPrompt introduces a new attribute-anchored prompt format that can be seamlessly integrated into existing textual prompt learning methods and achieves consistent improvements.
+</alert>
+</div>
+<div class="spanner"></div>
+</div>
+
+
 <div class="paper"><img class="paper" src="./resources/paper_icon/TPAMI_2025_FGVTP.png" title="Fine-Grained Visual Text Prompting">
 <div><strong>Fine-Grained Visual Text Prompting</strong>
 <br>Lingfeng Yang, Xiang Li#, Yueze Wang, Xinlong Wang, Jian Yang#<br>in TPAMI, 2025<br>
@@ -993,22 +1010,6 @@ <h2 id="publications">Selected Publications</h2>
 </div>
 
 
-<div class="paper"><img class="paper" src="./resources/paper_icon/NeurIPS_2023_FGVP.png" title="Fine-Grained Visual Prompting">
-<div><strong>Fine-Grained Visual Prompting</strong>
-<br>Lingfeng Yang, Yueze Wang, Xiang Li#, Xinlong Wang, Jian Yang#<br>in NeurIPS, 2023<br>
-<a href="https://proceedings.neurips.cc/paper_files/paper/2023/file/4e9fa6e716940a7cfc60c46e6f702f52-Paper-Conference.pdf">[Paper]</a>
-<a href="./resources/bibtex/NeurIPS-2023-fine-grained-visual-prompting-Bibtex.bib">[BibTex]</a>
-<a href="https://github.com/ylingfeng/FGVP">[Code]</a><img src="https://img.shields.io/github/stars/ylingfeng/FGVP?style=social"/>
-<a href="https://mp.weixin.qq.com/s?search_click_id=10536340093298438394-1705732863737-1260009527&__biz=MzUxMDE4MzAzOA==&mid=2247714099&idx=1&sn=efe4d92ccece149d624d44a19f75404f&chksm=f8982f6663c6f4103967040294490fb7419803ceb6b54f2e79de728104a1858ad03f011d3fb8&scene=7&subscene=90&sessionid=1705732839&clicktime=1705732863&enterid=1705732863&ascene=65&fasttmpl_type=0&fasttmpl_fullversion=7038836-zh_CN-zip&fasttmpl_flag=0&realreporttime=1705732863790&devicetype=android-33&version=28002d3b&nettype=WIFI&abtest_cookie=AAACAA%3D%3D&lang=zh_CN&countrycode=CN&exportkey=n_ChQIAhIQTY3OsEwNdtlJy0RxUEMZyxLcAQIE97dBBAEAAAAAAJ%2F5F8UMLd0AAAAOpnltbLcz9gKNyK89dVj0fDJfc0iQOozTOSv7wroTFtyx6pfMLQW9ACiiUD2XPYTJToJQxVNxvrF5tAIC8R0SbOS35hwJULATy64LUtXxEgmsCoz6Cqv01v%2B25HzaDWybt6vi82M5Lad5HaUdHZAgh4kTKQl9Lri9nQxeptfavWT7F389xOk%2BXh7B4nHuFz%2BeaRdMmZf6lLv3kLpf10%2BJykklCd3SfLyGkE68DPfh1hmFhext2v%2BZTOids%2B0QavnzY7GPOQE%3D&pass_ticket=h3SZ5GzwbdiBvmS547xoTsCldqEAFLvligHaiMY%2BXuAaSiUHNNO2iFTVImHJqOpfAucoZ0LcWe34Hs99pbaVbA%3D%3D&wx_header=3&poc_token=HEYkVWijKQwOws52LqNI8BFkPicAMjsAOeCl7vHt">[中文解读]</a>
-<a href="https://www.bilibili.com/video/BV1qw411873s/?spm_id_from=333.999.0.0&vd_source=55bfc02adba971ea9a2c7d47e95180cc">[中文视频]</a>
-<br>
-<alert>
-FGVP is a visual prompting technique that improves referring expression comprehension by highlighting regions of interest via fine-grained segmentation, achieving better accuracy with faster inference than state-of-the-art methods.
-</alert>
-</div>
-<div class="spanner"></div>
-</div>
-
 <div class="paper"><img class="paper" src="./resources/paper_icon/NeurIPS_2024_SARDet.png"
 title="Sardet-100k: Towards open-source benchmark and toolkit for large-scale sar object detection">
 <div><strong>Sardet-100k: Towards open-source benchmark and toolkit for large-scale sar object detection</strong><br>
@@ -1045,6 +1046,40 @@ <h2 id="publications">Selected Publications</h2>
 <div class="spanner"></div>
 </div>
 
+
+<div class="paper"><img class="paper" src="./resources/paper_icon/CVPR_2024_PromptKD.png" title="PromptKD: Unsupervised Prompt Distillation for Vision-Language Models.">
+<div><strong>PromptKD: Unsupervised Prompt Distillation for Vision-Language Models.</strong>
+<br>Zheng Li, Xiang Li#, Xinyi Fu, Xin Zhang, Weiqiang Wang, Shuo Chen, Jian Yang#.<br>in CVPR, 2024<br>
+<a href="https://arxiv.org/abs/2403.02781">[Paper]</a>
+<a href="./resources/bibtex/CVPR_2024.PromptKD.txt">[BibTex]</a>
+<a href="https://github.com/zhengli97/PromptKD">[Code]</a><img src="https://img.shields.io/github/stars/zhengli97/PromptKD?style=social"/>
+<a href="https://zhuanlan.zhihu.com/p/684269963">[中文解读]</a>
+<a href="https://github.com/zhengli97/PromptKD/blob/main/docs/PromptKD_chinese_version.pdf">[中文版]</a>
+<a href="https://www.techbeat.net/talk-info?id=915">[中文视频]</a>
+<br>
+<alert>
+PromptKD is a simple and effective prompt-driven unsupervised distillation framework for VLMs (e.g., CLIP), with state-of-the-art performance.
+</alert>
+</div>
+<div class="spanner"></div>
+</div>
+
+<div class="paper"><img class="paper" src="./resources/paper_icon/NeurIPS_2023_FGVP.png" title="Fine-Grained Visual Prompting">
+<div><strong>Fine-Grained Visual Prompting</strong>
+<br>Lingfeng Yang, Yueze Wang, Xiang Li#, Xinlong Wang, Jian Yang#<br>in NeurIPS, 2023<br>
+<a href="https://proceedings.neurips.cc/paper_files/paper/2023/file/4e9fa6e716940a7cfc60c46e6f702f52-Paper-Conference.pdf">[Paper]</a>
+<a href="./resources/bibtex/NeurIPS-2023-fine-grained-visual-prompting-Bibtex.bib">[BibTex]</a>
+<a href="https://github.com/ylingfeng/FGVP">[Code]</a><img src="https://img.shields.io/github/stars/ylingfeng/FGVP?style=social"/>
+<a href="https://mp.weixin.qq.com/s?search_click_id=10536340093298438394-1705732863737-1260009527&__biz=MzUxMDE4MzAzOA==&mid=2247714099&idx=1&sn=efe4d92ccece149d624d44a19f75404f&chksm=f8982f6663c6f4103967040294490fb7419803ceb6b54f2e79de728104a1858ad03f011d3fb8&scene=7&subscene=90&sessionid=1705732839&clicktime=1705732863&enterid=1705732863&ascene=65&fasttmpl_type=0&fasttmpl_fullversion=7038836-zh_CN-zip&fasttmpl_flag=0&realreporttime=1705732863790&devicetype=android-33&version=28002d3b&nettype=WIFI&abtest_cookie=AAACAA%3D%3D&lang=zh_CN&countrycode=CN&exportkey=n_ChQIAhIQTY3OsEwNdtlJy0RxUEMZyxLcAQIE97dBBAEAAAAAAJ%2F5F8UMLd0AAAAOpnltbLcz9gKNyK89dVj0fDJfc0iQOozTOSv7wroTFtyx6pfMLQW9ACiiUD2XPYTJToJQxVNxvrF5tAIC8R0SbOS35hwJULATy64LUtXxEgmsCoz6Cqv01v%2B25HzaDWybt6vi82M5Lad5HaUdHZAgh4kTKQl9Lri9nQxeptfavWT7F389xOk%2BXh7B4nHuFz%2BeaRdMmZf6lLv3kLpf10%2BJykklCd3SfLyGkE68DPfh1hmFhext2v%2BZTOids%2B0QavnzY7GPOQE%3D&pass_ticket=h3SZ5GzwbdiBvmS547xoTsCldqEAFLvligHaiMY%2BXuAaSiUHNNO2iFTVImHJqOpfAucoZ0LcWe34Hs99pbaVbA%3D%3D&wx_header=3&poc_token=HEYkVWijKQwOws52LqNI8BFkPicAMjsAOeCl7vHt">[中文解读]</a>
+<a href="https://www.bilibili.com/video/BV1qw411873s/?spm_id_from=333.999.0.0&vd_source=55bfc02adba971ea9a2c7d47e95180cc">[中文视频]</a>
+<br>
+<alert>
+FGVP is a visual prompting technique that improves referring expression comprehension by highlighting regions of interest via fine-grained segmentation, achieving better accuracy with faster inference than state-of-the-art methods.
+</alert>
+</div>
+<div class="spanner"></div>
+</div>
+
 <div class="paper"><img class="paper" src="./resources/paper_icon/ICCV_2023_LSKNet.png"
 title="Large Selective Kernel Network for Remote Sensing Object Detection">
 <div><strong>Large Selective Kernel Network for Remote Sensing Object Detection</strong><br>
@@ -1069,8 +1104,7 @@ <h2 id="publications">Selected Publications</h2>
 in AAAI, 2023<br>
 <a href="https://arxiv.org/pdf/2211.16231.pdf">[Paper]</a>
 <a href="./resources/bibtex/AAAI_2023_CTKD.txt">[BibTex]</a>
-<a href="https://github.com/zhengli97/CTKD">[Code]</a><img
-src="https://img.shields.io/github/stars/zhengli97/CTKD?style=social"/>
+<a href="https://github.com/zhengli97/CTKD">[Code]</a><img src="https://img.shields.io/github/stars/zhengli97/CTKD?style=social"/>
 <br>
 <alert>
 CTKD organizes the distillation task from easy to hard through a dynamic and learnable temperature. The temperature is learned during the student’s training process with a reversed gradient that aims to maximize the distillation loss in an adversarial manner.
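
The CTKD blurb above describes a concrete mechanism: the distillation temperature is a learnable parameter trained through a gradient reversal layer, so one optimizer step pushes the temperature to increase the KD loss while the student learns to decrease it. Below is a minimal PyTorch sketch of that idea (an illustration, not the authors' code; the module names, the init value, the clamp range, and the reversal strength `lam` are assumptions).

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)  # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient flowing back into the temperature.
        return -ctx.lam * grad_output, None

class LearnableTemperature(nn.Module):
    def __init__(self, init_t=4.0, lam=1.0):
        super().__init__()
        self.t = nn.Parameter(torch.tensor(init_t))
        self.lam = lam  # reversal strength; scheduling it gives easy-to-hard

    def forward(self):
        # Clamp keeps the temperature in a sane range before reversal.
        return GradReverse.apply(self.t.clamp(1.0, 20.0), self.lam)

def kd_loss(student_logits, teacher_logits, temp: LearnableTemperature):
    t = temp()
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits.detach() / t, dim=1),
        reduction="batchmean",
    ) * t * t  # standard T^2 scaling of the KD loss

Because every gradient path to the temperature passes through the reversal node, the student's parameters and `t` can share a single optimizer step, and their updates become adversarial without a separate inner loop.
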
resources/bibtex/CVPR_2024.PromptKD.txt

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+@inproceedings{li2024promptkd,
+  title={Promptkd: Unsupervised prompt distillation for vision-language models},
+  author={Li, Zheng and Li, Xiang and Fu, Xinyi and Zhang, Xin and Wang, Weiqiang and Chen, Shuo and Yang, Jian},
+  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
+  pages={26617--26626},
+  year={2024}
+}
resources/bibtex/ICCV_2025_ATPrompt.txt

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+@inproceedings{li2025advancing,
+  title={Advancing Textual Prompt Learning with Anchored Attributes},
+  author={Li, Zheng and Song, Yibing and Cheng, Ming-Ming and Li, Xiang and Yang, Jian},
+  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
+  year={2025}
+}
resources/paper_icon/CVPR_2024_PromptKD.png and resources/paper_icon/ICCV_2025_ATPrompt.png: two binary image files added (40.5 KB and 269 KB)
