4 changes: 2 additions & 2 deletions data/xml/2022.coling.xml
@@ -1826,7 +1826,7 @@
<author><first>Jie</first><last>Yu</last></author>
<author><first>Jun</first><last>Ma</last></author>
<author><first>Huijun</first><last>Liu</last></author>
-<author><first>Jing</first><last>Yang</last></author>
+<author id="jing-yang"><first>Jing</first><last>Yang</last></author>
<pages>1842–1854</pages>
<abstract>Few-shot named entity recognition (NER) enables us to build a NER system for a new domain using very few labeled examples. However, existing prototypical networks for this task suffer from roughly estimated label dependency and closely distributed prototypes, thus often causing misclassifications. To address the above issues, we propose EP-Net, an Entity-level Prototypical Network enhanced by dispersedly distributed prototypes. EP-Net builds entity-level prototypes and considers text spans to be candidate entities, so it no longer requires the label dependency. In addition, EP-Net trains the prototypes from scratch to distribute them dispersedly and aligns spans to prototypes in the embedding space using a space projection. Experimental results on two evaluation tasks and the Few-NERD settings demonstrate that EP-Net consistently outperforms the previous strong models in terms of overall performance. Extensive analyses further validate the effectiveness of EP-Net.</abstract>
<url hash="01624ac6">2022.coling-1.159</url>
@@ -4909,7 +4909,7 @@
<title><fixed-case>PARSE</fixed-case>: An Efficient Search Method for Black-box Adversarial Text Attacks</title>
<author><first>Pengwei</first><last>Zhan</last></author>
<author><first>Chao</first><last>Zheng</last></author>
-<author><first>Jing</first><last>Yang</last></author>
+<author id="jing-yang"><first>Jing</first><last>Yang</last></author>
<author><first>Yuxiang</first><last>Wang</last></author>
<author><first>Liming</first><last>Wang</last></author>
<author><first>Yang</first><last>Wu</last></author>
2 changes: 1 addition & 1 deletion data/xml/2022.iwslt.xml
@@ -232,7 +232,7 @@
<author><first>Haitao</first><last>Tang</last></author>
<author><first>Xiaoxi</first><last>Li</last></author>
<author><first>Xinyuan</first><last>Zhou</last></author>
-<author><first>Jing</first><last>Yang</last></author>
+<author id="jing-yang"><first>Jing</first><last>Yang</last></author>
<author><first>Jianwei</first><last>Cui</last></author>
<author><first>Pan</first><last>Deng</last></author>
<author><first>Mohan</first><last>Shi</last></author>
2 changes: 1 addition & 1 deletion data/xml/2023.acl.xml
@@ -5101,7 +5101,7 @@
<paper id="358">
<title>Contrastive Learning with Adversarial Examples for Alleviating Pathology of Language Model</title>
<author orcid="0000-0003-3724-4431"><first>Pengwei</first><last>Zhan</last><affiliation>Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences</affiliation></author>
-<author><first>Jing</first><last>Yang</last><affiliation>Institute of Information Engneering, Chinese Academy of Science</affiliation></author>
+<author id="jing-yang"><first>Jing</first><last>Yang</last><affiliation>Institute of Information Engneering, Chinese Academy of Science</affiliation></author>
<author orcid="0000-0002-3136-9623"><first>Xiao</first><last>Huang</last><affiliation>Institute of Information Engineering, Chinese Academy of Sciences</affiliation></author>
<author orcid="0000-0003-2686-6681"><first>Chunlei</first><last>Jing</last><affiliation>Institute of Information Engineering, Chinese Academy of Sciences</affiliation></author>
<author orcid="0000-0003-3432-900X"><first>Jingying</first><last>Li</last><affiliation>Institute of Information Engineering, Chinese Academy of Sciences</affiliation></author>
2 changes: 1 addition & 1 deletion data/xml/2023.findings.xml
@@ -9302,7 +9302,7 @@
<paper id="500">
<title>Similarizing the Influence of Words with Contrastive Learning to Defend Word-level Adversarial Text Attack</title>
<author orcid="0000-0003-3724-4431"><first>Pengwei</first><last>Zhan</last><affiliation>Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences</affiliation></author>
-<author><first>Jing</first><last>Yang</last><affiliation>Institute of Information Engneering, Chinese Academy of Science</affiliation></author>
+<author id="jing-yang"><first>Jing</first><last>Yang</last><affiliation>Institute of Information Engneering, Chinese Academy of Science</affiliation></author>
<author orcid="0000-0001-5800-7983"><first>He</first><last>Wang</last><affiliation>Institute of Information Engineering, Chinese Academy of Sciences</affiliation></author>
<author><first>Chao</first><last>Zheng</last><affiliation>Institute of Information Engineering, Chinese Academy of Sciences</affiliation></author>
<author orcid="0000-0002-3136-9623"><first>Xiao</first><last>Huang</last><affiliation>Institute of Information Engineering, Chinese Academy of Sciences</affiliation></author>
2 changes: 1 addition & 1 deletion data/xml/2024.lrec.xml
@@ -14383,7 +14383,7 @@
<paper id="1223">
<title>Rethinking Word-level Adversarial Attack: The Trade-off between Efficiency, Effectiveness, and Imperceptibility</title>
<author><first>Pengwei</first><last>Zhan</last></author>
-<author><first>Jing</first><last>Yang</last></author>
+<author id="jing-yang"><first>Jing</first><last>Yang</last></author>
<author><first>He</first><last>Wang</last></author>
<author><first>Chao</first><last>Zheng</last></author>
<author><first>Liming</first><last>Wang</last></author>
2 changes: 1 addition & 1 deletion data/xml/2025.blackboxnlp.xml
@@ -300,7 +300,7 @@
<paper id="24">
<title>Exploring Large Language Models’ World Perception: A Multi-Dimensional Evaluation through Data Distribution</title>
<author orcid="0000-0003-2447-9960"><first>Zhi</first><last>Li</last><affiliation>Tsinghua University</affiliation></author>
-<author orcid="0000-0002-5918-2991"><first>Jing</first><last>Yang</last></author>
+<author orcid="0000-0002-5918-2991" id="jing-yang"><first>Jing</first><last>Yang</last></author>
<author><first>Ying</first><last>Liu</last><affiliation>Tsinghua University, Tsinghua University</affiliation></author>
<pages>415-432</pages>
<abstract>In recent years, large language models (LLMs) have achieved remarkable success across diverse natural language processing tasks. Nevertheless, their capacity to process and reflect core human experiences remains underexplored. Current benchmarks for LLM evaluation typically focus on a single aspect of linguistic understanding, thus failing to capture the full breadth of its abstract reasoning about the world. To address this gap, we propose a multidimensional paradigm to investigate the capacity of LLMs to perceive the world through temporal, spatial, sentimental, and causal aspects. We conduct extensive experiments by partitioning datasets according to different distributions and employing various prompting strategies. Our findings reveal significant differences and shortcomings in how LLMs handle temporal granularity, multi-hop spatial reasoning, subtle sentiments, and implicit causal relationships. While sophisticated prompting approaches can mitigate some of these limitations, substantial challenges persist in effectively capturing human abstract perception, highlighting the discrepancy between model reasoning and human behavior. We aspire that this work, which assesses LLMs from multiple perspectives of human understanding of the world, will guide more instructive research on the LLMs’ perception or cognition.</abstract>
2 changes: 1 addition & 1 deletion data/xml/2025.emnlp.xml
@@ -14681,7 +14681,7 @@
<title>Evaluating Cognitive-Behavioral Fixation via Multimodal User Viewing Patterns on Social Media</title>
<author orcid="0009-0004-9659-2529"><first>Yujie</first><last>Wang</last></author>
<author orcid="0000-0003-2783-8199"><first>Yunwei</first><last>Zhao</last><affiliation>CNCERT/CC</affiliation></author>
-<author><first>Jing</first><last>Yang</last></author>
+<author id="jing-yang"><first>Jing</first><last>Yang</last></author>
<author><first>Han</first><last>Han</last><affiliation>Southwest University</affiliation></author>
<author orcid="0000-0002-8348-392X"><first>Shiguang</first><last>Shan</last><affiliation>Institute of Computing Technology, Chinese Academy of Sciences</affiliation></author>
<author orcid="0000-0002-8899-3996"><first>Jie</first><last>Zhang</last><affiliation>Institute of Computing Technology, Chinese Academy of Sciences</affiliation></author>
2 changes: 1 addition & 1 deletion data/xml/2025.fever.xml
@@ -224,7 +224,7 @@
<author><first>Premtim</first><last>Sahitaj</last></author>
<author><first>Arthur</first><last>Hilbert</last></author>
<author orcid="0000-0003-0183-9433"><first>Veronika</first><last>Solopova</last></author>
-<author><first>Jing</first><last>Yang</last></author>
+<author id="jing-yang-campinas"><first>Jing</first><last>Yang</last></author>
<author orcid="0009-0008-7408-7483"><first>Nils</first><last>Feldhus</last></author>
<author><first>Tatiana</first><last>Anikina</last><affiliation>German Research Center for AI</affiliation></author>
<author orcid="0000-0002-0899-0657"><first>Simon</first><last>Ostermann</last><affiliation>German Research Center for AI</affiliation></author>
2 changes: 1 addition & 1 deletion data/xml/2025.findings.xml
@@ -30701,7 +30701,7 @@
<author orcid="0009-0007-5468-9711"><first>Yijia</first><last>Fan</last></author>
<author><first>Jusheng</first><last>Zhang</last></author>
<author orcid="0009-0008-5474-1206"><first>Kaitong</first><last>Cai</last></author>
-<author><first>Jing</first><last>Yang</last></author>
+<author id="jing-yang"><first>Jing</first><last>Yang</last></author>
<author orcid="0000-0002-7817-8306"><first>Keze</first><last>Wang</last><affiliation>SUN YAT-SEN UNIVERSITY</affiliation></author>
<pages>6243-6256</pages>
<abstract>Multi-label classification (MLC) faces persistent challenges from label imbalance, spurious correlations, and distribution shifts, especially in rare label prediction. We propose the Causal Cooperative Game (CCG) framework, which models MLC as a multi-player cooperative process. CCG integrates explicit causal discovery via Neural Structural Equation Models, a counterfactual curiosity reward to guide robust feature learning, and a causal invariance loss to ensure generalization across environments, along with targeted rare label enhancement. Extensive experiments on benchmark datasets demonstrate that CCG significantly improves rare label prediction and overall robustness compared to strong baselines. Ablation and qualitative analyses further validate the effectiveness and interpretability of each component. Our work highlights the promise of combining causal inference and cooperative game theory for more robust and interpretable multi-label learning.</abstract>
2 changes: 1 addition & 1 deletion data/xml/2025.sdp.xml
@@ -341,7 +341,7 @@
<author><first>Christian</first><last>Woerle</last><affiliation>NA</affiliation></author>
<author><first>Giuseppe</first><last>Guarino</last><affiliation>NA</affiliation></author>
<author orcid="0000-0002-0032-3833"><first>Salar</first><last>Mohtaj</last><affiliation>German Research Center for AI</affiliation></author>
-<author><first>Jing</first><last>Yang</last></author>
+<author id="jing-yang-campinas"><first>Jing</first><last>Yang</last></author>
<author orcid="0000-0003-0183-9433"><first>Veronika</first><last>Solopova</last></author>
<author orcid="0000-0002-9735-6956"><first>Vera</first><last>Schmitt</last><affiliation>Technische Universität Berlin</affiliation></author>
<pages>281-287</pages>
2 changes: 1 addition & 1 deletion data/xml/2025.tacl.xml
@@ -223,7 +223,7 @@
</paper>
<paper id="15">
<title>Self-Rationalization in the Wild: A Large-scale Out-of-Distribution Evaluation on <fixed-case>NLI</fixed-case>-related tasks</title>
-<author><first>Jing</first><last>Yang</last></author>
+<author id="jing-yang-campinas"><first>Jing</first><last>Yang</last></author>
<author><first>Max</first><last>Glockner</last></author>
<author><first>Anderson</first><last>Rocha</last></author>
<author><first>Iryna</first><last>Gurevych</last></author>
8 changes: 8 additions & 0 deletions data/yaml/name_variants.yaml
@@ -1222,6 +1222,14 @@
 - canonical: {first: Clint, last: Burfoot}
   variants:
   - {first: Clinton, last: Burfoot}
+- canonical: {first: Jing, last: Yang}
+  id: jing-yang-campinas
+  orcid: 0000-0002-0035-3960
+  institution: State University of Campinas
+  comment: Campinas
+- canonical: {first: Jing, last: Yang}
+  id: jing-yang
+  comment: May refer to several people
 - canonical: {first: John D., last: Burger}
   comment: MITRE
   id: john-d-burger
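For reference, a minimal sketch (not part of this PR) of how the new `id` attributes tie the `<author>` elements above to the person records added in `data/yaml/name_variants.yaml`. It assumes a local checkout of this repository plus PyYAML and lxml; the `build_index` and `resolve_authors` helpers and the example paths are illustrative, not Anthology tooling.

```python
# Illustrative sketch: resolve <author id="..."> against data/yaml/name_variants.yaml.
# Helper names and paths are assumptions for this example, not Anthology APIs.
import yaml
from lxml import etree

def build_index(yaml_path):
    """Map explicit person ids (e.g. 'jing-yang-campinas') to their YAML records."""
    with open(yaml_path, encoding="utf-8") as f:
        records = yaml.safe_load(f)  # top-level list of canonical-name entries
    return {rec["id"]: rec for rec in records if "id" in rec}

def resolve_authors(xml_path, index):
    """Yield (first, last, record-or-None) for every author in a volume XML file."""
    tree = etree.parse(xml_path)
    for author in tree.iter("author"):
        first = author.findtext("first", default="")
        last = author.findtext("last", default="")
        rec = index.get(author.get("id"))  # None when no explicit id is set
        yield first, last, rec

index = build_index("data/yaml/name_variants.yaml")
for first, last, rec in resolve_authors("data/xml/2025.tacl.xml", index):
    if (first, last) == ("Jing", "Yang"):
        print(first, last, rec)  # with this PR, rec carries the Campinas record
```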