diff --git a/data/xml/2022.coling.xml b/data/xml/2022.coling.xml
index 2dbc4374c6..88e774c6d8 100644
--- a/data/xml/2022.coling.xml
+++ b/data/xml/2022.coling.xml
@@ -1826,7 +1826,7 @@
JieYu
JunMa
HuijunLiu
- <author><first>Jing</first><last>Yang</last></author>
+ <author id="jing-yang"><first>Jing</first><last>Yang</last></author>
1842–1854
Few-shot named entity recognition (NER) enables us to build a NER system for a new domain using very few labeled examples. However, existing prototypical networks for this task suffer from roughly estimated label dependency and closely distributed prototypes, thus often causing misclassifications. To address the above issues, we propose EP-Net, an Entity-level Prototypical Network enhanced by dispersedly distributed prototypes. EP-Net builds entity-level prototypes and considers text spans to be candidate entities, so it no longer requires the label dependency. In addition, EP-Net trains the prototypes from scratch to distribute them dispersedly and aligns spans to prototypes in the embedding space using a space projection. Experimental results on two evaluation tasks and the Few-NERD settings demonstrate that EP-Net consistently outperforms the previous strong models in terms of overall performance. Extensive analyses further validate the effectiveness of EP-Net.
2022.coling-1.159
@@ -4909,7 +4909,7 @@
PARSE: An Efficient Search Method for Black-box Adversarial Text Attacks
PengweiZhan
ChaoZheng
- <author><first>Jing</first><last>Yang</last></author>
+ <author id="jing-yang"><first>Jing</first><last>Yang</last></author>
YuxiangWang
LimingWang
YangWu
diff --git a/data/xml/2022.iwslt.xml b/data/xml/2022.iwslt.xml
index 07610a69ea..9be813e9de 100644
--- a/data/xml/2022.iwslt.xml
+++ b/data/xml/2022.iwslt.xml
@@ -232,7 +232,7 @@
HaitaoTang
XiaoxiLi
XinyuanZhou
- <author><first>Jing</first><last>Yang</last></author>
+ <author id="jing-yang"><first>Jing</first><last>Yang</last></author>
JianweiCui
PanDeng
MohanShi
diff --git a/data/xml/2023.acl.xml b/data/xml/2023.acl.xml
index 21f2770765..0ba1ccc1de 100644
--- a/data/xml/2023.acl.xml
+++ b/data/xml/2023.acl.xml
@@ -5101,7 +5101,7 @@
Contrastive Learning with Adversarial Examples for Alleviating Pathology of Language Model
PengweiZhanInstitute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
- <author><first>Jing</first><last>Yang</last><affiliation>Institute of Information Engneering, Chinese Academy of Science</affiliation></author>
+ <author id="jing-yang"><first>Jing</first><last>Yang</last><affiliation>Institute of Information Engneering, Chinese Academy of Science</affiliation></author>
XiaoHuangInstitute of Information Engineering, Chinese Academy of Sciences
ChunleiJingInstitute of Information Engineering, Chinese Academy of Sciences
JingyingLiInstitute of Information Engineering, Chinese Academy of Sciences
diff --git a/data/xml/2023.findings.xml b/data/xml/2023.findings.xml
index 3e91e17843..dbfd7ff506 100644
--- a/data/xml/2023.findings.xml
+++ b/data/xml/2023.findings.xml
@@ -9302,7 +9302,7 @@
Similarizing the Influence of Words with Contrastive Learning to Defend Word-level Adversarial Text Attack
PengweiZhanInstitute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
- <author><first>Jing</first><last>Yang</last><affiliation>Institute of Information Engneering, Chinese Academy of Science</affiliation></author>
+ <author id="jing-yang"><first>Jing</first><last>Yang</last><affiliation>Institute of Information Engneering, Chinese Academy of Science</affiliation></author>
HeWangInstitute of Information Engineering, Chinese Academy of Sciences
ChaoZhengInstitute of Information Engineering, Chinese Academy of Sciences
XiaoHuangInstitute of Information Engineering, Chinese Academy of Sciences
diff --git a/data/xml/2024.lrec.xml b/data/xml/2024.lrec.xml
index ab62a39abf..c3871ca03a 100644
--- a/data/xml/2024.lrec.xml
+++ b/data/xml/2024.lrec.xml
@@ -14383,7 +14383,7 @@
Rethinking Word-level Adversarial Attack: The Trade-off between Efficiency, Effectiveness, and Imperceptibility
PengweiZhan
- <author><first>Jing</first><last>Yang</last></author>
+ <author id="jing-yang"><first>Jing</first><last>Yang</last></author>
HeWang
ChaoZheng
LimingWang
diff --git a/data/xml/2025.blackboxnlp.xml b/data/xml/2025.blackboxnlp.xml
index f4411bd32c..817876cec4 100644
--- a/data/xml/2025.blackboxnlp.xml
+++ b/data/xml/2025.blackboxnlp.xml
@@ -300,7 +300,7 @@
Exploring Large Language Models’ World Perception: A Multi-Dimensional Evaluation through Data Distribution
ZhiLiTsinghua University
- <author><first>Jing</first><last>Yang</last></author>
+ <author id="jing-yang"><first>Jing</first><last>Yang</last></author>
YingLiuTsinghua University, Tsinghua University
415-432
In recent years, large language models (LLMs) have achieved remarkable success across diverse natural language processing tasks. Nevertheless, their capacity to process and reflect core human experiences remains underexplored. Current benchmarks for LLM evaluation typically focus on a single aspect of linguistic understanding, thus failing to capture the full breadth of its abstract reasoning about the world. To address this gap, we propose a multidimensional paradigm to investigate the capacity of LLMs to perceive the world through temporal, spatial, sentimental, and causal aspects. We conduct extensive experiments by partitioning datasets according to different distributions and employing various prompting strategies. Our findings reveal significant differences and shortcomings in how LLMs handle temporal granularity, multi-hop spatial reasoning, subtle sentiments, and implicit causal relationships. While sophisticated prompting approaches can mitigate some of these limitations, substantial challenges persist in effectively capturing human abstract perception, highlighting the discrepancy between model reasoning and human behavior. We aspire that this work, which assesses LLMs from multiple perspectives of human understanding of the world, will guide more instructive research on the LLMs’ perception or cognition.
diff --git a/data/xml/2025.emnlp.xml b/data/xml/2025.emnlp.xml
index 0beb9475e3..7e845c563f 100644
--- a/data/xml/2025.emnlp.xml
+++ b/data/xml/2025.emnlp.xml
@@ -14681,7 +14681,7 @@
Evaluating Cognitive-Behavioral Fixation via Multimodal User Viewing Patterns on Social Media
YujieWang
YunweiZhaoCNCERT/CC
- <author><first>Jing</first><last>Yang</last></author>
+ <author id="jing-yang"><first>Jing</first><last>Yang</last></author>
HanHanSouthwest University
ShiguangShanInstitute of Computing Technology, Chinese Academy of Sciences
JieZhangInstitute of Computing Technology, Chinese Academy of Sciences
diff --git a/data/xml/2025.fever.xml b/data/xml/2025.fever.xml
index 0401c14f12..2e2e8d0d24 100644
--- a/data/xml/2025.fever.xml
+++ b/data/xml/2025.fever.xml
@@ -224,7 +224,7 @@
PremtimSahitaj
ArthurHilbert
VeronikaSolopova
- <author><first>Jing</first><last>Yang</last></author>
+ <author id="jing-yang-campinas"><first>Jing</first><last>Yang</last></author>
NilsFeldhus
TatianaAnikinaGerman Research Center for AI
SimonOstermannGerman Research Center for AI
diff --git a/data/xml/2025.findings.xml b/data/xml/2025.findings.xml
index 8e9aac0d99..8e0fdf426a 100644
--- a/data/xml/2025.findings.xml
+++ b/data/xml/2025.findings.xml
@@ -30701,7 +30701,7 @@
YijiaFan
JushengZhang
KaitongCai
- <author><first>Jing</first><last>Yang</last></author>
+ <author id="jing-yang"><first>Jing</first><last>Yang</last></author>
KezeWangSUN YAT-SEN UNIVERSITY
6243-6256
Multi-label classification (MLC) faces persistent challenges from label imbalance, spurious correlations, and distribution shifts, especially in rare label prediction. We propose the Causal Cooperative Game (CCG) framework, which models MLC as a multi-player cooperative process. CCG integrates explicit causal discovery via Neural Structural Equation Models, a counterfactual curiosity reward to guide robust feature learning, and a causal invariance loss to ensure generalization across environments, along with targeted rare label enhancement. Extensive experiments on benchmark datasets demonstrate that CCG significantly improves rare label prediction and overall robustness compared to strong baselines. Ablation and qualitative analyses further validate the effectiveness and interpretability of each component. Our work highlights the promise of combining causal inference and cooperative game theory for more robust and interpretable multi-label learning.
diff --git a/data/xml/2025.sdp.xml b/data/xml/2025.sdp.xml
index 20dc3615ab..2c1b260297 100644
--- a/data/xml/2025.sdp.xml
+++ b/data/xml/2025.sdp.xml
@@ -341,7 +341,7 @@
ChristianWoerleNA
GiuseppeGuarinoNA
SalarMohtajGerman Research Center for AI
- <author><first>Jing</first><last>Yang</last></author>
+ <author id="jing-yang-campinas"><first>Jing</first><last>Yang</last></author>
VeronikaSolopova
VeraSchmittTechnische Universität Berlin
281-287
diff --git a/data/xml/2025.tacl.xml b/data/xml/2025.tacl.xml
index 9933a72cfd..b264506acf 100644
--- a/data/xml/2025.tacl.xml
+++ b/data/xml/2025.tacl.xml
@@ -223,7 +223,7 @@
Self-Rationalization in the Wild: A Large-scale Out-of-Distribution Evaluation on NLI-related tasks
- <author><first>Jing</first><last>Yang</last></author>
+ <author id="jing-yang-campinas"><first>Jing</first><last>Yang</last></author>
MaxGlockner
AndersonRocha
IrynaGurevych
diff --git a/data/yaml/name_variants.yaml b/data/yaml/name_variants.yaml
index 55b9332e91..bf3808471d 100644
--- a/data/yaml/name_variants.yaml
+++ b/data/yaml/name_variants.yaml
@@ -1222,6 +1222,14 @@
- canonical: {first: Clint, last: Burfoot}
variants:
- {first: Clinton, last: Burfoot}
+- canonical: {first: Jing, last: Yang}
+ id: jing-yang-campinas
+ orcid: 0000-0002-0035-3960
+ institution: State University of Campinas
+ comment: Campinas
+- canonical: {first: Jing, last: Yang}
+ id: jing-yang
+ comment: May refer to several people
- canonical: {first: John D., last: Burger}
comment: MITRE
id: john-d-burger