Commit 81a7803

Split author page for Xinpeng Wang (#5779)
- Added entries for xinpeng-wang-lmu (LMU author with ORCID 0009-0006-5213-1119) and xinpeng-wang (ambiguous) to name_variants.yaml. Resolves issue #3541.
1 parent f1e1f66 commit 81a7803

8 files changed: +17 −10 lines

data/xml/2022.coling.xml

Lines changed: 1 addition & 1 deletion
@@ -6460,7 +6460,7 @@
 </paper>
 <paper id="559">
 <title><fixed-case>CHAE</fixed-case>: Fine-Grained Controllable Story Generation with Characters, Actions and Emotions</title>
-<author><first>Xinpeng</first><last>Wang</last></author>
+<author id="xinpeng-wang"><first>Xinpeng</first><last>Wang</last></author>
 <author><first>Han</first><last>Jiang</last></author>
 <author><first>Zhihua</first><last>Wei</last></author>
 <author><first>Shanlin</first><last>Zhou</last></author>

data/xml/2023.acl.xml

Lines changed: 1 addition & 1 deletion
@@ -15231,7 +15231,7 @@
 </paper>
 <paper id="157">
 <title>How to Distill your <fixed-case>BERT</fixed-case>: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives</title>
-<author><first>Xinpeng</first><last>Wang</last><affiliation>Ludwig-Maximilians-Universitaet Muenchen</affiliation></author>
+<author id="xinpeng-wang-lmu"><first>Xinpeng</first><last>Wang</last><affiliation>Ludwig-Maximilians-Universitaet Muenchen</affiliation></author>
 <author><first>Leonie</first><last>Weissweiler</last><affiliation>CIS, LMU Munich</affiliation></author>
 <author><first>Hinrich</first><last>Schütze</last><affiliation>Center for Information and Language Processing, University of Munich</affiliation></author>
 <author><first>Barbara</first><last>Plank</last><affiliation>LMU Munich</affiliation></author>

data/xml/2023.emnlp.xml

Lines changed: 2 additions & 2 deletions
@@ -1765,7 +1765,7 @@
 </paper>
 <paper id="126">
 <title><fixed-case>ACTOR</fixed-case>: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation</title>
-<author><first>Xinpeng</first><last>Wang</last></author>
+<author id="xinpeng-wang-lmu"><first>Xinpeng</first><last>Wang</last></author>
 <author><first>Barbara</first><last>Plank</last></author>
 <pages>2046-2052</pages>
 <abstract>Label aggregation such as majority voting is commonly used to resolve annotator disagreement in dataset creation. However, this may disregard minority values and opinions. Recent studies indicate that learning from individual annotations outperforms learning from aggregated labels, though they require a considerable amount of annotation. Active learning, as an annotation cost-saving strategy, has not been fully explored in the context of learning from disagreement. We show that in the active learning setting, a multi-head model performs significantly better than a single-head model in terms of uncertainty estimation. By designing and evaluating acquisition functions with annotator-specific heads on two datasets, we show that group-level entropy works generally well on both datasets. Importantly, it achieves performance in terms of both prediction and uncertainty estimation comparable to full-scale training from disagreement, while saving 70% of the annotation budget.</abstract>
@@ -2998,7 +2998,7 @@
 </paper>
 <paper id="213">
 <title><fixed-case>T</fixed-case>o<fixed-case>V</fixed-case>i<fixed-case>L</fixed-case>a<fixed-case>G</fixed-case>: Your Visual-Language Generative Model is Also An Evildoer</title>
-<author><first>Xinpeng</first><last>Wang</last></author>
+<author id="xinpeng-wang"><first>Xinpeng</first><last>Wang</last></author>
 <author><first>Xiaoyuan</first><last>Yi</last></author>
 <author><first>Han</first><last>Jiang</last></author>
 <author><first>Shanlin</first><last>Zhou</last></author>

data/xml/2023.findings.xml

Lines changed: 1 addition & 1 deletion
@@ -19559,7 +19559,7 @@
 <author><first>Rui</first><last>Wang</last></author>
 <author><first>Zhihua</first><last>Wei</last></author>
 <author><first>Yu</first><last>Li</last></author>
-<author><first>Xinpeng</first><last>Wang</last></author>
+<author id="xinpeng-wang"><first>Xinpeng</first><last>Wang</last></author>
 <pages>5641-5656</pages>
 <abstract>Opinion summarization is expected to digest larger review sets and provide summaries from different perspectives. However, most existing solutions are deficient in epitomizing extensive reviews and offering opinion summaries from various angles due to the lack of designs for information selection. To this end, we propose SubSumm, a supervised summarization framework for large-scale multi-perspective opinion summarization. SubSumm consists of a review sampling strategy set and a two-stage training scheme. The sampling strategies take sentiment orientation and contrastive information value into consideration, with which the review subsets from different perspectives and quality levels can be selected. Subsequently, the summarizer is encouraged to learn from the sub-optimal and optimal subsets successively in order to capitalize on the massive input. Experimental results on AmaSum and Rotten Tomatoes datasets demonstrate that SubSumm is adept at generating pros, cons, and verdict summaries from hundreds of input reviews. Furthermore, our in-depth analysis verifies that the advanced selection of review subsets and the two-stage training scheme are vital to boosting the summarization performance.</abstract>
 <url hash="6b612301">2023.findings-emnlp.375</url>

data/xml/2024.emnlp.xml

Lines changed: 1 addition & 1 deletion
@@ -6187,7 +6187,7 @@
 <title><fixed-case>DAMRO</fixed-case>: Dive into the Attention Mechanism of <fixed-case>LVLM</fixed-case> to Reduce Object Hallucination</title>
 <author><first>Xuan</first><last>Gong</last></author>
 <author><first>Tianshi</first><last>Ming</last></author>
-<author><first>Xinpeng</first><last>Wang</last></author>
+<author id="xinpeng-wang"><first>Xinpeng</first><last>Wang</last></author>
 <author><first>Zhihua</first><last>Wei</last><affiliation>Tongji University</affiliation></author>
 <pages>7696-7712</pages>
 <abstract>Despite the great success of Large Vision-Language Models (LVLMs), they inevitably suffer from hallucination. As we know, both the visual encoder and the Large Language Model (LLM) decoder in LVLMs are Transformer-based, allowing the model to extract visual information and generate text outputs via attention mechanisms. We find that the attention distribution of LLM decoder on image tokens is highly consistent with the visual encoder and both distributions tend to focus on particular background tokens rather than the referred objects in the image. We attribute to the unexpected attention distribution to an inherent flaw in the visual encoder itself, which misguides LLMs to over emphasize the redundant information and generate object hallucination. To address the issue, we propose DAMRO, a novel training-free strategy that **D**ive into **A**ttention **M**echanism of LVLM to **R**educe **O**bject Hallucination. Specifically, our approach employs classification token (CLS) of ViT to filter out high-attention tokens scattered in the background and then eliminate their influence during decoding stage. We evaluate our method on LVLMs including LLaVA-1.5, LLaVA-NeXT and InstructBLIP, using various benchmarks such as POPE, CHAIR, MME and GPT-4V Aided Evaluation. The results demonstrate that our approach significantly reduces the impact of these outlier tokens, thus effectively alleviating the hallucination of LVLMs.</abstract>

data/xml/2024.findings.xml

Lines changed: 3 additions & 3 deletions
@@ -12149,7 +12149,7 @@
 </paper>
 <paper id="441">
 <title>“My Answer is <fixed-case>C</fixed-case>”: First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models</title>
-<author><first>Xinpeng</first><last>Wang</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
+<author id="xinpeng-wang-lmu"><first>Xinpeng</first><last>Wang</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
 <author><first>Bolei</first><last>Ma</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
 <author><first>Chengzhi</first><last>Hu</last></author>
 <author><first>Leon</first><last>Weber-Genzel</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
@@ -26420,7 +26420,7 @@ and high variation in performance on the subset, suggesting our plausibility cri
 <paper id="513">
 <title>The Potential and Challenges of Evaluating Attitudes, Opinions, and Values in Large Language Models</title>
 <author><first>Bolei</first><last>Ma</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
-<author><first>Xinpeng</first><last>Wang</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
+<author id="xinpeng-wang-lmu"><first>Xinpeng</first><last>Wang</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
 <author><first>Tiancheng</first><last>Hu</last><affiliation>University of Cambridge</affiliation></author>
 <author><first>Anna-Carolina</first><last>Haensch</last><affiliation>University of Maryland, College Park and Ludwig-Maximilians-Universität München</affiliation></author>
 <author><first>Michael A.</first><last>Hedderich</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
@@ -30929,7 +30929,7 @@ hai-coaching/</abstract>
 <paper id="842">
 <title>“Seeing the Big through the Small”: Can <fixed-case>LLM</fixed-case>s Approximate Human Judgment Distributions on <fixed-case>NLI</fixed-case> from a Few Explanations?</title>
 <author><first>Beiduo</first><last>Chen</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
-<author><first>Xinpeng</first><last>Wang</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
+<author id="xinpeng-wang-lmu"><first>Xinpeng</first><last>Wang</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
 <author><first>Siyao</first><last>Peng</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
 <author><first>Robert</first><last>Litschko</last></author>
 <author><first>Anna</first><last>Korhonen</last><affiliation>University of Cambridge</affiliation></author>

data/xml/2025.acl.xml

Lines changed: 1 addition & 1 deletion
@@ -1292,7 +1292,7 @@
 <author><first>Bolei</first><last>Ma</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
 <author><first>Berk</first><last>Yoztyurk</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>
 <author><first>Anna-Carolina</first><last>Haensch</last><affiliation>University of Maryland, College Park and Ludwig-Maximilians-Universität München</affiliation></author>
-<author><first>Xinpeng</first><last>Wang</last></author>
+<author id="xinpeng-wang-lmu"><first>Xinpeng</first><last>Wang</last></author>
 <author><first>Markus</first><last>Herklotz</last></author>
 <author><first>Frauke</first><last>Kreuter</last><affiliation>University of Maryland</affiliation></author>
 <author><first>Barbara</first><last>Plank</last><affiliation>Ludwig-Maximilians-Universität München</affiliation></author>

data/yaml/name_variants.yaml

Lines changed: 7 additions & 0 deletions
@@ -10951,6 +10951,13 @@
 - canonical: {first: Chong, last: Zhang}
   comment: May refer to multiple people
   id: chong-zhang
+- canonical: {first: Xinpeng, last: Wang}
+  degree: Ludwig Maximilian University of Munich (LMU)
+  orcid: 0009-0006-5213-1119
+  id: xinpeng-wang-lmu
+- canonical: {first: Xinpeng, last: Wang}
+  comment: May refer to multiple people
+  id: xinpeng-wang
 - canonical: {first: Shengjie, last: Li}
   comment: University of Texas at Dallas
   id: shengjie-li
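The mechanism behind this commit: an explicit `id` attribute on an `<author>` element pins that paper to a specific entry in name_variants.yaml, while authors without an `id` fall back to name matching (and here "Xinpeng Wang" maps to the ambiguous xinpeng-wang entry). A minimal sketch of that lookup, with the two new entries inlined as Python dicts; this is an illustration, not the Anthology's actual ingestion code:

```python
import xml.etree.ElementTree as ET

# The two entries this commit adds to name_variants.yaml, inlined for illustration.
name_variants = [
    {"canonical": {"first": "Xinpeng", "last": "Wang"},
     "orcid": "0009-0006-5213-1119", "id": "xinpeng-wang-lmu"},
    {"canonical": {"first": "Xinpeng", "last": "Wang"},
     "comment": "May refer to multiple people", "id": "xinpeng-wang"},
]
by_id = {entry["id"]: entry for entry in name_variants}

# An author element carrying an explicit id, as in the 2023.acl.xml change above.
snippet = ('<author id="xinpeng-wang-lmu">'
           '<first>Xinpeng</first><last>Wang</last></author>')
author = ET.fromstring(snippet)

# The id attribute overrides name-based matching; without it, the name
# "Xinpeng Wang" would resolve to the ambiguous xinpeng-wang page.
entry = by_id[author.get("id")]
print(entry["orcid"])  # 0009-0006-5213-1119
```

This is why only the LMU author's papers carry `id="xinpeng-wang-lmu"`: the remaining unattributed papers stay on the shared, explicitly ambiguous page until they can be disambiguated.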
