From 3f0cc054c0344a944ee783805c789a9185f95851 Mon Sep 17 00:00:00 2001
From: Azax4
Date: Fri, 14 Nov 2025 22:21:42 +0000
Subject: [PATCH] Added author page for Bin Wu

---
 data/xml/2022.findings.xml   | 2 +-
 data/xml/2024.ccl.xml        | 2 +-
 data/xml/2024.findings.xml   | 4 ++--
 data/xml/2025.acl.xml        | 4 ++--
 data/xml/2025.emnlp.xml      | 2 +-
 data/xml/2025.findings.xml   | 4 ++--
 data/xml/2025.naacl.xml      | 2 +-
 data/yaml/name_variants.yaml | 8 ++++++++
 8 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/data/xml/2022.findings.xml b/data/xml/2022.findings.xml
index 68c320642a..fee6a61a5c 100644
--- a/data/xml/2022.findings.xml
+++ b/data/xml/2022.findings.xml
@@ -10360,7 +10360,7 @@ Faster and Smaller Speech Translation without Quality Compromise
 A Multi-Modal Knowledge Graph for Classical <fixed-case>C</fixed-case>hinese Poetry
 YuqingLiBeijing University of Posts and Telecommunications
 YuxinZhangRenmin University of China
-BinWuBeijing University of Posts and Telecommunications
+BinWuBeijing University of Posts and Telecommunications
 Ji-RongWenRenmin University of China
 RuihuaSongRenmin University of China
 TingBaiBeijing University of Posts and Telecommunications
diff --git a/data/xml/2024.ccl.xml b/data/xml/2024.ccl.xml
index 1d4b586aa1..5756210e7e 100644
--- a/data/xml/2024.ccl.xml
+++ b/data/xml/2024.ccl.xml
@@ -513,7 +513,7 @@
 JingZhang
 JiangmingShu江明
 YuxiangZhang宇翔
-BinWu
+BinWu
 WeiWang
 JianYu
 JitaoSang基韬
diff --git a/data/xml/2024.findings.xml b/data/xml/2024.findings.xml
index 65cefd28f6..ebbf356eb5 100644
--- a/data/xml/2024.findings.xml
+++ b/data/xml/2024.findings.xml
@@ -20597,7 +20597,7 @@
 YangfuZhu
 YuqingLiBeijing University of Posts and Telecommunications
 DiLiuBeijing University of Posts and Telecommunications
-BinWuBeijing University of Posts and Telecommunications
+BinWuBeijing University of Posts and Telecommunications
 1600-1617
 Given the importance of ancient Chinese in capturing the essence of rich historical and cultural heritage, the rapid advancements in Large Language Models (LLMs) necessitate benchmarks that can effectively evaluate their understanding of ancient contexts. To meet this need, we present AC-EVAL, an innovative benchmark designed to assess the advanced knowledge and reasoning capabilities of LLMs within the context of ancient Chinese. AC-EVAL is structured across three levels of difficulty reflecting different facets of language comprehension: general historical knowledge, short text understanding, and long text comprehension. The benchmark comprises 13 tasks, spanning historical facts, geography, social customs, art, philosophy, classical poetry and prose, providing a comprehensive assessment framework. Our extensive evaluation of top-performing LLMs, tailored for both English and Chinese, reveals a substantial potential for enhancing ancient text comprehension. By highlighting the strengths and weaknesses of LLMs, AC-EVAL aims to promote their development and application forward in the realms of ancient Chinese language education and scholarly research.
 2024.findings-emnlp.87
@@ -21847,7 +21847,7 @@
 ShuaiZhong
 XinmingChen
 JinshengQi
-BinWuBeijing University of Posts and Telecommunications
+BinWuBeijing University of Posts and Telecommunications
 3121-3133
 Video Question Answering (VideoQA) tasks require not only correct answers but also visual evidence. The “localize-then-answer” strategy, while enhancing accuracy and interpretability, faces challenges due to the lack of temporal localization labels in VideoQA datasets. Existing methods often train the models’ localization capabilities indirectly using QA labels, leading to inaccurate localization. Moreover, our experiments show that despite high accuracy, current models depend too heavily on language shortcuts or spurious correlations with irrelevant visual context. To address these issues, we propose a Question-Guided and Answer-Calibrated TRansformer (QGAC-TR), which guides and calibrates localization using question and option texts without localization labels. Furthermore, we design two self-supervised learning tasks to further enhance the model’s refined localization capabilities. Extensive experiments on three public datasets focused on temporal and causal reasoning show that our model not only achieves accuracy comparable to large-scale pretrained models but also leads in localization aspects. Code will be available on GitHub.
 2024.findings-emnlp.176
diff --git a/data/xml/2025.acl.xml b/data/xml/2025.acl.xml
index dd711a6ac3..b78cf66237 100644
--- a/data/xml/2025.acl.xml
+++ b/data/xml/2025.acl.xml
@@ -2581,7 +2581,7 @@
 YutingWeiBeijing University of Posts and Telecommunications
 QiMengBeijing University of Posts and Telecommunications
 YuanxingXuBeijing University of Posts and Telecommunications
-BinWuBeijing University of Posts and Telecommunications
+BinWuBeijing University of Posts and Telecommunications
 3537-3550
 Traditional methods for processing classical Chinese typically segment language understanding into discrete tasks, which overlook crucial background information and reduce user engagement. Large language models (LLMs) provide integrated solutions, yet they entail high computational costs and risks of generating inaccurate historical information. To tackle these challenges, we propose a novel framework, TEACH (conTrastive knowlEdge Adaptive distillation with enhanCed Historical interpretability), which focuses on classical Chinese understanding by integrating word sense disambiguation with sentence translation. This integration leverages a confidence-annotated knowledge base and a step-by-step Chain-of-Thought prompting mechanism to minimize hallucinations and improve semantic analysis. Moreover, TEACH employs contrastive distillation learning to efficiently transfer capabilities from larger models to smaller ones (e.g., Qwen2-1.5B), addressing overly liberal translations. Additionally, we introduce an innovative generation evaluation metric using iterative word alignment, enhancing LLM performance assessments by distinguishing additional information and addressing excessive translation issues. Experiments conducted on real-world datasets validate TEACH’s efficacy in classical Chinese educational scenarios.
 2025.acl-long.178
@@ -16063,7 +16063,7 @@
 Boosting <fixed-case>LLM</fixed-case>’s Molecular Structure Elucidation with Knowledge Enhanced Tree Search Reasoning
 XiangZhuang
-BinWu
+BinWu
 JiyuCui
 KehuaFeng
 XiaotongLiZhejiang University
diff --git a/data/xml/2025.emnlp.xml b/data/xml/2025.emnlp.xml
index 0beb9475e3..ba7bb4c454 100644
--- a/data/xml/2025.emnlp.xml
+++ b/data/xml/2025.emnlp.xml
@@ -3363,7 +3363,7 @@
 ZhengWang
 YuxuanZhangBeijing University of Posts and Telecommunications
 BoWang
-BinWuBeijing University of Posts and Telecommunications
+BinWuBeijing University of Posts and Telecommunications
 4501-4520
 Recent years have witnessed remarkable advances in Large Language Models (LLMs). However, in the task of social relation recognition, Large Language Models (LLMs) encounter significant challenges due to their reliance on sequential training data, which inherently restricts their capacity to effectively model complex graph-structured relationships. To address this limitation, we propose a novel low-coupling method synergizing multimodal temporal Knowledge Graphs and Large Language Models (mtKG-LLM) for social relation reasoning. Specifically, we extract multimodal information from the videos and model the social networks as spatial Knowledge Graphs (KGs) for each scene. Temporal KGs are constructed based on spatial KGs and updated along the timeline for long-term reasoning. Subsequently, we retrieve multi-scale information from the graph-structured knowledge for LLMs to recognize the underlying social relation. Extensive experiments demonstrate that our method has achieved state-of-the-art performance in social relation recognition. Furthermore, our framework exhibits effectiveness in bridging the gap between KGs and LLMs. Our code will be released after acceptance.
 2025.emnlp-main.224
diff --git a/data/xml/2025.findings.xml b/data/xml/2025.findings.xml
index 8e9aac0d99..5177c7237a 100644
--- a/data/xml/2025.findings.xml
+++ b/data/xml/2025.findings.xml
@@ -22488,7 +22488,7 @@
 A Joint Optimization Framework for Enhancing Efficiency of Tool Utilization in <fixed-case>LLM</fixed-case> Agents
-BinWu
+BinWu
 EdgarMeijBloomberg
 EmineYilmaz
 22361-22373
@@ -29933,7 +29933,7 @@
 YuxuanZhangBeijing University of Posts and Telecommunications
 YangfuZhu
 HaoruiWang
-BinWuBeijing University of Posts and Telecommunications
+BinWuBeijing University of Posts and Telecommunications
 5174-5184
 Social relationship recognition, as one of the fundamental tasks in video understanding, contributes to the construction and application of multi-modal knowledge graph. Previous works have mainly focused on two aspects: generating character graphs and multi-modal fusion. However, they often overlook the impact of cultural differences on relationship recognition. Specifically, relationship recognition models are susceptible to being misled by training data from a specific cultural context. This can result in the learning of culture-specific spurious correlations, ultimately restricting the ability to recognize social relationships in different cultures. Therefore, we employ a customized causal graph to analyze the confounding effects of culture in the relationship recognition task. We propose a Cultural Causal Intervention (CCI) model that mitigates the influence of culture as a confounding factor in the visual and textual modalities. Importantly, we also construct a novel video social relation recognition (CVSR) dataset to facilitate discussion and research on cultural factors in video tasks. Extensive experiments conducted on several datasets demonstrate that the proposed model surpasses state-of-the-art methods.
 2025.findings-emnlp.277
diff --git a/data/xml/2025.naacl.xml b/data/xml/2025.naacl.xml
index ca8214ee1f..1fb9165e86 100644
--- a/data/xml/2025.naacl.xml
+++ b/data/xml/2025.naacl.xml
@@ -3234,7 +3234,7 @@
 Entropy-Based Decoding for Retrieval-Augmented Large Language Models
 ZexuanQiuThe Chinese University of Hong Kong
 ZijingOuImperial College London
-BinWu
+BinWu
 JingjingLi
 AiweiLiuTsinghua University
 IrwinKing
diff --git a/data/yaml/name_variants.yaml b/data/yaml/name_variants.yaml
index 55b9332e91..752f7fe1a2 100644
--- a/data/yaml/name_variants.yaml
+++ b/data/yaml/name_variants.yaml
@@ -1158,6 +1158,14 @@
 - canonical: {first: Susan E., last: Brennan}
   variants:
   - {first: Susan, last: Brennan}
+- canonical: {first: Bin, last: Wu}
+  id: bin-wu-ucl
+  orcid: 0000-0002-8677-2321
+  institution: University College London
+  comment: UCL
+- canonical: {first: Bin, last: Wu}
+  id: bin-wu
+  comment: May refer to several people
 - canonical: {first: Xavier, last: Briffault}
   id: xavier-briffault
 - canonical: {first: Ted, last: Briscoe}
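
The `name_variants.yaml` entries added by this patch are what drive the disambiguation: two canonical "Bin Wu" records share one name, and a paper's explicit author id selects one of them. A minimal sketch of how such entries can be resolved, with the two new YAML records mirrored as Python literals; the `resolve` helper and data layout here are illustrative assumptions, not Anthology tooling:

```python
# The two canonical entries this patch adds to name_variants.yaml,
# mirrored as Python dicts (illustrative; keys follow the YAML above).
name_variants = [
    {"canonical": {"first": "Bin", "last": "Wu"},
     "id": "bin-wu-ucl",
     "orcid": "0000-0002-8677-2321",
     "institution": "University College London",
     "comment": "UCL"},
    {"canonical": {"first": "Bin", "last": "Wu"},
     "id": "bin-wu",
     "comment": "May refer to several people"},
]

def resolve(author_id):
    """Map an explicit author id (as attached in the XML) to its entry."""
    return next((e for e in name_variants if e.get("id") == author_id), None)

# Several entries can share one canonical name; only the id is unique.
same_name = [e["id"] for e in name_variants
             if e["canonical"] == {"first": "Bin", "last": "Wu"}]
print(same_name)                             # ['bin-wu-ucl', 'bin-wu']
print(resolve("bin-wu-ucl")["institution"])  # University College London
```

This is why the patch touches both layers: the YAML introduces the ids, and each changed `<author>` line in the XML pins a specific paper to one of them.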