---
title: "Invited Speakers Seminar — April 28, 2025"
authors:
 - xtra
tags: [seminar, iclr2025, invited-speakers]
image: ./banner.png
---

We hosted an exclusive seminar featuring five researchers with papers accepted at **ICLR 2025**, covering multimodal learning, graph foundation models, federated learning, LLM agent security, and cognitive limitations of LLMs.

**Date:** April 28, 2025 (Monday)
**Time:** 13:00 – 18:00
**Venue:** COM2-04-02 (Executive Classroom), 15 Computing Dr, Singapore 117418

<!-- truncate -->

## Speakers

<div style={{display: 'grid', gridTemplateColumns: 'repeat(auto-fill, minmax(260px, 1fr))', gap: '1.5rem', margin: '1.5rem 0'}}>

<div style={{textAlign: 'center'}}>
  <img src={require('./divyam-madaan.jpg').default} alt="Divyam Madaan" style={{width: 120, height: 120, borderRadius: '50%', objectFit: 'cover'}} />
  <h3 style={{marginTop: '0.75rem', marginBottom: '0.25rem'}}>Divyam Madaan</h3>
  <p style={{margin: 0, color: 'var(--ifm-color-emphasis-600)'}}>PhD Student @ New York University</p>
</div>

<div style={{textAlign: 'center'}}>
  <img src={require('./zhikai-chen.jpeg').default} alt="Zhikai Chen" style={{width: 120, height: 120, borderRadius: '50%', objectFit: 'cover'}} />
  <h3 style={{marginTop: '0.75rem', marginBottom: '0.25rem'}}>Zhikai Chen</h3>
  <p style={{margin: 0, color: 'var(--ifm-color-emphasis-600)'}}>PhD Student @ Michigan State University</p>
</div>

<div style={{textAlign: 'center'}}>
  <img src={require('./zexi-li.jpg').default} alt="Zexi Li" style={{width: 120, height: 120, borderRadius: '50%', objectFit: 'cover'}} />
  <h3 style={{marginTop: '0.75rem', marginBottom: '0.25rem'}}>Zexi Li</h3>
  <p style={{margin: 0, color: 'var(--ifm-color-emphasis-600)'}}>PhD Student @ Zhejiang University</p>
</div>

<div style={{textAlign: 'center'}}>
  <img src={require('./hanrong-zhang.jpeg').default} alt="Hanrong Zhang" style={{width: 120, height: 120, borderRadius: '50%', objectFit: 'cover'}} />
  <h3 style={{marginTop: '0.75rem', marginBottom: '0.25rem'}}>Hanrong Zhang</h3>
  <p style={{margin: 0, color: 'var(--ifm-color-emphasis-600)'}}>MS @ ZJU-UIUC</p>
</div>

<div style={{textAlign: 'center'}}>
  <img src={require('./jen-tse-huang.jpeg').default} alt="Jen-Tse Huang" style={{width: 120, height: 120, borderRadius: '50%', objectFit: 'cover'}} />
  <h3 style={{marginTop: '0.75rem', marginBottom: '0.25rem'}}>Jen-Tse Huang</h3>
  <p style={{margin: 0, color: 'var(--ifm-color-emphasis-600)'}}>Postdoc @ Johns Hopkins University</p>
</div>

</div>

---

## Program

| Time | Session |
|------|---------|
| 12:30 – 13:10 | Tea and Snacks |
| 13:10 – 13:20 | Sharing by Xtra Group — Qian Wang |
| 13:20 – 13:50 | Divyam Madaan |
| 13:50 – 14:30 | Zhikai Chen *(Remote)* |
| 14:30 – 15:00 | Zexi Li |
| 15:00 – 15:30 | Break & Discussion |
| 15:30 – 16:00 | Hanrong Zhang |
| 16:00 – 16:30 | Jen-Tse Huang |
| 16:30 – 17:00 | Break & Discussion |
| 17:00 – | Dinner Buffet |

---

## Talks

### Multi-modal Learning: A Look Back and the Road Ahead
**Divyam Madaan** · PhD Student @ NYU, advised by Sumit Chopra and Kyunghyun Cho

<img src={require('./divyam-madaan.jpg').default} alt="Divyam Madaan" style={{float: 'right', width: 100, height: 100, borderRadius: '50%', objectFit: 'cover', marginLeft: '1rem', marginBottom: '0.5rem'}} />

Divyam's research focuses on models that learn from multiple modalities and generalize across distribution shifts, with an emphasis on healthcare applications. He holds an M.S. from KAIST and has published at ICML, NeurIPS, CVPR, and ICLR (oral and spotlight).

Supervised multi-modal learning maps multiple modalities to a target label. Prior work captures either *inter-modality* dependencies (relationships between modalities and the label) or *intra-modality* dependencies (relationships within a single modality) in isolation — but not both. This talk presents **I2M2** (Inter & Intra-Modality Modeling), a generative-model-based framework that integrates both types of dependencies, yielding improved predictions on real-world healthcare and vision-language datasets.

<div style={{clear: 'both'}} />

---

### Graph Foundation Models: Addressing Feature and Task Heterogeneity
**Zhikai Chen** · PhD Student @ Michigan State University, advised by Prof. Jiliang Tang *(Remote)*

<img src={require('./zhikai-chen.jpeg').default} alt="Zhikai Chen" style={{float: 'right', width: 100, height: 100, borderRadius: '50%', objectFit: 'cover', marginLeft: '1rem', marginBottom: '0.5rem'}} />

Zhikai's research lies in graph machine learning and geometric deep learning, with a focus on scalable and transferable models for structured data. He has published at ICLR, NeurIPS, ICML, and LOG.

Foundation models achieve their power through *homogenization* (a unified framework) and *emergence* (capabilities that scale with size and data). Applying these ideas to graphs requires overcoming two core challenges: **feature heterogeneity** (diverse types and structures of graph data) and **task heterogeneity** (a single model excelling at multiple tasks). This talk argues that task heterogeneity is the deeper theoretical obstacle and charts a path toward true graph foundation models.

<div style={{clear: 'both'}} />

---

### Foundation Models under Model Parameter Perspective: Editing, Fusion, and Generation
**Zexi Li** · PhD Student @ Zhejiang University, Visiting Researcher @ University of Cambridge

<img src={require('./zexi-li.jpg').default} alt="Zexi Li" style={{float: 'right', width: 100, height: 100, borderRadius: '50%', objectFit: 'cover', marginLeft: '1rem', marginBottom: '0.5rem'}} />

Zexi focuses on LLMs, optimization, generalization, and personalization under federated and trustworthy setups, through the lens of parameter space and learning dynamics. He has nine co-first-authored papers at venues including ICML, ICCV, NeurIPS, Cell Press Patterns, and IEEE TMC.

Understanding model parameters helps uncover the "physics" of AI. This talk covers three perspectives: (1) **lifelong editing** of LLMs' parametric memory, (2) **parameter fusion** for better federated learning, and (3) **personalized model generation** via diffusion models over parameter space.

<div style={{clear: 'both'}} />

---

### Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents
**Hanrong Zhang** · MS Computer Engineering @ ZJU-UIUC

<img src={require('./hanrong-zhang.jpeg').default} alt="Hanrong Zhang" style={{float: 'right', width: 100, height: 100, borderRadius: '50%', objectFit: 'cover', marginLeft: '1rem', marginBottom: '0.5rem'}} />

Hanrong's interests span multimodal LLMs, LLM-based agents, self-supervised learning, and trustworthy ML.

LLM-based agents can use external tools and memory to solve complex tasks — but they also introduce critical security vulnerabilities. **ASB** is a comprehensive framework covering 10 scenarios, 10 agents, 400+ tools, and 27 attack/defense methods. Benchmarking across 13 LLM backbones reveals attack success rates as high as **84.30%**, with current defenses showing limited effectiveness, underscoring the urgency of agent security research.

<div style={{clear: 'both'}} />

---

### LLMs Do Not Have Human-Like Working Memories
**Jen-Tse Huang** · Postdoc @ Johns Hopkins University (CLSP), with Prof. Mark Dredze

<img src={require('./jen-tse-huang.jpeg').default} alt="Jen-Tse Huang" style={{float: 'right', width: 100, height: 100, borderRadius: '50%', objectFit: 'cover', marginLeft: '1rem', marginBottom: '0.5rem'}} />

Jen-Tse's research evaluates single- and multi-agent LLM systems through the lens of social science.

Human working memory enables not just temporary storage but active processing and utilization of information. Without it, individuals produce incoherent conversations, fail at mental reasoning, and contradict themselves. Through three experiments — a Number Guessing Game, a Yes-or-No Game, and Math Magic — this talk demonstrates that current LLMs across multiple model families lack this cognitive ability, posing a fundamental challenge on the road to AGI.

<div style={{clear: 'both'}} />

---

## Organizers

This event was organized by **Qian Wang**, **Zhaomin Wu**, **Bingqiao Luo**, and **Junyi Hou** from the Xtra Computing Group.