---
layout: page
permalink: /talks/
title: Talks
description: Invited talks and seminars at ENCODE Lab
nav: true
nav_order: 4
---

<style>
.talk-item {
  display: flex;
  flex-direction: row;
  align-items: flex-start;
  padding: 1rem 0;
  border-bottom: 1px solid var(--global-divider-color);
  gap: 1rem;
}

.talk-item:last-child {
  border-bottom: none;
}

.talk-image {
  width: 100px;
  min-width: 100px;
  height: 100px;
  object-fit: cover;
  border-radius: 50%;
}

.talk-content {
  flex: 1;
}

.talk-title {
  font-size: 1.1rem;
  font-weight: 600;
  margin: 0 0 0.3rem 0;
  color: var(--global-theme-color);
}

.talk-speaker {
  font-size: 1rem;
  font-weight: 500;
  margin: 0.3rem 0;
}

.talk-affiliation {
  font-size: 0.9rem;
  color: var(--global-text-color-light);
  margin: 0.2rem 0;
}

.talk-date {
  font-size: 0.85rem;
  color: var(--global-text-color-light);
  margin: 0.3rem 0;
}

.talk-meta {
  font-size: 0.85rem;
  color: var(--global-text-color-light);
  margin: 0.2rem 0;
}

.talk-abstract {
  font-size: 0.9rem;
  line-height: 1.5;
  margin: 0.5rem 0;
  color: var(--global-text-color);
}

.talk-links {
  display: flex;
  gap: 0.8rem;
  margin-top: 0.5rem;
}

.talk-links a {
  font-size: 0.85rem;
  color: var(--global-theme-color);
}

@media (max-width: 576px) {
  .talk-item {
    flex-direction: column;
    align-items: center;
    text-align: center;
  }

  .talk-links {
    justify-content: center;
  }
}
</style>

## Upcoming Talks

*No upcoming talks scheduled. Stay tuned!*

---

## Past Talks

<div class="talk-item">
  <img class="talk-image" src="../assets/img/talks/disheng_liu.jpg" alt="Disheng Liu"/>
  <div class="talk-content">
    <h3 class="talk-title">Spatial Intelligence in Vision-Language Models: What It Is, What Works, and What's Next</h3>
    <p class="talk-speaker">Disheng Liu, Ph.D. Candidate</p>
    <p class="talk-affiliation">Case Western Reserve University</p>
    <p class="talk-date"><i class="fa-regular fa-calendar"></i> December 30, 2025, 11:00-12:00</p>
    <p class="talk-meta"><i class="fa-solid fa-location-dot"></i> Tencent Meeting 682-127-813</p>
    <p class="talk-meta"><i class="fa-solid fa-user"></i> Host: Huan Wang</p>
    <p class="talk-abstract">
      Vision-Language Models (VLMs) have achieved remarkable success but exhibit a fundamental deficiency in spatial intelligence, a critical capability for progress in embodied AI, autonomous driving, and spatially coherent generation. This talk provides a systematic review that spans the foundations of spatial intelligence in VLMs, root causes of spatial limitations, enhancement methodologies, evaluation protocols, and real-world applications.
    </p>
    <div class="talk-links">
      <a href="https://dishengll.github.io/" title="Homepage"><i class="fa-solid fa-house"></i> Homepage</a>
    </div>
  </div>
</div>

<div class="talk-item">
  <img class="talk-image" src="../assets/img/talks/mayi.png" alt="Yi Ma"/>
  <div class="talk-content">
    <h3 class="talk-title">Pursuing the Nature of Intelligence</h3>
    <p class="talk-speaker">Yi Ma, Chair Professor</p>
    <p class="talk-affiliation">School of Computing and Data Science, The University of Hong Kong &amp; EECS Department, UC Berkeley</p>
    <p class="talk-date"><i class="fa-regular fa-calendar"></i> October 14, 2025, 15:30-17:00</p>
    <p class="talk-meta"><i class="fa-solid fa-location-dot"></i> E10-201, Yungu Campus</p>
    <p class="talk-meta"><i class="fa-solid fa-user"></i> Hosts: Huan Wang, Yandong Wen</p>
    <p class="talk-abstract">
      In this talk, we will try to clarify different levels and mechanisms of intelligence from historical, scientific, mathematical, and computational perspectives. By tracing the evolution of intelligence in nature, from phylogenetic to ontogenetic, societal, and scientific intelligence, we will try to shed light on the true nature of the seemingly dramatic advances in machine intelligence over the past decade. We achieve this goal by developing a principled mathematical framework that explains the practices of deep learning deductively, from the first principle of pursuing low-dimensional distributions.
    </p>
    <div class="talk-links">
      <a href="https://people.eecs.berkeley.edu/~yima/" title="Homepage"><i class="fa-solid fa-house"></i> Homepage</a>
    </div>
  </div>
</div>

<div class="talk-item">
  <img class="talk-image" src="../assets/img/talks/yiyi_liao.png" alt="Yiyi Liao"/>
  <div class="talk-content">
    <h3 class="talk-title">3D Generative Models through the Lens of Coordinate Systems</h3>
    <p class="talk-speaker">Yiyi Liao, Assistant Professor</p>
    <p class="talk-affiliation">Zhejiang University</p>
    <p class="talk-date"><i class="fa-regular fa-calendar"></i> September 4, 2025, 16:00-17:00</p>
    <p class="talk-meta"><i class="fa-solid fa-location-dot"></i> Room E10-215, 2F, Yungu Campus</p>
    <p class="talk-meta"><i class="fa-solid fa-user"></i> Host: Huan Wang</p>
    <p class="talk-abstract">
      3D content offers advantages such as real-time interactivity and plays an important role in AR/VR applications, where both generation and compression are critical for practical use. This talk explores 3D content generation from the perspective of coordinate systems, highlighting the distinctions and applications of absolute, observer, and intrinsic coordinates. We present object generation methods developed in the intrinsic coordinate system and scene generation approaches formulated in the observer coordinate system, providing a structured view of how coordinate choices shape 3D generative modeling.
    </p>
    <div class="talk-links">
      <a href="https://yiyiliao.github.io/" title="Homepage"><i class="fa-solid fa-house"></i> Homepage</a>
    </div>
  </div>
</div>