Closed
Binary file modified _cite/.cache/cache.db
Binary file not shown.
71 changes: 36 additions & 35 deletions _data/citations.yaml
@@ -1,5 +1,41 @@
# DO NOT EDIT, GENERATED AUTOMATICALLY

- id: doi:10.48550/ARXIV.2506.03167
title: Distributionally Robust Wireless Semantic Communication with Large AI Models
authors:
- Long Tan Le
- Senura Hansaja Wanasekara
- Zerun Niu
- Yansong Shi
- Nguyen H. Tran
- Phuong Vo
- Walid Saad
- Dusit Niyato
- Zhu Han
- Choong Seon Hong
- H. Vincent Poor
publisher: arXiv
date: '2024-01-01'
link: https://doi.org/g9v3k4
orcid: 0009-0007-4897-8381
plugin: sources.py
file: sources.yaml
type: paper
description: A distributionally robust approach for wireless semantic communication
with large AI models that addresses uncertainty in wireless channels and semantic
information transmission, enhancing reliability in AI-powered communication systems.
buttons:
- type: source
text: ArXiv
link: https://arxiv.org/abs/2506.03167
tags:
- semantic communication
- large AI models
- distributionally robust optimization
- wireless communication
- uncertainty
- robustness
journal: arXiv preprint
- id: arxiv:2006.08848
title: Personalized Federated Learning with Moreau Envelopes
authors:
@@ -204,41 +240,6 @@
journal: Mobile Networks and Applications
plugin: sources.py
file: sources.yaml
- id: doi:10.48550/ARXIV.2506.03167
title: Distributionally Robust Wireless Semantic Communication with Large AI Models
authors:
- Long Tan Le
- Senura Hansaja Wanasekara
- Zerun Niu
- Yansong Shi
- Nguyen H. Tran
- Phuong Vo
- Walid Saad
- Dusit Niyato
- Zhu Han
- Choong Seon Hong
- H. Vincent Poor
publisher: arXiv
date: '2024-01-01'
link: https://doi.org/g9v3k4
type: paper
description: A distributionally robust approach for wireless semantic communication
with large AI models that addresses uncertainty in wireless channels and semantic
information transmission, enhancing reliability in AI-powered communication systems.
buttons:
- type: source
text: ArXiv
link: https://arxiv.org/abs/2506.03167
tags:
- semantic communication
- large AI models
- distributionally robust optimization
- wireless communication
- uncertainty
- robustness
journal: arXiv preprint
plugin: sources.py
file: sources.yaml
- id: arxiv:2407.07421
title: Federated PCA on Grassmann Manifold for IoT Anomaly Detection
authors:
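The hunks above move the `doi:10.48550/ARXIV.2506.03167` entry to the top of `citations.yaml` (now with `orcid`, `plugin`, and `file` fields) and remove its older copy further down, so the entry is no longer listed twice. A quick way to catch this kind of duplication before it lands is to scan the file for repeated top-level ids. This is a minimal sketch, not part of this repo's `_cite` tooling; the `duplicate_ids` helper is hypothetical and assumes each entry begins with a `- id:` line, as in `_data/citations.yaml`, rather than using a full YAML parser:

```python
def duplicate_ids(yaml_text: str) -> list[str]:
    """Return ids that appear more than once in a citations.yaml-style document.

    Line-based sketch: treats every line beginning with '- id:' as the start
    of an entry and compares the trailing id strings.
    """
    seen: set[str] = set()
    dupes: list[str] = []
    for line in yaml_text.splitlines():
        line = line.strip()
        if line.startswith("- id:"):
            cid = line[len("- id:"):].strip()
            if cid in seen and cid not in dupes:
                dupes.append(cid)
            seen.add(cid)
    return dupes
```

Running this over the pre-merge file would have flagged `doi:10.48550/ARXIV.2506.03167`; after this diff it reports nothing.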
3 changes: 1 addition & 2 deletions _data/orcid.yaml
@@ -1,2 +1 @@


- orcid: 0009-0007-4897-8381
2 changes: 1 addition & 1 deletion index.md
@@ -5,7 +5,7 @@

{% capture text %}

We conduct cutting-edge research in artificial intelligence, federated learning, and large language models. Our work focuses on developing efficient, fair, and interpretable AI systems that can operate effectively in distributed and resource-constrained environments.
Our research integrates distributed computing, mathematical optimization, and artificial intelligence to build scalable, efficient, and interpretable learning systems. By combining rigorous theoretical foundations with practical deployment, we deliver intelligent solutions that perform reliably in real-world, resource-constrained environments.

{%
include button.html
4 changes: 2 additions & 2 deletions research/index.md
@@ -7,9 +7,9 @@ nav:

# {% include icon.html icon="fa-solid fa-microscope" %}Research

Our research focuses on cutting-edge areas in artificial intelligence, machine learning, and distributed systems. We investigate novel approaches to improve AI efficiency, fairness, and interpretability, with particular emphasis on large language models, federated learning, and edge computing.
Our research investigates the interplay of distributed computing, mathematical optimization, and artificial intelligence to create scalable, efficient, and trustworthy machine learning systems. We develop algorithms that jointly exploit distributed architectures and optimization principles to advance the efficiency, fairness, and interpretability of modern AI, with particular emphasis on large language models, federated and edge learning, and real-time collaborative inference.

Our team explores both theoretical foundations and practical applications, contributing to the advancement of intelligent systems that can operate effectively in real-world, resource-constrained environments.
At the same time, the team explores both theoretical foundations and practical applications, contributing to the advancement of intelligent systems capable of operating effectively in real-world, resource-constrained environments. This integrated approach delivers new theoretical insights and deployable frameworks for next-generation AI across heterogeneous and dynamic computational settings.

{% include section.html %}
