
Commit a14a0d3

akdgjg
1 parent fc9c112 commit a14a0d3

2 files changed: +5 −5 lines changed

_pages/home.md

Lines changed: 2 additions & 2 deletions
@@ -13,9 +13,9 @@ permalink: /
 <!-- Specifically, we are working on the following existing research topics: -->
 The detailed research topics of our lab include:

-- Software engineering for trustworthy AI, e.g., testing, verification and repair of AI models or AI-based systems/applications;
-- Formal design and analysis of security protocols;
+- Software engineering for trustworthy AI systems, e.g., testing, verification and repair of AI models or AI-based systems/applications;
 - Formal analysis of system or software security;
+- Formal design and analysis of security protocols;
 - Other related topics like fuzzing, symbolic execution, concolic testing and runtime verification.



_pages/research.md

Lines changed: 3 additions & 3 deletions
@@ -20,12 +20,12 @@ permalink: /research/
 <!-- ### ✅ Deep Learning System Security -->
 <br>

-#### ***Track 1: Software Engineering for Trustworthy AI***
+#### ***Track 1: Software Engineering for Trustworthy AI-based Systems***
 <!-- **[TOSEM 22, ICSE 21, TACAS 21, ISSTA 21, ASE 20, ICECCS 20, ICSE 19]: Testing, Verifying and Enhancing the Robustness of Deep Learning Models** -->

 *We mean safe like nuclear safety as opposed to safe as in ‘trust and safety' - Ilya Sutskever*

-From a software engineering perspective, we are working towards *a systematic testing, verification and repair framework* to evaluate, identify and fix the risks hidden in the AI models (e.g., deep neural networks) or AI-based systems (e.g., autonomous cars, drones, etc), from different dimensions such as robustness, fairness, copyright and safety. This line of research is crucial for stakeholders and AI-empowered industries to be aware of, manage and mitigate the safety and ethic risks in the emerging AI era.
+Modern systems, including emerging AI models (e.g., deep neural networks) and AI-based systems (e.g., autonomous cars, autonomous systems, etc.), are mostly built upon software, making it vital to ensure their trustworthiness from a software engineering perspective. In this line of research, we are working towards *a systematic testing, verification and repair framework* to evaluate, identify and fix the risks hidden in AI models or AI-empowered systems, across dimensions such as robustness, fairness, copyright and safety. This is crucial for stakeholders and AI-empowered industries to be aware of, manage and mitigate the safety and ethical risks in the new AI era.

 <!-- including novel testing metrics correlated to robustness, test case generation methods, automatic verification and repair techniques to comprehensively test, verify and enhance the robustness of deep learning models deployed in various application scenarios, e.g., image classification, object detection and NLP. -->


@@ -70,7 +70,7 @@ From a software engineering perspective, we are working towards *a systematic te

 *Formal methods can be incorporated throughout the development process to reduce the prevalence of multiple categories of vulnerabilities - Back to the Building Blocks*

-Software stack is the core driving force behind the digital operation of industrial safety-critical systems (industrial control systems, autonomous systems, etc). It is thus of paramount importance to formally verify and analyze the correctness and security of their foundational software stack, such as OS kernel, compiler, security protocol and control program, for industrial safety-critical systems. In this line of research, we are working on *developing new AI-empowered logical foundations and toolkits to better model, test, verify, monitor and enforce the desired properties for different software layers (especially those commonly used in safety-critical industries).*
+The software stack is the core driving force behind the digital operation of industrial safety-critical systems (industrial control systems, autonomous systems, etc.). It is thus of paramount importance to formally verify and analyze the correctness and security of their foundational software stack, such as the OS kernel, compiler, security protocols and control programs. In this line of research, we are working on *developing new AI-empowered logical foundations and toolkits to better model, test, verify, monitor and enforce the desired properties and behaviors for different software layers (especially those commonly used in safety-critical industries).*
 <!-- We are building systematic methodologies and toolkits including novel testing metrics correlated to robustness, test case generation methods, automatic verification and repair techniques to comprehensively test, verify and enhance the robustness of deep learning models deployed in various application scenarios, e.g., image classification, object detection and NLP. -->

 *Related publications: [ICSE 25, WWW 25, TSE 24, AsiaCCS/CPSS 24, TSE 23, CCS 23, CONFEST/FMICS 23, FITEE 22, IoT 22, TSE 18, ICSE 18, DSN 18, STTT 18, FM 18, FASE 17, FM 16]*

0 commit comments
