_pages/research.md: 3 additions & 3 deletions
@@ -20,12 +20,12 @@ permalink: /research/
 <!-- ### ✅ Deep Learning System Security -->
 <br>
-#### ***Track 1: Software Engineering for Trustworthy AI***
+#### ***Track 1: Software Engineering for Trustworthy AI-based Systems***
 <!-- **[TOSEM 22, ICSE 21, TACAS 21, ISSTA 21, ASE 20, ICECCS 20, ICSE 19]: Testing, Verifying and Enhancing the Robustness of Deep Learning Models** -->

 *We mean safe like nuclear safety as opposed to safe as in 'trust and safety' - Ilya Sutskever*
-From a software engineering perspective, we are working towards *a systematic testing, verification and repair framework* to evaluate, identify and fix the risks hidden in the AI models (e.g., deep neural networks) or AI-based systems (e.g., autonomous cars, drones, etc), from different dimensions such as robustness, fairness, copyright and safety. This line of research is crucial for stakeholders and AI-empowered industries to be aware of, manage and mitigate the safety and ethic risks in the emerging AI era.
+Modern systems, including emerging AI models (e.g., deep neural networks) and AI-based systems (e.g., autonomous cars and other autonomous systems), are mostly built upon software, making it vital to ensure their trustworthiness from a software engineering perspective. In this line of research, we are working towards *a systematic testing, verification and repair framework* to evaluate, identify and fix the risks hidden in AI models or AI-empowered systems along dimensions such as robustness, fairness, copyright and safety. This is crucial for stakeholders and AI-empowered industries to be aware of, manage and mitigate the safety and ethical risks of the new AI era.

 <!-- including novel testing metrics correlated to robustness, test case generation methods, automatic verification and repair techniques to comprehensively test, verify and enhance the robustness of deep learning models deployed in various application scenarios, e.g., image classification, object detection and NLP. -->
@@ -70,7 +70,7 @@ From a software engineering perspective, we are working towards *a systematic te
 *Formal methods can be incorporated throughout the development process to reduce the prevalence of multiple categories of vulnerabilities - Back to the Building Blocks*
-Software stack is the core driving force behind the digital operation of industrial safety-critical systems (industrial control systems, autonomous systems, etc). It is thus of paramount importance to formally verify and analyze the correctness and security of their foundational software stack, such as OS kernel, compiler, security protocol and control program, for industrial safety-critical systems. In this line of research, we are working on *developing new AI-empowered logical foundations and toolkits to better model, test, verify, monitor and enforce the desired properties for different software layers (especially those commonly used in safety-critical industries).*
+The software stack is the core driving force behind the digital operation of industrial safety-critical systems (industrial control systems, autonomous systems, etc.). It is thus of paramount importance to formally verify and analyze the correctness and security of their foundational software stack, such as the OS kernel, compiler, security protocols and control programs. In this line of research, we are working on *developing new AI-empowered logical foundations and toolkits to better model, test, verify, monitor and enforce the desired properties and behaviors for different software layers (especially those commonly used in safety-critical industries).*
 <!-- We are building systematic methodologies and toolkits including novel testing metrics correlated to robustness, test case generation methods, automatic verification and repair techniques to comprehensively test, verify and enhance the robustness of deep learning models deployed in various application scenarios, e.g., image classification, object detection and NLP. -->