_data/news.yml (12 additions, 4 deletions)
@@ -1,15 +1,23 @@
+- date: Nov 2024
+  headline: ICFEM 2025 will be in Hangzhou, consider submitting your work!
+
+- date: Oct 2024
+  headline: Prof. Wang will serve as the PC member of ACM CCS 2025, consider submitting your work!
+
+- date: Oct 2024
+  headline: One paper on TVM fuzzing is accepted by TOSEM, congrats to Xiangxiang!
 
 - date: May 2024
   headline: We release our safety evaluation benchmark (largest to date) for LLMs powered by automatic and adaptive test generation. See details at [Paper link](https://www.arxiv.org/abs/2405.14191), [Github link](https://github.com/IS2Lab/S-Eval), [HuggingFace Leaderboard Link](https://huggingface.co/spaces/IS2Lab/S-Eval).
 
 - date: Apr 2024
-  headline: Prof. Wang will serve as the PC member of ISSTA 2025, consider submitting your best work!
+  headline: Prof. Wang will serve as the PC member of ISSTA 2025, consider submitting your work!
 
 - date: Apr 2024
   headline: Jianan and Prof. Wang attended ICSE 2024 and presented our paper on verification guided synthesis for repairing deep neural networks!
 
 - date: Mar 2024
-  headline: Prof. Wang will serve as the PC member of ISSRE 2024 and ChinaSoft/FMAC 2024, consider submitting your best work!
+  headline: Prof. Wang will serve as the PC member of ISSRE 2024 and ChinaSoft/FMAC 2024, consider submitting your work!
 
 - date: Mar 2024
   headline: Xiaoxia was invited to give a talk on our survey paper on prompting frameworks for LLM at the AGI Leap Summit! Her work also won the Best Paper Award of the summit, congrats!
@@ -18,10 +26,10 @@
   headline: One paper on neural network debugging is accepted by ISSTA 2024, congrats to Jialuo!
 
 - date: Dec 2023
-  headline: Prof. Wang will serve as the PC member of ISSTA 2024, consider submitting your best work!
+  headline: Prof. Wang will serve as the PC member of ISSTA 2024, consider submitting your work!
 
 - date: Nov 2023
-  headline: Prof. Wang will serve as the PC member of ICSE 2025, consider submitting your best work!
+  headline: Prof. Wang will serve as the PC member of ICSE 2025, consider submitting your work!
 
 - date: Oct 2023
   headline: Prof. Wang will serve as the PC member of TASE 2024, ANT 2024 and ACNS/SiMLA 2024.
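Since `_data/news.yml` is a standard Jekyll data file, each `date`/`headline` pair above is exposed to the site's Liquid templates as `site.data.news`. Below is a minimal sketch of how a layout or include might render the list; the actual template is not part of this diff, so the include name, wrapper markup and the `limit` value are assumptions, while `markdownify` is the stock Jekyll filter that renders the inline Markdown links in the headlines.

<!-- hypothetical include, e.g. _includes/news.html; not part of this diff -->
<ul class="news-list">
  {% for item in site.data.news limit: 10 %}
    <!-- item.date and item.headline come straight from _data/news.yml -->
    <li><strong>{{ item.date }}</strong> {{ item.headline | markdownify }}</li>
  {% endfor %}
</ul>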
_pages/home.md (12 additions, 10 deletions)
@@ -6,16 +6,17 @@ sitemap: false
 permalink: /
 ---
 
-**About** The *Intelligent System Security (IS2) Lab* is a research lab led by [Prof. Jingyi Wang](https://wang-jingyi.github.io/) affiliated with the College of Control Science and Engineering, [Zhejiang University](https://www.zju.edu.cn/english/), Hangzhou, China. Our lab's work lie in the intersection of formal methods, software engineering, artificial intelligence (AI) and safety/security. Specificaclly, we aim to develop novel software engineering techniques (often from a formal methods perspective) towards building more trustworthy AI-based or safety-critical industrial systems or software.
+**About** The *Intelligent System Security (IS2) Lab* is a research group led by [Prof. Jingyi Wang](https://wang-jingyi.github.io/) affiliated with the College of Control Science and Engineering, [Zhejiang University](https://www.zju.edu.cn/english/), Hangzhou, China. Our lab's research focuses on developing novel software engineering (SE) methodologies towards building more trustworthy systems or software.
+<!-- developing novel software engineering techniques towards building more trustworthy AI or more secure systems and software. -->
+<!-- lie in the intersection of formal methods, software engineering, artificial intelligence (AI) and safety/security. Specifically, we aim to develop novel software engineering techniques (often from a formal methods perspective) towards building more trustworthy AI-based or safety-critical industrial systems or software. -->
 <!-- *provide certifiable (and ideally provable) reliability or security guarantees for practical intelligent or distributed systems* like autonomous driving car, industrial control system, blockchain system, etc. -->
-<!-- Specifically, we are working on the following exiciting research topics: -->
-The detailed research topics include:
+<!-- Specifically, we are working on the following existing research topics: -->
+The detailed research topics of our lab include:
 
-- Quality assurance/certification (testing, verification and repair, etc) of AI models (especially *large language models*) or AI-based systems/applications;
-- Model-based rigrous engineering of safety-cirtical software;
-- Verification of concurrent (reactive) systems, e.g., OS kernels and distributed control systems;
-- Verification of security protocols.
-<!-- - Other related topics like fuzzing, symbolic execution, runtime verification, etc. -->
+- SE4AI, e.g., testing, verification and repair of AI models or AI-based systems/applications;
+- Verification of concurrent (reactive) systems;
+- Verification of security protocols;
+- Other related topics like fuzzing, symbolic execution and runtime monitoring.
@@ -48,10 +49,11 @@ The detailed research topics include:
 </a>
 </div>
 
-**Collaborations** Our lab has established active collaborations with top universities like ETH, UC Berkeley, UIUC, National University of Singapore, etc. Moreover, we are also working closely with our industrial partners like Huawei, Alibaba and Ant Group to tackle real-world security challenges.
+**Collaborations** Our lab aims to conduct research with practical relevance and impact by working closely with our industrial partners like Huawei, Alibaba and Ant Group. We are also actively collaborating with top universities like ETH, UC Berkeley, UIUC, University of Manchester, National University of Singapore, Singapore Management University, etc.
+<!-- Moreover, we are also working closely with our industrial partners like Huawei, Alibaba and Ant Group to tackle real-world challenges. -->
 
 
-**Vacencies** Our lab is always actively looking for self-motivated PostDoc/PhD/master/research assistants/research interns with competitive packages to join our group at ZJU to work on any of the above topics. Feel free to contact Prof. Wang with CV (and transcript for PhD/Master applications) if you are interested.
+**Vacancies** Our lab is always actively looking for self-motivated PostDoc/PhD/master/research assistants/research interns with competitive packages to work with us at ZJU. Feel free to contact Prof. Wang with CV (and transcript for PhD/Master applications) if you are interested. We welcome candidates from diverse background to apply.
 <!-- Preferred PhD candidates should be good at programming or maths, and more importantly love doing research. -->
 <!-- For ZJU students, kindly check out my Google Calendar if you wish to have a talk. -->
_pages/research.md (3 additions, 3 deletions)
@@ -20,7 +20,7 @@ permalink: /research/
 <!-- ### ✅ Deep Learning System Security -->
 <br>
 
-#### ***Theme 1: Testing, Verification and Repair of AI Models or AI-based Systems***
+#### ***Theme 1: SE for Trustworthy AI: Testing, Verification and Repair of AI Models or AI-based Systems***
 <!-- **[TOSEM 22, ICSE 21, TACAS 21, ISSTA 21, ASE 20, ICECCS 20, ICSE 19]: Testing, Verifying and Enhancing the Robustness of Deep Learning Models** -->
 
 For AI models (e.g., large language models or deep learning models in general) or AI-based systems (e.g., autonomous cars), we are working towards *a systematic testing>verification>repair loop to comprehensively and automatically evaluate, identify and fix the potential risks hidden in multiple dimensions, e.g., robustness, fairness, safety and copyright.* This line of research is crucial for human beings to be aware of, manage and mitigate the risks in the emergence of diverse AI models and AI-based systems.
@@ -33,10 +33,10 @@ For AI models (e.g., large language models or deep learning models in general) o
 <!-- <br> -->
 <br>
 
-#### ***Theme 2: System Software Testing or Verification for Industrial Safety-critical Systems***
+#### ***Theme 2: Testing, Verification and Security of Industrial Safety-critical Systems***
 <!-- **[TOSEM 22, ICSE 21, TACAS 21, ISSTA 21, ASE 20, ICECCS 20, ICSE 19]: Testing, Verifying and Enhancing the Robustness of Deep Learning Models** -->
 
-Software is the core driving force for the digital operation of industrial safety-critical systems (industrial control systems, autonomous systems, etc). It is thus crucial to formally verify the correctness of their software foundations (e.g., OS kernel, compilers, security protocols or control programs) for industrial safety-critical systems. In this line of research, we are working on *developing new logic foundations and specifications to better model, test and verify the desired safety or security properties in different system software layers (especially those commonly used in safety-critical industries).*
+Software is the core driving force for the digital operation of industrial safety-critical systems (industrial control systems, autonomous systems, etc). It is thus crucial to formally verify their software foundations (e.g., OS kernel, compilers, security protocols or control programs) for industrial safety-critical systems. In this line of research, we are working on *developing new logical foundations and specifications to better model, test and verify the desired security properties in different system software layers (especially those commonly used in safety-critical industries).*
 <!-- We are building systematic methodologies and toolkits including novel testing metrics correlated to robustness, test case generation methods, automatic verification and repair techniques to comprehensively test, verify and enhance the robustness of deep learning models deployed in various application scenarios, e.g., image classification, object detection and NLP. -->
 
 *Related publications: [TSE 24, AsiaCCS/CPSS 24, TSE 23, CCS 23, CONFEST/FMICS 23, FITEE 22, IoT 22, TSE 21, ICSE 18, DSN 18, STTT 18, FM 18, FASE 17, FM 16]*