Modern systems, including emerging AI models (e.g., deep neural networks) and AI-based systems (e.g., autonomous cars and other autonomous systems), are built largely on software, making it vital to ensure their trustworthiness from a software engineering perspective. In this line of research, we are working towards *a systematic testing, verification and repair framework* to evaluate, identify and fix the risks hidden in AI models or AI-empowered systems, across dimensions such as robustness, fairness, copyright and safety. This is crucial for helping stakeholders and AI-empowered industries recognize, manage and mitigate the safety and ethical risks of the new AI era.
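
Below is a minimal, hypothetical sketch of what one such test along the robustness dimension could look like: checking whether a classifier's prediction survives a small FGSM-style input perturbation. It assumes PyTorch; the toy model, random inputs, and the `epsilon` value are purely illustrative and not part of any framework described above.

```python
import torch
import torch.nn as nn

def fgsm_robustness_check(model, x, y, epsilon=0.03):
    """Return True if the model's predictions are unchanged under an
    FGSM perturbation of magnitude epsilon (a simple robustness probe)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each input in the direction that increases the loss.
    x_adv = (x + epsilon * x.grad.sign()).detach()
    clean_pred = model(x).argmax(dim=1)
    adv_pred = model(x_adv).argmax(dim=1)
    return bool((clean_pred == adv_pred).all())

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy classifier and placeholder data, for illustration only.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)
    y = torch.randint(0, 10, (4,))
    print("robust under epsilon=0.03:", fgsm_robustness_check(model, x, y))
```

In practice, such a probe would be one of many test oracles; analogous checks can be formulated for fairness (e.g., prediction consistency across protected-attribute flips) or safety constraints.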