AI systems, including emerging AI models (e.g., deep neural networks and large language models), AI-based control systems (e.g., self-driving cars, robots, and other autonomous systems), and AI-based applications (e.g., AI chatbots and LLM agents), are mostly built on software, making it vital to ensure their trustworthiness from a software engineering perspective. In this line of research, we are working towards *a systematic testing, verification, and repair framework* to evaluate, identify, and fix the risks hidden in AI models and AI-empowered systems, along dimensions such as robustness, fairness, copyright, and safety. This is crucial for stakeholders and AI-empowered industries to be aware of, manage, and mitigate these risks in the new AI era.