"Infinite Future" Academic Forum | Algorithmic Perspectives on Certification of Machine Learning

Posted by: He Wanyuan    Date: 2024-05-06

Title: Algorithmic Perspectives on Certification of Machine Learning  

Venue: Wireless Valley, Room A1319

Time: 7 May, 10:00-11:00


Abstract: Machine learning has proven practical in solving complex problems that could not be solved before, but it has also been found to be not without shortfalls. Therefore, before its adoption in safety-critical applications, machine learning and machine-learning-enabled systems need to be certified, that is, a written assurance (i.e., a certificate) must be provided to justify that they meet specific requirements. This talk will provide an overview of my group's research on the certification of machine learning from algorithmic perspectives, dealing with the vulnerabilities of machine learning. This includes efforts on falsification, explanation, verification, enhancement, reliability estimation, and runtime monitoring, addressing known risks in the machine learning development cycle, such as generalisation, uncertainty, robustness, poisoning, backdoor, and privacy-related attacks. We will also discuss some pertinent topics, including foundation models and energy efficiency.


Bio: Professor Xiaowei Huang is currently with the Department of Computer Science, University of Liverpool, UK. He founded the Trustworthy Autonomous Cyber-Physical Systems lab, part of a recent £12.7M investment by the Liverpool City Region Combined Authority. His research concerns the development of automated verification techniques that ensure the correctness and reliability of intelligent systems, and he leads the research direction on the verification and validation of deep neural networks. He authored the book "Machine Learning Safety", published by Springer, and has published 100+ papers, most of which appear in top conferences and journals of Artificial Intelligence (such as the Artificial Intelligence Journal, ACM Transactions on Computational Logic, NeurIPS, ICML, AAAI, IJCAI, CVPR, and ECCV), Formal Verification (such as CAV, TACAS, and Theoretical Computer Science), or Software Engineering (such as ICSE and ASE). He has given invited talks and served as a panellist at many leading conferences, discussing topics related to the safety and security of applying machine learning algorithms to critical applications. He co-organises a Turing Interest Group on Neuro-symbolic AI and co-chairs the AAAI and IJCAI workshop series on Artificial Intelligence Safety. He is the PI or co-PI of many Dstl (Ministry of Defence, UK), EPSRC, EU H2020, and Innovate UK projects, valued at more than £20M in total.

