Optimization approaches for bounding and certifying neural networks
Information Systems and Operations Management
Intervenant: Calvin Tsay (Imperial)
Salle Bernard Ramanantsoa
Abstract
Neural networks are central to many machine learning methods and engineering applications, but they generally lack guarantees on their properties and performance. Formal certification of neural networks is therefore crucial for ensuring safety, particularly when deploying them in safety-critical domains such as autonomous vehicles. This talk outlines how mathematical optimization provides a framework for certifying properties of neural networks, namely by solving (or bounding) a mixed-integer program derived from the network. While mixed-integer programming suffers from scalability issues, we review how bound-tightening techniques and efficient relaxations can improve tractability. Throughout the talk, we focus on two key applications: (1) certifying the performance of neural network controllers, and (2) bounding the effects of data poisoning and manipulation during training. For the latter, we leverage convex relaxations to over-approximate the set of possible parameter updates, effectively bounding the set of all parameters reachable under data manipulation. We demonstrate our approach on multiple real-world datasets from applications including energy consumption, medical imaging, and autonomous driving.
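To give a flavor of the bounding techniques the abstract refers to, here is a minimal sketch of interval bound propagation, one of the simplest relaxations used to bound a ReLU network's outputs over a set of inputs. This is an illustrative example in plain Python, not the speaker's method: the function names, the tiny two-layer network, and the weights are all invented for the sketch.

```python
def affine_bounds(lo, hi, W, b):
    """Propagate interval bounds lo <= x <= hi through y = W x + b.

    A positive weight attains its extreme at the matching input bound;
    a negative weight flips which bound is used.
    """
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi


def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]


# Hypothetical 2-2-1 network, with inputs constrained to the box [-1, 1]^2.
W1, b1 = [[1.0, -1.0], [2.0, 1.0]], [0.0, -1.0]
W2, b2 = [[1.0, 1.0]], [0.0]

lo, hi = [-1.0, -1.0], [1.0, 1.0]
lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = relu_bounds(lo, hi)
out_lo, out_hi = affine_bounds(lo, hi, W2, b2)
print(out_lo, out_hi)  # certified output range: [0.0] [4.0]
```

Bounds obtained this way are sound but loose; the bound-tightening techniques discussed in the talk aim to shrink such intervals, which in turn tightens the big-M constants in the mixed-integer encoding of each ReLU.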