- Starts: 12:30 pm on Monday, March 18, 2024
- Ends: 2:30 pm on Monday, March 18, 2024
TITLE: Neural Network Editing: Algorithms and Applications
ADVISOR: Wenchao Li (ECE, SE)
COMMITTEE: Yannis Paschalidis (ECE, BME, SE), Roberto Tron (ME, SE), Gianluca Stringhini (ECE)
CHAIR: John Baillieul (ME, SE, ECE)
ABSTRACT: Deep neural networks have demonstrated impressive performance in a wide variety of applications. They are not perfect, however: in many cases, additional adjustments, which we call neural network editing, are needed to meet objectives such as correcting erroneous behavior, enforcing robustness and fairness, and verifying ownership. In this thesis, we present three novel methodologies for neural network editing:
- A novel methodology for repairing neural networks. Our approach exploits the piecewise-linear nature of ReLU networks to efficiently construct a patch network targeted at the linear region containing the buggy input; combined with the original network, it reliably corrects the behavior on that input (see the first sketch below).
- A new approach for repairing pretrained neural networks to satisfy global robustness and individual fairness properties. Any counterexample to a global robustness property corresponds to a large gradient, which enables efficient identification of the violating linear regions of a ReLU network. Our approach formulates and solves a robust convex optimization problem to compute a minimal weight change that is guaranteed to repair these violating regions (see the second sketch below).
- A novel approach to neural network ownership verification based on the notion of latent watermarks. Drawing inspiration from backdoor-based watermarking techniques, it achieves the watermarking effect by decoupling the network's normal operation from its responses to watermarked inputs (see the third sketch below).
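The first repair methodology rests on the fact that a ReLU network is piecewise linear: every input lies in a linear region determined by its ReLU activation pattern, and within that region the network is an exact affine map. The following is a minimal PyTorch sketch of that structure only (the toy network, input, and helper function are illustrative assumptions, not the thesis's patch construction): it freezes the activation pattern at a given input and recovers the local affine map, which is the object a region-targeted patch would act on.

```python
# Minimal sketch (illustrative, not the thesis's repair algorithm): recover the
# exact local affine map f(x) = A x + b of a ReLU network on the linear region
# containing a given input, by freezing the input's ReLU activation pattern.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy fully connected ReLU network standing in for the "buggy" network.
net = nn.Sequential(
    nn.Linear(2, 8), nn.ReLU(),
    nn.Linear(8, 8), nn.ReLU(),
    nn.Linear(8, 1),
)

@torch.no_grad()
def local_affine_map(model: nn.Sequential, x0: torch.Tensor):
    """Return (A, b) such that model(x) == A @ x + b on x0's linear region."""
    A, b, h = None, None, x0.clone()
    for layer in model:
        if isinstance(layer, nn.Linear):
            W, c = layer.weight, layer.bias
            A = W if A is None else W @ A
            b = c if b is None else W @ b + c
        elif isinstance(layer, nn.ReLU):
            mask = (h > 0).float()       # activation pattern at x0
            A = mask.unsqueeze(1) * A    # inactive units contribute nothing
            b = mask * b
        h = layer(h)
    return A, b

x0 = torch.tensor([0.3, -0.7])
A, b = local_affine_map(net, x0)
with torch.no_grad():
    print(net(x0), A @ x0 + b)   # identical up to floating point
```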
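For the robustness repair, within a single ReLU linear region the network's input gradient is a fixed matrix, so bounding it by a small change to the last-layer weights is a convex problem. The sketch below, using cvxpy, is a hedged illustration under that simplification (the matrices A_pre and W_L, the Frobenius objective, and the bound L are illustrative assumptions; the thesis's robust formulation over entire regions is not reproduced here).

```python
# Hedged sketch: minimal last-layer weight change that bounds the gradient of a
# ReLU network on one violating linear region. Within the region the network is
# affine, with input gradient W_L @ A_pre; we perturb W_L as little as possible
# so the gradient norm drops below a target Lipschitz bound L.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A_pre = rng.standard_normal((8, 2))   # local affine map of the hidden layers (assumed)
W_L = rng.standard_normal((1, 8))     # original last-layer weights
L = 1.0                               # target gradient bound on this region

Delta = cp.Variable(W_L.shape)                      # weight change to solve for
grad = (W_L + Delta) @ A_pre                        # repaired gradient on the region
problem = cp.Problem(
    cp.Minimize(cp.norm(Delta, "fro")),             # minimal weight change
    [cp.norm(grad, "fro") <= L],                    # enforce the gradient bound
)
problem.solve()

print("original gradient norm:", np.linalg.norm(W_L @ A_pre))
print("repaired gradient norm:", np.linalg.norm((W_L + Delta.value) @ A_pre))
```

Restricting the change to the last layer keeps the constraint linear in the decision variable, which is what makes this toy version convex.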
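Finally, the ownership-verification work builds on backdoor-based watermarking, where the owner holds a secret set of watermarked inputs and checks a suspect model's responses to them. The sketch below shows only that generic verification step (the trigger set, target labels, threshold, and toy model are placeholders; the latent-watermark construction that decouples normal operation from these responses is the thesis's contribution and is not shown).

```python
# Generic backdoor-style watermark verification (illustrative placeholders only):
# query a suspect model on secret watermarked inputs and measure how often it
# returns the secret target labels; a rate far above chance supports ownership.
import torch
import torch.nn as nn

def watermark_match_rate(model: nn.Module, triggers: torch.Tensor,
                         targets: torch.Tensor) -> float:
    """Fraction of trigger inputs classified as their secret target labels."""
    model.eval()
    with torch.no_grad():
        preds = model(triggers).argmax(dim=1)
    return (preds == targets).float().mean().item()

# Toy suspect model and placeholder trigger set (assumptions for illustration).
suspect = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
triggers = torch.rand(32, 1, 28, 28)       # secret watermarked inputs
targets = torch.randint(0, 10, (32,))      # secret target labels

rate = watermark_match_rate(suspect, triggers, targets)
print(f"trigger match rate: {rate:.2f}, claim ownership: {rate > 0.9}")
```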
- Location:
- ENG 410 (110-112 Cummington Mall)
- Hosting Professor
- Wenchao Li (ECE, SE)