CISE Seminar: Rui Gao, University of Texas at Austin

  • Starts: 3:00 pm on Friday, April 19, 2024
  • Ends: 4:00 pm on Friday, April 19, 2024

Learning Mixture Models with Neural Networks

Mixture models are fundamental tools for modeling complex phenomena in statistics, econometrics, and machine learning. However, estimating these models poses a significant challenge: traditional nonparametric approaches often lack global convergence guarantees or rely on non-convex oracles to secure global convergence. In this talk, we explore the efficacy of a one-hidden-layer neural network model for approximating the mixing distribution. By developing new theoretical insights on the mean-field analysis of neural networks, we demonstrate that the gradient descent algorithm can converge to the global optimizer, and we prove that over-parameterization does not compromise generalization on unseen data. These results are validated by superior in-sample and out-of-sample performance relative to existing benchmarks. This talk is based on joint work with Shuang Li and Zhi Wang.
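
For intuition only, below is a minimal illustrative sketch of the general idea described in the abstract, not the speaker's algorithm: the mixing distribution of a simple one-dimensional Gaussian location mixture is represented by an over-parameterized set of particles (playing the role of the hidden units of a one-hidden-layer network in the mean-field view) and fit by gradient descent on the negative log-likelihood. The PyTorch implementation, the two-component synthetic data, and all variable names are assumptions made purely for illustration.

    import math
    import torch

    # Illustrative sketch only: approximate the mixing distribution of a 1-D
    # Gaussian location mixture with m "particles" (hidden units of a
    # one-hidden-layer network in the mean-field view), trained by gradient
    # descent on the negative log-likelihood. Hypothetical setup.
    torch.manual_seed(0)

    # Synthetic data from a two-component mixture (assumed for illustration).
    n = 2000
    true_means = torch.tensor([-2.0, 2.0])
    x = true_means[torch.randint(0, 2, (n,))] + torch.randn(n)

    m = 50                                        # over-parameterized: m >> 2
    theta = torch.randn(m, requires_grad=True)    # particle locations
    sigma = 1.0                                   # known component scale (assumed)

    opt = torch.optim.SGD([theta], lr=0.1)
    for step in range(500):
        # Mixture density under the empirical mixing measure (1/m) * sum_j delta_{theta_j}.
        diffs = x.unsqueeze(1) - theta.unsqueeze(0)                 # shape (n, m)
        log_comp = -0.5 * (diffs / sigma) ** 2 - 0.5 * math.log(2 * math.pi * sigma ** 2)
        log_lik = torch.logsumexp(log_comp, dim=1) - math.log(m)    # log p(x_i)
        loss = -log_lik.mean()                                      # negative log-likelihood
        opt.zero_grad()
        loss.backward()
        opt.step()

    # After training, the particles tend to concentrate near the true component means.
    print("fitted particles span:", theta.min().item(), theta.max().item())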

Rui Gao is an Assistant Professor in the Department of Information, Risk, and Operations Management at the McCombs School of Business at the University of Texas at Austin. He received a Ph.D. in Operations Research from the Georgia Institute of Technology in 2018 and a B.Sc. in Mathematics and Applied Mathematics from Xi’an Jiaotong University in 2013. Rui’s research focuses on data-driven decision-making under uncertainty and prescriptive data analytics. His work has been recognized with several INFORMS paper competition awards, including Winner of the Junior Faculty Interest Group Paper Competition (2020), Winner of the Data Mining Best Paper Award (2017), Runner-up in the Computing Society Student Paper Award (2017), and Finalist in the George Nicholson Student Paper Competition (2016). He currently serves as an Associate Editor for Mathematical Programming.

Faculty Host: Jinglong Zhao

Student Host: Mehdi Kermanshah