Title:
Foundations of Private Optimization for Modern Machine Learning
Abstract:
How can we develop optimization algorithms that train machine learning models while preserving the privacy of individuals' training data? In this talk, I will present my work addressing this challenge through differential privacy (DP). Differential privacy offers a rigorous, quantifiable standard of privacy that limits potential leakage of training data. I will explore the fundamental performance limits of differentially private optimization in modern machine learning, particularly in federated learning settings, and present scalable, efficient algorithms that achieve optimal accuracy under DP constraints. These algorithms also demonstrate strong empirical performance.
Bio:
Andrew Lowy is a postdoctoral research associate at the University of Wisconsin-Madison, advised by Stephen J. Wright. He received his PhD in Applied Mathematics at the University of Southern California under the supervision of Meisam Razaviyayn, where he was awarded the 2023 Center for Applied Mathematical Sciences (CAMS) Graduate Student Prize for outstanding research. His work has been published in leading venues in optimization, machine learning, and privacy, including SIOPT, NeurIPS, ICML, ICLR, ALT, AISTATS, ACM CCS, and the Journal of Privacy and Confidentiality. Prior to his doctoral studies, he completed his undergraduate education at Princeton University and Columbia University.
Andrew’s research focuses on optimization for private, fair, and robust machine learning. His primary area of expertise is differentially private optimization, with an emphasis on understanding fundamental limits and developing scalable algorithms that attain them.