Yu Bai

About me: I am a fifth-year PhD student in Statistics at Stanford University (specializing in statistical learning theory and non-convex optimization), where I am fortunate to be advised by Prof. John Duchi. I am a member of the Statistical Machine Learning Group. Here is my CV.

My research interests lie broadly in machine learning and deep learning. I am particularly interested in the optimization and generalization theory of deep neural networks, generative models, and statistical reinforcement learning.

I am delighted to complement my research with industry experience. I spent two wonderful summers as a research intern: at Google Research in 2016, working with Li Zhang, and at Amazon AI in Palo Alto in 2018, working with Edo Liberty and Yu-Xiang Wang.

Prior to Stanford, I was an undergraduate student in mathematics at Peking University.


  • I will be a research fellow at the Simons Institute (Berkeley) from May to August 2019, participating in the Foundations of Deep Learning program!


Sequoia Hall
390 Serra Mall, Stanford, CA 94305

yub (at) stanford.edu



  • Subgradient Descent Learns Orthogonal Dictionaries.
    ICLR, May 2019, New Orleans, LA.

  • ProxQuant: Quantized Neural Networks via Proximal Operators.
    ICLR, May 2019, New Orleans, LA.
    Bytedance AI Lab, Dec 2018, Menlo Park, CA.
    Amazon AI, Sep 2018, East Palo Alto, CA.

  • On the Generalization and Approximation in Generative Adversarial Networks (GANs).
    ICLR, May 2019, New Orleans, LA.
    Google Brain, Nov 2018, Mountain View, CA.
    Salesforce Research, Nov 2018, Palo Alto, CA.
    Stanford ML Seminar, Oct 2018, Stanford, CA.

  • Optimization Landscape of Some Non-convex Learning Problems.
    Stanford Theory Seminar, Apr 2018, Stanford, CA.
    Stanford ML Seminar, Apr 2017, Stanford, CA.


  • Conference reviewing: COLT, NIPS (top 30% reviewer), ICLR, ICML, IEEE-ISIT.

  • Journal reviewing: JMLR, IEEE-TSP, SICON (SIAM Journal on Control and Optimization).