Yu Bai

Yu Bai
Email: yu.bai (at) salesforce (dot) com

Curriculum Vitae | Google Scholar Profile | Github

About Me

I am a Senior Research Scientist at Salesforce AI Research in Palo Alto, CA. My research interests lie broadly in machine learning, including deep learning, large language models/foundation models, reinforcement learning, learning in games, and uncertainty quantification.

Before joining Salesforce, I completed my PhD in Statistics at Stanford University (specializing in machine learning) in September 2019, where I was fortunate to be advised by Prof. John Duchi and was a member of the Machine Learning Group. During my PhD I also spent time at the research labs of Google and Amazon. Prior to Stanford, I was an undergraduate in mathematics at Peking University.

My current focus is on foundation models (large language models) and transformers. Our goal is to discover new capabilities of and new understandings about foundation models, using a combination of experiments, machine learning theory, and potentially new tools. See our recent paper on understanding in-context learning for an example.

I am also interested in theoretical foundations of deep learning (blog post); reinforcement learning theory (slides on partially observable RL); multi-agent reinforcement learning and games (blog post, slides on MARL, slides on extensive-form games); and uncertainty quantification (slides).


News

  • [Sep 2023] Three papers accepted at NeurIPS 2023.

  • [July 2023] I am now serving as an Action Editor for TMLR.

  • [May 2023] Invited talk at SIAM OP23, Seattle.

  • [Apr 2023] Three papers accepted at ICML 2023.

  • [Mar 2023] I will serve as an Area Chair for NeurIPS 2023.

  • [Jan 2023] Three papers accepted at ICLR 2023.

  • [Nov 2022] Excited to give an invited talk, “Recent Progress on the Theory of Multi-Agent Reinforcement Learning and Games,” at Stanford CS332.

Recent Work

Research Focus and Selected Publications

Foundation Models and Transformers

Our goal is to discover new capabilities of and new understandings about transformers and large language models.

Multi-Agent Reinforcement Learning Theory

We developed the first line of provably efficient algorithms for multi-agent reinforcement learning.

Deep Learning Theory

We developed optimization and generalization results for overparametrized neural networks beyond the Neural Tangent Kernel (NTK) regime, and identified provable advantages over the NTK regime.

Partially Observable Reinforcement Learning

We designed sharp sample-efficient algorithms and studied the fundamental limits for partially observable reinforcement learning.

Learning in Games

We designed near-optimal algorithms for learning equilibria in various multi-player games under bandit feedback.

Uncertainty Quantification in Machine Learning

We gave precise theoretical characterizations of the calibration and coverage of vanilla machine learning algorithms, and developed new uncertainty quantification algorithms with valid guarantees and improved efficiency.