Preprints
Active-Dormant Attention Heads: Mechanistically Demystifying Extreme-Token Phenomena in LLMs. Tianyu Guo, Druv Pai, Yu Bai, Jiantao Jiao, Michael I. Jordan, Song Mei.
Unified Algorithms for RL with Decision-Estimation Coefficients: No-Regret, PAC, and Reward-Free Learning. Fan Chen, Song Mei, Yu Bai.
Publications
Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning. Ruiqi Zhang, Licong Lin, Yu Bai, Song Mei. Conference on Language Modeling (COLM) 2024.
Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective. Lei Zhao, Mengdi Wang, Yu Bai. ICML 2024.
How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations. Tianyu Guo, Wei Hu, Song Mei, Huan Wang, Caiming Xiong, Silvio Savarese, Yu Bai. ICLR 2024.
Transformers as Decision Makers: Provable In-Context Reinforcement Learning via Supervised Pretraining. Licong Lin, Yu Bai, Song Mei. ICLR 2024.
Sample-Efficient Learning of POMDPs with Multiple Observations In Hindsight. Jiacheng Guo, Minshuo Chen, Huan Wang, Caiming Xiong, Mengdi Wang, Yu Bai. ICLR 2024.
Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection. Yu Bai, Fan Chen, Huan Wang, Caiming Xiong, Song Mei. NeurIPS 2023 (Oral). [Code]
What can a Single Attention Layer Learn? A Study Through the Random Features Lens. Hengyu Fu, Tianyu Guo, Yu Bai, Song Mei. NeurIPS 2023.
Efficient RL with Impaired Observability: Learning to Act with Delayed and Missing State Observations. Minshuo Chen, Yu Bai, H. Vincent Poor, Mengdi Wang. NeurIPS 2023.
Breaking the Curse of Multiagency: Provably Efficient Decentralized Multi-Agent RL with Function Approximation. Yuanhao Wang, Qinghua Liu, Yu Bai, Chi Jin. COLT 2023.
Lower Bounds for Learning in Revealing POMDPs. Fan Chen, Huan Wang, Caiming Xiong, Song Mei, Yu Bai. ICML 2023.
Improved Online Conformal Prediction via Strongly Adaptive Online Learning. Aadyot Bhatnagar, Huan Wang, Caiming Xiong, Yu Bai. ICML 2023.
Offline Learning in Markov Games with General Function Approximation. Yuheng Zhang, Yu Bai, Nan Jiang. ICML 2023.
Partially Observable RL with B-Stability: Unified Structural Condition and Sharp Sample-Efficient Algorithms. Fan Chen, Yu Bai, Song Mei. ICLR 2023 (Notable-top-25% / “Spotlight”).
The Role of Coverage in Online Reinforcement Learning. Tengyang Xie, Dylan J. Foster, Yu Bai, Nan Jiang, Sham M. Kakade. ICLR 2023 (Notable-top-5% / “Oral”).
Learning Rationalizable Equilibria in Multiplayer Games. Yuanhao Wang, Dingwen Kong, Yu Bai, Chi Jin. ICLR 2023.
Efficient Phi-Regret Minimization in Extensive-Form Games via Online Mirror Descent. Yu Bai, Chi Jin, Song Mei, Ziang Song, Tiancheng Yu. NeurIPS 2022 (Oral).
Policy Optimization for Markov Games: Unified Framework and Faster Convergence. Runyu Zhang, Qinghua Liu, Huan Wang, Caiming Xiong, Na Li, Yu Bai. NeurIPS 2022.
Identifying Good Directions to Escape the NTK Regime and Efficiently Learn Low-Degree Plus Sparse Polynomials. Eshaan Nichani, Yu Bai, Jason D. Lee. NeurIPS 2022.
Sample-Efficient Learning of Correlated Equilibria in Extensive-Form Games. Ziang Song, Song Mei, Yu Bai. NeurIPS 2022.
Conformal Predictor for Improving Zero-Shot Text Classification Efficiency. Prafulla Kumar Choubey, Yu Bai, Chien-Sheng Wu, Wenhao Liu, Nazneen Rajani. EMNLP 2022.
Local Calibration: Metrics and Recalibration. Rachel Luo, Aadyot Bhatnagar, Yu Bai, Shengjia Zhao, Huan Wang, Caiming Xiong, Silvio Savarese, Edward Schmerling, Marco Pavone. UAI 2022.
Near-Optimal Learning of Extensive-Form Games with Imperfect Information. Yu Bai, Chi Jin, Song Mei, Tiancheng Yu. ICML 2022.
When Can We Learn General-Sum Markov Games with a Large Number of Players Sample-Efficiently? Ziang Song, Song Mei, Yu Bai. ICLR 2022.
Efficient and Differentiable Conformal Prediction with General Function Classes. Yu Bai, Song Mei, Huan Wang, Yingbo Zhou, Caiming Xiong. ICLR 2022. [Code]
Understanding the Under-Coverage Bias in Uncertainty Estimation. Yu Bai, Song Mei, Huan Wang, Caiming Xiong. NeurIPS 2021 (Spotlight).
Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning. Tengyang Xie, Nan Jiang, Huan Wang, Caiming Xiong, Yu Bai. NeurIPS 2021.
Sample-Efficient Learning of Stackelberg Equilibria in General-Sum Games. Yu Bai, Chi Jin, Huan Wang, Caiming Xiong. NeurIPS 2021.
Near-Optimal Offline Reinforcement Learning via Double Variance Reduction. Ming Yin, Yu Bai, Yu-Xiang Wang. NeurIPS 2021.
Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification. Yu Bai, Song Mei, Huan Wang, Caiming Xiong. ICML 2021.
Exact Gap between Generalization Error and Uniform Convergence in Random Feature Models. Zitong Yang, Yu Bai, Song Mei. ICML 2021.
How Important is the Train-Validation Split in Meta-Learning? Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason D. Lee, Sham Kakade, Huan Wang, Caiming Xiong. ICML 2021.
A Sharp Analysis of Model-based Reinforcement Learning with Self-Play. Qinghua Liu, Tiancheng Yu, Yu Bai, Chi Jin. ICML 2021.
Near Optimal Provable Uniform Convergence in Off-Policy Evaluation for Reinforcement Learning. Ming Yin, Yu Bai, Yu-Xiang Wang. AISTATS 2021 (Oral).
Towards Understanding Hierarchical Learning: Benefits of Neural Representations. Minshuo Chen, Yu Bai, Jason D. Lee, Tuo Zhao, Huan Wang, Caiming Xiong, Richard Socher. NeurIPS 2020.
Near-Optimal Reinforcement Learning with Self-Play. Yu Bai, Chi Jin, Tiancheng Yu. NeurIPS 2020.
Provable Self-Play Algorithms for Competitive Reinforcement Learning. Yu Bai, Chi Jin. ICML 2020.
Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks. Yu Bai, Jason D. Lee. ICLR 2020.
Provably Efficient Q-Learning with Low Switching Cost. Yu Bai, Tengyang Xie, Nan Jiang, Yu-Xiang Wang. NeurIPS 2019.
Subgradient Descent Learns Orthogonal Dictionaries. Yu Bai, Qijia Jiang, Ju Sun. ICLR 2019.
ProxQuant: Quantized Neural Networks via Proximal Operators. Yu Bai, Yu-Xiang Wang, Edo Liberty. ICLR 2019. [Code]
Approximability of Discriminators Implies Diversity in GANs. Yu Bai, Tengyu Ma, Andrej Risteski. ICLR 2019.
The Landscape of Empirical Risk for Nonconvex Losses. Song Mei, Yu Bai, Andrea Montanari. The Annals of Statistics, Volume 46, Number 6A (2018), 2747-2774.
Other technical reports
Finding General Equilibria in Many-Agent Economic Simulations Using Deep Reinforcement Learning. Michael Curry, Alexander Trott, Soham Phade, Yu Bai, Stephan Zheng.
Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width. Yu Bai, Ben Krause, Huan Wang, Caiming Xiong, Richard Socher. [Code]
Proximal algorithms for constrained composite optimization, with applications to solving low-rank SDPs. Yu Bai, John C. Duchi, Song Mei.
Analysis of Sequential Quadratic Programming through the Lens of Riemannian Optimization. Yu Bai, Song Mei.
TAPAS: Two-pass Approximate Adaptive Sampling for Softmax. Yu Bai, Sally Goldman, Li Zhang.