
Speaker: Rui Ai, MIT
Time: 2:00pm, Wednesday, December 24
Venue: Room 204, Jingyuan Courtyard 5
Host: 陈炤桦 (Turing Class 2017)
Abstract
With the rise of multi-agent reasoning using large language models, aggregating answers from multiple LLMs has become a central challenge. Most existing approaches rely on simple majority voting, which ignores heterogeneity and correlation across models. In this talk, I will introduce two new aggregation methods—Optimal Weight (OW) and Inverse Surprising Popularity (ISP)—that leverage both first-order and second-order information to produce more reliable collective decisions. I will present theoretical guarantees showing why these methods outperform majority voting, and demonstrate their effectiveness on synthetic data, standard LLM benchmarks, and a real-world healthcare application. Together, these results offer practical guidance for designing robust multi-agent LLM systems. This talk is based on joint work with Yuqi Pan, David Simchi-Levi, Milind Tambe and Haifeng Xu.
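For readers unfamiliar with second-order aggregation, the following is a minimal Python sketch, not the OW or ISP methods presented in the talk. It only contrasts plain majority voting, which uses first-order answers alone, with a rule in the spirit of the classic "surprisingly popular" criterion, which also uses each agent's prediction of how popular each answer will be and picks the answer whose actual support most exceeds its predicted support. The function names and toy data below are illustrative assumptions.

from collections import Counter

def majority_vote(answers):
    """First-order aggregation: return the most frequent answer."""
    return Counter(answers).most_common(1)[0][0]

def surprisingly_popular(answers, predicted_popularity):
    """Second-order aggregation sketch: return the answer whose actual vote
    share most exceeds the average share the agents predicted for it.

    answers: list of answers, one per agent.
    predicted_popularity: list of dicts, one per agent, mapping each
        candidate answer to that agent's predicted vote share.
    """
    n = len(answers)
    actual = {a: c / n for a, c in Counter(answers).items()}
    avg_pred = {
        a: sum(p.get(a, 0.0) for p in predicted_popularity) / len(predicted_popularity)
        for a in actual
    }
    # The answer that is more popular than the crowd expected wins.
    return max(actual, key=lambda a: actual[a] - avg_pred[a])

# Toy example: most agents answer "A", but they also predict "A" will be even
# more dominant than it turns out to be, so "B" is surprisingly popular.
answers = ["A", "A", "A", "B", "B"]
preds = [{"A": 0.9, "B": 0.1}] * 5
print(majority_vote(answers))                # -> "A"
print(surprisingly_popular(answers, preds))  # -> "B"

The toy example shows how the two rules can disagree: majority voting returns "A", while the second-order rule returns "B" because its actual support (0.4) exceeds its predicted support (0.1).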
Biography

Rui Ai is a third-year PhD student at MIT, advised by Professor David Simchi-Levi. He earned his bachelor's degree from the School of Mathematical Sciences at Peking University in 2023. His research lies at the intersection of machine learning (especially reinforcement learning) and economics, with applications to mechanism design, data valuation, and LLM post-training.