
HKU-IDS Scholar

Dr Yue XIE
Research Assistant Professor
HKU Musketeers Foundation Institute of Data Science and Department of Mathematics, HKU
Tel: (852) 2241 5793
Room 403, Run Run Shaw Building, HKU
Key expertise

Mathematical Optimization, Machine Learning

About me

Dr. Yue Xie is a Research Assistant Professor in the Musketeers Foundation Institute of Data Science (HKU-IDS) and the Department of Mathematics at the University of Hong Kong. He was previously a postdoctoral researcher at UW–Madison in the nonconvex optimization group led by Professor Stephen J. Wright. He received his PhD from Pennsylvania State University and his Bachelor's degree from Tsinghua University. His research focuses on the design and analysis of algorithms for nonconvex and stochastic optimization problems, with applications in machine learning and data science. He has published in, and served as a referee for, top-tier journals including Mathematical Programming, SIAM Journal on Optimization, and IEEE Transactions on Automatic Control. He has delivered numerous presentations at major international conferences such as the International Conference on Continuous Optimization (ICCOPT), the International Symposium on Mathematical Programming (ISMP), the SIAM Conference on Optimization, and the International Conference on Machine Learning (ICML). More details about him can be found at:

Current Research Project

Computational optimal transport

The optimal transport (OT) problem is a mathematical problem with a history of more than 200 years. It is closely tied to the Wasserstein distance, a fundamental way of comparing probability distributions that is widely used in image processing, decision science, and machine learning. How to solve the OT problem efficiently, however, remains an open question. To approximate the solution, one typically formulates and solves a large-scale linear programming (LP) problem with special structure. We aim to design reliable and fast algorithms for this LP. These algorithms are expected to have competitive practical performance and/or theoretical guarantees compared to existing state-of-the-art algorithms for optimal transport, most of which are Sinkhorn-based.
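For context, the Sinkhorn-based baselines mentioned above solve an entropically regularized version of the OT problem by alternating row and column scalings. Below is a minimal numpy sketch of the classical Sinkhorn iteration; the marginals `a`, `b`, the cost matrix `C`, and the regularization strength `reg` are illustrative choices, not tied to this project.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.05, n_iter=500):
    """Approximate the OT plan between marginals a and b for cost C
    via Sinkhorn iterations on the entropically regularized problem."""
    K = np.exp(-C / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # rescale so columns sum to b
        u = a / (K @ v)                  # rescale so rows sum to a
    return u[:, None] * K * v[None, :]   # approximate transport plan

# Tiny illustrative instance: two uniform 2-point distributions.
a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
P = sinkhorn(a, b, C)
```

Each iteration is just two matrix–vector products, which is why Sinkhorn scales well; the trade-off, as noted above, is that the regularization blurs the plan, so very accurate solutions of the original LP require a small `reg` and many iterations.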

In our recent work, we consider random block coordinate descent (RBCD) methods that solve the related LP directly. In each iteration, we restrict the potentially large-scale optimization problem to small LP subproblems constructed from randomly chosen working sets. Using an expected Gauss–Southwell-q rule to select these working sets, we equip the vanilla version with almost-sure convergence and a linear convergence rate in expectation for general LP problems. By further exploiting the special structure of the constraints in OT problems and leveraging the theory of linear systems, we propose several approaches to refine the random working-set selection and accelerate the vanilla method. Preliminary numerical experiments verify the acceleration effects and solution sparsity, and demonstrate several merits of accelerated RBCD over Sinkhorn's algorithm when seeking highly accurate solutions.
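The flavor of a block coordinate step on the OT polytope can be sketched as follows. This is a deliberately simplified illustration, not the paper's algorithm: the working set here is a random 2×2 submatrix of the plan, updated by a cycle exchange that keeps both marginals exact, whereas the actual method uses an expected Gauss–Southwell-q rule and more general working sets.

```python
import numpy as np

def rbcd_ot_sketch(a, b, C, n_iter=500, seed=0):
    """Toy random-block coordinate descent on the OT LP.
    Each step picks two rows and two columns and shifts mass along
    the 2x2 cycle, which preserves row and column sums exactly."""
    rng = np.random.default_rng(seed)
    P = np.outer(a, b)                   # feasible start: independent coupling
    n, m = C.shape
    for _ in range(n_iter):
        i1, i2 = rng.choice(n, size=2, replace=False)
        j1, j2 = rng.choice(m, size=2, replace=False)
        # Cost change per unit of mass shifted onto (i1,j1) and (i2,j2).
        g = C[i1, j1] + C[i2, j2] - C[i1, j2] - C[i2, j1]
        if g < 0:                        # shift as much mass as feasible
            d = min(P[i1, j2], P[i2, j1])
        elif g > 0:                      # shift in the opposite direction
            d = -min(P[i1, j1], P[i2, j2])
        else:
            continue
        P[i1, j1] += d; P[i2, j2] += d
        P[i1, j2] -= d; P[i2, j1] -= d
    return P

# Illustrative instance: uniform marginals, absolute-difference cost.
a = np.full(3, 1 / 3)
b = np.full(3, 1 / 3)
C = np.abs(np.subtract.outer(np.arange(3), np.arange(3))).astype(float)
P = rbcd_ot_sketch(a, b, C)
```

Note that every iterate is feasible and the objective is monotonically non-increasing, and each step touches only four entries of the plan; the acceleration ideas in the work above concern choosing these working sets far more cleverly than uniformly at random.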

Selected Publications
Research Interests

Nonconvex/stochastic optimization, Sparse learning, Dictionary learning, Robust optimization, Variational inequality

Invited Presentations
Talk on “Complexity of Projected Newton Methods for Bound-constrained Optimization.” Tongji University, China. May 2022.