

HKU IDS applies AI to cybersecurity, enhancing the detection and prevention of attacks while addressing vulnerabilities in AI models and systems. They develop methods to identify software weaknesses, detect threats through large-scale data analysis, and patch systems after an attack.
The team also focuses on strengthening AI security against adversarial attacks, model pollution, inference leaks, and other threats, aiming to create trustworthy, robust, and explainable AI systems. Their work seeks to balance the benefits of AI with the need for resilient cybersecurity protections.
Publications & Projects
- Jingfeng Wu*, Difan Zou*, Vladimir Braverman, Quanquan Gu, Sham M. Kakade. Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression. Proceedings of the 39th International Conference on Machine Learning. (2022) [Long Presentation]
- Difan Zou*, Jingfeng Wu*, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham M. Kakade. The Benefit of Implicit Regularization from SGD in Least Square Problems. Conference on Advances in Neural Information Processing Systems. (2021)
- Difan Zou*, Jingfeng Wu*, Vladimir Braverman, Quanquan Gu, Sham M. Kakade. Benign Overfitting of Constant-Stepsize SGD for Linear Regression. Annual Conference on Learning Theory. (2021)
- Difan Zou, Pan Xu, Quanquan Gu. Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling. International Conference on Uncertainty in Artificial Intelligence. (2021)
- Difan Zou, Quanquan Gu. On the Convergence of Hamiltonian Monte Carlo with Stochastic Gradients. International Conference on Machine Learning. (2021)
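The last two papers study stochastic-gradient MCMC samplers. As a rough illustration (not the papers' algorithms or analysis), here is a minimal sketch of stochastic gradient Langevin dynamics on a hypothetical toy problem: sampling the posterior over the mean of 1-D Gaussian data under a flat prior, using minibatch gradients.

```python
# Minimal SGLD sketch on a toy problem (illustrative only; the
# data, step size, and batch size here are arbitrary assumptions).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=1000)  # observed data
n, batch = len(data), 50
eta = 1e-4      # step size
theta = 0.0     # parameter: the unknown mean
samples = []
for t in range(5000):
    idx = rng.integers(0, n, size=batch)
    # Minibatch gradient of the negative log-posterior
    # (flat prior, unit-variance likelihood), rescaled to full data.
    grad = (n / batch) * np.sum(theta - data[idx])
    # SGLD update: gradient step plus injected Gaussian noise.
    theta = theta - eta * grad + rng.normal(scale=np.sqrt(2 * eta))
    if t > 1000:            # discard burn-in
        samples.append(theta)

print(np.mean(samples))     # concentrates near the data mean, ~2.0
```

The injected noise (variance 2η) is what turns a plain SGD iteration into an approximate posterior sampler; the cited work concerns convergence guarantees for such dynamics beyond the log-concave setting.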