Host: HKU Musketeers Foundation Institute of Data Science
Co-host: Department of Computer Science, HKU
IDS Seminar - by Dr. Diyi Yang from Stanford University
Speaker: Dr. Diyi Yang, Assistant Professor, Department of Computer Science, Stanford University
Moderator: Dr. Tao Yu, Assistant Professor, HKU IDS / Department of Computer Science
Mode: Hybrid. Seats for on-site participants are limited. A confirmation email will be sent to participants who have successfully registered.
Large language models have revolutionized the way humans interact with AI systems, transforming a wide range of applications and disciplines. In this talk, we discuss several approaches to enhancing human-AI and AI-AI interaction using LLMs. The first explores how large language models are transforming computational social science, and how human-AI collaboration can reduce costs and improve the efficiency of social science research. We then introduce efficient machine unlearning techniques that enable LLMs to forget sensitive user data when needed, towards secure and responsible interaction. The last part looks at AI-AI interaction via a dynamic LLM agent network for multi-agent collaboration on complex reasoning and generation tasks. We conclude by discussing how LLMs enable collaborative intelligence by redefining the interactions between humans and AI systems.
Dr. Diyi Yang is an Assistant Professor in the Computer Science Department at Stanford University, affiliated with the Stanford NLP Group, the Stanford HCI Group, and the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Diyi received her PhD from Carnegie Mellon University and her bachelor's degree from Shanghai Jiao Tong University. Her research focuses on natural language processing, machine learning, and computational social science. Her work has received multiple best paper nominations or awards at top NLP and HCI conferences (e.g., ACL, EMNLP, SIGCHI, ICWSM, and CSCW). She is a recipient of the IEEE "AI's 10 to Watch" award (2020), the Intel Rising Star Faculty Award (2021), the Samsung AI Researcher of the Year award (2021), the Microsoft Research Faculty Fellowship (2021), the NSF CAREER Award (2022), and an ONR Young Investigator Award (2023).
Dr. Tao Yu is an Assistant Professor at HKU IDS and the Department of Computer Science of the University of Hong Kong, and a co-director of the HKU NLP Group. He was previously a Postdoctoral Research Fellow in the Department of Computer Science and Engineering at the University of Washington. His research interest is in natural language processing and deep learning, with a focus on designing and building conversational natural language interfaces that help humans explore and reason over data in any application (e.g., relational databases and mobile apps) in a robust and trusted manner. He has published at and served on the program committees of ACL, EMNLP, ICLR, NAACL, etc., and co-organized the Interactive and Executable Semantic Parsing workshop at EMNLP 2020.
For enquiries, please contact: