Colloquium - Machine Learning with Limited Compute and Data Resources

April 3, 2020 - 4:00pm to 5:00pm
Zoom - See emails for details
Qian (Lara) Yang
Duke University

Recent innovations in artificial intelligence and machine learning not only excel at numerous tasks in industry and academia but also increasingly shape our daily lives. Powered by large datasets and compute resources such as graphics processing units (GPUs) and tensor processing units (TPUs), machine learning models have grown steadily in size and performance. However, their success relies heavily on big compute and big data, and many real-world settings offer neither. This talk describes a series of new machine learning algorithms that use models and data efficiently and effectively under limited resources.

On the compute side, which remains an underexplored research area, this talk will present the first model-parallel algorithm for training Transformer-based language models with over a billion parameters. The talk will also consider platforms with limited memory and computation, such as mobile devices, and introduce strategies that transform continuous, generic sentence embeddings into a binarized form while preserving their rich semantic information.

On the data side, recent success on natural language processing (NLP) tasks has largely been achieved for popular languages like English, which have text corpora of hundreds of millions of words; only about 20 of the world's approximately 7,000 languages enjoy such resources. For low-resource language pairs, this talk will demonstrate a joint training algorithm for pivot-based neural machine translation, which bridges the source and target languages through a pivot language. The talk will also propose the first end-to-end conditional generative model for generating paraphrases via adversarial training. Finally, I will discuss applications of machine learning techniques in interdisciplinary research, including biomedical data science, human-computer interaction, and education.
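To give a flavor of the embedding-binarization idea mentioned above, here is a minimal sketch using simple sign thresholding, one possible strategy (the function names, the 768-dimensional size, and the thresholding rule are illustrative assumptions, not the specific method presented in the talk):

```python
import numpy as np

def binarize_embedding(embedding: np.ndarray) -> np.ndarray:
    """Map each real-valued dimension to a single bit (1 if positive).

    Hypothetical sign-threshold binarization; the talk's actual
    strategies may differ.
    """
    return (embedding > 0).astype(np.uint8)

def hamming_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of matching bits; a cheap proxy for cosine similarity."""
    return float(np.mean(a == b))

# Illustrative example: a 768-dimensional float32 embedding (3,072 bytes)
# shrinks to 768 bits (96 bytes once packed), a 32x memory reduction,
# which matters on memory-constrained devices such as mobile phones.
rng = np.random.default_rng(0)
emb1 = rng.standard_normal(768).astype(np.float32)
emb2 = emb1 + 0.1 * rng.standard_normal(768).astype(np.float32)  # nearby vector

b1, b2 = binarize_embedding(emb1), binarize_embedding(emb2)
packed = np.packbits(b1)           # 96 bytes instead of 3,072
sim = hamming_similarity(b1, b2)   # nearby embeddings keep high bit overlap
```

The design point this sketch illustrates is that bitwise comparisons (Hamming distance over packed bits) are far cheaper than floating-point dot products, which is what makes binarized embeddings attractive on resource-limited platforms.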


Qian (Lara) Yang is a postdoctoral researcher at Duke University, working with Lawrence Carin. Before that, she worked with Rebecca J. Passonneau at both Columbia University and Pennsylvania State University, and with Hermann Ney at RWTH Aachen University in Germany. She received her Ph.D. in Computer Science and Technology from Tsinghua University in 2017, advised by Gerard de Melo. Her research focuses on a wide range of topics in Natural Language Processing, Machine Learning, and the intersection of Machine Learning with Biomedical Data Science, Education, and Human-Computer Interaction. She is the recipient of the Educational Advances in Artificial Intelligence (EAAI) New and Future AI Educator award (funded by grants from NSF, AIJ, and AAAI), the Google Women Techmakers Scholarship, and an RWTH – Tsinghua Research Fellowship (provided by DAAD in Germany), and was nominated for the 2019 Outstanding Postdoc award at Duke University. She has served as a session chair at AAAI and on program committees for major conferences such as NeurIPS, ICML, ACL, EMNLP, NAACL, COLING, AAAI, IJCAI, and KDD.