Colloquium - Enabling Efficient Parallelism for Critical Applications in Big Data Analytics and Machine Learning

Date: 
February 27, 2019 - 3:30pm to 4:30pm
Location: 
2217 SC
Speaker: 
Peng Jiang
The Ohio State University | Department of Computer Science and Engineering

The increasing demand for computing power in big data analytics and machine learning has imposed new challenges on the software systems that accelerate these computations on massively parallel hardware. In this talk, I will explain these challenges with two motivating examples: User-Defined Aggregations (UDAs) and Stochastic Gradient Descent (SGD), two important routines in data analytics and machine learning that are difficult to accelerate with parallel computing.

UDAs appear in many data analytics applications to summarize information. Many UDAs are extremely hard to parallelize because they have strong dependences across loop iterations. I have developed new parallelization techniques that break the dependences in UDAs and turn the sequential computations into embarrassingly parallel ones. I will first introduce the intuitive idea behind my techniques, named Enumerative Speculation, by parallelizing a Finite State Machine. Then, I will explain how this idea becomes powerful when generalized as the Sampling-and-Reconstruction of functions. Based on this Sampling-and-Reconstruction idea, my compiler tool can parallelize a broad class of sequential loops, including UDAs that traditional techniques cannot handle.
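To make the Enumerative Speculation idea concrete, the following is a minimal Python sketch, not the speaker's compiler or FSM: the toy three-state transition function, the chunk count, and the stitching step are illustrative assumptions. Each input chunk is processed speculatively from every possible start state in parallel, and a cheap sequential pass then chains the per-chunk results together.

from concurrent.futures import ProcessPoolExecutor

# Hypothetical toy FSM used only for illustration: 3 states, alphabet {'a', 'b'}.
STATES = (0, 1, 2)

def step(state, ch):
    return (state + (1 if ch == 'a' else 2)) % 3

def run_from_all_states(chunk):
    """Speculatively run the FSM over one chunk from every possible start
    state, returning {assumed_start_state: end_state}."""
    table = {}
    for s in STATES:
        cur = s
        for ch in chunk:
            cur = step(cur, ch)
        table[s] = cur
    return table

def parallel_fsm(text, start_state=0, nchunks=4):
    size = max(1, len(text) // nchunks)
    chunks = [text[i:i + size] for i in range(0, len(text), size)]
    # Embarrassingly parallel phase: each chunk is processed independently.
    with ProcessPoolExecutor() as pool:
        tables = list(pool.map(run_from_all_states, chunks))
    # Cheap sequential phase: stitch the per-chunk tables together.
    state = start_state
    for table in tables:
        state = table[state]
    return state

if __name__ == "__main__":
    data = "abbaab" * 1000
    # The parallel result matches a purely sequential run from state 0.
    assert parallel_fsm(data) == run_from_all_states(data)[0]

The speculative phase does redundant work (one pass per possible start state), but because each chunk is independent, the loop-carried dependence is confined to the short stitching loop at the end.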

SGD is one of the most popular algorithms for training machine learning models. The algorithm is inherently sequential, but its convergence property allows some level of data parallelism: if we use M training samples to compute the gradient in each iteration, the algorithm converges M times faster. This seemingly straightforward parallelization, however, is not efficient in practice because synchronization among the parallel tasks is expensive. To overcome this performance bottleneck, I have studied the convergence properties of communication-efficient SGD. I will explain the intuition behind my theoretical results and show how these results lead to more efficient training algorithms.
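As one concrete (and purely illustrative) instance of communication-efficient SGD, the sketch below implements local SGD with periodic model averaging on a synthetic least-squares problem; the data, step size, and averaging period are assumptions for the example and do not represent the speaker's algorithms or results. Each worker takes several local gradient steps between synchronizations, so the synchronization cost is amortized over many updates.

import numpy as np

def local_sgd(X, y, n_workers=4, local_steps=8, rounds=50, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    shards = np.array_split(rng.permutation(n), n_workers)
    w = np.zeros(d)                        # shared model after each averaging round
    for _ in range(rounds):
        local_models = []
        for shard in shards:               # conceptually runs on separate workers
            w_local = w.copy()
            for _ in range(local_steps):   # local updates with no communication
                i = rng.choice(shard)
                grad = (X[i] @ w_local - y[i]) * X[i]
                w_local -= lr * grad
            local_models.append(w_local)
        w = np.mean(local_models, axis=0)  # one synchronization per round
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w_true = rng.normal(size=5)
    X = rng.normal(size=(2000, 5))
    y = X @ w_true + 0.01 * rng.normal(size=2000)
    w_hat = local_sgd(X, y)
    print("estimation error:", np.linalg.norm(w_hat - w_true))

Compared with fully synchronous mini-batch SGD, which synchronizes after every gradient step, this scheme communicates only once per round of local_steps updates; characterizing how such infrequent synchronization affects convergence is the kind of question the talk's theoretical results address.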


Bio

Peng Jiang is a Ph.D. candidate in the Department of Computer Science and Engineering at The Ohio State University. His research interests are broadly in the area of software systems, with a focus on Parallel Computing and Program Optimization. Performance has been the main theme of his research. He has worked on improving the performance of applications in various domains, including graph algorithms, scientific simulations, data analysis, and machine learning. His work has been published in top conferences in both the parallel computing and machine learning areas.