Final Exam - New Parallel Algorithms for Support Vector Machines and Neural Architecture Search

Date: April 13, 2020, 11:30 am
Location: This exam will be held remotely.

PhD Candidate: Jeffrey Hajewski

Abstract

Machine learning researchers have more access to data and computational resources than ever before. Taking full advantage of these resources is challenging: it requires the researcher to be skilled not only in their own field but also in designing and building distributed systems that scale to meet the required computational demands. This is particularly important when a researcher gets negative results, because it can be unclear whether the problem lies in the algorithm or in how the work was distributed across the system. This thesis proposes several new algorithms and system architectures for efficiently training and evaluating machine learning models. In the first part, we develop a new l^1-regularized SVM algorithm that trains quickly via Newton's method, and we capitalize on the sparsity induced by the l^1 regularization to design a scalable distributed ensemble system. In the second part, we propose a distributed architecture for work queuing and several evolutionary algorithms that take advantage of this system to evolve neural network architectures. We develop two techniques that dramatically reduce the time needed to train and evaluate neural network architectures without reducing the effectiveness of the evaluation. Although this work is developed within specific areas of machine learning, the methods are widely applicable because they are generic techniques rather than algorithm-specific modifications. This means researchers can spend more time focusing on their research and less time thinking about how to scale their systems.
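As a rough illustration of the first contribution (the abstract does not give the exact formulation used in the thesis), an l^1-regularized SVM is commonly posed as the optimization problem below, here written with the squared hinge loss, one standard smooth choice that makes Newton-style second-order training feasible:

\[
\min_{w \in \mathbb{R}^d} \;\; \frac{1}{n} \sum_{i=1}^{n} \max\bigl(0,\; 1 - y_i\, w^{\top} x_i\bigr)^2 \;+\; \lambda \, \lVert w \rVert_1
\]

where (x_i, y_i) are the training examples with labels y_i in {-1, +1} and lambda controls the strength of the regularization. The l^1 penalty drives many coordinates of w to zero, and it is this sparsity that a distributed ensemble design can exploit, since sparse models are cheap to communicate and combine across workers.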

Advisor: Suely Oliveira


If you wish to join his final exam, please contact Jeff for further details.