PhD Candidate: Dat Hong
The past decade has seen a tremendous increase in machine learning research, particularly in deep learning. These methods have proven highly effective across domains including health care, finance, genomics, image processing, and text analytics. However, machine learning models often lack transparency, leaving users with little understanding of how their decisions are made. To address this, researchers have been developing explainable artificial intelligence (XAI) methods that provide insight into the behavior of complex models and algorithms.
This thesis proposes three methods for explaining and understanding sequential machine learning models. The first, AdaAX, extracts a deterministic finite automaton that explains the internal behavior of any sequence model. The second, ProtoryNet, transforms a text input into a sequence of prototypes before generating its prediction. The third, Personalized Path Recourse (PPR), generates alternative action paths that achieve better outcomes for a given path while satisfying similarity and personalization requirements. These methods are evaluated in a variety of settings and show promising results.
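To give a concrete sense of the first idea, the sketch below simulates the kind of deterministic finite automaton that an extraction method like AdaAX produces. The states, tokens, and transition table here are illustrative placeholders, not the thesis's actual output, and the toy rule (accept any sequence containing the token "bad") merely stands in for a pattern a sequence model might have learned.

```python
def run_dfa(transitions, start, accepting, sequence):
    """Simulate a DFA over a token sequence; return True if accepted."""
    state = start
    for token in sequence:
        state = transitions.get((state, token))
        if state is None:  # undefined transition: reject the sequence
            return False
    return state in accepting

# Hypothetical extracted automaton: q1 is reached once "bad" appears,
# mimicking a simple rule a sentiment model could encode.
transitions = {
    ("q0", "good"): "q0",
    ("q0", "bad"): "q1",
    ("q1", "good"): "q1",
    ("q1", "bad"): "q1",
}

print(run_dfa(transitions, "q0", {"q1"}, ["good", "bad", "good"]))  # True
print(run_dfa(transitions, "q0", {"q1"}, ["good", "good"]))         # False
```

Because the automaton's states and transitions are explicit, a user can trace exactly why a given input sequence was accepted or rejected, which is the transparency the black-box sequence model lacks.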