Speaker
Taylor Olson
Abstract
Artificially intelligent agents are now part of our daily lives, bringing both benefits and potential risks. For example, unguarded chatbots can spread harmful content, and virtual assistants can be intrusive. To be safely integrated into society, these agents must understand and follow social and moral norms. The field of Machine Ethics has made progress on this front through both classical reasoning techniques and modern learning models. However, a unified approach is needed to create true artificial moral agents (AMAs).
In this talk, I will discuss my research on creating AMAs. Drawing upon moral philosophy, I combine norm learning with sound moral reasoning. This work provides formal theories for representing, learning, and reasoning with different types of norms. I have theoretically demonstrated several interesting and necessary properties of these theories, and I have empirically demonstrated that this unified approach improves the social and moral competence of AI systems.
Bio
Taylor Olson is a PhD candidate at Northwestern University working in Machine Ethics. His research aims to better understand human moral nature and to use this understanding to improve the moral competence of AI systems. His interdisciplinary research combines theories and techniques from moral philosophy, logic, and machine learning, and has been published in top-tier AI venues such as AAAI, IJCAI, and AAMAS. His work has also been recognized with the 2023 IBM PhD Fellowship and Northwestern University's 2018 Incoming Cognitive Science Fellowship. Taylor is an ex-hooper, current gamer, and fan of rap & hip-hop.