Date Approved
9-13-2023
Embargo Period
9-18-2023
Document Type
Dissertation
Degree Name
Doctor of Philosophy in Electrical & Computer Engineering
Department
Electrical & Computer Engineering
College
Henry M. Rowan College of Engineering
Sponsor
U.S. Department of Education
Advisor
Ravi Ramachandran, Ph.D.
Committee Member 1
Ghulam Rasool, Ph.D.
Committee Member 2
John Schmalzel, Ph.D.
Committee Member 3
Nidhal Bouaynaya, Ph.D.
Committee Member 4
Umashanger Thayasivam, Ph.D.
Committee Member 5
Aaron Masino, Ph.D.
Keywords
Explainable AI, Healthcare, Machine Learning, Robust AI Systems
Subject(s)
Machine learning
Disciplines
Electrical and Computer Engineering | Engineering
Abstract
The intersection of machine learning and healthcare has the potential to transform medical diagnosis, treatment, and research. Machine learning models can analyze vast amounts of medical data and identify patterns that may be too complex for human analysis. However, one of the major challenges in this field is building trust between users and the model. Due to issues such as high false alarm rates and the black-box nature of machine learning models, patients and medical professionals need to understand how a model arrives at its recommendations. In this work, we present several methods that aim to improve machine learning models in high-stakes environments like healthcare. Our work unifies two sub-fields of machine learning: explainable AI and uncertainty quantification. First, we develop a model-agnostic approach that delivers instance-level explanations using influence functions. Next, we show that these influence functions are fairly robust across domains. Then, we develop an efficient method that reduces model uncertainty while modeling data uncertainty via Bayesian Neural Networks. Finally, through a real-world deployment, we show that, when combined, our methods deliver significant utility beyond traditional methods while retaining a high level of performance. Overall, the integration of uncertainty quantification and explainable AI can help overcome some of the major challenges of machine learning in healthcare. Together, they can provide healthcare professionals with powerful tools for improving patient outcomes and advancing medical research.
Recommended Citation
Epifano, Jacob Ryan, "BETTER MODELS FOR HIGH-STAKES TASKS" (2023). Theses and Dissertations. 3154.
https://rdw.rowan.edu/etd/3154