Date Approved
6-23-2025
Embargo Period
6-23-2025
Document Type
Dissertation
Degree Name
Ph.D. Electrical and Computer Engineering
Department
Electrical and Computer Engineering
College
Henry M. Rowan College of Engineering
Advisor
Ravi Ramachandran, Ph.D.
Committee Member 1
Nidhal Bouaynaya, Ph.D.
Committee Member 2
Ghulam Rasool, Ph.D.
Committee Member 3
Hassan Fathallah-Shaykh, M.D.
Committee Member 4
Umashanger Thayasivam, Ph.D.
Committee Member 5
Huaxia Wang, Ph.D.
Keywords
Computer Vision;Explainability;Explainable AI;Machine Learning;Robustness;Trustworthiness
Abstract
The deployment of machine learning (ML) in safety-critical domains is impeded by the black-box nature of deep learning models. Although these models excel at discerning complex patterns, their opacity undermines trust, limiting their use for high-stakes tasks. This thesis advances the field of eXplainable Artificial Intelligence (XAI) by introducing novel methods to evaluate and enhance model interpretability, ensuring explanations are both faithful (accurate to the model’s reasoning) and plausible (intelligible to human users). A key challenge in XAI is the unreliability of post-hoc explanation methods. To address this, we propose (1) quantitative evaluation metrics for assessing the faithfulness of local and global explanations and (2) enhancement frameworks that improve explainability without sacrificing faithfulness. Our qualitative and quantitative analysis reveals that simpler attribution methods consistently outperform more complex alternatives in faithfulness. Leveraging these insights, we develop two novel training frameworks that integrate explainability into optimization, as well as an explainability method based on fractional calculus that provides insight into deeper model characteristics. Through extensive experimentation, this work demonstrates that robust models and inherently explainable methods yield more trustworthy explanations than post-hoc approaches. The findings advocate for unifying XAI and ML into a single field, where explainability is treated as a core component of model development, akin to robustness. By advancing both the evaluation and improvement of explanations, this thesis contributes to the development of transparent, reliable, and deployable ML systems for real-world applications.
Recommended Citation
Nielsen, Ian E., "Unifying Explainability and Machine Learning: Towards Trustworthy and Plausible Explanations without Post-hoc Manipulations" (2025). Theses and Dissertations. 3403.
https://rdw.rowan.edu/etd/3403