This book is a comprehensive curation, exposition, and illustrative discussion of recent research tools for the interpretability of deep learning models, with a focus on neural network architectures. In addition, it includes several case studies from application-oriented articles in computer vision, optics, and related areas of machine learning.
The book can serve as a monograph on interpretability in deep learning that covers the most recent topics, as well as a textbook for graduate students. Scientists with research, development, and application responsibilities will also benefit from its systematic exposition.