Explaining Deep Neural Networks in the Medical Imaging Context
Published: 28 October 2021 by MDPI
in MOL2NET'21, Conference on Molecular, Biomed., Comput. & Network Science and Engineering, 7th ed.
congress USE.DAT-07: USA-Europe Data Analysis Trends Congress, Cambridge, UK-Bilbao, Basque Country-Miami, USA, 2021
Abstract: Deep neural networks are becoming increasingly popular owing to their revolutionary success in diverse areas such as computer vision, natural language processing, and speech recognition. However, the decision-making processes of these models are generally not interpretable to users. In domains such as healthcare, finance, and law, it is critical to know the reasons behind a decision made by an artificial intelligence system. Several directions for explaining neural models have therefore been explored recently. In this communication, we investigate two major directions for explaining deep neural networks. The first consists of feature-based post-hoc explanatory methods, that is, methods that aim to explain an already trained and fixed model (post-hoc) and that provide explanations in terms of input features, such as superpixels for images (feature-based). The second consists of self-explanatory neural models that generate explanations in the medical imaging context, that is, models with a built-in module that produces explanations for the model's predictions.
Keywords: Decision-Making Processes; Deep Neural Networks; Explaining Neural Models; Medical Imaging
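The first direction described in the abstract, feature-based post-hoc explanation, can be illustrated with a minimal sketch that is not part of the communication itself: occlusion-based attribution over square image patches, used here as a simple stand-in for superpixels. The classifier `predict_proba`, the patch size, and the dummy usage at the bottom are all illustrative assumptions, not the authors' method.

```python
# Minimal sketch of a feature-based post-hoc explanation by occlusion.
# Assumption: `predict_proba` is a placeholder for any trained, fixed image
# classifier that maps an (H, W, C) image to a probability for the class of
# interest; square patches stand in for superpixels.
import numpy as np

def occlusion_attribution(image, predict_proba, patch=16, baseline=0.0):
    """Score each patch by how much masking it lowers the model's confidence."""
    h, w = image.shape[:2]
    base_score = predict_proba(image)
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - h % patch, patch):
        for j in range(0, w - w % patch, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline  # hide one patch
            # Importance = drop in confidence when this patch is hidden.
            heatmap[i // patch, j // patch] = base_score - predict_proba(occluded)
    return heatmap

# Hypothetical usage with a dummy "classifier" that only looks at the image centre.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))

    def dummy_predict(x):
        return float(x[24:40, 24:40].mean())  # stand-in for a trained network

    print(occlusion_attribution(img, dummy_predict, patch=16))
```

In this sketch the explanation is read off the heatmap: patches whose occlusion causes the largest confidence drop are the input features the (assumed) model relied on, which is the post-hoc, feature-based behaviour the abstract contrasts with self-explanatory models that produce explanations through a built-in module.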