Unlocking neural networks: Explainability techniques for enhanced performance in automatic peripheral blood cell recognition
1 Department of Mathematics, Technical University of Catalonia, Barcelona, Spain
2 CORE Laboratory, Biochemistry and Molecular Genetics Department, Biomedical Diagnostic Center, Hospital Clinic, Barcelona, Spain
Academic Editor: Takahito Ohshiro

Abstract:

Introduction and objectives:

Automatic classification systems have advanced significantly in hematology, enabling the identification of over 80% of hematological diseases through peripheral blood cell analysis. However, their black-box nature complicates their adaptation to new images with variability, affecting precision and reliability. This study proposes a methodology that uses explainability techniques, such as LIME and the Saliency Map, to enhance model performance in the identification of leukocytes and other cell types.
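A LIME-style explanation can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the toy `predict` function stands in for the trained CNN, the image is split into square "superpixels", random on/off masks perturb them, and a linear model fitted to the black-box scores assigns each superpixel an importance weight:

```python
import numpy as np

rng = np.random.default_rng(1)

def predict(img):
    """Toy black box: responds only to the top-left quarter of the image."""
    return img[:4, :4].mean()

def lime_weights(img, patch=4, n_samples=200):
    """Fit a linear surrogate to perturbed versions of `img`;
    the coefficients are the superpixel importances."""
    h, w = img.shape
    gh, gw = h // patch, w // patch
    n_feat = gh * gw
    masks = rng.integers(0, 2, size=(n_samples, n_feat))
    ys = np.empty(n_samples)
    for i, m in enumerate(masks):
        per = img.copy()
        for j, on in enumerate(m):
            if not on:  # switched-off superpixels are zeroed out
                r, c = divmod(j, gw)
                per[r*patch:(r+1)*patch, c*patch:(c+1)*patch] = 0.0
        ys[i] = predict(per)
    # Least-squares fit of scores against masks (plus an intercept column)
    coef, *_ = np.linalg.lstsq(np.c_[masks, np.ones(n_samples)], ys, rcond=None)
    return coef[:n_feat].reshape(gh, gw)

img = rng.random((8, 8))
weights = lime_weights(img)
# superpixel (0, 0) covers exactly the region the toy model uses,
# so it carries the dominant weight
```

In the paper's setting, the surrogate's weights are what reveal, for example, a dependency on image borders rather than on the cell itself.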

Methods:

A dataset of 12298 leukocyte images, labeled by clinical pathologists and divided into five classes: basophils (1218), eosinophils (3117), lymphocytes (1214), monocytes (1420), and neutrophils (3329), was used to train a VGG19 convolutional neural network, which achieved 98% accuracy on the test set. The model was then evaluated on a second dataset comprising neutrophils (416), lymphocytes (104), monocytes (43), and eosinophils (10), where accuracy dropped to 83%. Analysis of the 100 best- and 100 worst-classified images from both sets revealed that, in correctly classified images, the Saliency Map showed high pixel activation across the entire cell except the nucleus, whereas in misclassified images the activation concentrated on the nucleus. LIME indicated a dependency on the image borders.
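The Saliency Map used in this analysis can be sketched as the magnitude of the gradient of a class score with respect to each input pixel. In this minimal illustration a hypothetical linear `score` stands in for the trained VGG19, and the gradient is estimated by central finite differences so the same idea applies to any black-box model:

```python
import numpy as np

def score(x, w):
    """Toy class score: dot product of the flattened image with weights."""
    return float(x.ravel() @ w.ravel())

def saliency_map(x, w, eps=1e-4):
    """Absolute finite-difference gradient of the score w.r.t. each pixel.
    High values mark pixels the model relies on for its decision."""
    sal = np.zeros_like(x, dtype=float)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        xp, xm = x.copy(), x.copy()
        xp[idx] += eps
        xm[idx] -= eps
        sal[idx] = abs(score(xp, w) - score(xm, w)) / (2 * eps)
    return sal

rng = np.random.default_rng(0)
img = rng.random((4, 4))
w = rng.normal(size=(4, 4))
sal = saliency_map(img, w)
# For a linear score the saliency recovers |w| exactly
```

In practice the gradient is obtained by backpropagation rather than finite differences; comparing where the resulting map is bright (whole cell vs. nucleus only) is what separated well-classified from misclassified images above.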

Results:

To address this, zoom-based data augmentation was applied, reducing the model's reliance on the top and bottom image borders. Progressive layer unfreezing revealed that adjusting the fourth convolutional block reduced the focus on the nucleus and improved activation across the whole cell. After re-training, performance improved significantly, reaching 99.4% accuracy, 99.8% precision, 99.6% sensitivity, 99.9% specificity, and a 99.6% F1-score on the second dataset.
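The zoom-based augmentation can be sketched as a center crop followed by a resize back to the original shape, so the cell fills more of the frame and the borders are pushed out of view. The crop factor and the nearest-neighbour NumPy resize here are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def random_zoom(img, rng, max_zoom=0.2):
    """Center-crop up to `max_zoom` of each side, then resize the crop
    back to the original shape with nearest-neighbour sampling."""
    h, w = img.shape[:2]
    z = 1.0 - rng.uniform(0.0, max_zoom)      # fraction of each side to keep
    ch, cw = max(1, int(h * z)), max(1, int(w * z))
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    # Nearest-neighbour upsampling via integer index grids
    rows = np.linspace(0, ch - 1, h).astype(int)
    cols = np.linspace(0, cw - 1, w).astype(int)
    return crop[np.ix_(rows, cols)]

rng = np.random.default_rng(42)
img = rng.random((64, 64))
aug = random_zoom(img, rng)   # same shape, border pixels discarded
```

Applying such a transform with a random factor at each training step forces the network to classify from the cell region rather than from border artifacts.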

Conclusion:

The proposed approach demonstrates that integrating LIME, the Saliency Map, and layer unfreezing can effectively identify and adjust the specific layers that impact model interpretability and accuracy. This integration enhances adaptability and interpretability in diverse clinical contexts, supporting improved model performance under varying data conditions.

Keywords: Explainability Techniques; Convolutional Neural Networks; Blood Cell Classification; Model Interpretability; Deep Learning Adaptation