The "black-box" nature of deep learning models remains a critical barrier to their adoption in high-stakes fields like precision agriculture, where trust and accountability are paramount. This research addresses this challenge by developing and validating two novel Explainable AI (XAI) frameworks designed to make crop segmentation models both transparent and highly accurate.
The first framework, SpectroXAI-LLaMA, is a post hoc tool that combines multiple attribution methods (e.g., SHAP, LIME) and uses a Chain-of-Thought (CoT) reasoning engine to generate logical, human-readable explanations. The second, IMPACTX-GC-RS, is a self-explaining U-Net architecture trained to predict segmentation masks and generate its own Grad-CAM explanation heatmaps simultaneously, making interpretability an intrinsic part of the model.
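To make the dual-output idea concrete, the following is a minimal, hypothetical PyTorch sketch of a self-explaining U-Net in the spirit of IMPACTX-GC-RS. It is not the published architecture: the class names, channel widths, loss weighting, and the use of precomputed Grad-CAM heatmaps as explanation targets are illustrative assumptions, since the abstract does not specify these details.

```python
# Hypothetical sketch: a U-Net-style backbone with two heads, one for the
# crop segmentation mask and one for a Grad-CAM-like explanation heatmap.
# Channel sizes, class count, and loss weighting are illustrative only.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class SelfExplainingUNet(nn.Module):
    """Minimal encoder-decoder with a segmentation head and an explanation head."""

    def __init__(self, in_channels=3, num_classes=4):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        # Head 1: per-pixel class logits (segmentation mask).
        self.seg_head = nn.Conv2d(32, num_classes, 1)
        # Head 2: single-channel explanation heatmap in [0, 1], here assumed
        # to be supervised by a Grad-CAM target computed offline.
        self.exp_head = nn.Sequential(nn.Conv2d(32, 1, 1), nn.Sigmoid())

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.seg_head(d1), self.exp_head(d1)


# Joint objective: segmentation loss plus a penalty for deviating from the
# precomputed Grad-CAM heatmap (the 0.5 weight is an assumed value).
model = SelfExplainingUNet()
images = torch.randn(2, 3, 64, 64)
masks = torch.randint(0, 4, (2, 64, 64))
cam_targets = torch.rand(2, 1, 64, 64)  # stand-in Grad-CAM targets
seg_logits, exp_map = model(images)
loss = nn.CrossEntropyLoss()(seg_logits, masks) + 0.5 * nn.MSELoss()(exp_map, cam_targets)
loss.backward()
```

A joint loss of this form lets gradients from the explanation objective shape the shared features, which is one plausible mechanism for the accuracy gain reported below.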
Both frameworks delivered strong results. SpectroXAI-LLaMA produced detailed explanations that were faithful to model behavior and consistent with agronomic principles. Notably, the IMPACTX-GC-RS model, by learning to explain its own reasoning process, became more accurate than its non-explainable baseline: the mean Intersection over Union (IoU) increased from 0.9625 to 0.975, and the model eliminated a key misclassification error between the cereal and potato classes.
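For reference, the reported metric is the standard mean Intersection over Union averaged over the $C$ crop classes (a textbook definition, not specific to this work), where $P_c$ and $G_c$ are the predicted and ground-truth pixel sets for class $c$:

\[
\mathrm{mIoU} = \frac{1}{C} \sum_{c=1}^{C} \frac{|P_c \cap G_c|}{|P_c \cup G_c|}
\]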
This work demonstrates that, contrary to the commonly assumed trade-off between interpretability and accuracy, integrating explainability directly into AI models can enhance their predictive performance. Our frameworks provide a practical pathway toward accountable, verifiable, and trustworthy AI systems, accelerating their adoption for sustainable agriculture and other critical applications.
