Understanding the Black Box of Fractional Machine Learning Models
1  Department of Applied Mathematics, National University of Science and Technology POLITEHNICA Bucharest, Bucharest, Romania
2  Center for Research and Training in Innovative Techniques of Applied Mathematics in Engineering, National University of Science and Technology POLITEHNICA Bucharest, Bucharest, Romania
3  Faculty of Mathematics and Computer Science, University of Bucharest, Bucharest, Romania
4  Faculty of Applied Sciences, National University of Science and Technology POLITEHNICA Bucharest, Bucharest, Romania
Academic Editor: YangQuan Chen

Abstract:

Fractional Calculus (FC) has gained attention in machine learning (ML) due to its ability to model long-term memory (LTM), complex system behavior, and non-local dynamics. Fractional Machine Learning Models (FMLMs) demonstrate improved convergence properties across diverse applications (e.g., financial forecasting, EEG, ECG, climate and environment, and robotic control) by combining classical learning frameworks with memory-aware dynamics, thereby improving realism, robustness, and interpretability, particularly in time-dependent settings. However, these advantages pose significant challenges for model explainability and interpretability. The main goal of this paper is to analyze the explainability of FMLMs. Unlike classical models based on integer-order derivatives, FMLMs exhibit non-local behavior: their predictions are influenced by the entire history of the input data and by the model's optimization process. These characteristics pose important challenges for Explainable Artificial Intelligence (XAI) approaches, which are primarily designed for local and memoryless models. Fractional neurons add modeling expressiveness but also amplify the model's black-box character by introducing further interpretability issues; these can be mitigated by XAI techniques that explain input contributions and highlight the roles of fractional parameters. Moreover, the limitations of existing XAI techniques for FMLMs are investigated, and significant issues related to parameter interpretability, decision traceability, and model transparency are identified. The paper also proposes research directions for developing explainability frameworks tailored to fractional learning models. By integrating fractional-order calculus into AI/ML applications, the fractional neuron retains memory effects, while XAI makes it easier to understand the impact of historical data and fractional parameters on neuronal decisions.
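The non-locality described above can be made concrete with the Grünwald–Letnikov discretization, a standard way to approximate a fractional derivative in practice. The sketch below (illustrative, not from the paper; function names are hypothetical) shows how the order-α derivative at the latest time step is a weighted sum over the *entire* sample history, which is exactly the memory effect that complicates local, memoryless XAI attributions:

```python
def gl_weights(alpha, n):
    """Grünwald–Letnikov weights w_k = (-1)^k * binom(alpha, k),
    computed via the stable recursion w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_fractional_derivative(f_samples, alpha, h):
    """Approximate the order-alpha derivative of f at the last sample.

    Every past sample contributes (non-local memory); for alpha = 1 the
    weights collapse to (1, -1, 0, 0, ...) and this reduces to the
    ordinary backward difference (f[n] - f[n-1]) / h.
    """
    n = len(f_samples) - 1
    w = gl_weights(alpha, n)
    return sum(w[k] * f_samples[-1 - k] for k in range(n + 1)) / h**alpha

# f(t) = t sampled on [0, 1] with step h = 0.1
samples = [0.1 * i for i in range(11)]
d1 = gl_fractional_derivative(samples, 1.0, 0.1)   # classical derivative, ~1.0
d_half = gl_fractional_derivative(samples, 0.5, 0.1)  # half-order derivative
```

For α between 0 and 1 the weights decay slowly rather than vanishing, so an attribution method that explains `d_half` must distribute credit across the whole input history, whereas for α = 1 only the last two samples matter.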

Keywords: Fractional Machine Learning Models; Explainable Artificial Intelligence; Fractional Neurons; Black Box Models