Introduction
The increasing use of machine learning in aerospace control systems raises critical questions of explainability, verification, and certification. While black-box approaches can achieve high performance, their limited transparency remains a major barrier to deployment in safety-critical applications. This work addresses that gap by focusing on interpretable machine-learning methods for nonlinear control, aiming to balance learning-based performance with the explainability and analysability required in aerospace engineering.
Methods
An interpretable control framework is considered, based on a deliberate separation between optimal-control generation and control-law learning. Open-loop optimal trajectories are first computed for nonlinear systems and subsequently used to identify closed-loop feedback laws via symbolic regression. Two complementary approaches are employed: (i) genetic programming with integrated continuous parameter optimisation, yielding compact symbolic controllers, and (ii) Kolmogorov–Arnold-based decompositions that reduce high-dimensional learning problems to structured combinations of univariate functions. The latter decomposition is particularly attractive for explainability, as it exposes the functional role of each state variable within the control law.
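As a minimal sketch of the second stage, the example below fits a Kolmogorov–Arnold-style additive control law u(x) ≈ f1(x1) + f2(x2) to sampled state–control pairs. The plant-free training data, the cubic "optimal" law, and the polynomial bases are all illustrative assumptions rather than the study's actual setup; the least-squares coefficients stand in for the continuously optimised parameters.

```python
import numpy as np

# Hypothetical stand-in for stage one: sampled state-control pairs from
# "optimal" trajectories (here generated from a made-up law, purely for
# illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))      # states (x1, x2)
u = -1.5 * X[:, 0] - 0.8 * X[:, 1] ** 3        # "optimal" control samples

def basis(x):
    """Univariate polynomial features [x, x^2, x^3] for one state."""
    return np.stack([x, x ** 2, x ** 3], axis=1)

# Kolmogorov-Arnold-style additive ansatz: u(x) ~ f1(x1) + f2(x2),
# with each univariate function expanded in the basis above.
Phi = np.hstack([basis(X[:, 0]), basis(X[:, 1])])
coef, *_ = np.linalg.lstsq(Phi, u, rcond=None)  # continuous parameters
f1, f2 = coef[:3], coef[3:]

# The identified law stays fully readable: each state's contribution is
# an explicit univariate polynomial.
print("f1(x1) coefficients [x, x^2, x^3]:", np.round(f1, 3))
print("f2(x2) coefficients [x, x^2, x^3]:", np.round(f2, 3))
```

In the full Kolmogorov–Arnold form the outer functions would also be learned and the inner functions need not be polynomials; the sketch only illustrates the additive structure that makes each variable's role inspectable.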
Results
The approach is demonstrated on textbook nonlinear control problems, where fully interpretable feedback laws achieve performance comparable to reference optimal solutions and classical controllers. For aeronautical applications, the framework is applied to stability augmentation and tracking tasks on a nonlinear aircraft model, with emphasis on controller transparency rather than aggressive performance tuning. Results indicate that Kolmogorov–Arnold representations offer improved scalability and readability compared to direct symbolic regression, enabling meaningful inspection of control structure and sensitivities. Initial closed-loop validations and limited robustness analyses support the practical relevance of the method.
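A closed-loop validation of this kind can be sketched as follows; the pendulum-like plant and the "fitted" symbolic law are hypothetical placeholders, chosen only so the feedback stabilises the illustrative dynamics.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical symbolic feedback law standing in for a regression result.
def u_fit(x1, x2):
    return -2.0 * x1 - 1.2 * x2 - 0.5 * x1 ** 3

# Simple pendulum-like nonlinear plant (illustrative only).
def closed_loop(t, x):
    x1, x2 = x
    return [x2, np.sin(x1) + u_fit(x1, x2)]

# Integrate the closed loop from an off-nominal initial condition and
# check that the state is regulated back towards the origin.
sol = solve_ivp(closed_loop, (0.0, 10.0), [0.8, 0.0], max_step=0.01)
print("final state:", np.round(sol.y[:, -1], 4))
```

Limited robustness checks of the kind mentioned above would perturb the plant parameters or initial conditions in the same simulation loop.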
Conclusions
This work contributes to the development of explainable learning-based control for aerospace systems, offering a viable pathway toward controllers that are not only effective but also interpretable and verifiable. By prioritising transparency and structural insight, the proposed approach aligns with emerging certification and assurance needs for autonomous and highly nonlinear aerospace systems.
