Recreating Lunar Environments by Fusion of Multimodal Data Using Machine Learning Models
Published:
01 November 2022
by MDPI
in 9th International Electronic Conference on Sensors and Applications
session Sensor Network and IoT
Abstract:
Current satellite infrastructure for data processing, transmission, and reception can be improved by upgrading the tools used to handle the very large volumes of data produced by the many different sensors carried on space missions. To develop better data-processing techniques, this paper examines multimodal data fusion using machine learning algorithms. Several current and planned lunar missions, such as Lunar Flashlight and Lunar IceCube (NASA), the EQUULEUS CubeSat (JAXA), Luna 25 (Roscosmos), and Chang'e 7 (CNSA), together with the Lunar Gateway as the current paradigm, stand to benefit greatly from cooperative data structures. This paper discusses how machine learning models can be used to recreate environments from heterogeneous, multimodal data sets. The current lunar data environment consists of archived data from Lunar Prospector, SMART-1, LADEE, and other missions. For models based on neural networks, in particular Convolutional Neural Networks (CNNs), the most important difficulty is the vast number of training samples required to avoid overfitting and underfitting. Existing multimodal deep-learning fusion models pair CNN feature extractors with fully connected layers, but these architectures struggle with high-dimensional data; we therefore discuss their strengths and weaknesses in order to design a similar neural network built on alternative architectures that improve data transmission and reception.
Keywords: lunar missions; machine learning; data fusion
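The abstract describes fusing heterogeneous sensor data into a single representation for a learning model but does not give an implementation. As a minimal sketch of the simplest variant, feature-level (early) fusion, the snippet below standardizes each modality's feature vector separately and concatenates the results; the sensor names and values are hypothetical illustrations, not data from the missions cited above.

```python
from statistics import fmean, pstdev

def zscore(values):
    """Standardize a feature vector to zero mean and unit variance."""
    mu, sigma = fmean(values), pstdev(values)
    return [(x - mu) / (sigma or 1.0) for x in values]

def early_fusion(*modalities):
    """Feature-level (early) fusion: standardize each sensor's feature
    vector on its own scale, then concatenate into one input vector."""
    fused = []
    for features in modalities:
        fused.extend(zscore(features))
    return fused

# Hypothetical readings from two lunar-orbiter sensors whose raw
# units and scales differ by orders of magnitude.
spectrometer = [0.12, 0.40, 0.33, 0.15]   # reflectance bands
altimeter = [1737.4, 1735.9, 1739.1]      # lunar radii, km

fused = early_fusion(spectrometer, altimeter)
print(len(fused))  # 7
```

Per-modality standardization matters here: without it, the kilometer-scale altimeter values would dominate the sub-unit reflectance features in any downstream model. Late-fusion variants, in which each modality is processed by its own network before combination, follow the same concatenation step at the feature-map level.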