Machine-learning applications are increasingly subject to data unavailability, complexity, and drift arising from massive and rapid changes in data Volume, Velocity, and Variety (the 3Vs). Recent advances in deep learning have brought many improvements to the field, providing generative modeling, nonlinear abstraction, and adaptive learning to address these challenges, respectively. Indeed, deep learning aims to learn representations that consistently abstract the original feature space, making it more meaningful and less complex. However, data complexity caused by distortions such as high noise levels remains challenging to overcome. In this context, recurrent expansion (RE) algorithms have recently been introduced to explore deeper representations than ordinary deep networks, providing further improvement in feature mapping. Unlike traditional deep learning, which extracts meaningful representations by abstracting the inputs alone, RE merges entire deep networks consecutively into the next one, allowing Inputs, Maps, and estimated Targets (IMTs) to serve as primary sources of learning; these three information sources provide additional insight into how they interact within a deep network. RE also makes it possible to study the IMTs of several networks and learn significant features, improving accuracy with each round. This paper presents a general overview of RE, its main learning rules, advantages, disadvantages, and limitations, along with a review of the relevant state of the art and some illustrative examples.
Checking/high-ratio validation requires deeper mathematical analysis, and thus further contributions.
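Since the IMT loop is easier to grasp in code, the following is a minimal sketch of the round-by-round expansion described above, assuming a generic regression task; the toy dataset, the single-hidden-layer network, and the simple concatenation-based merge are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

# Toy regression data standing in for any learning task.
X, y = make_regression(n_samples=500, n_features=8, noise=5.0, random_state=0)

def relu(z):
    return np.maximum(0.0, z)

X_round = X  # round 1 learns from the raw inputs only
for r in range(1, 4):  # three RE rounds for illustration
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=r)
    net.fit(X_round, y)
    print(f"round {r}: R^2 = {net.score(X_round, y):.3f}")
    # Recover the three IMT sources from the trained network
    # (relu matches MLPRegressor's default hidden activation):
    maps = relu(X_round @ net.coefs_[0] + net.intercepts_[0])  # M: hidden maps
    y_hat = net.predict(X_round).reshape(-1, 1)                # T: estimated targets
    X_round = np.hstack([X, maps, y_hat])                      # I + M + T feeds the next round
```

Each round trains a fresh network whose input concatenates the original inputs, the previous network's hidden maps, and its estimated targets; this is the sense in which whole networks are merged consecutively into the next one.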