A Novel Layer Sharing-based Incremental Learning via Bayesian Optimization
1  Department of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Korea

Abstract:

Incremental learning is a methodology that continuously extends an existing network's knowledge using sequential input data. The layer sharing algorithm is one representative method; it leverages general knowledge by sharing some initial layers of the existing network. In this algorithm, one must estimate how many initial convolutional layers of the existing network can be shared as fixed feature extractors for incremental learning. However, the existing algorithm selects the sharing configuration not through a proper optimization strategy but in a brute-force manner: it must search all possible sharing-layer cases, leading to high computational complexity. To solve this problem, we first formulate it as a discrete combinatorial optimization problem. This problem is non-convex and non-differentiable, so it cannot be solved with gradient descent or other convex optimization methods, powerful as those techniques are. We therefore propose a novel, efficient incremental learning algorithm based on Bayesian optimization, which guarantees global convergence for non-convex, non-differentiable optimization problems. The proposed algorithm adaptively finds the optimal number of shared layers by adjusting a threshold accuracy parameter in the proposed loss function. In our experiments, the proposed method finds the globally optimal number of shared layers in only six iterations, without searching all possible layer cases. Hence, the proposed method can find the globally optimal sharing layer and achieve both high combined accuracy and low computational complexity.
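To make the idea concrete, the search over the discrete number of shared layers can be sketched as a small Bayesian optimization loop. The sketch below is a minimal illustration, not the paper's implementation: the loss form (rewarding more shared layers while penalizing accuracy below a threshold), the Gaussian-process surrogate with an RBF kernel, the lower-confidence-bound acquisition, and all parameter names (`acc_threshold`, `alpha`, `kappa`) are assumptions introduced here for exposition.

```python
import numpy as np

def objective(n_shared, accuracy_fn, acc_threshold=0.9, alpha=100.0):
    # Hypothetical loss: reward sharing more layers (lower cost) while
    # penalizing any accuracy shortfall below the threshold.
    acc = accuracy_fn(n_shared)
    return -n_shared + alpha * max(0.0, acc_threshold - acc)

def rbf_kernel(a, b, length=2.0):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length ** 2))

def gp_posterior(x_obs, y_obs, x_cand, noise=1e-6):
    # Standard Gaussian-process posterior mean/std over the candidates.
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_obs, x_cand)
    Kss = rbf_kernel(x_cand, x_cand)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ np.asarray(y_obs, float)
    var = np.clip(np.diag(Kss - Ks.T @ Kinv @ Ks), 1e-12, None)
    return mu, np.sqrt(var)

def bayes_opt_layers(accuracy_fn, num_layers, n_iter=6, kappa=2.0):
    # Candidate configurations: share the first 1..num_layers layers.
    cands = np.arange(1, num_layers + 1)
    x_obs = [1, num_layers]  # probe the two extremes first
    y_obs = [objective(x, accuracy_fn) for x in x_obs]
    for _ in range(n_iter):
        mu, sd = gp_posterior(x_obs, y_obs, cands)
        lcb = mu - kappa * sd            # lower confidence bound (minimizing)
        lcb[np.isin(cands, x_obs)] = np.inf  # never re-evaluate a point
        x_next = int(cands[np.argmin(lcb)])
        x_obs.append(x_next)
        y_obs.append(objective(x_next, accuracy_fn))
    return x_obs[int(np.argmin(y_obs))]  # best sharing count found
```

A usage sketch with a simulated accuracy curve (accuracy degrades as more layers are frozen, an assumed behavior): `best = bayes_opt_layers(lambda n: 0.95 - 0.01 * max(0, n - 5) ** 2, num_layers=12)` evaluates only a handful of the twelve possible configurations, mirroring how the proposed method avoids exhaustive search.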

Keywords: Bayesian Optimization; Incremental learning; Layer sharing algorithm