On the Compressive Power of Boolean Threshold Autoencoders
Bioinformatics Center, Institute for Chemical Research, Kyoto University
Academic Editor: Frank Werner

Published: 24 September 2021 by MDPI in The 1st Online Conference on Algorithms, session Artificial Intelligence Algorithms
https://doi.org/10.3390/IOCA2021-10894
Abstract:

Autoencoders have been extensively studied and applied in recent work on neural networks. An autoencoder is a layered neural network consisting of an encoder and a decoder, where the former compresses an input vector to a lower-dimensional vector, and the latter transforms the low-dimensional vector back to the original input vector exactly or approximately. We study the compressive power of autoencoders using the Boolean threshold network model (i.e., a multi-layer perceptron with linear threshold activation functions) by analyzing the numbers of nodes and layers required to ensure that each vector in a given set of distinct binary input vectors is transformed back exactly. We show that for any set of $n$ distinct vectors there exists a seven-layer autoencoder with the optimal compression ratio (i.e., a middle layer of logarithmic size), but that there exists a set of vectors for which no three-layer autoencoder with a middle layer of logarithmic size exists. We also study the numbers of nodes and layers required for encoding alone; the results suggest that the decoding part is the bottleneck of autoencoding.
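To make the model concrete, here is a minimal sketch (not the authors' construction) of a Boolean threshold autoencoder on a toy instance: $n = 4$ one-hot input vectors in $\{0,1\}^4$ compressed to a middle layer of $\lceil \log_2 n \rceil = 2$ threshold units and decoded back exactly. All unit names and weight choices are illustrative assumptions.

```python
def threshold_unit(x, w, t):
    # Linear threshold gate: fires iff the weighted sum reaches threshold t.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= t else 0

def layer(x, weights, thresholds):
    # One layer of threshold units applied to the same input vector.
    return tuple(threshold_unit(x, w, t) for w, t in zip(weights, thresholds))

# Toy instance: n = 4 one-hot vectors in {0,1}^4.
inputs = [tuple(1 if j == i else 0 for j in range(4)) for i in range(4)]

# Encoder (2 middle-layer units): bit k of the code is bit k of the
# index of the one-hot input.
enc_w = [(0, 1, 0, 1),   # bit 0 of the index
         (0, 0, 1, 1)]   # bit 1 of the index
enc_t = [1, 1]

# Decoder (4 output units): unit i fires iff the code equals binary(i).
# Weight +1 where the target code has a 1, -1 where it has a 0;
# threshold = number of 1s in the target code.
dec_w, dec_t = [], []
for i in range(4):
    code = ((i >> 0) & 1, (i >> 1) & 1)
    dec_w.append(tuple(1 if c else -1 for c in code))
    dec_t.append(sum(code))

for x in inputs:
    z = layer(x, enc_w, enc_t)   # 2-bit middle-layer code
    y = layer(z, dec_w, dec_t)   # reconstruction
    assert y == x                # exact round trip for every input
```

The decoder weights implement an exact-match test with a single threshold gate per output: the weighted sum attains the threshold only when the code bits coincide with the target pattern, which is why every input vector is reconstructed exactly.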

This talk is based on joint work with A. A. Melkman, S. Guo, W-K. Ching, and P. Liu.

Keywords: Autoencoders
