Classifier Module of Types of Movements based on Signal Processing and Deep Learning Techniques
1  Speech Technology Group, Information Processing and Telecommunications Center, E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid
2  E.T.S.I. Telecomunicación. Universidad Politécnica de Madrid
Academic Editor: Stefano Mariani

https://doi.org/10.3390/ecsa-8-11316
Abstract:

Human Activity Recognition (HAR) has been widely addressed with deep learning techniques. However, most prior research applies a single general approach (signal processing plus deep learning) to all human activities, including postures and gestures. These types of activity have highly diverse motion characteristics, which can be captured with wearable sensors placed on the user's body. Repetitive movements such as running or cycling show periodic patterns over time and generate harmonics in the frequency domain; postures such as sitting or lying are characterized by a fixed position with occasional positional changes; and gestures (non-repetitive movements) consist of an isolated movement, usually performed by a limb. This work proposes a classifier module that performs an initial classification among these types of movement, which then allows the most appropriate signal processing and deep learning approach to be applied for each type. The classifier is evaluated on the PAMAP2 and OPPORTUNITY datasets using subject-wise and Leave-One-Subject-Out cross-validation methodologies. These datasets were recorded with inertial sensors on the hands, arms, chest, hip, and ankles, which collect data in a non-intrusive way. For PAMAP2 with subject-wise cross-validation, the direct approach of classifying the 12 activities using 5-second windows in the frequency domain obtained an accuracy of 85.26 ± 0.25%. However, an initial classifier module can distinguish between repetitive movements and postures using 5-second windows with higher performance. Afterwards, a specific window size, signal format, and deep learning architecture were used for each movement-type module, obtaining a final accuracy of 90.09 ± 0.35% (an absolute improvement of 4.83%).
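The two-stage pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration only: a spectral-flatness threshold rule stands in for the paper's stage-1 CNN, the per-type models are toy rules rather than the reported architectures, and the gesture branch is omitted for brevity. All thresholds, model names, and signal parameters here are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical sketch of the two-stage pipeline: stage 1 routes a sensor
# window to a movement type, stage 2 applies a type-specific classifier.
# The rule and models below are illustrative stand-ins, not the paper's CNNs.

def spectral_flatness(window):
    """Repetitive movements concentrate energy in harmonics, so their
    spectrum is far less flat than that of a near-static posture."""
    # Skip the DC bin (sensor offset / gravity); add a floor to avoid log(0).
    spectrum = np.abs(np.fft.rfft(window))[1:] + 1e-12
    return np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)

def route_window(window, threshold=0.5):
    """Stage 1: toy classifier module (threshold value is an assumption)."""
    return "posture" if spectral_flatness(window) > threshold else "repetitive"

def classify(window, per_type_models):
    """Stage 2: dispatch to the model selected for the detected type."""
    movement_type = route_window(window)
    return movement_type, per_type_models[movement_type](window)

# Stand-in stage-2 models keyed by movement type (hypothetical rules).
models = {
    "repetitive": lambda w: "running" if np.std(w) > 1.0 else "cycling",
    "posture": lambda w: "lying" if np.mean(w) < 0.0 else "sitting",
}

rng = np.random.default_rng(0)
t = np.arange(500) / 100.0                       # 5-second window at 100 Hz
running = 2.0 * np.sin(2 * np.pi * 3.0 * t)      # strongly periodic signal
sitting = 0.3 + 0.01 * rng.standard_normal(500)  # near-constant signal

print(classify(running, models))  # ('repetitive', 'running')
print(classify(sitting, models))  # ('posture', 'sitting')
```

The design point the sketch captures is that routing happens once per window, so each movement-type branch is free to re-window or re-transform the raw signal with its own settings before its specialized model runs.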

Keywords: Human Activity Recognition; Wearable Sensors; Classifier Module; Inertial Signals; Convolutional Neural Networks; Deep Learning; Repetitive Movements; Gestures; Postures; PAMAP2; OPPORTUNITY.