Performance Comparison of Transformer, LSTM, and ARIMA Time Series Forecasting Models: A Healthcare Application
1  Department of Systems Engineering, École de technologie supérieure (ÉTS), 1100 Notre-Dame St W, Montreal, Quebec H3C 1K3, Canada
Academic Editor: Andrea Cataldo

Abstract:

Objective: Deep learning has significantly transformed time series analysis, particularly for long and complex datasets. While traditional methods suffice for simpler and shorter time series, advanced deep learning algorithms excel at handling intricate patterns. Our study evaluates the performance of these models on complex time series, using vital signs recorded during sleep as a compelling example. Monitoring vital signs with time series forecasting enables early detection of sleep disorders, leading to faster intervention and better treatment outcomes.

Methods: We evaluated three forecasting models: ARIMA (Autoregressive Integrated Moving Average), a statistical method widely used for time series forecasting; LSTM (Long Short-Term Memory), a recurrent neural network architecture well suited to sequential data; and TFT (Temporal Fusion Transformer), a state-of-the-art deep learning model built on attention mechanisms. Our dataset consisted of nocturnal ECGs of 35 individuals from the PhysioNet Apnea-ECG database. We used the Pan–Tompkins algorithm to extract the heart rate from the ECGs and interpolated the resulting series onto an evenly spaced grid for forecasting.
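As an illustration of this preprocessing step, the sketch below derives an evenly spaced heart-rate series from a raw ECG. It substitutes scipy.signal.find_peaks for the Pan–Tompkins detector and uses illustrative parameter values (sampling rate, refractory period, peak-height threshold, resampling rate); it is a minimal approximation of the pipeline described above, not the authors' implementation.

```python
# Minimal sketch: evenly spaced heart-rate series from a raw ECG.
# R-peak detection uses scipy.signal.find_peaks as a stand-in for the
# Pan-Tompkins algorithm; thresholds and names are illustrative.
import numpy as np
from scipy.signal import find_peaks

def heart_rate_series(ecg, fs, resample_hz=1.0):
    """Return timestamps (s) and heart rate (bpm) on an even time grid."""
    # Detect R peaks, enforcing a ~0.4 s refractory period between beats
    # and requiring peaks above the 90th percentile of the signal.
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          height=np.percentile(ecg, 90))
    peak_times = peaks / fs                 # R-peak times in seconds
    rr = np.diff(peak_times)                # RR intervals (s)
    hr = 60.0 / rr                          # instantaneous heart rate (bpm)
    hr_times = peak_times[1:]               # attribute each rate to the later beat
    # Interpolate onto an evenly spaced grid for time series forecasting.
    grid = np.arange(hr_times[0], hr_times[-1], 1.0 / resample_hz)
    hr_even = np.interp(grid, hr_times, hr)
    return grid, hr_even
```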

Results: The ARIMA, LSTM, and TFT models were compared in forecasting heart rate data derived from ECG signals during sleep. Our evaluation focused on forecasting the next two minutes of heart rate from the preceding 30 minutes of observations. The ARIMA model achieved a mean absolute error (MAE) of 6.1 beats per minute (bpm) and a root mean squared error (RMSE) of 7.8 bpm. The LSTM model outperformed ARIMA, with a lower MAE of 4.3 bpm and an RMSE of 5.9 bpm. The TFT model, which leverages attention mechanisms, achieved the best performance, with an MAE of 3.8 bpm and an RMSE of 4.7 bpm.
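Assuming the standard definitions of the two error metrics over the N forecast points, with y_t the observed heart rate and ŷ_t the forecast:

\[
\mathrm{MAE} = \frac{1}{N}\sum_{t=1}^{N} \lvert y_t - \hat{y}_t \rvert,
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{t=1}^{N} \left(y_t - \hat{y}_t\right)^2}
\]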

Conclusion: The TFT model exhibited superior forecasting accuracy over both the ARIMA and LSTM models, indicating its efficacy in capturing the complex dynamics of heart rate during sleep. These results underscore the potential of advanced deep learning techniques for enhancing time series forecasting.

Keywords: Transformer; LSTM; ARIMA; Time Series Forecasting; Sleep; Heart rate
Comments on this paper
Johnson Rajasingh
The data is not sufficient to support the conclusion drawn by the candidate.


