Summary
Stress significantly affects individuals' daily lives in modern society, making its detection a topic of great interest over the past decade. Although numerous studies have explored this field, the accuracy and reliability of stress detection methods still leave room for improvement. In this study, we propose a multimodal, multitemporal-scale fusion-based stress detection system. First, a hybrid feature extraction module is proposed, which builds a feature set by combining handcrafted features with deep learning (DL) representations across multiple temporal scales. Second, a stress detection module is proposed that fuses multisource features from electrocardiogram (ECG) and electrodermal activity (EDA) signals and classifies a subject's state as baseline (normal), stress, or amusement. The proposed system is evaluated on the open-access WESAD dataset using leave-one-out cross-validation to verify its performance. The experimental results demonstrate that the proposed system learns person-independent features and detects stress with high accuracy.
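To make the fusion and evaluation protocol concrete, the sketch below pairs concatenation-based fusion of per-window ECG and EDA feature blocks with subject-wise leave-one-out cross-validation (which we assume here, given the person-independent claim, to mean leaving one subject out per fold). All array shapes, the synthetic placeholder features, and the RandomForestClassifier stand-in are illustrative assumptions only, not the system's actual feature extractors or classifier.

    # Minimal sketch: multimodal feature fusion + leave-one-subject-out evaluation.
    # Synthetic arrays stand in for per-window ECG/EDA features; all shapes and
    # the classifier choice are illustrative assumptions, not the paper's design.
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    n_windows, n_subjects = 600, 15                    # WESAD provides 15 usable subjects
    subjects = rng.integers(0, n_subjects, n_windows)  # subject ID for each window
    labels = rng.integers(0, 3, n_windows)             # 0=baseline, 1=stress, 2=amusement

    # Placeholder feature blocks: handcrafted and DL features per modality,
    # each assumed to be computed over multiple temporal scales and flattened.
    ecg_handcrafted = rng.normal(size=(n_windows, 20))  # e.g., HRV statistics
    ecg_deep = rng.normal(size=(n_windows, 64))         # e.g., learned embeddings
    eda_handcrafted = rng.normal(size=(n_windows, 12))  # e.g., SCR counts/levels
    eda_deep = rng.normal(size=(n_windows, 64))

    # Multisource fusion: concatenate all blocks into one feature vector per window.
    X = np.hstack([ecg_handcrafted, ecg_deep, eda_handcrafted, eda_deep])

    # Leave-one-subject-out CV: every window of the held-out subject is unseen
    # during training, so accuracy reflects person-independent generalization.
    logo = LeaveOneGroupOut()
    scores = []
    for train_idx, test_idx in logo.split(X, labels, groups=subjects):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], labels[train_idx])
        scores.append(accuracy_score(labels[test_idx], clf.predict(X[test_idx])))

    print(f"mean leave-one-subject-out accuracy: {np.mean(scores):.3f}")

Grouping folds by subject rather than by random window split is what distinguishes a person-independent evaluation: no data from the test subject leaks into training, so the reported accuracy estimates performance on entirely new individuals.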