Abstract
The rapid development of big data has led many researchers to focus on improving bearing fault classification accuracy with deep learning models. However, deploying a deep learning model on a resource-limited platform such as a smartphone or an STM32 microcontroller involves two difficulties: making the model as lightweight as possible and reducing its dependence on large training samples. To this end, a self-attention ensemble lightweight model combined with transfer learning (SLTL) is proposed to solve these intractable problems and deliver a method that is "small, light, and fast." Firstly, the raw vibration signal is converted into time-frequency images by the continuous wavelet transform (CWT). Secondly, a self-attention lightweight convolutional neural network (SLCNN) is built by integrating a self-attention mechanism (SAM) into an optimized SqueezeNet model. Then, starting from an SLCNN well trained on ImageNet, rich parameter knowledge is transferred from the pre-trained model to the target model. Finally, fewer training samples are used to fine-tune the target model. Experimental results on two bearing datasets validate the effectiveness of the SLTL method, which achieves 99.5% classification accuracy with fewer training samples than other conventional CNN models. More importantly, SLTL has only 0.95 M model parameters and 0.11 M floating-point operations (FLOPs), indicating that it attains high accuracy while remaining lightweight, which benefits platforms with limited resources.
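The pipeline summarized above can be illustrated with a minimal sketch: a CWT scalogram fed into an ImageNet-pretrained SqueezeNet backbone, a small self-attention block on its feature map, and fine-tuning with most backbone layers frozen. This is an assumption-laden illustration, not the authors' SLTL code; the class names, attention placement, and frozen-layer split are hypothetical choices made only for demonstration.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn
from torchvision import models


def vibration_to_cwt_image(signal: np.ndarray,
                           scales=np.arange(1, 65),
                           wavelet: str = "morl") -> np.ndarray:
    """Continuous wavelet transform of a 1-D vibration segment into a 2-D scalogram."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    return np.abs(coeffs)  # shape: (len(scales), len(signal))


class SpatialSelfAttention(nn.Module):
    """Lightweight self-attention over the spatial positions of a CNN feature map."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)      # (B, H*W, C) token sequence
        out, _ = self.attn(seq, seq, seq)       # self-attention over positions
        return (seq + out).transpose(1, 2).reshape(b, c, h, w)


class SLCNNSketch(nn.Module):
    """SqueezeNet backbone (ImageNet weights) + self-attention + small classifier head."""

    def __init__(self, num_classes: int):
        super().__init__()
        backbone = models.squeezenet1_1(
            weights=models.SqueezeNet1_1_Weights.DEFAULT  # transferred ImageNet knowledge
        )
        self.features = backbone.features               # outputs 512-channel feature maps
        self.attention = SpatialSelfAttention(512)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.attention(self.features(x)))


# Fine-tuning with few samples: freeze the early backbone layers and
# update only the later layers, the attention block, and the head.
model = SLCNNSketch(num_classes=10)
for p in model.features[:8].parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

In this sketch the scalogram would be resized to the backbone's expected input size and replicated to three channels before being passed to the model; the fraction of frozen layers and the learning rate are tuning choices, not values reported in the paper.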