Abstract

Recently, deep transfer learning-based intelligent machine diagnosis has been well investigated, and the source and target domains are commonly assumed to share the same fault categories, a setting referred to as closed-set diagnosis transfer (CSDT). However, this assumption rarely holds in real engineering scenarios, because unknown new faults may occur unexpectedly due to the uncertainty and complexity of machinery components; this more general setting is referred to as open-set diagnosis transfer (OSDT). To solve this challenging but more realistic problem, a Theory-guided Progressive Transfer Learning Network (TPTLN) is proposed in this paper. First, the upper bound of the transfer learning model under the open-set setting is thoroughly analyzed, which provides theoretical insight to guide model optimization. Second, a two-stage module is designed to repel unknown target samples and attract known samples through progressive learning, which effectively promotes inter-class separability and intra-class compactness. The performance of the proposed TPTLN is evaluated in two OSDT cases, where diagnosis knowledge is transferred across bearings and gearboxes running under different working conditions. Comparative results show that the proposed method achieves better robustness and diagnostic performance under different degrees of domain shift and openness variance. The source code and links to the data can be found in the following GitHub repository: https://github.com/phoenixdyf/Theory-guided-Progressive-Transfer-LearningNetwork.
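To make the open-set idea in the abstract concrete, the following is a minimal sketch (not the authors' TPTLN implementation, which is in the linked repository) of a generic open-set transfer step: a classifier with an extra "unknown" output scores each unlabeled target sample, likely-known samples are attracted toward the source feature distribution, and likely-unknown samples are pushed toward confident rejection. The network sizes, input dimension, threshold, and loss weighting below are illustrative assumptions only.

```python
# Hedged sketch of a generic open-set transfer step (assumed setup, not TPTLN).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_KNOWN = 3   # assumed number of shared (known) fault classes
FEAT_DIM = 64   # assumed feature dimension
INPUT_DIM = 1024  # assumed length of a vibration-signal segment

feature_extractor = nn.Sequential(nn.Linear(INPUT_DIM, FEAT_DIM), nn.ReLU())
classifier = nn.Linear(FEAT_DIM, NUM_KNOWN + 1)  # last logit = "unknown"

def open_set_step(x_src, y_src, x_tgt, reject_threshold=0.5):
    """One hypothetical training step: supervised loss on labelled source data,
    plus alignment of likely-known target samples and rejection of likely-unknown ones."""
    f_src, f_tgt = feature_extractor(x_src), feature_extractor(x_tgt)

    # Supervised classification on the labelled source domain (known classes only).
    loss_cls = F.cross_entropy(classifier(f_src), y_src)

    # Estimated probability that each target sample belongs to the unknown class.
    p_unknown = F.softmax(classifier(f_tgt), dim=1)[:, -1]
    known_w = (p_unknown < reject_threshold).float()  # attract these samples

    # Simple moment-matching alignment, restricted to likely-known target samples.
    if known_w.sum() > 0:
        tgt_mean = (f_tgt * known_w.unsqueeze(1)).sum(0) / known_w.sum()
        loss_align = F.mse_loss(tgt_mean, f_src.mean(0))
    else:
        loss_align = torch.zeros((), requires_grad=True)

    # Encourage confident rejection of the remaining (likely-unknown) samples.
    loss_reject = -(p_unknown.clamp_min(1e-6).log() * (1 - known_w)).mean()

    return loss_cls + loss_align + 0.1 * loss_reject
```

A progressive scheme in the spirit of the abstract would repeat such steps while gradually tightening the known/unknown weighting, rather than fixing a single hard threshold as this sketch does.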

  • Affiliation
    Shanghai Jiao Tong University