An integrated multi-head dual sparse self-attention network for remaining useful life prediction

Prediction of remaining useful life (RUL) plays a crucial role in prognostics and health management, helping to prevent accidents. When predicting the RUL, conventional convolutional neural networks are constrained by their fixed convolution kernel size in processing temporal data, and long short-term memory networks struggle to capture associations between non-adjacent data. Although the Transformer offers an opportunity to overcome these shortcomings, it still has limitations of its own: it requires further research on attending to vital local regions and on reducing computational complexity. To this end, this paper proposes a novel integrated multi-head dual sparse self-attention network (IMDSSN) based on a modified Transformer to predict the RUL. Approaching sparsity from two perspectives, the proposed IMDSSN combines a multi-head ProbSparse self-attention network (MPSN) and a multi-head LogSparse self-attention network (MLSN). Specifically, MPSN is designed to retain only the dominant part of the dot-product operation, thereby improving computational efficiency. Furthermore, MLSN adopts a comprehensive logarithm-based sparse strategy that considers the data within the whole time window, further reducing the amount of computation. The proposed IMDSSN is verified on an aircraft turbofan engine dataset, and the results demonstrate that it outperforms several conventional approaches.
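For readers unfamiliar with the two sparsity patterns named above, the following minimal NumPy sketch illustrates the general ideas behind them: ProbSparse attention as introduced in Informer, and LogSparse attention as introduced in LogTrans. It is a sketch of those published formulations, not the authors' IMDSSN implementation; the function names, the sampling constant c, and the fallback to the mean of V for non-dominant queries are assumptions of this sketch.

```python
# Sketch of the two sparse-attention ideas the abstract names.
# ProbSparse follows Informer's query selection; LogSparse follows
# LogTrans's exponentially spaced mask. Not the IMDSSN code itself.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def probsparse_attention(Q, K, V, c=1.0):
    """Compute full attention only for the top-u 'dominant' queries,
    ranked by M(q_i, K) = max_j(score_ij) - mean_j(score_ij);
    the remaining queries fall back to the mean of V (assumption)."""
    L, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)            # (L, L) scaled dot products
    M = scores.max(axis=1) - scores.mean(axis=1)
    u = max(1, int(np.ceil(c * np.log(L))))  # keep u = c * ln(L) queries
    top = np.argsort(M)[-u:]                 # indices of dominant queries
    out = np.tile(V.mean(axis=0), (L, 1))    # lazy queries -> mean of V
    out[top] = softmax(scores[top], axis=-1) @ V
    return out

def logsparse_attention(Q, K, V):
    """Each position attends to itself and to exponentially spaced
    past positions (i-1, i-2, i-4, ...), a LogSparse-style mask."""
    L, d = Q.shape
    mask = np.zeros((L, L), dtype=bool)
    for i in range(L):
        mask[i, i] = True
        step = 1
        while i - step >= 0:
            mask[i, i - step] = True
            step *= 2
    scores = Q @ K.T / np.sqrt(d)
    scores[~mask] = -np.inf                  # forbid non-sparse positions
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((16, 8))
K = rng.standard_normal((16, 8))
V = rng.standard_normal((16, 8))
print(probsparse_attention(Q, K, V).shape)   # (16, 8)
print(logsparse_attention(Q, K, V).shape)    # (16, 8)
```

In both routines, sparsity cuts the number of query-key products that contribute to the output: ProbSparse evaluates full attention for only O(ln L) dominant queries, while the LogSparse mask allows each position roughly O(log L) keys, which is the source of the computational savings the abstract claims.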

Language

  • English

Filing Info

  • Accession Number: 01880862
  • Record Type: Publication
  • Files: TRIS
  • Created Date: Apr 24 2023 4:19PM