Radar Emitter Recognition Method Based on Self-supervised Joint Pre-training
Abstract
A self-supervised joint pre-training method based on a multi-scale dual-attention network is proposed to address the insufficient recognition robustness caused by missing, distorted, and interfering radar pulse signals in non-cooperative electromagnetic environments. The pulse sequence is modeled as natural language, and the model is driven to learn deep semantic correlations and temporal patterns within the signals through the joint optimization of a sequence semantic contrast task and a sequence ordering task. During data pre-processing, word embedding and positional encoding transform the discrete pulse parameters into high-dimensional dynamic features that incorporate temporal dependencies. A multi-scale convolutional module enhances the flexibility of the feature representation by decoupling the temporal and channel dimensions. During pre-training, the sequence semantic contrast and sequence ordering tasks mine implicit temporal features. The performance of the self-supervised training is validated on a downstream task of recognizing radar emitters from six categories. Experiments demonstrate that under a 50% pulse missing rate, the recognition accuracy of the proposed method reaches 82.0%, outperforming RNN, CNN, and Transformer baselines by 37.3%, 24.9%, and 35.7%, respectively. At a 20% erroneous pulse rate, the proposed method maintains a recognition accuracy of 91.3%, representing improvements of 8.3%, 14.3%, and 14.2% over the same comparison methods. Ablation studies confirm that self-supervised joint pre-training raises the recognition accuracy to 96.9%, a significant improvement over training without self-supervised pre-training.
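As a rough illustration of the joint pre-training objective summarized above, the sketch below combines an InfoNCE-style sequence semantic contrast loss with a sequence ordering loss (predicting which segment permutation was applied). The PulseEncoder, segment count, dimensions, and the "missing pulse" augmentation are illustrative assumptions and do not reproduce the paper's multi-scale dual-attention architecture.

```python
# Minimal PyTorch sketch of joint pre-training: semantic contrast + sequence ordering.
# All module names and hyperparameters here are assumptions for illustration only.
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F


class PulseEncoder(nn.Module):
    """Toy encoder: embed discrete pulse parameters, add positional encoding,
    apply multi-scale 1-D convolutions, and pool to a single sequence feature."""
    def __init__(self, vocab_size=256, d_model=64, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(max_len, d_model))  # learned positions
        self.convs = nn.ModuleList(
            [nn.Conv1d(d_model, d_model, k, padding=k // 2) for k in (3, 5, 7)]
        )
        self.proj = nn.Linear(3 * d_model, d_model)

    def forward(self, tokens):                      # tokens: (B, T) int64
        x = self.embed(tokens) + self.pos[: tokens.size(1)]
        x = x.transpose(1, 2)                       # (B, C, T) for Conv1d
        multi = torch.cat([conv(x) for conv in self.convs], dim=1)
        return self.proj(multi.mean(dim=-1))        # (B, d_model)


def info_nce(z1, z2, temperature=0.1):
    """Sequence semantic contrast: match two views of the same pulse sequence."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)


PERMS = list(itertools.permutations(range(3)))      # 3 segments -> 6 orderings


def ordering_loss(encoder, head, tokens):
    """Sequence ordering: shuffle three equal segments, predict the permutation."""
    B, T = tokens.shape
    seg = T // 3
    perm_ids = torch.randint(len(PERMS), (B,))
    shuffled = torch.stack([
        torch.cat([tokens[i, p * seg:(p + 1) * seg] for p in PERMS[int(perm_ids[i])]])
        for i in range(B)
    ])
    return F.cross_entropy(head(encoder(shuffled)), perm_ids)


if __name__ == "__main__":
    encoder = PulseEncoder()
    order_head = nn.Linear(64, len(PERMS))
    tokens = torch.randint(0, 256, (8, 126))        # batch of quantized pulse sequences
    view1 = tokens.clone()
    view2 = tokens.clone()
    view2[:, ::5] = 0                               # crude "missing pulse" augmentation
    loss = info_nce(encoder(view1), encoder(view2)) \
         + ordering_loss(encoder, order_head, tokens)
    loss.backward()
    print(f"joint pre-training loss: {loss.item():.3f}")
```

In this sketch the two pretext losses are simply summed; how the paper weights or schedules the two tasks is not specified in the abstract.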