A mind map of the paper, made for study and discussion.
Over-the-Air Deep Learning Based Radio Signal Classification
1. Problem Addressed by the Paper
1.1. Use deep networks from computer vision, together with over-the-air (OTA) and synthetic data, to classify modulated radio signals
2. Approach
2.1. Idea
- Train a model with a state-of-the-art deep network (ResNet) to recognize modulation types
2.2. Dataset
2.2.1. Labels
- Normal Classes
OOK, 4ASK, BPSK, QPSK, 8PSK, 16QAM, AM-SSB-SC, AM-DSB-SC, FM, GMSK, OQPSK
- Difficult Classes
OOK, 4ASK, 8ASK, BPSK, QPSK, 8PSK, 16PSK, 32PSK, 16APSK, 32APSK, 64APSK, 128APSK, 16QAM, 32QAM, 64QAM, 128QAM, 256QAM, AM-SSB-WC, AM-SSB-SC, AM-DSB-WC, AM-DSB-SC, FM, GMSK, OQPSK
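For reference, the two label sets can be written directly as Python lists (names copied from the bullets above), e.g. for use as model output labels:

```python
# Class-label lists for the two dataset variants, copied from the outline.
NORMAL_CLASSES = [
    "OOK", "4ASK", "BPSK", "QPSK", "8PSK", "16QAM",
    "AM-SSB-SC", "AM-DSB-SC", "FM", "GMSK", "OQPSK",
]
DIFFICULT_CLASSES = [
    "OOK", "4ASK", "8ASK", "BPSK", "QPSK", "8PSK", "16PSK", "32PSK",
    "16APSK", "32APSK", "64APSK", "128APSK",
    "16QAM", "32QAM", "64QAM", "128QAM", "256QAM",
    "AM-SSB-WC", "AM-SSB-SC", "AM-DSB-WC", "AM-DSB-SC",
    "FM", "GMSK", "OQPSK",
]
print(len(NORMAL_CLASSES), len(DIFFICULT_CLASSES))  # 11 24
```

Note the 11 normal classes are a subset of the 24 difficult classes.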
2.2.2. Generation Method
- several simulated wireless channels generated from a channel model
- over-the-air (OTA) transmission of clean signals with no synthetic channel impairments
- channel state variables randomly initialized for each example
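The synthetic-channel step can be sketched in a few lines of numpy. This is a toy illustration, not the paper's channel model: `impair` and its parameters are hypothetical, and only a carrier (LO) frequency offset plus AWGN are modeled (the paper also includes fading, sample-rate offset, and delay spread).

```python
import numpy as np

def impair(iq, snr_db, cfo_hz=0.0, fs=1e6, rng=None):
    """Toy synthetic channel: a carrier (LO) frequency offset rotation
    followed by AWGN at the requested SNR."""
    rng = rng or np.random.default_rng(0)
    n = np.arange(len(iq))
    out = iq * np.exp(2j * np.pi * cfo_hz * n / fs)   # LO offset rotation
    sig_pow = np.mean(np.abs(out) ** 2)
    noise_pow = sig_pow / 10 ** (snr_db / 10)          # noise power for target SNR
    noise = np.sqrt(noise_pow / 2) * (rng.standard_normal(len(iq))
                                      + 1j * rng.standard_normal(len(iq)))
    return out + noise

x = np.exp(2j * np.pi * 0.05 * np.arange(1024))  # clean complex tone, 1024 samples
y = impair(x, snr_db=10.0, cfo_hz=500.0)
print(y.shape)  # (1024,)
```

Randomizing `snr_db` and `cfo_hz` per example corresponds to the random channel-state initialization noted above.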
2.3. 模型
2.3.1. Baseline Method
- Idea
leverages the list of higher-order moments and other aggregate signal-behavior statistics given in a table in the paper
- Method
Model: XGBoost
Features: statistics computed over 1024-sample windows
Dimensionality reduction: 1024×2 dims → 28 features
Result: significantly outperforms a single decision tree or support vector machine (SVM) on the task
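A minimal sketch of the feature step, assuming the usual higher-order moment definition M(p, q) = E[x^(p-q) · conj(x)^q]; the function name and the (p, q) list are illustrative, not the paper's exact 28-feature set.

```python
import numpy as np

def moment_features(iq):
    """Magnitude and phase of higher-order moments
    M(p, q) = E[x^(p-q) * conj(x)^q] of a power-normalized window.
    The (p, q) list is illustrative, not the paper's exact feature set."""
    x = iq / np.sqrt(np.mean(np.abs(iq) ** 2))     # power-normalize
    feats = []
    for p, q in [(2, 0), (2, 1), (4, 0), (4, 1), (4, 2), (6, 0), (6, 3)]:
        m = np.mean(x ** (p - q) * np.conj(x) ** q)
        feats.extend([np.abs(m), np.angle(m)])
    return np.array(feats)

rng = np.random.default_rng(0)
bpsk = rng.choice([-1.0, 1.0], size=1024) + 0j     # toy BPSK window
f = moment_features(bpsk)
print(f.shape)  # (14,)
```

An XGBoost classifier would then be fit on such low-dimensional feature vectors in place of the raw 1024×2 samples.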
2.3.2. Convolutional Neural Network
Idea: CNNs have worked extremely well in computer vision
Model: VGG-style
Filters: minimum size 3x3
Pooling layers: minimum size 2x2
Result: a simple DL CNN design that can be readily trained and deployed to effectively accomplish many small radio signal classification tasks
Advantage: no manual feature extraction (no expert feature extraction or other pre-processing on the raw radio signal; the network instead learns time-series features directly from the high-dimensional raw data)
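The VGG-style design can be sketched in PyTorch. This is a hedged illustration, not the paper's exact network: `VGGLike`, its widths, and the block count are assumptions, and the small-filter convolutions are written as 1D over the 2×1024 I/Q tensor for simplicity.

```python
import torch
import torch.nn as nn

class VGGLike(nn.Module):
    """VGG-style CNN for 2x1024 I/Q inputs: stacks of small (size-3)
    convolutions, each followed by size-2 max pooling, then dense layers.
    Widths and depth are illustrative, not the paper's exact values."""
    def __init__(self, n_classes=24, width=64, n_blocks=7):
        super().__init__()
        layers, ch = [], 2                       # 2 input channels: I and Q
        for _ in range(n_blocks):
            layers += [nn.Conv1d(ch, width, kernel_size=3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool1d(2)]          # halves the sequence length
            ch = width
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(width * (1024 // 2 ** n_blocks), 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

net = VGGLike()
out = net(torch.zeros(1, 2, 1024))   # one batch of raw I/Q samples
print(out.shape)  # torch.Size([1, 24])
```

Note the network consumes raw time-series samples directly, matching the "no manual feature extraction" point above.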
2.3.3. Residual Neural Network
Idea: a deeper network plus residual connections should give better performance
Model: ResNet
residual unit
stack of residual units
Tweak: the fully connected layers use the scaled exponential linear unit (SELU) rather than ReLU, a slight improvement over conventional ReLU performance
Comparison:
ResNet: 236,344 parameters
CNN/VGG: 257,099 parameters
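A toy numpy sketch of the two ingredients named above: the SELU activation and the identity-shortcut residual unit y = x + F(x). `residual_unit` uses dense maps instead of the paper's 1D convolutions to keep the sketch short.

```python
import numpy as np

# SELU constants from Klambauer et al. (self-normalizing networks).
ALPHA, LAMBDA = 1.6732632423543772, 1.0507009873554805

def selu(x):
    return LAMBDA * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

def residual_unit(x, w1, w2):
    """Identity-shortcut unit y = x + F(x); F is two dense maps with a
    ReLU between them (the paper's units use 1D convolutions instead)."""
    return x + w2 @ np.maximum(w1 @ x, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
w1 = rng.standard_normal((16, 16))
w2 = rng.standard_normal((16, 16))
y = residual_unit(x, w1, w2)        # a "stack" chains several such units
print(y.shape)  # (16,)
```

The shortcut keeps gradients flowing through deep stacks, which is what lets the deeper ResNet match or beat the VGG design with fewer parameters.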
3. SENSING PERFORMANCE ANALYSIS
A. Classification on Low Order Modulations
High SNR: VGG/CNN and ResNet perform about the same; ResNet gains roughly a 5 dB advantage over the baseline
ResNet: 99.8%, VGG: 98.3%, Baseline: 94.6%
B. Classification under AWGN conditions
Data: N = 239,616 examples
Model: L = 6 residual stacks
Result: the best performance at both high and low SNR on the difficult dataset, by a margin of 2-6 dB of improved sensitivity at equivalent classification accuracy
C. Classification under Impairments
Results:
- ResNet
ResNet performance improves under LO offset rather than degrading.
At high SNR performance ranges from around 80% in the best case down to about 59% in the worst case.
- Baseline
in the best case at high SNR this method obtains about 61% accuracy while in the worst case it degrades to around 45% accuracy
D. Classifier performance by depth
L = 5: 121 layers, 229k trainable parameters
L = 0: 25 layers, 2.1M trainable parameters
E. Classification performance by modulation type
At 10 dB SNR, every modulation type reaches over 80% accuracy
Modulation types with the largest errors in the confusion matrix:
high order phase shift keying (PSK) (16/32-PSK)
high order quadrature amplitude modulation (QAM) (64/128/256-QAM)
AM modes (confusion between with-carrier (WC) and suppressed-carrier (SC))
high order QAM and PSK can be extremely difficult to tell apart through any approach
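A confusion matrix like the one these observations come from can be tallied in a few lines; `confusion_matrix` and the toy labels below are illustrative.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Row = true class, column = predicted class; large off-diagonal
    entries mark confusion pairs (e.g. 16PSK vs 32PSK, AM WC vs SC)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# toy example with 3 classes: class 2 is often mistaken for class 1
cm = confusion_matrix([0, 1, 2, 2, 2], [0, 1, 2, 1, 1], n_classes=3)
print(cm)
```

Normalizing each row by its sum gives per-class accuracy, which is how the hardest modulation types above are identified.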
F. Classifier Training Size Requirements
Number of training examples:
4-8k examples: model accuracy is near random
up to ~1M examples: each increase in dataset size improves accuracy by 5-20%
Training on all 2M examples takes about 16 hours on a single NVIDIA V100 GPU (125 TFLOPS)
From 1M to 2M examples: no dramatic gain; at high SNR, accuracy is roughly 95% in both cases
Sequence length per example
G. Over the air performance
Generated a dataset of about 1.44M examples using USRP devices
Trained for about 14 hours on an NVIDIA V100 GPU
With all examples at an SNR of 10 dB, test-set accuracy is 95.6%
H. Transfer learning over-the-air performance
Idea: reuse the trained model through transfer learning; freeze the network parameters and, on retraining, update only the last 3 fully connected layers
Method: train the ResNet on 1.2M synthetic examples, then evaluate on OTA examples
no fine-tuning: 64%-80% accuracy over the 24 classes
fine-tuning: 84%-96% accuracy over the 24 classes
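The freeze-then-fine-tune recipe can be sketched in PyTorch; the small stand-in network, its layer sizes, and the indices of the last three dense layers are assumptions, not the paper's ResNet.

```python
import torch.nn as nn

# Stand-in network: "feature" layers followed by three dense (SELU) layers.
net = nn.Sequential(
    nn.Conv1d(2, 32, kernel_size=3, padding=1), nn.ReLU(),  # features (stay frozen)
    nn.Flatten(),
    nn.Linear(32 * 1024, 128), nn.SELU(),                   # dense layers to fine-tune
    nn.Linear(128, 128), nn.SELU(),
    nn.Linear(128, 24),
)

for p in net.parameters():                 # 1) freeze every parameter
    p.requires_grad = False
for layer in (net[3], net[5], net[7]):     # 2) unfreeze the last 3 dense layers
    for p in layer.parameters():
        p.requires_grad = True

n_trainable = sum(p.numel() for p in net.parameters() if p.requires_grad)
print(n_trainable)  # 4214040
```

An optimizer would then be built over `filter(lambda p: p.requires_grad, net.parameters())` before fine-tuning on the OTA examples.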