
Adversarial Distributional Training for Robust Deep Learning. Zhijie Deng, ...
CAT: Customized Adversarial Training for Improved Robustness. Minhao Cheng...
ClustTR: Clustering Training for Robustness. Motasem Alfarra, Juan C. Pére...
For readers just getting started with adversarial examples, the sheer number of papers in the field can be dizzying. At a moment like this, a good survey that summarizes the main progress in the field and gives us the field's...
Title: DeepFool: a simple and accurate method to fool deep neural networks. Link: ...
Title: Towards Evaluating the Robustness of Neural Networks. Link: https://arxiv....
Paper title: One pixel attack for fooling deep neural networks. Paper link: https://arxiv...
Since Szegedy et al. introduced adversarial examples in 2014, researchers have kept proposing new adversarial attack methods. This article compiles the vast majority of existing algorithms, as a starting point for discussion, and will be updated continuously. ...
Paper title: The Limitations of Deep Learning in Adversarial Settings. Paper link: https:...
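None of the snippets above include code, but the simplest of the gradient-based attack methods they refer to, the one-step FGSM of Goodfellow et al. (2014), can be sketched in a few lines. The sketch below is illustrative only and is not taken from any of the linked papers; the names model, loss_fn, and the budget epsilon = 8/255 are placeholder assumptions, and inputs are assumed to be images scaled to [0, 1].

import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=8 / 255):
    # One-step FGSM (illustrative sketch): perturb x along the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)   # e.g. cross-entropy against the true labels
    loss.backward()
    # Step each pixel by epsilon in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the result a valid image (assumes inputs scaled to [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()

Stronger attacks covered by the posts above (DeepFool, C&W, JSMA, one-pixel) refine this idea with iterative or optimization-based search rather than a single gradient step.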