Pruning Paper Survey

A paper tally by an academic novice; feel free to ignore it.

2015

#1.Learning both Weights and Connections for Efficient Neural Networks:2745 P3

First determine which connections are important, then prune, then fine-tune.
Compares L1- and L2-regularized pruning; retraining plus iterative pruning works best. A minimal sketch of that loop follows.


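To make the pipeline concrete, here is a minimal sketch of magnitude pruning with masked retraining, roughly in the spirit of this paper; the layer names, the sparsity schedule, and the `retrain` hook are hypothetical placeholders, not the paper's exact recipe.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude connections until `sparsity` is reached.
    weights: dict of layer name -> np.ndarray. Returns pruned weights plus the
    binary masks, so retraining can keep pruned connections at zero."""
    all_w = np.concatenate([np.abs(w).ravel() for w in weights.values()])
    threshold = np.quantile(all_w, sparsity)                     # global magnitude cutoff
    masks = {name: (np.abs(w) > threshold).astype(w.dtype) for name, w in weights.items()}
    return {name: w * masks[name] for name, w in weights.items()}, masks

# Iterative prune -> retrain, raising sparsity each round (hypothetical schedule).
weights = {"conv1": np.random.randn(64, 3, 3, 3), "fc1": np.random.randn(4096, 256)}
for sparsity in (0.5, 0.7, 0.9):
    weights, masks = magnitude_prune(weights, sparsity)
    # retrain(weights, masks)   # hypothetical fine-tuning step; masked weights stay zero
```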

2016

#2.Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding:4101 CA

Three-stage pipeline: prune and fine-tune, quantize the weights, then Huffman coding. A rough sketch of the weight-sharing stage follows.
About a 3-4x speedup.


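A rough sketch of the weight-sharing stage on an already-pruned layer, assuming a k-means codebook as in the paper; the 5-bit setting and the random weights are only illustrative, and the final Huffman coding of the cluster indices is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def share_weights(w, bits=5):
    """Quantize the surviving (non-zero) weights of one layer to a 2**bits codebook."""
    nonzero = w[w != 0].reshape(-1, 1)
    km = KMeans(n_clusters=2 ** bits, n_init=10).fit(nonzero)
    codebook = km.cluster_centers_.ravel()
    quantized = np.zeros_like(w)
    quantized[w != 0] = codebook[km.labels_]   # each weight replaced by its centroid
    return quantized, codebook, km.labels_     # indices + codebook are what get Huffman-coded

w = np.random.randn(256, 256)
w[np.abs(w) < 0.5] = 0.0                       # pretend the pruning stage already ran
q, codebook, idx = share_weights(w)
```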

#3.Dynamic Network Surgery for Efficient DNNs:502 CA

Prunes dynamically and adds a splicing step that can undo incorrect prunes.



2017

#4.Pruning Filters for Efficient ConvNets:1319 P3

Prunes whole filters, i.e. prunes along c_out (see the sketch below).


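A minimal sketch of the criterion, assuming L1-norm ranking of output filters and a made-up pruning ratio; pruning a filter also removes the matching input channel of the next layer.

```python
import numpy as np

def prune_filters_l1(conv_w, next_w, ratio=0.5):
    """conv_w: (c_out, c_in, k, k) weights of the layer being pruned.
    next_w:  (c_out_next, c_out, k, k) weights of the following conv layer.
    Drops the `ratio` fraction of filters with the smallest L1 norm and the
    corresponding input channels of the next layer."""
    scores = np.abs(conv_w).sum(axis=(1, 2, 3))        # one L1 norm per output filter
    n_keep = int(conv_w.shape[0] * (1 - ratio))
    keep = np.sort(np.argsort(scores)[-n_keep:])       # indices of surviving filters
    return conv_w[keep], next_w[:, keep], keep

w1 = np.random.randn(64, 32, 3, 3)
w2 = np.random.randn(128, 64, 3, 3)
w1_pruned, w2_pruned, kept = prune_filters_l1(w1, w2)
```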

#5.Pruning Convolutional Neural Networks for Resource Efficient Inference:785 T3

(1) At each iteration, prune the least important parameter, then repeat.
(2) A Taylor expansion decides which one to cut.
(3) Scores are normalized per layer so they can be compared globally (see the sketch after this list).


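A sketch of steps (2) and (3), assuming the first-order Taylor criterion |activation x gradient| averaged over the batch, followed by a per-layer L2 rescaling; the tensor shapes are placeholders.

```python
import numpy as np

def taylor_channel_scores(activation, grad):
    """activation, grad: (batch, channels, h, w) captured for one layer.
    First-order Taylor estimate of the loss change from removing a channel:
    |mean over batch and spatial positions of activation * gradient|,
    followed by layer-wise L2 rescaling so layers become comparable."""
    per_channel = np.abs((activation * grad).mean(axis=(0, 2, 3)))
    return per_channel / (np.linalg.norm(per_channel) + 1e-8)

act = np.random.randn(8, 64, 14, 14)     # placeholder activations
grad = np.random.randn(8, 64, 14, 14)    # placeholder gradients w.r.t. those activations
scores = taylor_channel_scores(act, grad)
weakest = int(scores.argmin())           # the channel pruned at this iteration
```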

#6.Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee:79 TA

Unstructured (irregular) pruning; skipping for now.

#7.Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon:135 PA

Needs little retraining: defines a layer-wise error and uses its second-order derivatives. How does this differ from paper #5?

#8.Runtime Neural Pruning:186 N

Prunes c_in dynamically and adaptively depending on the input.
Works top-down, pruning layer by layer.



#9.Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning:378 N

Energy-aware channel pruning.
After each layer is pruned, a least-squares adjustment quickly recovers accuracy; once every layer has been pruned, the whole network is fine-tuned with global backpropagation.



#10.ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression:688 CA P3

Filter pruning that uses statistics from the next layer to decide what to prune in the current layer. How does it differ from paper #5?

#11.Channel Pruning for Accelerating Very Deep Neural Networks:865 CA

What is the difference from paper #12? To read in detail.



#12.Learning Efficient Convolutional Networks Through Network Slimming:663 PA

Prunes input channels (c_in), using the BatchNorm scaling factors to judge each conv channel's importance.
During training, an L1 penalty on the channel scale factors pushes them toward zero (sketched below).
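
A minimal sketch of both ingredients, assuming PyTorch BatchNorm layers; the penalty weight `lam` and the pruning percentile are made-up values, and rebuilding the thinner network afterwards is left out.

```python
import torch
import torch.nn as nn

def bn_l1_penalty(model, lam=1e-4):
    """L1 regularizer on all BatchNorm scale factors (gamma), added to the task loss."""
    return lam * sum(bn.weight.abs().sum()
                     for bn in model.modules() if isinstance(bn, nn.BatchNorm2d))

def select_channels(model, percentile=0.7):
    """Pool every gamma, threshold at the given percentile, and report which
    channels of each BatchNorm layer would survive pruning."""
    gammas = torch.cat([bn.weight.detach().abs().flatten()
                        for bn in model.modules() if isinstance(bn, nn.BatchNorm2d)])
    thresh = torch.quantile(gammas, percentile)
    return {name: (bn.weight.detach().abs() > thresh).nonzero().flatten()
            for name, bn in model.named_modules() if isinstance(bn, nn.BatchNorm2d)}

# Training:   loss = criterion(output, target) + bn_l1_penalty(model)
# Afterwards: kept = select_channels(model), then rebuild a narrower net from `kept`.
```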

2018

#13.Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers:136 TA P3

Earlier work assumes that smaller weights or feature maps are less informative; this paper does not rely on that assumption.
Trains the model so that certain channels produce constant outputs, then prunes those channels?

#14.To prune, or not to prune: exploring the efficacy of pruning for model compression:254 N

Large sparse networks outperform small dense ones.
Gradual pruning: sparsity is increased progressively while training continues (the schedule is sketched below).
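
A sketch of a gradual sparsity schedule in the style this paper describes, assuming the cubic ramp from s_i to s_f over n pruning steps; all step counts here are placeholder values.

```python
def gradual_sparsity(step, t0=0, n=100, delta_t=1, s_i=0.0, s_f=0.9):
    """Cubic sparsity ramp: stays at s_i before t0, then rises to s_f over
    n pruning steps spaced delta_t apart. At each pruning step the mask is
    updated to zero the smallest-magnitude weights up to the current target."""
    if step < t0:
        return s_i
    progress = min((step - t0) / float(n * delta_t), 1.0)
    return s_f + (s_i - s_f) * (1.0 - progress) ** 3

targets = [round(gradual_sparsity(t), 3) for t in range(0, 101, 20)]
# sparsity rises quickly at first and flattens out as it approaches s_f = 0.9
```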

#15.Discrimination-aware Channel Pruning for Deep Neural Networks:176 TA


To read in detail later.

#16.Frequency-Domain Dynamic Pruning for Convolutional Neural Networks:27 N


#17.Learning Sparse Neural Networks via Sensitivity-Driven Regularization:21 N

Quantifies how sensitive the output is to each parameter, adds a regularization term that shrinks parameter magnitudes according to this sensitivity, and sets those below a threshold directly to zero.
Might be useful later for inter-layer adaptation; next to read in detail.

#18.Amc: Automl for model compression and acceleration on mobile devices:414 T3

Uses AutoML; worth a detailed read later.



#19.Data-Driven Sparse Structure Selection for Deep Neural Networks:169 MA

Scales the output of a structure (group / block / neuron) with a learnable scaling factor, sparsifies that factor with regularization, optimizes with Accelerated Proximal Gradient, then removes structures whose factor is zero (feels similar to paper #17).

#20.Coreset-Based Neural Network Compression:27 PA

No retraining needed; includes quantization and Huffman coding. Not sure what it is about yet.

#21.Constraint-Aware Deep Neural Network Compression:24 SkimCA


#22.A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers:111 CA

Skipped.

#23.PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning:167 PA

Prunes parameters and uses the freed-up parameters to train new tasks.



#24.NISP: Pruning Networks using Neuron Importance Score Propagation:256 N

Earlier work only considers the error of one or two layers, not the effect on the whole network. This paper works from a single objective, minimizing the reconstruction error of the important responses in the "final response layer" (FRL), the second-to-last layer before classification, and proposes Neuron Importance Score Propagation (NISP), which propagates the FRL importance scores back to every neuron in the network. (A sketch of the propagation rule is below.)


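A sketch of the propagation rule as I read it: FRL importance scores are pushed backwards through the absolute weights so every neuron gets a score, with no per-layer reconstruction; the fully connected shapes are an illustrative simplification of the paper's general formulation.

```python
import numpy as np

def propagate_importance(weight_mats, final_scores):
    """weight_mats: (out_dim, in_dim) matrices ordered from input to the final
    response layer. final_scores: importance of the FRL neurons (from feature
    ranking). Scores are propagated backwards as s_{k-1} = |W_k|^T s_k, giving
    every neuron an importance score in one backward sweep."""
    scores = [np.asarray(final_scores, dtype=float)]
    for W in reversed(weight_mats):
        scores.append(np.abs(W).T @ scores[-1])
    return list(reversed(scores))            # scores[0] belongs to the input layer

Ws = [np.random.randn(128, 256), np.random.randn(64, 128)]   # toy 256 -> 128 -> 64 net
layer_scores = propagate_importance(Ws, final_scores=np.random.rand(64))
# In each layer, prune the neurons with the lowest propagated score.
```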

#25.CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization:78 N

Prunes and quantizes in parallel; skipped.

#26.“Learning-Compression” Algorithms for Neural Net Pruning:61 N

Learns automatically how much to prune in each layer.

#27.Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks: 209 PA

(1) Pruned filters are not permanently fixed at zero.
(2) Can be trained from scratch, pruning while training.
Worth a detailed read.

#28.Accelerating Convolutional Networks via Global & Dynamic Filter Pruning:72 N

Global dynamic filter pruning that can also recover filters pruned by mistake.

2019

#29.Network Pruning via Transformable Architecture Search:38 PA

Combines NAS with distillation.

#30.Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks:39 PA

Attaches a gate factor to each channel; channels whose gate is zero are removed. A Taylor expansion estimates the effect on the loss of setting a gate to zero. Global pruning, with a tick-tock training framework (sketched below).
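
A sketch of the gate criterion, assuming a per-channel gate applied multiplicatively and the Taylor term |gate x dL/d(gate)| accumulated over batches; the training loop itself is only indicated in comments.

```python
import torch

# One multiplicative gate per channel of some layer: the layer output is x * gate.
gate = torch.ones(64, requires_grad=True)
scores = torch.zeros(64)

def accumulate_gate_importance(gate, scores):
    """Call after loss.backward(): |gate * dL/d(gate)| is the first-order Taylor
    estimate of how much the loss grows if that gate is set to zero."""
    return scores + (gate.detach() * gate.grad.detach()).abs()

# Tick phase (sketch): for each batch, forward, loss.backward(), then
#     scores = accumulate_gate_importance(gate, scores)
# Prune the channels with the globally smallest accumulated scores,
# then fine-tune the remaining network in the Tock phase.
```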

#31.Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask:64 TA

Skipping for now.

#32.One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers:41 N

Skipping for now.

#33.Global Sparse Momentum SGD for Pruning Very Deep Neural Networks:15 PA

Global: finds each layer's sparsity ratio; end-to-end training; no retraining needed; reported to beat the lottery ticket approach.

#34.AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters:12 N

Weight pruning usually hurts robustness or requires prior knowledge to set the hyperparameters; this paper's AutoPrune prunes via auxiliary parameters with new update rules, easing both problems. Still the three-step pre-train, prune, fine-tune pipeline.

#35.Model Compression with Adversarial Robustness: A Unified Optimization Framework:19 PA

Compression that does not hurt adversarial robustness.

#36.MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning:40 PA

#37.Accelerate CNN via Recursive Bayesian Pruning: 14 PA

Layer-by-layer pruning; worth a detailed read later.

#38.Adversarial Robustness vs Model Compression, or Both?:15 PA

About robustness; skipping for now.

#39.Learning Filter Basis for Convolutional Neural Network Compression:9 N

#40.Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration:163 PA

Earlier work treats small-valued filters as unimportant, which relies on two preconditions: (1) the filter norms have a large spread, and (2) the smallest norms are close to zero. This paper instead proposes filter pruning based on the geometric median (sketched below).
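
A sketch of the selection rule, with the geometric median approximated by the filter that minimizes the summed distance to all others; the filter shapes and the number pruned are arbitrary.

```python
import numpy as np

def filters_near_geometric_median(conv_w, n_prune=16):
    """conv_w: (c_out, c_in, k, k). Returns the indices of the n_prune filters
    whose summed Euclidean distance to all other filters is smallest, i.e. the
    ones closest to the layer's geometric median and therefore most replaceable."""
    flat = conv_w.reshape(conv_w.shape[0], -1)
    pairwise = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    return np.argsort(pairwise.sum(axis=1))[:n_prune]

w = np.random.randn(64, 32, 3, 3)
redundant = filters_near_geometric_median(w)   # pruned regardless of their norms
```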

#41.Towards Optimal Structured CNN Pruning via Generative Adversarial Learning:82 PA

Uses a GAN; skipped.

#42.Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure:37 PA

Skipped.

#43.On Implicit Filter Level Sparsity in Convolutional Neural Networks:11 PA

Skipped.

#44.Structured Pruning of Neural Networks with Budget-Aware Regularization:20 N

Can control how much and how fast to prune, and also uses distillation; skipped.

#45.Importance Estimation for Neural Network Pruning:80 PA

Estimates each neuron's effect on the final loss and iteratively removes the one with the smallest effect, using first- and second-order Taylor expansions instead of per-layer sensitivity analysis.
Feels important; read in detail later. A sketch of the first-order criterion is below.
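
A sketch of a first-order variant of this criterion, assuming filter importance is the squared sum of weight x gradient over the filter's parameters; the grouping details and the iterative removal loop are simplified away.

```python
import torch

def first_order_importance(weight, grad):
    """weight, grad: (c_out, c_in, k, k) for one conv layer; grad is dLoss/dWeight.
    Filter importance = (sum over the filter of weight * grad)^2, a squared
    first-order Taylor estimate of the loss change if the filter were zeroed."""
    contribution = (weight * grad).sum(dim=(1, 2, 3))
    return contribution.pow(2)

w = torch.randn(64, 32, 3, 3)
g = torch.randn_like(w)                        # stand-in for w.grad after loss.backward()
scores = first_order_importance(w, g)
least_important = torch.argsort(scores)[:8]    # filters that would be removed this round
```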

#46.OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks:12 N

Previous work only looks at relationships within a layer, not across layers; this paper regularizes consecutive layers jointly, tying the current layer's output channels to the next layer's input channels.

#47.Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search:35 TA

Trades off speed against accuracy; skipped.

#48.Variational Convolutional Neural Network Pruning:54 N

Variational Bayesian approach; no retraining needed.

#49.The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks:204 TA

Skipping for now.

#50.Rethinking the Value of Network Pruning:303 PA

Automatically determines each layer's sparsity ratio.
Pruning and then fine-tuning can be worse than training from scratch: the pruned structure should not reuse the weights of the previously trained model, so it should be retrained from scratch.

#51.Dynamic Channel Pruning: Feature Boosting and Suppression:50 TA

Rather than permanently removing structures like plain pruning, FBS dynamically amplifies important channels and skips unimportant ones at run time.

#52.SNIP: Single-shot Network Pruning based on Connection Sensitivity:121 TA

Instead of train-then-prune, it prunes first and then trains from scratch. It still does a connection sensitivity analysis, essentially a first-order Taylor term followed by a softmax-style normalization, and removes all but the top-k connections in a single shot (sketched below).
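
A sketch of connection sensitivity at initialization, assuming saliency |w x g| from a single mini-batch, normalized over the whole network, with only the top fraction of connections kept; the keep ratio and the stand-in gradients are illustrative.

```python
import torch

def snip_masks(weights, grads, keep_ratio=0.1):
    """weights/grads: dicts of layer name -> tensor, with gradients from ONE mini-batch
    taken at (random) initialization. Keeps the keep_ratio fraction of connections
    with the highest normalized saliency |w * g| and masks out the rest before training."""
    saliency = {name: (weights[name] * grads[name]).abs() for name in weights}
    total = sum(s.sum() for s in saliency.values())
    normed = {name: s / total for name, s in saliency.items()}   # saliencies sum to 1
    flat = torch.cat([s.flatten() for s in normed.values()])
    k = max(1, int(flat.numel() * keep_ratio))
    threshold = torch.topk(flat, k).values.min()
    return {name: (s >= threshold).float() for name, s in normed.items()}

ws = {"conv1": torch.randn(64, 3, 3, 3), "fc": torch.randn(10, 256)}
gs = {name: torch.randn_like(w) for name, w in ws.items()}       # stand-in gradients
masks = snip_masks(ws, gs, keep_ratio=0.05)                      # prune once, then train
```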

#53.Dynamic Sparse Graph for Efficient Deep Learning:16 CUDA3

Can be used for training as well; look at it when there is time.

#54.Collaborative Channel Pruning for Deep Networks:28 N

Channel pruning: analyzes the channels' effect on the loss, with CCP approximating the Hessian matrix.

#55.Approximated Oracle Filter Pruning for Destructive CNN Width Optimization:28 N

Oracle pruning evaluates filter importance directly, but it has high time complexity and needs the resulting width to be given in advance; this paper optimizes via an approximated oracle.

#56.EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis:15 PA

Reparameterizes the network in the Kronecker-factored eigenbasis (KFE) and applies Hessian-based structured pruning on top of it.

#57.EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning:1 PA

Filter pruning; skipping for now.

#58.DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation:0 N

Makes each layer's sparsity ratio differentiable so it can be searched by gradient descent; also supports training from scratch.

#59.DHP: Differentiable Meta Pruning via HyperNetworks:2 PA

AutoML; skipped.

#60.Meta-Learning with Network Pruning:0 N

Applies pruning to meta-learning; skipped.

#61.Accelerating CNN Training by Pruning Activation Gradients:1 N

Most activation gradients in backpropagation are tiny; the paper prunes them stochastically, with the threshold derived from their distribution, and gives a theoretical analysis.
Read in detail later; a rough sketch follows.
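
A rough sketch of stochastic gradient pruning as I understand it: small activation gradients are either zeroed or bumped to +/- tau with a survival probability that keeps the estimate unbiased; the fixed-percentile threshold here stands in for the paper's distribution-derived one.

```python
import numpy as np

def stochastic_gradient_prune(grad, percentile=90):
    """Sparsify activation gradients during backprop. Entries with |g| < tau are
    kept with probability |g| / tau (and bumped to +/- tau) or zeroed otherwise,
    so the expected value of every entry is unchanged (unbiased)."""
    tau = np.percentile(np.abs(grad), percentile)
    small = np.abs(grad) < tau
    survive = np.random.rand(*grad.shape) < (np.abs(grad) / max(tau, 1e-12))
    out = grad.copy()
    out[small & ~survive] = 0.0
    out[small & survive] = np.sign(grad[small & survive]) * tau
    return out

g = np.random.randn(128, 64, 14, 14) * 0.01
sparse_g = stochastic_gradient_prune(g)        # most small entries become exactly zero
```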

#62.DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search:1 N

NAS; skipped.

#63.Differentiable Joint Pruning and Quantization for Hardware Efficiency:0 N

Joint pruning and quantization; skipped.

#64.Channel Pruning via Automatic Structure Search:5 PA

Skipped.

#65.Adversarial Neural Pruning with Latent Vulnerability Suppression:3 N

Skipped.

#66.Proving the Lottery Ticket Hypothesis: Pruning is All You Need:14 N

A strengthened lottery ticket hypothesis.

#67.Soft Threshold Weight Reparameterization for Learnable Sparsity:6 PA

Learns the sparsity threshold automatically.

#68.Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection:7 N

Instead of deleting neurons from the trained network, it greedily adds neurons starting from an empty network.

#69.Operation-Aware Soft Channel Pruning using Differentiable Masks:0 N

Skipped.

#70.DropNet: Reducing Neural Network Complexity via Iterative Pruning:0 N

At each iteration, removes the node with the lowest activation averaged over the training samples.

#71.Towards Efficient Model Compression via Learned Global Ranking:1 F

Learns a global ranking of filters across layers; pruning the lowest-ranked filters yields a family of architectures with different accuracy/latency trade-offs.

#72.HRank: Filter Pruning using High-Rank Feature Map:18 PA

Skipped.

#73.Neural Network Pruning with Residual-Connections and Limited-Data:2 N

Skipped.

#74.Multi-Dimensional Pruning: A Unified Framework for Model Compression:1 N

Skipped.

#75.DMCP: Differentiable Markov Channel Pruning for Neural Networks:0 TA

Skipped.

#76.Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression:8 PA

Combines low-rank decomposition with pruning for global compression.

#77.Few Sample Knowledge Distillation for Efficient Network Compression:8 N

Distillation; skipped.

#78.Discrete Model Compression With Resource Constraint for Deep Neural Networks:1 N

Skipped.

#79.Structured Compression by Weight Encryption for Unstructured Pruning and Quantization:2 N

Encrypts the unstructured sparse weights and decodes them with XOR gates at inference time.

#80.Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration:2 N

Adaptively selects a different pruning criterion for each layer.

#81.APQ: Joint Search for Network Architecture, Pruning and Quantization Policy:7

Jointly searches NAS, pruning, and quantization policies.

#82.Comparing Rewinding and Fine-tuning in Neural Network Pruning:23 TA

Compares weight rewinding with fine-tuning after pruning.

#83.A Signal Propagation Perspective for Pruning Neural Networks at Initialization:14 N

Explains why pruning a network that has only been initialized, before any training, can be effective.

#84.ProxSGD: Training Structured Neural Networks under Regularization and Constraints:1 TA PA

Skipped.

#85.One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation:2 N

One-shot pruning of RNNs.

#86.Lookahead: A Far-sighted Alternative of Magnitude-based Pruning:5 PA

Magnitude-based pruning does minimize the Frobenius distortion of a single linear layer; this paper extends that single-layer objective to multiple layers and proposes a simple method, lookahead pruning.

#87.Dynamic Model Pruning with Feedback:9 N

Uses feedback to reactivate weights that were pruned too early.

#89.Provable Filter Pruning for Efficient Neural Networks:9 N

Skipped.

#90.Data-Independent Neural Pruning via Coresets:5 N

Skipped.

#91.AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates:13 N

Skipped.

#92.DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks

Guided by the information obtained, it first proposes a block-maximum weighted masking (BMWM) method that preserves salient weights while imposing a high degree of regularity on the weight matrix; as a further optimization it proposes a density-adaptive regular-block (DARB) pruning scheme.

#93.Pruning from Scratch: 12 N
