Keras Code for Deep Residual Shrinkage Networks

A deep residual shrinkage network is in fact a kind of convolutional neural network, a variant of the deep residual network (ResNet). Its central idea is that removing redundant information is very important when learning features with deep learning, because raw data often contain a large amount of information irrelevant to the task at hand; soft thresholding is a very flexible way of removing such redundant information.

1. Deep Residual Networks

Let us start with the deep residual network. The figure below shows its basic block, which consists of several nonlinear layers (the residual path) and a cross-layer identity shortcut. The identity shortcut is the core of the deep residual network and a key guarantee of its excellent performance.


[Figure: the basic block of a deep residual network]

2. Deep Residual Shrinkage Networks

A deep residual shrinkage network is obtained by applying shrinkage to the residual path of a deep residual network, where "shrinkage" refers to soft thresholding.


[Figure: the basic block of a deep residual shrinkage network]

Soft thresholding is the core step in many signal denoising methods. It sets features close to zero (i.e., whose absolute value is below some threshold τ) to zero; in other words, it zeroes out features in the interval [-τ, τ] and shrinks the remaining features, those farther from zero, toward zero as well.
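As a minimal NumPy sketch (separate from the Keras code below), the soft-thresholding function just described can be written as:

```python
import numpy as np

def soft_threshold(x, tau):
    # Zero out features in [-tau, tau]; shrink all other features toward zero by tau
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([-2.0, -0.5, 0.3, 1.5])
print(soft_threshold(x, 1.0))  # -> [-1.   0.   0.   0.5]
```

Features whose absolute value is below τ = 1 are set exactly to zero, while the others move toward zero by τ.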

If we consider the threshold together with the bias b of the preceding convolutional layer, the zeroed interval becomes [-τ+b, τ+b]. Since both τ and b are learned automatically, soft thresholding can, from this perspective, zero out features in an arbitrary interval; it is a more flexible way of deleting features within a certain value range, and can also be understood as a more flexible nonlinear mapping.

From another perspective, the preceding two convolutional layers, two batch normalization layers, and two activation functions transform the features carrying redundant information into values close to zero, and the useful features into values far from zero. A set of thresholds is then learned automatically, and soft thresholding removes the redundant features while retaining the useful ones.
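The threshold-learning step described above can be sketched in NumPy (the sigmoid here is a stand-in for the block's small fully connected sub-network, and the shapes are illustrative, not the trained model):

```python
import numpy as np

# Illustrative sketch of per-channel threshold computation in the shrinkage block
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 4, 8))       # an H x W x C feature map
abs_mean = np.abs(feat).mean(axis=(0, 1))   # global average of |features|, per channel
scales = 1.0 / (1.0 + np.exp(-abs_mean))    # sigmoid output standing in for the FC sub-network
thres = abs_mean * scales                   # per-channel thresholds, each in (0, abs_mean)
print(thres.shape)                          # -> (8,)
```

Because the sigmoid output lies in (0, 1), each learned threshold is guaranteed to be positive but smaller than the mean absolute value of that channel, which keeps soft thresholding from zeroing out everything.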

Stacking a number of these basic blocks yields the complete deep residual shrinkage network, as shown below:


[Figure: overall architecture]

3. Image Recognition and Keras Programming

Although the deep residual shrinkage network was originally applied to fault diagnosis based on vibration signals, it is in fact a general-purpose feature learning method, and it is likely to be useful in many tasks (computer vision, speech, text).

Below is the code for MNIST handwritten digit recognition based on a deep residual shrinkage network (the code is very simple and is for reference only):

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Dec 28 23:24:05 2019

Implemented using TensorFlow 1.0.1 and Keras 2.2.1
 
M. Zhao, S. Zhong, X. Fu, et al., Deep Residual Shrinkage Networks for Fault Diagnosis, 
IEEE Transactions on Industrial Informatics, 2019, DOI: 10.1109/TII.2019.2943898

@author: me
"""

from __future__ import print_function
import keras
import numpy as np
from keras.datasets import mnist
from keras.layers import Dense, Conv2D, BatchNormalization, Activation
from keras.layers import AveragePooling2D, Input, GlobalAveragePooling2D
from keras.optimizers import Adam
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras.layers.core import Lambda
K.set_learning_phase(1)

# Input image dimensions
img_rows, img_cols = 28, 28

# The data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

# Noised data
x_train = x_train.astype('float32') / 255. + 0.5*np.random.random([x_train.shape[0], img_rows, img_cols, 1])
x_test = x_test.astype('float32') / 255. + 0.5*np.random.random([x_test.shape[0], img_rows, img_cols, 1])
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)


def abs_backend(inputs):
    return K.abs(inputs)

def expand_dim_backend(inputs):
    return K.expand_dims(K.expand_dims(inputs,1),1)

def sign_backend(inputs):
    return K.sign(inputs)

def pad_backend(inputs, in_channels, out_channels):
    pad_dim = (out_channels - in_channels)//2
    return K.spatial_3d_padding(inputs, padding = ((0,0),(0,0),(pad_dim,pad_dim)))

# Residual Shrinkage Block
def residual_shrinkage_block(incoming, nb_blocks, out_channels, downsample=False,
                             downsample_strides=2):
    
    residual = incoming
    in_channels = incoming.get_shape().as_list()[-1]
    
    for i in range(nb_blocks):
        
        identity = residual
        
        if not downsample:
            downsample_strides = 1
        
        residual = BatchNormalization()(residual)
        residual = Activation('relu')(residual)
        residual = Conv2D(out_channels, 3, strides=(downsample_strides, downsample_strides), 
                          padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        residual = BatchNormalization()(residual)
        residual = Activation('relu')(residual)
        residual = Conv2D(out_channels, 3, padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        # Calculate global means
        residual_abs = Lambda(abs_backend)(residual)
        abs_mean = GlobalAveragePooling2D()(residual_abs)
        
        # Calculate scaling coefficients
        scales = Dense(out_channels, activation=None, kernel_initializer='he_normal', 
                       kernel_regularizer=l2(1e-4))(abs_mean)
        scales = BatchNormalization()(scales)
        scales = Activation('relu')(scales)
        scales = Dense(out_channels, activation='sigmoid', kernel_regularizer=l2(1e-4))(scales)
        scales = Lambda(expand_dim_backend)(scales)
        
        # Calculate thresholds
        thres = keras.layers.multiply([abs_mean, scales])
        
        # Soft thresholding
        sub = keras.layers.subtract([residual_abs, thres])
        zeros = keras.layers.subtract([sub, sub])
        n_sub = keras.layers.maximum([sub, zeros])
        residual = keras.layers.multiply([Lambda(sign_backend)(residual), n_sub])
        
        # Downsampling (it is important to use a pool size of (1, 1))
        if downsample_strides > 1:
            identity = AveragePooling2D(pool_size=(1,1), strides=(2,2))(identity)
            
        # Zero-padding to match channels (it is important to use zero padding rather than a 1x1 convolution)
        if in_channels != out_channels:
            identity = Lambda(pad_backend, arguments={'in_channels': in_channels,
                              'out_channels': out_channels})(identity)
        
        residual = keras.layers.add([residual, identity])
    
    return residual


# define and train a model
inputs = Input(shape=input_shape)
net = Conv2D(8, 3, padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(inputs)
net = residual_shrinkage_block(net, 1, 8, downsample=True)
net = BatchNormalization()(net)
net = Activation('relu')(net)
net = GlobalAveragePooling2D()(net)
outputs = Dense(10, activation='softmax', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(net)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=100, epochs=5, verbose=1, validation_data=(x_test, y_test))

# get results
K.set_learning_phase(0)
DRSN_train_score = model.evaluate(x_train, y_train, batch_size=100, verbose=0)
print('Train loss:', DRSN_train_score[0])
print('Train accuracy:', DRSN_train_score[1])
DRSN_test_score = model.evaluate(x_test, y_test, batch_size=100, verbose=0)
print('Test loss:', DRSN_test_score[0])
print('Test accuracy:', DRSN_test_score[1])

For ease of comparison, the code for a plain deep residual network is given below:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Dec 28 23:19:03 2019

Implemented using TensorFlow 1.0 and Keras 2.2.1
K. He, X. Zhang, S. Ren, J. Sun, Deep Residual Learning for Image Recognition, CVPR, 2016.

@author: me
"""

from __future__ import print_function
import numpy as np
import keras
from keras.datasets import mnist
from keras.layers import Dense, Conv2D, BatchNormalization, Activation
from keras.layers import AveragePooling2D, Input, GlobalAveragePooling2D
from keras.optimizers import Adam
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras.layers.core import Lambda
K.set_learning_phase(1)

# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

# Noised data
x_train = x_train.astype('float32') / 255. + 0.5*np.random.random([x_train.shape[0], img_rows, img_cols, 1])
x_test = x_test.astype('float32') / 255. + 0.5*np.random.random([x_test.shape[0], img_rows, img_cols, 1])
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

def pad_backend(inputs, in_channels, out_channels):
    pad_dim = (out_channels - in_channels)//2
    return K.spatial_3d_padding(inputs, padding = ((0,0),(0,0),(pad_dim,pad_dim)))

def residual_block(incoming, nb_blocks, out_channels, downsample=False,
                             downsample_strides=2):
    
    residual = incoming
    in_channels = incoming.get_shape().as_list()[-1]
    
    for i in range(nb_blocks):
        
        identity = residual
        
        if not downsample:
            downsample_strides = 1
        
        residual = BatchNormalization()(residual)
        residual = Activation('relu')(residual)
        residual = Conv2D(out_channels, 3, strides=(downsample_strides, downsample_strides), 
                          padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        residual = BatchNormalization()(residual)
        residual = Activation('relu')(residual)
        residual = Conv2D(out_channels, 3, padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        # Downsampling (it is important to use a pool size of (1, 1))
        if downsample_strides > 1:
            identity = AveragePooling2D(pool_size=(1, 1), strides=(2, 2))(identity)
            
        # Zero-padding to match channels (it is important to use zero padding rather than a 1x1 convolution)
        if in_channels != out_channels:
            identity = Lambda(pad_backend, arguments={'in_channels': in_channels,
                              'out_channels': out_channels})(identity)
        
        residual = keras.layers.add([residual, identity])
    
    return residual


# define and train a model
inputs = Input(shape=input_shape)
net = Conv2D(8, 3, padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(inputs)
net = residual_block(net, 1, 8, downsample=True)
net = BatchNormalization()(net)
net = Activation('relu')(net)
net = GlobalAveragePooling2D()(net)
outputs = Dense(10, activation='softmax', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(net)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=100, epochs=5, verbose=1, validation_data=(x_test, y_test))

# get results
K.set_learning_phase(0)
resnet_train_score = model.evaluate(x_train, y_train, batch_size=100, verbose=0)
print('Train loss:', resnet_train_score[0])
print('Train accuracy:', resnet_train_score[1])
resnet_test_score = model.evaluate(x_test, y_test, batch_size=100, verbose=0)
print('Test loss:', resnet_test_score[0])
print('Test accuracy:', resnet_test_score[1])

Notes:
(1) The deep residual shrinkage network has a more complex structure than a plain deep residual network and may be harder to train.
(2) The program uses only one basic block; on more complex datasets, more blocks can be added as appropriate.
(3) If you encounter the error TypeError: softmax() got an unexpected keyword argument 'axis', open tensorflow_backend.py and change the first axis in return tf.nn.softmax(x, axis=axis) to dim.

Reposted from:

https://blog.csdn.net/zmh1250329863/article/details/103761091

References:

M. Zhao, S. Zhong, X. Fu, et al., Deep residual shrinkage networks for fault diagnosis, IEEE Transactions on Industrial Informatics, 2019, DOI: 10.1109/TII.2019.2943898

https://ieeexplore.ieee.org/document/8850096

最后編輯于
?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請(qǐng)聯(lián)系作者
  • 序言:七十年代末,一起剝皮案震驚了整個(gè)濱河市生真,隨后出現(xiàn)的幾起案子沉噩,更是在濱河造成了極大的恐慌,老刑警劉巖柱蟀,帶你破解...
    沈念sama閱讀 217,826評(píng)論 6 506
  • 序言:濱河連續(xù)發(fā)生了三起死亡事件川蒙,死亡現(xiàn)場(chǎng)離奇詭異,居然都是意外死亡长已,警方通過(guò)查閱死者的電腦和手機(jī)畜眨,發(fā)現(xiàn)死者居然都...
    沈念sama閱讀 92,968評(píng)論 3 395
  • 文/潘曉璐 我一進(jìn)店門(mén),熙熙樓的掌柜王于貴愁眉苦臉地迎上來(lái)术瓮,“玉大人康聂,你說(shuō)我怎么就攤上這事〗锔” “怎么了早抠?”我有些...
    開(kāi)封第一講書(shū)人閱讀 164,234評(píng)論 0 354
  • 文/不壞的土叔 我叫張陵霎烙,是天一觀的道長(zhǎng)撬讽。 經(jīng)常有香客問(wèn)我,道長(zhǎng)悬垃,這世上最難降的妖魔是什么游昼? 我笑而不...
    開(kāi)封第一講書(shū)人閱讀 58,562評(píng)論 1 293
  • 正文 為了忘掉前任,我火速辦了婚禮尝蠕,結(jié)果婚禮上烘豌,老公的妹妹穿的比我還像新娘。我一直安慰自己看彼,他們只是感情好廊佩,可當(dāng)我...
    茶點(diǎn)故事閱讀 67,611評(píng)論 6 392
  • 文/花漫 我一把揭開(kāi)白布。 她就那樣靜靜地躺著靖榕,像睡著了一般标锄。 火紅的嫁衣襯著肌膚如雪。 梳的紋絲不亂的頭發(fā)上茁计,一...
    開(kāi)封第一講書(shū)人閱讀 51,482評(píng)論 1 302
  • 那天料皇,我揣著相機(jī)與錄音,去河邊找鬼。 笑死践剂,一個(gè)胖子當(dāng)著我的面吹牛鬼譬,可吹牛的內(nèi)容都是我干的。 我是一名探鬼主播逊脯,決...
    沈念sama閱讀 40,271評(píng)論 3 418
  • 文/蒼蘭香墨 我猛地睜開(kāi)眼优质,長(zhǎng)吁一口氣:“原來(lái)是場(chǎng)噩夢(mèng)啊……” “哼!你這毒婦竟也來(lái)了军洼?” 一聲冷哼從身側(cè)響起盆赤,我...
    開(kāi)封第一講書(shū)人閱讀 39,166評(píng)論 0 276
  • 序言:老撾萬(wàn)榮一對(duì)情侶失蹤,失蹤者是張志新(化名)和其女友劉穎歉眷,沒(méi)想到半個(gè)月后牺六,有當(dāng)?shù)厝嗽跇?shù)林里發(fā)現(xiàn)了一具尸體,經(jīng)...
    沈念sama閱讀 45,608評(píng)論 1 314
  • 正文 獨(dú)居荒郊野嶺守林人離奇死亡汗捡,尸身上長(zhǎng)有42處帶血的膿包…… 初始之章·張勛 以下內(nèi)容為張勛視角 年9月15日...
    茶點(diǎn)故事閱讀 37,814評(píng)論 3 336
  • 正文 我和宋清朗相戀三年淑际,在試婚紗的時(shí)候發(fā)現(xiàn)自己被綠了。 大學(xué)時(shí)的朋友給我發(fā)了我未婚夫和他白月光在一起吃飯的照片扇住。...
    茶點(diǎn)故事閱讀 39,926評(píng)論 1 348
  • 序言:一個(gè)原本活蹦亂跳的男人離奇死亡春缕,死狀恐怖,靈堂內(nèi)的尸體忽然破棺而出艘蹋,到底是詐尸還是另有隱情锄贼,我是刑警寧澤,帶...
    沈念sama閱讀 35,644評(píng)論 5 346
  • 正文 年R本政府宣布女阀,位于F島的核電站宅荤,受9級(jí)特大地震影響,放射性物質(zhì)發(fā)生泄漏浸策。R本人自食惡果不足惜冯键,卻給世界環(huán)境...
    茶點(diǎn)故事閱讀 41,249評(píng)論 3 329
  • 文/蒙蒙 一、第九天 我趴在偏房一處隱蔽的房頂上張望庸汗。 院中可真熱鬧惫确,春花似錦、人聲如沸蚯舱。這莊子的主人今日做“春日...
    開(kāi)封第一講書(shū)人閱讀 31,866評(píng)論 0 22
  • 文/蒼蘭香墨 我抬頭看了看天上的太陽(yáng)枉昏。三九已至陈肛,卻和暖如春,著一層夾襖步出監(jiān)牢的瞬間凶掰,已是汗流浹背燥爷。 一陣腳步聲響...
    開(kāi)封第一講書(shū)人閱讀 32,991評(píng)論 1 269
  • 我被黑心中介騙來(lái)泰國(guó)打工蜈亩, 沒(méi)想到剛下飛機(jī)就差點(diǎn)兒被人妖公主榨干…… 1. 我叫王不留,地道東北人前翎。 一個(gè)月前我還...
    沈念sama閱讀 48,063評(píng)論 3 370
  • 正文 我出身青樓稚配,卻偏偏與公主長(zhǎng)得像,于是被迫代替她去往敵國(guó)和親港华。 傳聞我的和親對(duì)象是個(gè)殘疾皇子道川,可洞房花燭夜當(dāng)晚...
    茶點(diǎn)故事閱讀 44,871評(píng)論 2 354

推薦閱讀更多精彩內(nèi)容