Paper: CBAM: Convolutional Block Attention Module
Published in: ECCV 2018
Abstract
The paper proposes the Convolutional Block Attention Module (CBAM), a simple yet effective attention module designed for convolutional neural networks. Given a feature map produced by a CNN, CBAM computes attention maps along two separate dimensions, channel and spatial, and multiplies them with the input feature map for adaptive feature refinement. CBAM is a lightweight, general-purpose module that can be integrated into any CNN architecture and trained end to end together with the base network.
Main idea
Given an intermediate feature map $F \in \mathbb{R}^{C \times H \times W}$ as input, CBAM sequentially infers a 1D channel attention map $M_c \in \mathbb{R}^{C \times 1 \times 1}$ and a 2D spatial attention map $M_s \in \mathbb{R}^{1 \times H \times W}$. The overall process is:

$$F' = M_c(F) \otimes F, \qquad F'' = M_s(F') \otimes F'$$

where $\otimes$ denotes element-wise multiplication. The channel attention map is first multiplied with the input feature map to obtain $F'$; the spatial attention map is then computed from $F'$ and multiplied with it to produce the final output $F''$. An overview diagram of CBAM is given in the paper.
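The two refinement steps reduce to two broadcasted multiplications. Below is a minimal NumPy sketch of the data flow; channel_attention and spatial_attention here are hypothetical stand-ins that just return random maps of the right shape, while the real sub-modules are implemented in the code section below.

import numpy as np

# Toy input feature map with shape (N, H, W, C), channels-last as in the
# TensorFlow implementation below.
F = np.random.rand(2, 8, 8, 32).astype(np.float32)

def channel_attention(x):
    # Hypothetical stand-in for M_c: one weight per channel, shape (N, 1, 1, C).
    return np.random.rand(x.shape[0], 1, 1, x.shape[3]).astype(np.float32)

def spatial_attention(x):
    # Hypothetical stand-in for M_s: one weight per location, shape (N, H, W, 1).
    return np.random.rand(x.shape[0], x.shape[1], x.shape[2], 1).astype(np.float32)

F1 = channel_attention(F) * F    # F'  = M_c(F) (x) F,  broadcast over H and W
F2 = spatial_attention(F1) * F1  # F'' = M_s(F') (x) F', broadcast over C

print(F2.shape)  # (2, 8, 8, 32): same shape as the input, as required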
Channel attention module
Each channel of a feature map acts as a feature detector, so channel attention focuses on "what" is meaningful in the input image. To compute channel attention efficiently, the paper squeezes the spatial dimensions of the feature map with both average pooling and max pooling, producing two different spatial context descriptors $F^c_{avg}$ and $F^c_{max}$. Both descriptors are passed through a shared network, an MLP with one hidden layer, and the outputs are merged to give the channel attention map $M_c \in \mathbb{R}^{C \times 1 \times 1}$:

$$M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big) = \sigma\big(W_1(W_0(F^c_{avg})) + W_1(W_0(F^c_{max}))\big)$$

where $\sigma$ denotes the sigmoid function, $W_0 \in \mathbb{R}^{C/r \times C}$ and $W_1 \in \mathbb{R}^{C \times C/r}$ are the shared MLP weights ($r$ is the reduction ratio), and a ReLU activation follows $W_0$.
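As a concrete reading of this formula, here is a small NumPy sketch of the shared MLP. The weight matrices are random placeholders rather than trained parameters, and the reduction ratio r = 16 follows the paper's default.

import numpy as np

C, r = 32, 16                       # channel count and reduction ratio (the paper uses r = 16)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

W0 = np.random.randn(C, C // r)     # W0: C -> C/r, followed by ReLU
W1 = np.random.randn(C // r, C)     # W1: C/r -> C

F_c_avg = np.random.rand(C)         # F^c_avg: average-pooled descriptor, one value per channel
F_c_max = np.random.rand(C)         # F^c_max: max-pooled descriptor

mlp = lambda v: np.maximum(v @ W0, 0.0) @ W1   # shared MLP: W1(ReLU(W0 v))
M_c = sigmoid(mlp(F_c_avg) + mlp(F_c_max))     # channel attention weights, shape (C,)
print(M_c.shape)                               # (32,)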
Spatial attention module
Unlike channel attention, spatial attention focuses on "where" the informative parts lie. To compute spatial attention, the paper first applies average pooling and max pooling along the channel dimension to obtain two descriptors $F^s_{avg} \in \mathbb{R}^{1 \times H \times W}$ and $F^s_{max} \in \mathbb{R}^{1 \times H \times W}$, concatenates them, and applies a convolution to generate the spatial attention map $M_s(F) \in \mathbb{R}^{H \times W}$:

$$M_s(F) = \sigma\big(f^{7 \times 7}([\mathrm{AvgPool}(F); \mathrm{MaxPool}(F)])\big) = \sigma\big(f^{7 \times 7}([F^s_{avg}; F^s_{max}])\big)$$

where $f^{7 \times 7}$ denotes a convolution layer with a $7 \times 7$ kernel and $\sigma$ denotes the sigmoid function.
The paper also gives diagrams of the channel attention and spatial attention sub-modules.
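The channel-wise pooling step can be sketched in a few lines of NumPy. Shapes assume a channels-last layout; the 7x7 convolution and sigmoid that follow are part of the TensorFlow implementation in the next section.

import numpy as np

F = np.random.rand(2, 8, 8, 32)                      # (N, H, W, C), channels-last

F_s_avg = F.mean(axis=3, keepdims=True)              # F^s_avg: average over channels -> (N, H, W, 1)
F_s_max = F.max(axis=3, keepdims=True)               # F^s_max: max over channels     -> (N, H, W, 1)
pooled = np.concatenate([F_s_avg, F_s_max], axis=3)  # (N, H, W, 2), fed to the 7x7 conv + sigmoid

print(pooled.shape)  # (2, 8, 8, 2)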
Code
Environment: TensorFlow 1.9
"""
@Time : 2018/10/19
@Author : Li YongHong
@Email : lyh_robert@163.com
@File : test.py
"""
import tensorflow as tf
import numpy as np
slim = tf.contrib.slim
def combined_static_and_dynamic_shape(tensor):
    """Returns a list containing static and dynamic values for the dimensions.

    Returns a list of static and dynamic values for shape dimensions. This is
    useful to preserve static shapes when available in reshape operation.

    Args:
        tensor: A tensor of any type.

    Returns:
        A list of size tensor.shape.ndims containing integers or a scalar tensor.
    """
    static_tensor_shape = tensor.shape.as_list()
    dynamic_tensor_shape = tf.shape(tensor)
    combined_shape = []
    for index, dim in enumerate(static_tensor_shape):
        if dim is not None:
            combined_shape.append(dim)
        else:
            combined_shape.append(dynamic_tensor_shape[index])
    return combined_shape
def convolutional_block_attention_module(feature_map, index, inner_units_ratio=0.5):
    """
    CBAM: convolution block attention module, which is described in "CBAM: Convolutional Block Attention Module"
    Architecture : "https://arxiv.org/pdf/1807.06521.pdf"
    If you want to use this module, just plug it into your network.
    :param feature_map: input feature map
    :param index: the index of the convolution block attention module
    :param inner_units_ratio: output units number of the hidden fully connected layer: inner_units_ratio * feature_map_channel
    :return: feature map with channel and spatial attention
    """
    with tf.variable_scope("cbam_%s" % (index)):
        feature_map_shape = combined_static_and_dynamic_shape(feature_map)
        # channel attention: squeeze the spatial dimensions with average and max pooling
        channel_avg_weights = tf.nn.avg_pool(
            value=feature_map,
            ksize=[1, feature_map_shape[1], feature_map_shape[2], 1],
            strides=[1, 1, 1, 1],
            padding='VALID'
        )
        channel_max_weights = tf.nn.max_pool(
            value=feature_map,
            ksize=[1, feature_map_shape[1], feature_map_shape[2], 1],
            strides=[1, 1, 1, 1],
            padding='VALID'
        )
        channel_avg_reshape = tf.reshape(channel_avg_weights,
                                         [feature_map_shape[0], 1, feature_map_shape[3]])
        channel_max_reshape = tf.reshape(channel_max_weights,
                                         [feature_map_shape[0], 1, feature_map_shape[3]])
        channel_w_reshape = tf.concat([channel_avg_reshape, channel_max_reshape], axis=1)
        # shared MLP: the two descriptors are stacked on axis 1 and pass through the same dense layers
        fc_1 = tf.layers.dense(
            inputs=channel_w_reshape,
            units=int(feature_map_shape[3] * inner_units_ratio),  # cast to int: dense units must be an integer
            name="fc_1",
            activation=tf.nn.relu
        )
        fc_2 = tf.layers.dense(
            inputs=fc_1,
            units=feature_map_shape[3],
            name="fc_2",
            activation=None
        )
        # sum the MLP outputs of the two descriptors, then apply sigmoid
        channel_attention = tf.reduce_sum(fc_2, axis=1, name="channel_attention_sum")
        channel_attention = tf.nn.sigmoid(channel_attention, name="channel_attention_sum_sigmoid")
        channel_attention = tf.reshape(channel_attention, shape=[feature_map_shape[0], 1, 1, feature_map_shape[3]])
        feature_map_with_channel_attention = tf.multiply(feature_map, channel_attention)
        # spatial attention: pool along the channel axis, concatenate, then 7x7 conv + sigmoid
        channel_wise_avg_pooling = tf.reduce_mean(feature_map_with_channel_attention, axis=3)
        channel_wise_max_pooling = tf.reduce_max(feature_map_with_channel_attention, axis=3)
        channel_wise_avg_pooling = tf.reshape(channel_wise_avg_pooling,
                                              shape=[feature_map_shape[0], feature_map_shape[1],
                                                     feature_map_shape[2], 1])
        channel_wise_max_pooling = tf.reshape(channel_wise_max_pooling,
                                              shape=[feature_map_shape[0], feature_map_shape[1],
                                                     feature_map_shape[2], 1])
        channel_wise_pooling = tf.concat([channel_wise_avg_pooling, channel_wise_max_pooling], axis=3)
        spatial_attention = slim.conv2d(
            channel_wise_pooling,
            1,
            [7, 7],
            padding='SAME',
            activation_fn=tf.nn.sigmoid,
            scope="spatial_attention_conv"
        )
        feature_map_with_attention = tf.multiply(feature_map_with_channel_attention, spatial_attention)
        return feature_map_with_attention
# example
feature_map = tf.constant(np.random.rand(2, 8, 8, 32), dtype=tf.float16)
feature_map_with_attention = convolutional_block_attention_module(feature_map, 1)

with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    result = sess.run(feature_map_with_attention)
    print(result.shape)
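In the paper, CBAM is applied per convolutional block; for ResNet it refines the output of the residual branch before the identity shortcut is added back. The sketch below illustrates that placement, assuming a plain two-convolution residual branch whose output channel count matches the input. The block itself (residual_block_with_cbam) is an illustrative helper, not part of the original code.

def residual_block_with_cbam(inputs, filters, index):
    # Illustrative residual branch; the exact block definition depends on your backbone,
    # and `filters` is assumed to equal the input channel count so the addition is valid.
    shortcut = inputs
    residual = slim.conv2d(inputs, filters, [3, 3], scope="res%d_conv1" % index)
    residual = slim.conv2d(residual, filters, [3, 3], activation_fn=None,
                           scope="res%d_conv2" % index)
    # Refine the residual branch with CBAM before adding the skip connection back.
    residual = convolutional_block_attention_module(residual, index)
    return tf.nn.relu(shortcut + residual)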