SaGAN: Generative Adversarial Network with Spatial Attention for Face Attribute Editing

Problem

  1. Traditional GAN methods operate directly on the whole image and inevitably change attribute-irrelevant regions
  2. The performance of traditional regression methods depends heavily on paired training data, which are, however, quite difficult to acquire

Related work

  • ResGAN: learning a residual image avoids changing attribute-irrelevant regions by constraining most of the residual image to be zero.

  • Improvement: this work is quite insightful in enforcing the manipulation to concentrate mainly on local areas, especially for local attributes.

  • Drawback: the location and the appearance of the target attribute are modeled in a single sparse residual image, which is harder to optimize favorably than modeling them separately

Method

  1. SaGAN: alter only the attribute-specific region and keep the rest unchanged
  2. The generator contains an attribute manipulation network (AMN) to edit the face image, and a spatial attention network (SAN) to localize the attribute-specific region, which restricts the alteration of AMN within this region.

Contribution

  1. Spatial attention is introduced into the GAN framework, forming an end-to-end generative model for face attribute editing (referred to as SaGAN), which alters only the attribute-specific region and keeps the irrelevant regions unchanged.
  2. The proposed SaGAN adopts a single generator with the attribute as a conditional signal, rather than two dual generators for the two inverse directions of face attribute editing.
  3. The proposed SaGAN achieves quite promising results, especially for local attributes, with attribute-irrelevant details well preserved. Besides, the approach also benefits face recognition via data augmentation.

Generative Adversarial Network with Spatial Attention

SaGAN
notation              meaning
I                     input image
\hat{I}               output image of the generator
I_a                   edited face image output by AMN
c                     target attribute value
c^g                   ground-truth attribute label of the real image I
D_{src}(I)            probability that image I is real
D_{cls}(c|I)          probability that image I has attribute c
F_m                   attribute manipulation network (AMN)
F_a                   spatial attention network (SAN)
b                     spatial attention mask, restricting the alteration of AMN to the attribute-specific region
\lambda_1, \lambda_2  balancing parameters of the reconstruction loss
\lambda_{gp}          hyper-parameter controlling the gradient penalty, default = 10
  • The goal of face attribute editing is to translate I into a new image \hat{I}, which should be realistic, possess attribute c, and look the same as the input image outside the attribute-specific region

Discriminator

  • Two objectives, one to distinguish the generated images from the real ones, and another to classify the attributes of the generated and real images
  • The two classifiers are both designed as CNNs with a softmax function, denoted as D_{src} and D_{cls} respectively.
  • The two networks can share the first few convolutional layers followed by distinct fully-connected layers for different classifications
    \mathcal{L}_{src}^D = -\mathbb{E}_I[\log D_{src}(I)]-\mathbb{E}_{\hat{I}}[\log(1-D_{src}(\hat{I}))]\tag{1}
    \mathcal{L}_{cls}^D = \mathbb{E}_{I,c^g}[-\log D_{cls}(c^g|I)]
    The discriminator D is optimized as:
    \min \limits_{D_{src},D_{cls}} \mathcal{L}_D = \mathcal{L}_{src}^D+\mathcal{L}_{cls}^D
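The shared-trunk design above can be sketched as follows; the layer sizes and counts are illustrative assumptions, not the paper's exact architecture (the `Discriminator` class and its parameters are hypothetical).

```python
# Minimal sketch of a discriminator whose real/fake head D_src and attribute
# head D_cls share the first convolutional layers, as described above.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, num_attrs=1, img_size=64):
        super().__init__()
        # Shared convolutional trunk ("first few convolutional layers").
        self.shared = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.01),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.01),
        )
        feat = 128 * (img_size // 4) ** 2
        # Distinct fully-connected heads for the two classifications.
        self.src = nn.Linear(feat, 1)          # D_src: real/fake score
        self.cls = nn.Linear(feat, num_attrs)  # D_cls: attribute logits

    def forward(self, x):
        h = self.shared(x).flatten(1)
        return self.src(h), self.cls(h)

d = Discriminator()
src, cls = d(torch.randn(2, 3, 64, 64))
print(src.shape, cls.shape)  # torch.Size([2, 1]) torch.Size([2, 1])
```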

Generator

  • G contains two modules, an attribute manipulation network(AMN) and a spatial attention network(SAN)
  • AMN focuses on how to manipulate and SAN focuses on where to manipulate.
  • The attribute manipulation network takes a face image I and an attribute value c as input, and outputs an edited face image I_a
    I_a = F_m(I,c)
  • The spatial attention network takes the face image I as input and predicts a spatial attention mask b, which restricts the alteration of AMN to the attribute-specific region
  • Ideally, the attribute-specific region of b should be 1, and the rest regions should be 0.
  • Regions with non-zeros attention values are all regarded as attribute-specific region, and the rest with zero attention values are regarded as attribute-irrelevant region
    b = F_a(I)
  • the attribute-specific regions are manipulated towards the target attribute while the rest remain unchanged
    \hat{I} = G(I,c) = I_a \cdot b + I \cdot (1-b)
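The masked composition \hat{I} = I_a · b + I · (1-b) can be verified with a tiny sketch (tensor shapes and values are illustrative only):

```python
# Sketch of the SaGAN generator composition: the AMN output I_a is blended with
# the input I under the SAN mask b, so pixels where b = 0 pass through unchanged.
import torch

def compose(I, I_a, b):
    # I_hat = I_a * b + I * (1 - b); b is a 1-channel mask in [0, 1],
    # broadcast across the RGB channels.
    return I_a * b + I * (1.0 - b)

I = torch.zeros(1, 3, 4, 4)    # input image (all zeros for illustration)
I_a = torch.ones(1, 3, 4, 4)   # edited image from AMN (all ones)
b = torch.zeros(1, 1, 4, 4)
b[..., :2, :] = 1.0            # pretend the top half is attribute-specific
I_hat = compose(I, I_a, b)
print(I_hat[0, 0, 0, 0].item(), I_hat[0, 0, 3, 0].item())  # 1.0 0.0
```

Only the masked region takes the AMN output; the rest is a pixel-exact copy of the input.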
  1. To make the edited face image \hat{I} photo-realistic: an adversarial loss is designed to confuse the real/fake classifier
    \mathcal{L}_{src}^G = \mathbb{E}_{\hat{I}}[-\log D_{src}(\hat{I})]\tag{2}
  2. To make \hat{I} correctly carry the target attribute c: an attribute classification loss enforces the attribute prediction of \hat{I} from the attribute classifier to approximate the target value c
    \mathcal{L}_{cls}^G = \mathbb{E}_{\hat{I}}[-\log D_{cls}(c|\hat{I})]
  3. To keep the attribute-irrelevant region unchanged: a reconstruction loss is employed, similar to CycleGAN and StarGAN
    \mathcal{L}_{rec}^G = \lambda_1\mathbb{E}_{I,c,c^g}[\|I-G(G(I,c),c^g)\|_1]+\lambda_2\mathbb{E}_{I,c^g}[\|I-G(I,c^g)\|_1]
  4. The generator G is optimized as:
    \min \limits_{F_m,F_a} \mathcal{L}_G = \mathcal{L}_{src}^G+\mathcal{L}_{cls}^G+\mathcal{L}_{rec}^G
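The three generator terms can be put together in a hedged sketch; `G` and `D` below are toy stand-ins whose names and signatures are assumptions, not the paper's code, with binary attribute labels of shape (batch, num_attrs).

```python
# Sketch of the generator objective: adversarial + attribute classification
# + dual/identity reconstruction, weighted by lambda_1 and lambda_2.
import torch
import torch.nn.functional as F

def generator_loss(G, D, I, c, c_g, lam1=1.0, lam2=1.0):
    I_hat = G(I, c)
    src, cls = D(I_hat)
    L_src = -src.mean()                                   # adversarial term (WGAN form)
    L_cls = F.binary_cross_entropy_with_logits(cls, c)    # attribute classification
    L_rec = lam1 * (I - G(G(I, c), c_g)).abs().mean() \
          + lam2 * (I - G(I, c_g)).abs().mean()           # dual + identity reconstruction
    return L_src + L_cls + L_rec

# Toy stand-ins so the sketch runs end-to-end.
G = lambda I, c: I                                        # identity "generator"
D = lambda I: (I.flatten(1).mean(1, keepdim=True),        # dummy real/fake score
               torch.zeros(I.size(0), 1))                 # dummy attribute logits
loss = generator_loss(G, D, torch.randn(2, 3, 8, 8),
                      torch.ones(2, 1), torch.zeros(2, 1))
print(torch.isfinite(loss).item())  # True
```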

Implementation

Optimization

To optimize the adversarial real/fake classification more stably, in all experiments the objectives in Eq.(1) and Eq.(2) are optimized using WGAN-GP:
\mathcal{L}_{src}^D = -\mathbb{E}_I[D_{src}(I)]+\mathbb{E}_{\hat{I}}[D_{src}(\hat{I})]+\lambda_{gp}\mathbb{E}_{\tilde{I}}[(\|\nabla_{\tilde{I}}D_{src}(\tilde{I})\|_2-1)^2]

\tilde{I} is sampled uniformly along straight lines between pairs of edited images \hat{I} and real images I
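The gradient-penalty term can be sketched as follows; `d_src` here is a toy critic stand-in (an assumption for illustration), and \lambda_{gp} would multiply the returned value.

```python
# Sketch of the WGAN-GP gradient penalty described above: sample I_tilde on the
# line between real and edited images, then penalize deviation of the critic's
# gradient norm from 1.
import torch

def gradient_penalty(d_src, I_real, I_fake):
    alpha = torch.rand(I_real.size(0), 1, 1, 1)           # uniform interpolation weights
    I_tilde = (alpha * I_real + (1 - alpha) * I_fake).requires_grad_(True)
    score = d_src(I_tilde).sum()
    (grad,) = torch.autograd.grad(score, I_tilde, create_graph=True)
    norm = grad.flatten(1).norm(2, dim=1)                 # per-sample gradient norm
    return ((norm - 1.0) ** 2).mean()                     # lambda_gp multiplies this

d_src = lambda x: x.flatten(1).sum(dim=1)                 # toy critic
gp = gradient_penalty(d_src, torch.randn(4, 3, 8, 8), torch.randn(4, 3, 8, 8))
print(gp.item() >= 0.0)  # True
```

`create_graph=True` keeps the penalty differentiable so it can be backpropagated through when training the real critic.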

Network Architecture

  • For the generator, the two networks of AMN and SAN share the same network architecture except slight difference in the input and output:
Network  Input                                  Output                    Activation
AMN      4 channels: input image + attribute c  3-channel RGB image       Tanh
SAN      3 channels: input image                1-channel attention mask  Sigmoid
  • For the discriminator, the same architecture as PatchGAN is used, considering its promising performance.