StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation

Problem

  • existing models are both inefficient and ineffective in multi-domain image translation tasks: learning all mappings among k domains requires training k(k-1) separate generators, and each generator can only use two domains' worth of training data
  • existing models are incapable of jointly training domains from different datasets

New method

  • StarGAN, a novel and scalable approach that can perform image-to-image translation for multiple domains using only a single model
  • adding a mask vector to the domain label enables joint training between domains of different datasets

Star Generative Adversarial Networks

1. Multi-Domain Image-to-Image Translation

Notation:

  • x: input image
  • y: output image
  • c: target domain label
  • c': original domain label
  • D_{src}(x): a probability distribution over sources given by D
  • D_{cls}(c'|x): a probability distribution over domain labels computed by D
  • λ_{cls}: hyper-parameter that controls the relative importance of the domain classification loss
  • λ_{rec}: hyper-parameter that controls the relative importance of the reconstruction loss
  • m: a mask vector
  • [\cdot]: concatenation
  • c_i: a vector for the labels of the i-th dataset
  • \hat{x}: a point sampled uniformly along a straight line between a pair of real and generated images
  • λ_{gp}: hyper-parameter that controls the gradient penalty
  • Goal: train a single generator G that learns mappings among multiple domains
  • train G to translate an input image x into an output image y conditioned on the target domain label c, i.e. G(x, c) → y
  • the discriminator produces probability distributions over both sources and domain labels, D : x → {D_{src}(x), D_{cls}(x)}, which allows a single discriminator to control multiple domains

Adversarial Loss

\mathcal{L}_{adv} = \mathbb{E}_x [\log D_{src}(x)] + \mathbb{E}_{x,c}[\log (1 - D_{src}(G(x, c)))]\tag{1}

Here D_{src}(x) is a probability distribution over sources given by D. The generator G tries to minimize this objective, while the discriminator D tries to maximize it.
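
The following is a minimal PyTorch sketch of Eq. (1); the function name is ours, and the D_{src} outputs are assumed to be sigmoid probabilities in (0, 1).

```python
import torch

def adversarial_losses(d_src_real, d_src_fake, eps=1e-8):
    # Eq. (1): D ascends on L_adv while G descends on the fake term.
    # d_src_real / d_src_fake are D_src outputs for real images x and
    # fake images G(x, c), assumed to be probabilities in (0, 1).
    l_adv = (torch.log(d_src_real + eps).mean()
             + torch.log(1 - d_src_fake + eps).mean())
    d_loss = -l_adv                                   # D minimizes -L_adv
    g_loss = torch.log(1 - d_src_fake + eps).mean()   # G minimizes the fake term
    return d_loss, g_loss
```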

Domain Classification Loss

  • add an auxiliary classifier on top of D and impose the domain classification loss when optimizing both D and G
  • decompose the objective into two terms: a domain classification loss of
    real images used to optimize D, and a domain classification loss of fake images used to optimize G
    \mathcal{L}_{cls}^r = \mathbb{E}_{x,c'}[-\log D_{cls}(c'|x)]\tag{2}
    \mathcal{L}_{cls}^f = \mathbb{E}_{x,c}[-\log D_{cls}(c|G(x,c))]\tag{3}
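
A minimal sketch of Eqs. (2) and (3) in PyTorch, assuming binary attribute labels as in CelebA (the function name is ours; mutually exclusive labels such as RaFD expressions would use categorical cross-entropy instead):

```python
import torch.nn.functional as F

def classification_losses(cls_real_logits, cls_fake_logits, c_org, c_trg):
    # Eq. (2): real images must be classified as their original domain c'
    # (this term optimizes D).
    loss_cls_r = F.binary_cross_entropy_with_logits(cls_real_logits, c_org)
    # Eq. (3): fake images G(x, c) must be classified as the target domain c
    # (this term optimizes G).
    loss_cls_f = F.binary_cross_entropy_with_logits(cls_fake_logits, c_trg)
    return loss_cls_r, loss_cls_f
```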

Reconstruction Loss

  • problem: minimizing the losses (Eqs. (1) and (3)) does not guarantee that translated images preserve the content of their input images while changing only the domain-related part of the inputs
  • method: apply a cycle consistency loss to the generator
    \mathcal{L}_{rec} = \mathbb{E}_{x,c,c'}[||x-G(G(x,c), c')||_1]\tag{4}
    G takes the translated image G(x, c) and the original domain label c' as input and tries to reconstruct the original image x. We adopt the L1 norm as our reconstruction loss.
    Note that we use a single generator twice, first to translate an original image into an image in the target domain and then to reconstruct the original image from the translated image.
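
A sketch of Eq. (4), assuming a PyTorch generator G callable as G(image, label) (the function name is ours):

```python
import torch

def reconstruction_loss(G, x, c_trg, c_org):
    # The single generator is used twice: first to translate into the
    # target domain, then to reconstruct the original image.
    x_fake = G(x, c_trg)        # original -> target domain
    x_rec = G(x_fake, c_org)    # translated -> back to the original domain
    return torch.mean(torch.abs(x - x_rec))  # L1 norm
```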

Full Objective

\mathcal{L}_D = -\mathcal{L}_{adv} + \lambda_{cls}\mathcal{L}_{cls}^r\tag{5}
\mathcal{L}_G = \mathcal{L}_{adv}+\lambda_{cls}\mathcal{L}_{cls}^f+\lambda_{rec}\mathcal{L}_{rec}\tag{6}

We use λ_{cls} = 1 and λ_{rec} = 10 in all of our experiments.
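
As a schematic sketch, the full objectives of Eqs. (5) and (6) combine the terms above (the function name is ours; in practice the adversarial term is recomputed on fresh fake images for the generator update):

```python
def full_objectives(loss_adv, loss_cls_r, loss_cls_f, loss_rec,
                    lambda_cls=1.0, lambda_rec=10.0):
    # Eq. (5): objective minimized by the discriminator D.
    d_loss = -loss_adv + lambda_cls * loss_cls_r
    # Eq. (6): objective minimized by the generator G.
    g_loss = loss_adv + lambda_cls * loss_cls_f + lambda_rec * loss_rec
    return d_loss, g_loss
```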

2. Training with Multiple Datasets

  • Problem: the complete information on the label vector c' is required when reconstructing the input image x from the translated image G(x, c), but each dataset knows its labels only partially (e.g., CelebA provides facial attribute labels but no expression labels, while RaFD provides only expression labels)

Mask Vector

  • introduce a mask vector m that allows StarGAN to ignore unspecified labels and focus on the explicitly known label provided by a particular dataset
  • use an n-dimensional one-hot vector to represent m, with n being the number of datasets. In addition, we define a unified version of the label as a vector

\tilde{c} = [c_1, c_2, \dots, c_n, m]

For the remaining n-1 unknown labels we simply assign zero values.
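
A minimal sketch of constructing \tilde{c} in PyTorch, with hypothetical label sizes (e.g., 5 CelebA attributes and 8 RaFD expressions) and two datasets:

```python
import torch

def unified_label(c, dataset_index, dims=(5, 8)):
    # Build \tilde{c} = [c_1, c_2, m]: the known label goes in its slot,
    # unknown labels are zero vectors, and m one-hot encodes the dataset.
    batch = c.size(0)
    parts = [c if i == dataset_index else torch.zeros(batch, d)
             for i, d in enumerate(dims)]
    m = torch.zeros(batch, len(dims))
    m[:, dataset_index] = 1.0  # one-hot mask vector
    return torch.cat(parts + [m], dim=1)

# Example: a batch of 4 CelebA attribute labels (dataset 0)
c_celeba = torch.randint(0, 2, (4, 5)).float()
print(unified_label(c_celeba, dataset_index=0).shape)  # torch.Size([4, 15])
```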

Training Strategy

  • use the domain label \tilde{c} as input to the generator
  • the generator learns to ignore the unspecified labels, which are zero vectors, and to focus on the explicitly given label
  • extend the auxiliary classifier of the discriminator to generate probability distributions over the labels of all datasets
  • train the model in a multi-task learning setting, where the discriminator tries to minimize only the classification error associated with the known label (see the sketch after this list)
  • under these settings, by alternating between CelebA and RaFD, the discriminator learns all the discriminative features for both datasets, and the generator learns to control all the labels of both datasets
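
A sketch of the masked multi-task classification loss, reusing the hypothetical label sizes from above (the slicing scheme and function name are our assumptions):

```python
import torch
import torch.nn.functional as F

def masked_cls_loss(cls_logits, label, dataset_index, dims=(5, 8)):
    # The classifier outputs logits over the labels of ALL datasets, but
    # only the slice belonging to the current dataset contributes to the
    # classification error; the remaining labels are unknown and ignored.
    start = sum(dims[:dataset_index])
    logits = cls_logits[:, start:start + dims[dataset_index]]
    if dataset_index == 0:
        # multi-attribute labels (e.g. CelebA): binary cross-entropy
        return F.binary_cross_entropy_with_logits(logits, label)
    # mutually exclusive labels (e.g. RaFD): categorical cross-entropy
    return F.cross_entropy(logits, label.argmax(dim=1))
```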

Implementation

Improved GAN Training

  • replace Eq. (1) with the Wasserstein GAN objective with gradient penalty, defined as

\mathcal{L}_{adv} = \mathbb{E}_x[D_{src}(x)]-\mathbb{E}_{x,c}[D_{src}(G(x,c))]-\lambda_{gp}\mathbb{E}_{\hat{x}}[(||\nabla_{\hat{x}}D_{src}(\hat{x})||_2-1)^2]\tag{7}

where \hat{x} is sampled uniformly along a straight line between a pair of real and generated images. We use λ_{gp} = 10 for all experiments.
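
A sketch of the gradient-penalty term in PyTorch, assuming D_src is a callable returning the source scores (the function name is ours):

```python
import torch

def gradient_penalty(D_src, x_real, x_fake, lambda_gp=10.0):
    # Sample x_hat uniformly on the line between real and fake images,
    # then penalize gradient norms that deviate from 1.
    alpha = torch.rand(x_real.size(0), 1, 1, 1, device=x_real.device)
    x_hat = (alpha * x_real + (1 - alpha) * x_fake).requires_grad_(True)
    out = D_src(x_hat)
    grad = torch.autograd.grad(outputs=out.sum(), inputs=x_hat,
                               create_graph=True)[0]
    grad_norm = grad.view(grad.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```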

Network Architecture

  • the generator network is composed of two convolutional layers with a stride of two for downsampling, six residual blocks, and two transposed convolutional layers with a stride of two for upsampling (a sketch follows this list)
  • use instance normalization for the generator but no normalization for the discriminator
  • leverage PatchGANs for the discriminator network, which classifies whether local image patches are real or fake
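
A PyTorch sketch of the generator described above; the layer widths (conv_dim = 64), the 7x7 boundary convolutions, and the label-concatenation scheme are our assumptions here:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(dim, dim, 3, 1, 1, bias=False),
            nn.InstanceNorm2d(dim, affine=True), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, 1, 1, bias=False),
            nn.InstanceNorm2d(dim, affine=True))

    def forward(self, x):
        return x + self.main(x)

class Generator(nn.Module):
    # Downsample twice (stride 2), apply six residual blocks, then
    # upsample twice with transposed convolutions, as described above.
    def __init__(self, conv_dim=64, c_dim=5, n_blocks=6):
        super().__init__()
        layers = [nn.Conv2d(3 + c_dim, conv_dim, 7, 1, 3, bias=False),
                  nn.InstanceNorm2d(conv_dim, affine=True), nn.ReLU(inplace=True)]
        d = conv_dim
        for _ in range(2):  # downsampling convolutions
            layers += [nn.Conv2d(d, 2 * d, 4, 2, 1, bias=False),
                       nn.InstanceNorm2d(2 * d, affine=True), nn.ReLU(inplace=True)]
            d *= 2
        layers += [ResidualBlock(d) for _ in range(n_blocks)]
        for _ in range(2):  # upsampling transposed convolutions
            layers += [nn.ConvTranspose2d(d, d // 2, 4, 2, 1, bias=False),
                       nn.InstanceNorm2d(d // 2, affine=True), nn.ReLU(inplace=True)]
            d //= 2
        layers += [nn.Conv2d(d, 3, 7, 1, 3, bias=False), nn.Tanh()]
        self.main = nn.Sequential(*layers)

    def forward(self, x, c):
        # Spatially replicate the label vector and concatenate it with the image.
        c = c.view(c.size(0), c.size(1), 1, 1).expand(-1, -1, x.size(2), x.size(3))
        return self.main(torch.cat([x, c], dim=1))
```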
最后編輯于
?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請(qǐng)聯(lián)系作者
  • 序言:七十年代末列吼,一起剝皮案震驚了整個(gè)濱河市幽崩,隨后出現(xiàn)的幾起案子,更是在濱河造成了極大的恐慌寞钥,老刑警劉巖慌申,帶你破解...
    沈念sama閱讀 219,188評(píng)論 6 508
  • 序言:濱河連續(xù)發(fā)生了三起死亡事件,死亡現(xiàn)場離奇詭異理郑,居然都是意外死亡太示,警方通過查閱死者的電腦和手機(jī),發(fā)現(xiàn)死者居然都...
    沈念sama閱讀 93,464評(píng)論 3 395
  • 文/潘曉璐 我一進(jìn)店門香浩,熙熙樓的掌柜王于貴愁眉苦臉地迎上來类缤,“玉大人,你說我怎么就攤上這事邻吭〔腿酰” “怎么了?”我有些...
    開封第一講書人閱讀 165,562評(píng)論 0 356
  • 文/不壞的土叔 我叫張陵囱晴,是天一觀的道長膏蚓。 經(jīng)常有香客問我,道長畸写,這世上最難降的妖魔是什么驮瞧? 我笑而不...
    開封第一講書人閱讀 58,893評(píng)論 1 295
  • 正文 為了忘掉前任,我火速辦了婚禮枯芬,結(jié)果婚禮上论笔,老公的妹妹穿的比我還像新娘采郎。我一直安慰自己,他們只是感情好狂魔,可當(dāng)我...
    茶點(diǎn)故事閱讀 67,917評(píng)論 6 392
  • 文/花漫 我一把揭開白布蒜埋。 她就那樣靜靜地躺著,像睡著了一般最楷。 火紅的嫁衣襯著肌膚如雪整份。 梳的紋絲不亂的頭發(fā)上,一...
    開封第一講書人閱讀 51,708評(píng)論 1 305
  • 那天籽孙,我揣著相機(jī)與錄音烈评,去河邊找鬼。 笑死犯建,一個(gè)胖子當(dāng)著我的面吹牛讲冠,可吹牛的內(nèi)容都是我干的。 我是一名探鬼主播胎挎,決...
    沈念sama閱讀 40,430評(píng)論 3 420
  • 文/蒼蘭香墨 我猛地睜開眼,長吁一口氣:“原來是場噩夢(mèng)啊……” “哼忆家!你這毒婦竟也來了犹菇?” 一聲冷哼從身側(cè)響起,我...
    開封第一講書人閱讀 39,342評(píng)論 0 276
  • 序言:老撾萬榮一對(duì)情侶失蹤芽卿,失蹤者是張志新(化名)和其女友劉穎揭芍,沒想到半個(gè)月后,有當(dāng)?shù)厝嗽跇淞掷锇l(fā)現(xiàn)了一具尸體卸例,經(jīng)...
    沈念sama閱讀 45,801評(píng)論 1 317
  • 正文 獨(dú)居荒郊野嶺守林人離奇死亡称杨,尸身上長有42處帶血的膿包…… 初始之章·張勛 以下內(nèi)容為張勛視角 年9月15日...
    茶點(diǎn)故事閱讀 37,976評(píng)論 3 337
  • 正文 我和宋清朗相戀三年,在試婚紗的時(shí)候發(fā)現(xiàn)自己被綠了筷转。 大學(xué)時(shí)的朋友給我發(fā)了我未婚夫和他白月光在一起吃飯的照片姑原。...
    茶點(diǎn)故事閱讀 40,115評(píng)論 1 351
  • 序言:一個(gè)原本活蹦亂跳的男人離奇死亡,死狀恐怖呜舒,靈堂內(nèi)的尸體忽然破棺而出锭汛,到底是詐尸還是另有隱情,我是刑警寧澤袭蝗,帶...
    沈念sama閱讀 35,804評(píng)論 5 346
  • 正文 年R本政府宣布唤殴,位于F島的核電站,受9級(jí)特大地震影響到腥,放射性物質(zhì)發(fā)生泄漏朵逝。R本人自食惡果不足惜,卻給世界環(huán)境...
    茶點(diǎn)故事閱讀 41,458評(píng)論 3 331
  • 文/蒙蒙 一乡范、第九天 我趴在偏房一處隱蔽的房頂上張望配名。 院中可真熱鬧啤咽,春花似錦、人聲如沸段誊。這莊子的主人今日做“春日...
    開封第一講書人閱讀 32,008評(píng)論 0 22
  • 文/蒼蘭香墨 我抬頭看了看天上的太陽连舍。三九已至没陡,卻和暖如春,著一層夾襖步出監(jiān)牢的瞬間索赏,已是汗流浹背盼玄。 一陣腳步聲響...
    開封第一講書人閱讀 33,135評(píng)論 1 272
  • 我被黑心中介騙來泰國打工, 沒想到剛下飛機(jī)就差點(diǎn)兒被人妖公主榨干…… 1. 我叫王不留潜腻,地道東北人埃儿。 一個(gè)月前我還...
    沈念sama閱讀 48,365評(píng)論 3 373
  • 正文 我出身青樓,卻偏偏與公主長得像融涣,于是被迫代替她去往敵國和親童番。 傳聞我的和親對(duì)象是個(gè)殘疾皇子,可洞房花燭夜當(dāng)晚...
    茶點(diǎn)故事閱讀 45,055評(píng)論 2 355

推薦閱讀更多精彩內(nèi)容