GAN Survey | Generative Adversarial Networks: A Survey and Taxonomy

2019.7

Paper: https://arxiv.org/abs/1906.01529v1

Code: https://github.com/sheqi/GAN_Review


Abstract

Generative adversarial networks (GANs) have been extensively studied in the past few years. Arguably the most revolutionary techniques are in the area of computer vision, such as plausible image generation, image-to-image translation, facial attribute manipulation and similar domains. Despite the significant success achieved in the computer vision field, applying GANs to real-world problems still poses significant challenges, three of which we focus on here: (1) high quality image generation; (2) diverse image generation; and (3) stable training. Through an in-depth review of GAN-related research in the literature, we provide an account of the architecture-variants and loss-variants which have been proposed to handle these three challenges from two perspectives. We propose loss-variants and architecture-variants for classifying the most popular GANs, and discuss potential improvements with a focus on these two aspects. While several reviews of GANs have been presented to date, none have focused on reviewing GAN-variants based on how they handle the challenges mentioned above. In this paper, we review and critically discuss 7 architecture-variant GANs and 9 loss-variant GANs for remedying those three challenges. The objective of this review is to provide insight into how current GAN research focuses on performance improvement. Code related to the GAN-variants studied in this work is summarized at https://github.com/sheqi/GAN_Review.


1 INTRODUCTION

Generative adversarial networks (GANs) are attracting growing interest in the deep learning community [1]–[6]. GANs have been applied to various domains such as computer vision [7]–[14], natural language processing [15]–[18], time series synthesis [19]–[23], semantic segmentation [24]–[28], etc. GANs belong to the family of generative models. Compared to other generative models, e.g., variational autoencoders, GANs offer advantages such as the ability to handle sharp estimated density functions, efficiently generate desired samples, eliminate deterministic bias, and good compatibility with the internal neural architecture. These properties have allowed GANs to enjoy success especially in the computer vision field, e.g., plausible image generation [29]–[33], image-to-image translation [2], [34]–[40], image super-resolution [26], [41]–[44] and image completion [45]–[49].


However, GANs suffer challenges from two aspects: (1) Hard to train — it is non-trivial for the discriminator and generator to achieve Nash equilibrium during training, and the generator often cannot learn the distribution of the full dataset well, which is known as mode collapse. Lots of work has been carried out in this area [50]–[53]; and (2) Hard to evaluate — the evaluation of GANs can be considered as an effort to measure the dissimilarity between the real distribution pr and the generated distribution pg. Unfortunately, accurate estimation of pr is not possible. Thus, it is challenging to produce good estimations of the correspondence between pr and pg. Previous work has introduced evaluation metrics for GANs [54]–[62]. The first aspect concerns the performance of GANs directly, e.g., image quality, image diversity and stable training. In this work, we study existing GAN-variants that handle this aspect in the area of computer vision; readers interested in the second aspect can consult [54], [62].

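As a toy illustration of measuring the dissimilarity between pr and pg, the following NumPy sketch computes the Jensen-Shannon divergence, which the original GAN loss implicitly minimizes. Note the sketch assumes both distributions are known discrete histograms, which is exactly what is unavailable in practice:

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence for discrete distributions (0*log 0 := 0)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js(p, q):
    # Jensen-Shannon divergence: symmetric and bounded by log(2)
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

pr = np.array([0.5, 0.5, 0.0])  # "real" distribution (toy histogram)
pg = np.array([0.0, 0.5, 0.5])  # "generated" distribution (toy histogram)
print(js(pr, pg))               # 0.5 * log(2) for these two histograms
```

In practice the discriminator plays the role of this comparison, since neither pr nor pg is available in closed form.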

Current GANs research focuses on two directions: (1) improving the training of GANs; and (2) deployment of GANs to real-world applications. The former seeks to improve GANs' performance and is therefore a foundation for the latter. Considering the numerous research works in the literature, in this paper we give a brief review of the GAN-variants that focus on improving training. The improvement of the training process provides benefits in terms of GANs performance as follows: (1) improvements in generated image diversity (also known as mode diversity); (2) increases in generated image quality; and (3) more stable training, such as remedying the vanishing gradient for the generator. In order to improve performance as mentioned above, GANs can be modified from either the architecture side or the loss perspective. We will study the GAN-variants coming from both sides that improve the performance of GANs. The rest of the paper is organized as follows: (1) we introduce the search strategy and part of the results (complete results are illustrated in the Supplementary material) for the existing GANs papers in the area of computer vision; (2) we introduce related review work for GANs and illustrate the difference between those reviews and this work; (3) we give a brief introduction to GANs; (4) we review the architecture-variant GANs in the literature; (5) we review the loss-variant GANs in the literature; (6) we summarize the GAN-variants in this study and illustrate their differences and relationships; and (7) we conclude this review and preview likely future research work in the area of GANs.


Many GAN-variants have been proposed in the literature to improve performance. These can be divided into two types: (1) Architecture-variants. The first proposed GAN used fully-connected neural networks [1], so specific types of architecture may be beneficial for specific applications, e.g., convolutional neural networks (CNNs) for images and recurrent neural networks (RNNs) for time series data; and (2) Loss-variants, where different variations of the loss function are explored to enable more stable learning of G.


2 SEARCH STRATEGY AND RESULTS

pass

3 RELATED WORK

There have been previous GAN review papers, for example in terms of reviewing GANs' performance [63]. That work focuses on experimental validation across different types of GANs benchmarked on the LSUN-BEDROOM [64], CELEBA-HQ-128 [65] and CIFAR10 [66] image datasets. The results suggest that the original GAN [1] with spectral normalization [67] is a good starting choice when applying GANs to a new dataset. A limitation of that review is that the benchmark datasets do not consider diversity in a significant way. Thus the benchmark results tend to focus more on evaluation of image quality, which may ignore GANs' efficacy in producing diverse images. Work [68] surveys different GAN architectures and their evaluation metrics. A further comparison of different architecture-variants' performance, applications, complexity and so on still needs to be explored. Papers [69]–[71] focus on investigating the newest development trends and applications of GANs, and compare GAN-variants across different applications. Compared to the current review literature, we emphasize an introduction to GAN-variants based on their performance, including their ability to produce high quality and diverse images, stable training, ability to handle the vanishing gradient problem, etc. This is all done from a perspective based on architecture and loss function considerations. This work also provides a comparison and analysis of the pros and cons across the GAN-variants presented in this paper.


4 GENERATIVE ADVERSARIAL NETWORKS

Figure 1 demonstrates the architecture of a typical GAN. The architecture comprises two components, one of which is a discriminator (D) distinguishing between real images and generated images while the other is a generator (G) creating images to fool the discriminator. Given a distribution z ~ pz, G defines a probability distribution pg as the distribution of the samples G(z). The objective of a GAN is to learn the generator's distribution pg that approximates the real data distribution pr. Optimization of a GAN is performed with respect to a joint loss function for D and G.

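This joint objective is the minimax loss from the original GAN paper [1], stated here in the notation above:

    min_G max_D V(D, G) = E_{x~pr}[log D(x)] + E_{z~pz}[log(1 - D(G(z)))]

D is trained to maximize this value (classify real vs. generated correctly), while G is trained to minimize it (fool D).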

GANs, as a member of the deep generative model (DGM) family, have attracted exponentially growing interest in the deep learning community because of some advantages compared to traditional DGMs: (1) GANs are able to produce better output than other DGMs. Compared to the most well-known DGM, the variational autoencoder (VAE), GANs are able to produce any type of probability density, while VAEs are not able to generate sharp images; (2) the GAN framework can train any type of generator network, whereas other DGMs may have pre-requirements for the generator, e.g., that the output layer of the generator is Gaussian; (3) there is no restriction on the size of the latent variable. These advantages have led GANs to achieve state-of-the-art performance in producing synthetic data, especially image data.



5 ARCHITECTURE-VARIANT GANS

There are many types of architecture-variants proposed in the literature (see Fig. 2) [33], [34], [72]–[74]. Architecture-variant GANs are mainly proposed for the purpose of different applications, e.g., image-to-image transfer [34], image super-resolution [41], image completion [75], and text-to-image generation [76]. In this section, we provide a review of architecture-variants that help improve the performance of GANs from the three aspects mentioned before, namely improving image diversity, improving image quality and more stable training. Reviews of architecture-variants for different applications can be found in [68], [70].


5.1 Fully-connected GAN (FCGAN)

The original GAN paper [1] uses fully-connected neural networks for both the generator and the discriminator. This architecture-variant is applied to some simple image datasets, i.e., MNIST [77], CIFAR-10 [66] and the Toronto Face Dataset. It does not demonstrate good generalization performance for more complex image types.


5.2 Laplacian Pyramid of Adversarial Networks (LAPGAN)

LAPGAN is proposed for the production of higher resolution images from lower resolution input [78]. Figure 3 demonstrates the up-sampling process of the generator in LAPGAN from right to left. LAPGAN utilizes a cascade of CNNs within a Laplacian pyramid framework [80] to generate high quality images.

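The Laplacian pyramid LAPGAN builds on can be sketched in a few lines of NumPy. This is a minimal illustration (not the paper's code, which uses learned CNNs): each level stores the high-frequency residual between an image and an upsampled coarse copy, and in LAPGAN a separate generator produces that residual, conditioned on the upsampled coarser image.

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling: halves each spatial dimension
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # nearest-neighbour upsampling: doubles each spatial dimension
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """Decompose img into `levels` residual bands plus a coarse base image."""
    bands, current = [], img
    for _ in range(levels):
        coarse = downsample(current)
        bands.append(current - upsample(coarse))  # high-frequency residual
        current = coarse
    return bands, current  # residuals (fine to coarse) and the base image

def reconstruct(bands, base):
    """Invert the pyramid; in LAPGAN each band would come from a generator."""
    current = base
    for band in reversed(bands):
        current = upsample(current) + band
    return current

rng = np.random.default_rng(0)
image = rng.random((16, 16))
bands, base = laplacian_pyramid(image, levels=2)
restored = reconstruct(bands, base)
assert np.allclose(restored, image)  # the decomposition is exactly invertible
```

Because reconstruction proceeds coarse-to-fine, each generator in the cascade only has to solve the easier problem of adding detail at one scale.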

5.3 Deep Convolutional GAN (DCGAN)

DCGAN is the first work that applied a deconvolutional neural network architecture for G [72]. Figure 4 illustrates the proposed architecture for G. Deconvolution was originally proposed to visualize the features of a CNN and has shown good performance for CNN visualization [81]. DCGAN deploys the spatial up-sampling ability of the deconvolution operation for G, which enables the generation of higher resolution images using GANs.

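The spatial up-sampling in a DCGAN-style generator follows the standard transposed-convolution output-size formula. The sketch below traces how a 4 × 4 projection of z grows to 64 × 64; the kernel/stride/padding values (4, 2, 1) are illustrative choices common in DCGAN implementations, not taken from this survey.

```python
def deconv_out_size(in_size, kernel, stride, padding):
    # standard transposed-convolution (deconvolution) output size formula
    return (in_size - 1) * stride - 2 * padding + kernel

# DCGAN-style generator: project z to a 4x4 feature map, then apply
# four deconvolution layers, each doubling the spatial resolution
size = 4
sizes = [size]
for _ in range(4):
    size = deconv_out_size(size, kernel=4, stride=2, padding=1)
    sizes.append(size)
print(sizes)  # [4, 8, 16, 32, 64]
```

Each (kernel=4, stride=2, padding=1) layer exactly doubles the resolution, which is why this configuration is a popular building block for generators.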

5.4 Boundary Equilibrium GAN (BEGAN)

BEGAN uses an autoencoder architecture for the discriminator, which was first proposed in EBGAN [82] (see Fig. 5). Compared to traditional optimization, BEGAN matches the autoencoder loss distributions using a loss derived from the Wasserstein distance, instead of matching the data distributions directly. This modification helps G to generate data that is easy for the autoencoder to reconstruct at the beginning, because the generated data is close to 0 and the real data distribution has not been learned accurately yet, which prevents D from easily winning over G at the early training stage.

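The equilibrium between D and G in BEGAN is maintained by a balancing variable k_t. A minimal sketch of one balancing step is shown below; the update rule follows the BEGAN paper, while the loss values used in the demo call are placeholder numbers for illustration only.

```python
def began_step(k, loss_real, loss_fake, gamma=0.5, lam=0.001):
    """One BEGAN balancing step. loss_real / loss_fake are the autoencoder
    reconstruction losses of the discriminator on real and generated images."""
    d_loss = loss_real - k * loss_fake           # discriminator objective
    g_loss = loss_fake                           # generator objective
    balance = gamma * loss_real - loss_fake      # diversity/quality trade-off
    k = min(max(k + lam * balance, 0.0), 1.0)    # k_t is clipped to [0, 1]
    convergence = loss_real + abs(balance)       # global convergence measure
    return d_loss, g_loss, k, convergence

# placeholder losses: real images reconstruct worse than (easy) fakes early on
d_loss, g_loss, k, convergence = began_step(k=0.0, loss_real=1.0, loss_fake=0.8)
```

When fakes are too easy to reconstruct, k shrinks and D focuses more on the real data, keeping neither network too far ahead.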

5.5 Progressive GAN (PROGAN)

PROGAN involves progressive steps toward the expansion of the network architecture [74]. This architecture uses the idea of progressive neural networks first proposed in [83]. This technology does not suffer from forgetting and can leverage prior knowledge via lateral connections to previously learned features; consequently it is widely applied for learning complex task sequences. Figure 6 demonstrates the training process of PROGAN. Training starts with a low resolution 4 × 4 pixel image. Both G and D start to grow as training progresses. Importantly, all variables remain trainable throughout this growing process. This progressive training strategy enables substantially more stable learning for both networks. By increasing the resolution little by little, the networks are continuously asked a much simpler question compared to the end goal of discovering a mapping from latent vectors. All current state-of-the-art GANs employ this type of training strategy and it has resulted in impressive, plausible images [29], [74], [84].


5.6 Self-attention GAN (SAGAN)

Traditional CNNs can only capture local spatial information and the receptive field may not cover enough structure, which causes CNN-based GANs to have difficulty in learning multi-class image datasets (e.g., ImageNet); key components in generated images may also shift, e.g., the nose in a generated face may not appear in the right position. The self-attention mechanism has been proposed to ensure a large receptive field without sacrificing computational efficiency for CNNs [85]. SAGAN deploys a self-attention mechanism in the design of the discriminator and generator architectures for GANs [86] (see Fig. 7). Benefiting from the self-attention mechanism, SAGAN is able to learn global, long-range dependencies for generating images. It has achieved great performance on multi-class image generation based on the ImageNet datasets.

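The global, long-range dependency modeling can be sketched with plain NumPy attention over flattened spatial positions. This is a simplified illustration: the actual SAGAN layer computes the query/key/value projections with 1 × 1 convolutions and adds the attended features back through a learnable residual scale γ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Attention over flattened spatial positions.
    x: (n, c) feature map with n = h*w positions and c channels."""
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T)   # (n, n): every position attends to all others
    return attn @ v           # globally aggregated features per position

rng = np.random.default_rng(0)
n, c, d = 16, 8, 4            # a 4x4 feature map with 8 channels
x = rng.standard_normal((n, c))
out = self_attention(x,
                     rng.standard_normal((c, d)),
                     rng.standard_normal((c, d)),
                     rng.standard_normal((c, d)))
assert out.shape == (n, d)
```

The (n, n) attention map is what gives every spatial position a full-image receptive field, unlike a convolution's local window.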

5.7 BigGAN

BigGAN [84] has also achieved state-of-the-art performance on the ImageNet datasets. Its design is based on SAGAN, and it demonstrates that increases in batch size and model complexity can dramatically improve GAN performance on complex image datasets.


5.8 Summary

We have provided an overview of architecture-variant GANs which aim to improve performance based on the three key challenges: (1) image quality; (2) mode diversity; and (3) vanishing gradient. An illustration of relative performance can be found in Fig. 8. All proposed architecture-variants are able to improve image quality. SAGAN is proposed for improving the capacity of multi-class learning in GANs, the goal of which is to produce more diverse images. Benefiting from the SAGAN architecture, BigGAN is designed to improve both image quality and image diversity. It should be noted that both PROGAN and BigGAN are able to produce high resolution images. BigGAN realizes this higher resolution by increasing the batch size, and the authors mention that a progressive growing [74] operation is unnecessary when the batch size is large enough (2048 is used in the original paper [84]). However, a progressive growing operation is still needed when GPU memory is limited (a large batch size is hungry for GPU memory). Benefiting from spectral normalization (SN), which will be discussed in the loss-variant GANs part, both SAGAN and BigGAN are effective against the vanishing gradient challenge. These milestone architecture-variants indicate a strong advantage of GANs — compatibility, where a GAN is open to any type of neural architecture. This property enables GANs to be applied to many different applications.

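The spectral normalization mentioned above divides each weight matrix by its largest singular value, estimated by power iteration. Below is a minimal NumPy sketch; in SN-GAN a single power iteration is amortized per training step, whereas here `n_iter` is larger only so the one-off estimate converges.

```python
import numpy as np

def spectral_normalize(w, n_iter=200):
    """Estimate the spectral norm of w by power iteration and divide it out,
    so the resulting linear map is approximately 1-Lipschitz."""
    u = np.random.default_rng(0).standard_normal(w.shape[0])
    for _ in range(n_iter):
        v = w.T @ u
        v /= np.linalg.norm(v)
        u = w @ v
        u /= np.linalg.norm(u)
    sigma = u @ w @ v           # estimate of the largest singular value
    return w / sigma

w = np.random.default_rng(1).standard_normal((8, 6))
w_sn = spectral_normalize(w)
# after normalization the largest singular value is (approximately) 1
assert abs(np.linalg.svd(w_sn, compute_uv=False)[0] - 1.0) < 1e-2
```

Constraining the discriminator's layers this way bounds its Lipschitz constant, which is what keeps gradients flowing to the generator.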

Regarding the improvements achieved by different architecture-variant GANs, we next present an analysis of the interconnections and comparisons between the architecture-variants presented here. Starting with the FCGAN described in the original GAN literature, this architecture-variant can only generate simple image datasets. Such a limitation is caused by the network architecture, where the capacity of FC networks is very limited. Research on improving the performance of GANs thus starts from designing more complex architectures. More complex image datasets (e.g., ImageNet) have higher resolution and diversity compared to simple image datasets (e.g., MNIST) and accordingly need more sophisticated approaches.


In the context of producing higher resolution images, one obvious approach is to increase the size of the generator. LAPGAN and DCGAN up-sample in the generator based on such a perspective. Benefiting from the concise deconvolutional up-sampling process and the easy generalization of DCGAN, the DCGAN architecture is more widely used in the GANs literature. It should be noted that most GANs in the computer vision area use a deconvolutional neural network as the generator, which was first used in DCGAN. Therefore, DCGAN is one of the classical GAN-variants in the literature.


The ability to produce high quality images is clearly an important aspect of GANs, and it can be improved through judicious choice of architecture. BEGAN and PROGAN demonstrate approaches from this perspective. With the same architecture used for the generator in DCGAN, BEGAN redesigns the discriminator to include an encoder and a decoder, where the discriminator tries to distinguish between the generated and autoencoded images in pixel space. Image quality is improved in this case. Based on DCGAN, PROGAN demonstrates a progressive approach that incrementally trains an architecture similar to DCGAN. This novel approach can not only improve image quality but also produce higher resolution images.


Producing diverse images is the most challenging task for GANs, and it is very difficult for GANs to successfully produce images such as those represented in the ImageNet sets, because it is difficult for traditional CNNs to learn global and long-range dependencies from images. Thanks to the self-attention mechanism, though, approaches such as SAGAN integrate self-attention into both the discriminator and generator, which helps GANs considerably in learning multi-class images. Moreover, BigGAN, which can be considered an extension of SAGAN, introduces a deeper GAN architecture with a very large batch size, produces high quality and diverse images as in ImageNet, and is the current state-of-the-art.


6 LOSS-VARIANT GANS

pass

7 DISCUSSION

We have introduced the most significant problems present in the original GAN design, which are mode collapse and the vanishing gradient for updating G. We have surveyed significant GAN-variants that remedy these problems through two design considerations: (1) Architecture-variants. This aspect focuses on architectural options for GANs. This approach enables GANs to be successfully applied to different applications; however, it is not able to fully solve the problems mentioned above; (2) Loss-variants. We have provided a detailed explanation of why these problems arise in the original GAN. These problems are essentially caused by the loss function in the original GAN, so modifying this loss function can solve them. It should be noted that the loss function may change for some architecture-variants. However, such a loss function is changed according to the architecture, and is thus an architecture-specific loss that cannot generalize to other architectures.


Through a comparison of the different architectural approaches surveyed in this work, it is clear that modification of the GAN architecture has a significant impact on the quality and diversity of the generated images. Recent research shows that the capacity and performance of GANs are related to the network size and batch size [84], which indicates that a well designed architecture is critical for good GAN performance. However, modifications to the architecture alone cannot eliminate all the inherent training problems of GANs. Redesign of the loss function, including regularization and normalization, can help yield more stable training for GANs. This work introduced various approaches to the design of the loss function for GANs. Based on the comparison of each loss-variant, we find that spectral normalization, as first demonstrated in the SN-GAN, brings many benefits, including ease of implementation, relatively light computational requirements and the ability to work well for almost all GANs. We suggest that researchers who seek to apply GANs to real-world problems include spectral normalization in the discriminator.


There is no answer to the question of which GAN is best. The selection of a specific GAN type depends on the application. For instance, if an application requires the production of natural scene images (which requires the generation of very diverse images), DCGAN with spectral normalization applied, SAGAN and BigGAN can all be good choices. BigGAN is able to produce the most realistic images of the three; however, BigGAN is much more computationally intensive, so the choice depends on the actual computational requirements set by the real-world application.


7.1 Interconnections Between Architecture and Loss

In this paper, we highlight the problems inherent in the original GAN design. In highlighting how subsequent researchers have remedied those problems, we explored architecture-variants and loss-variants in GAN designs separately. However, it should be noted that there are interconnections between these two types of GAN-variants. As mentioned before, loss functions are easily integrated into different architectures. Benefiting from improved convergence and stabilization through a redesigned loss function, architecture-variants are able to achieve better performance and accomplish solutions to more difficult problems. For example, BEGAN and PROGAN use the Wasserstein distance instead of JS divergence, while SAGAN and BigGAN deploy spectral normalization and achieve good performance on multi-class image generation. These two types of variants contribute equally to the progress of GANs.


7.2 Future Directions

GANs were originally proposed to produce plausible synthetic images and have achieved exciting performance in the computer vision area. GANs have been applied to some other fields (e.g., time series generation [20], [21], [103] and natural language processing [15], [104]–[106]) with some success. Compared to computer vision, GANs research in other areas is still somewhat limited. The limitation is caused by the different properties inherent in image versus non-image data. For instance, GANs work well for producing continuous-valued data, but natural language is based on discrete values like words, characters, bytes, etc., so it is hard to apply GANs to natural language applications. Future research is of course being carried out on applying GANs to other areas.


8 CONCLUSION

In this paper, we have reviewed GAN-variants in terms of the performance improvements they offer: higher image quality, more diverse images, and more stable training. We reviewed the current state of GAN-related research from both an architecture and a loss basis. Current state-of-the-art GAN models such as BigGAN and PROGAN can produce high-quality and diverse images in the computer vision field. However, research applying GANs to video is limited, and GAN-related research in other areas such as time series generation and natural language processing lags behind that for computer vision in terms of performance and capability. We conclude that there are clear opportunities for future research and application in these fields in particular.

