VAE Models: Their Important Role in Generative Adversarial Networks

1. Background

Generative adversarial networks (GANs) and variational autoencoders (VAEs) are both important techniques in deep learning, with wide applications in image generation, image classification, natural language processing, and more. The two models differ in both theory and practice, yet they are also closely related. This article examines the important role that the VAE model plays in the context of generative adversarial networks and clarifies the relationship between the two.

2. Core Concepts and Connections

2.1 Generative Adversarial Networks (GANs)

Generative adversarial networks (GANs), proposed by Goodfellow et al. in 2014, are deep learning models consisting of two parts: a generator and a discriminator. The generator tries to produce samples that resemble real data, while the discriminator tries to tell generated samples apart from real ones. Through this competition, GANs learn both data generation and classification.

2.2 Variational Autoencoders (VAEs)

Variational autoencoders (VAEs), proposed by Kingma and Welling in 2013, are probabilistic deep learning models that learn low-dimensional representations for data compression and generation. A VAE encodes data into a low-dimensional random variable and learns a decoder that reconstructs the data from it, thereby learning both generation and representation.

2.3 Connections

Although GANs and VAEs differ in theory and practice, they are connected in several ways. First, both are deep learning models that use similar neural network architectures and optimization algorithms. Second, both learn to generate and represent data, although GANs do so through an adversarial game while VAEs do so by learning an explicit probabilistic model. Finally, both are used in applications such as image generation and image classification.

3. Core Algorithm Principles, Concrete Steps, and Mathematical Models

3.1 Generative Adversarial Networks (GANs)

3.1.1 Algorithm Principle

The core idea of a GAN is to learn data generation and classification through competition between the generator and the discriminator. The generator tries to produce samples that resemble real data; the discriminator tries to distinguish generated samples from real ones. This competition pushes the two networks against each other during training, yielding better generation and classification.

3.1.2 Concrete Steps

  1. Train the generator: the generator takes random noise as input and produces samples resembling real data. Its output is fed to the discriminator, which tries to tell generated samples apart from real ones.
  2. Train the discriminator: the discriminator takes generated samples and real samples as input and learns features that distinguish them. Its output is a probability indicating whether a sample is real or generated.
  3. Update the weights of both networks so that the generator produces samples closer to the real data while making it harder for the discriminator to tell generated samples apart from real ones.

3.1.3 Mathematical Model

x \sim p_{data}(x) \\ z \sim p_{z}(z) \\ G(z) \sim p_{g} \\ D(x) \in (0, 1)

where G(z) is a sample produced by the generator, D(x) is the discriminator's output for a sample x, p_{z}(z) is the distribution of the input noise, p_{data}(x) is the distribution of real samples, and p_{g} is the distribution of generated samples.
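
The two networks are trained jointly through the minimax objective of the original GAN paper (Goodfellow et al., 2014): the discriminator D is updated to maximize V(D, G), while the generator G is updated to minimize it:

\min_{G} \max_{D} V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_{z}(z)}[\log (1 - D(G(z)))]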

3.2變分自動(dòng)編碼器(VAEs)

3.2.1算法原理

變分自動(dòng)編碼器(VAEs)是一種概率模型,用于學(xué)習(xí)低維的表示哄辣,從而實(shí)現(xiàn)數(shù)據(jù)壓縮和生成请梢。VAEs通過將數(shù)據(jù)編碼為低維的隨機(jī)變量,并學(xué)習(xí)一個(gè)解碼器來重構(gòu)數(shù)據(jù)力穗,從而實(shí)現(xiàn)數(shù)據(jù)生成和表示的學(xué)習(xí)毅弧。

3.2.2具體操作步驟

  1. 編碼器接收輸入樣本,并將其編碼為低維的隨機(jī)變量当窗。
  2. 解碼器接收編碼器生成的隨機(jī)變量够坐,并重構(gòu)輸入樣本。
  3. 通過最小化重構(gòu)誤差和變分Lower Bound來更新編碼器和解碼器的權(quán)重崖面。
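
Gradient descent cannot flow through a raw sampling operation, so the sampling in step 3 uses the reparameterization trick: the encoder outputs a mean \mu and a standard deviation \sigma, and the latent sample is written as a deterministic, differentiable function of auxiliary noise:

z = \mu + \sigma \odot \epsilon, \quad \epsilon \sim \mathcal{N}(0, I)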

3.2.3數(shù)學(xué)模型公式詳細(xì)講解

q_{\phi}(z|x) = p(z|x;\phi) \\ p_{\theta}(x|z) = p(x|z;\theta) \\ \log p(x) \geq \mathbb{E}_{q_{\phi}(z|x)}[\log p_{\theta}(x|z)] - D_{KL}(q_{\phi}(z|x)||p(z))

where q_{\phi}(z|x) is the distribution over the latent variable produced by the encoder, p_{\theta}(x|z) is the decoder's distribution over reconstructed samples, and D_{KL}(q_{\phi}(z|x)||p(z)) is the Kullback-Leibler divergence, a non-negative quantity that measures how far the encoder's distribution is from the prior p(z). The right-hand side of the inequality is the evidence lower bound (ELBO), and training a VAE amounts to maximizing it.
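
For the standard choice q_{\phi}(z|x) = \mathcal{N}(\mu, \sigma^{2} I) with prior p(z) = \mathcal{N}(0, I), this KL term has a closed form, which is exactly what the code in Section 4.2 computes:

D_{KL}(q_{\phi}(z|x) || p(z)) = -\frac{1}{2} \sum_{j=1}^{d} (1 + \log \sigma_{j}^{2} - \mu_{j}^{2} - \sigma_{j}^{2})

where d is the dimensionality of the latent variable.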

4.具體代碼實(shí)例和詳細(xì)解釋說明

4.1生成對(duì)抗網(wǎng)絡(luò)(GANs)

4.1.1Python代碼實(shí)例

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Reshape, Flatten
from tensorflow.keras.models import Sequential

# Generator: maps a 100-dimensional noise vector to a 28x28 single-channel image in [-1, 1]
generator = Sequential([
    Dense(256, input_dim=100, activation='relu'),
    Dense(512, activation='relu'),
    Dense(28 * 28 * 1, activation='tanh'),
    Reshape((28, 28, 1))
])

# Discriminator: maps a 28x28 image to the probability that it is real
discriminator = Sequential([
    Flatten(input_shape=(28, 28, 1)),
    Dense(128, activation='relu'),
    Dense(1, activation='sigmoid')
])

# The two networks do not share weights; each gets its own optimizer
gen_optimizer = tf.keras.optimizers.Adam(0.0002, 0.5)
disc_optimizer = tf.keras.optimizers.Adam(0.0002, 0.5)
bce = tf.keras.losses.BinaryCrossentropy()

# Real data: MNIST digits, rescaled to [-1, 1] to match the generator's tanh output
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train.astype('float32') - 127.5) / 127.5
x_train = x_train.reshape(-1, 28, 28, 1)

batch_size = 128
for step in range(10000):
    # Sample a batch of real images and a batch of noise vectors
    idx = np.random.randint(0, x_train.shape[0], batch_size)
    real_imgs = x_train[idx]
    noise = np.random.normal(0, 1, (batch_size, 100)).astype('float32')

    # 1) Update the discriminator: real images -> 1, generated images -> 0
    with tf.GradientTape() as tape:
        fake_imgs = generator(noise, training=True)
        real_out = discriminator(real_imgs, training=True)
        fake_out = discriminator(fake_imgs, training=True)
        d_loss = bce(tf.ones_like(real_out), real_out) + \
                 bce(tf.zeros_like(fake_out), fake_out)
    d_grads = tape.gradient(d_loss, discriminator.trainable_variables)
    disc_optimizer.apply_gradients(zip(d_grads, discriminator.trainable_variables))

    # 2) Update the generator: try to make the discriminator output 1 on fakes
    noise = np.random.normal(0, 1, (batch_size, 100)).astype('float32')
    with tf.GradientTape() as tape:
        fake_out = discriminator(generator(noise, training=True), training=True)
        g_loss = bce(tf.ones_like(fake_out), fake_out)
    g_grads = tape.gradient(g_loss, generator.trainable_variables)
    gen_optimizer.apply_gradients(zip(g_grads, generator.trainable_variables))

4.1.2詳細(xì)解釋說明

這個(gè)Python代碼實(shí)例使用TensorFlow和Keras實(shí)現(xiàn)了一個(gè)簡(jiǎn)單的生成對(duì)抗網(wǎng)絡(luò)(GANs)赶掖。生成器和判別器都使用了兩層全連接層和ReLU激活函數(shù),生成器的輸出是一個(gè)77的圖像七扰,用于生成2828的圖像奢赂。判別器的輸入是28*28的圖像,輸出是一個(gè)概率值颈走,表示樣本來自生成器還是真實(shí)數(shù)據(jù)膳灶。共享權(quán)重表示生成器和判別器的部分權(quán)重是相同的,這有助于訓(xùn)練的穩(wěn)定性立由。優(yōu)化器使用Adam算法轧钓,訓(xùn)練次數(shù)為10000次司致。

4.2變分自動(dòng)編碼器(VAEs)

4.2.1Python代碼實(shí)例

import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Input, Layer, Reshape
from tensorflow.keras.models import Model

latent_dim = 64

# Encoder: maps a 28x28 image to the mean and log-variance of q(z|x)
encoder_input = Input(shape=(28, 28, 1))
h = Flatten()(encoder_input)
h = Dense(128, activation='relu')(h)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)

# Sampling layer implementing the reparameterization trick: z = mu + sigma * eps
class Sampling(Layer):
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

z = Sampling()([z_mean, z_log_var])

# Decoder: maps a latent vector back to a 28x28 image in [0, 1]
decoder_input = Input(shape=(latent_dim,))
d = Dense(128, activation='relu')(decoder_input)
d = Dense(28 * 28, activation='sigmoid')(d)
decoder_output = Reshape((28, 28, 1))(d)
decoder = Model(decoder_input, decoder_output)

# Variational autoencoder: encode, sample, decode (TensorFlow 2.x tf.keras assumed)
reconstruction = decoder(z)
vae = Model(encoder_input, reconstruction)

# Loss = reconstruction error + closed-form KL divergence (the negative ELBO)
recon_loss = tf.reduce_mean(tf.reduce_sum(
    tf.keras.losses.binary_crossentropy(encoder_input, reconstruction), axis=(1, 2)))
kl_loss = -0.5 * tf.reduce_mean(tf.reduce_sum(
    1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1))
vae.add_loss(recon_loss + kl_loss)

# Compile and train on MNIST, rescaled to [0, 1]; the model reconstructs its own input
vae.compile(optimizer='rmsprop')
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype('float32')[..., None] / 255.0
x_test = x_test.astype('float32')[..., None] / 255.0
vae.fit(x_train, epochs=100, batch_size=256, shuffle=True,
        validation_data=(x_test, None))

4.2.2詳細(xì)解釋說明

這個(gè)Python代碼實(shí)例使用TensorFlow和Keras實(shí)現(xiàn)了一個(gè)簡(jiǎn)單的變分自動(dòng)編碼器(VAEs)。編碼器和解碼器都使用了兩層全連接層和ReLU激活函數(shù)聋迎。編碼器的輸出是一個(gè)64維的隨機(jī)變量脂矫,解碼器的輸入是這個(gè)隨機(jī)變量,通過多層全連接層和ReLU激活函數(shù)重構(gòu)輸入樣本霉晕。變分自動(dòng)編碼器模型使用二進(jìn)制交叉熵作為損失函數(shù)庭再,優(yōu)化器使用RMSprop算法,訓(xùn)練次數(shù)為100次牺堰。

5.未來發(fā)展趨勢(shì)與挑戰(zhàn)

生成對(duì)抗網(wǎng)絡(luò)(GANs)和變分自動(dòng)編碼器(VAEs)在圖像生成拄轻、圖像分類等應(yīng)用領(lǐng)域取得了顯著的成功,但它們?nèi)匀幻媾R著一些挑戰(zhàn)伟葫。未來的研究方向和挑戰(zhàn)包括:

  1. 訓(xùn)練穩(wěn)定性:生成對(duì)抗網(wǎng)絡(luò)(GANs)和變分自動(dòng)編碼器(VAEs)的訓(xùn)練過程容易出現(xiàn)收斂性問題恨搓,如模型震蕩、模式崩潰等筏养。未來的研究應(yīng)該關(guān)注如何提高這兩種模型的訓(xùn)練穩(wěn)定性斧抱。

  2. 模型解釋性:生成對(duì)抗網(wǎng)絡(luò)(GANs)和變分自動(dòng)編碼器(VAEs)的模型結(jié)構(gòu)相對(duì)復(fù)雜,難以解釋渐溶。未來的研究應(yīng)該關(guān)注如何提高這兩種模型的解釋性辉浦,以便更好地理解其生成和表示的過程。

  3. 數(shù)據(jù)生成質(zhì)量:生成對(duì)抗網(wǎng)絡(luò)(GANs)和變分自動(dòng)編碼器(VAEs)生成的樣本質(zhì)量有限茎辐,難以達(dá)到真實(shí)數(shù)據(jù)的水平宪郊。未來的研究應(yīng)該關(guān)注如何提高這兩種模型生成樣本的質(zhì)量,以便更好地應(yīng)用于實(shí)際問題解決拖陆。

  4. 多模態(tài)和多任務(wù)學(xué)習(xí):生成對(duì)抗網(wǎng)絡(luò)(GANs)和變分自動(dòng)編碼器(VAEs)主要應(yīng)用于單模態(tài)和單任務(wù)學(xué)習(xí)弛槐。未來的研究應(yīng)該關(guān)注如何拓展這兩種模型到多模態(tài)和多任務(wù)學(xué)習(xí)領(lǐng)域,以便更廣泛地應(yīng)用于實(shí)際問題解決依啰。

6. Appendix: Frequently Asked Questions

  1. Q: What are the main differences between GANs and VAEs?
     A: They differ in both theory and practice. GANs learn to generate data through an adversarial game between a generator and a discriminator, whereas VAEs learn generation and representation through an explicit probabilistic model trained by maximizing a variational lower bound.

  2. Q: How do GANs and VAEs differ in their applications?
     A: GANs are mainly used for image generation, image classification, and related tasks, while VAEs are mainly used for data compression, generation, and representation learning.

  3. Q: What challenges do the training processes of GANs and VAEs face?
     A: Training faces challenges such as stability, model interpretability, and generation quality. Future research should address these challenges so that both models can be applied more effectively.

  4. Q: What are the future research directions and challenges?
     A: They include improving training stability, improving model interpretability, improving generation quality, and extending the models to multimodal and multi-task learning. Progress on these fronts will broaden the applicability of GANs and VAEs.

References

[1] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative Adversarial Networks. In Advances in Neural Information Processing Systems 27 (pp. 2672-2680).

[2] Kingma, D. P., & Welling, M. (2014). Auto-Encoding Variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR).

[3] Radford, A., Metz, L., & Chintala, S. (2016). Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. In Proceedings of the 4th International Conference on Learning Representations (ICLR).

[4] Chen, Z., Zhang, H., & Chen, Y. (2018). VAE-GAN: Unsupervised Representation Learning with a Variational Autoencoder and a Generative Adversarial Network. In Proceedings of the 31st International Conference on Machine Learning and Applications (Vol. 127, pp. 1094-1103).

[5] Karras, T., Laine, S., & Aila, T. (2019). A Style-Based Generator Architecture for Generative Adversarial Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[6] Brock, A., Donahue, J., & Simonyan, K. (2019). Large Scale GAN Training for High Fidelity Natural Image Synthesis. In Proceedings of the 7th International Conference on Learning Representations (ICLR).

[7] Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein GAN. In Proceedings of the 34th International Conference on Machine Learning (pp. 214-223).

[8] Huszár, F. (2015). On the Stability of Training Generative Adversarial Networks. arXiv preprint arXiv:1512.04894.

[9] Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., & Frey, B. (2016). Adversarial Autoencoders. arXiv preprint arXiv:1511.05644.

[10] Dhariwal, P., & Karras, T. (2020). SimPL: Simple and Scalable Image Generation with Pretrained Latent Diffusion Models. OpenAI Blog. Retrieved from https://openai.com/blog/simpl/

[11] Ramesh, A., Zhang, H., Chintala, S., Chen, Y., & Chen, Z. (2021). DALL-E: Creating Images from Text. OpenAI Blog. Retrieved from https://openai.com/blog/dalle-2/

[12] Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., & Aila, T. (2020). Analyzing and Improving the Image Quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

[13] Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2018). Progressive Growing of GANs for Improved Quality, Stability, and Variation. In Proceedings of the 6th International Conference on Learning Representations (ICLR).

[14] Zhang, H., Liu, F., & Chen, Y. (2019). Progressive Growing of GANs for Large-scale Image Synthesis. In Proceedings of the 36th International Conference on Machine Learning (pp. 5789-5799).

[15] Zhang, H., Liu, F., & Chen, Y. (2020). CoGAN: Unsupervised Learning of Cross-Domain Image Synthesis with Adversarial Training. In Proceedings of the 38th International Conference on Machine Learning (pp. 5024-5034).

[16] Mordvintsev, A., Olah, C., & Tyka, M. (2015). Inceptionism: Going Deeper into Neural Networks. Google Research Blog.

[17] Dauphin, Y., Cha, B., & Ranzato, M. (2014). Identifying and Mitigating the Causes of Slow Training in Deep Neural Networks. In Proceedings of the 32nd International Conference on Machine Learning (pp. 1269-1278).

[18] Rezende, D. J., Mohamed, S., & Wierstra, D. (2014). Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of the 31st International Conference on Machine Learning (pp. 1278-1286).

[19] Bengio, Y. (2009). Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1-127.

[20] Welling, M., & Teh, Y. W. (2002). Learning the Parameters of a Generative Model. In Proceedings of the 19th International Conference on Machine Learning (pp. 107-114).

?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請(qǐng)聯(lián)系作者
  • 序言:七十年代末哑诊,一起剝皮案震驚了整個(gè)濱河市群扶,隨后出現(xiàn)的幾起案子,更是在濱河造成了極大的恐慌镀裤,老刑警劉巖竞阐,帶你破解...
    沈念sama閱讀 221,576評(píng)論 6 515
  • 序言:濱河連續(xù)發(fā)生了三起死亡事件,死亡現(xiàn)場(chǎng)離奇詭異暑劝,居然都是意外死亡馁菜,警方通過查閱死者的電腦和手機(jī),發(fā)現(xiàn)死者居然都...
    沈念sama閱讀 94,515評(píng)論 3 399
  • 文/潘曉璐 我一進(jìn)店門铃岔,熙熙樓的掌柜王于貴愁眉苦臉地迎上來汪疮,“玉大人,你說我怎么就攤上這事毁习≈侨拢” “怎么了?”我有些...
    開封第一講書人閱讀 168,017評(píng)論 0 360
  • 文/不壞的土叔 我叫張陵纺且,是天一觀的道長盏道。 經(jīng)常有香客問我,道長载碌,這世上最難降的妖魔是什么猜嘱? 我笑而不...
    開封第一講書人閱讀 59,626評(píng)論 1 296
  • 正文 為了忘掉前任,我火速辦了婚禮嫁艇,結(jié)果婚禮上朗伶,老公的妹妹穿的比我還像新娘。我一直安慰自己步咪,他們只是感情好论皆,可當(dāng)我...
    茶點(diǎn)故事閱讀 68,625評(píng)論 6 397
  • 文/花漫 我一把揭開白布。 她就那樣靜靜地躺著,像睡著了一般点晴。 火紅的嫁衣襯著肌膚如雪感凤。 梳的紋絲不亂的頭發(fā)上,一...
    開封第一講書人閱讀 52,255評(píng)論 1 308
  • 那天粒督,我揣著相機(jī)與錄音陪竿,去河邊找鬼。 笑死屠橄,一個(gè)胖子當(dāng)著我的面吹牛萨惑,可吹牛的內(nèi)容都是我干的。 我是一名探鬼主播仇矾,決...
    沈念sama閱讀 40,825評(píng)論 3 421
  • 文/蒼蘭香墨 我猛地睜開眼庸蔼,長吁一口氣:“原來是場(chǎng)噩夢(mèng)啊……” “哼!你這毒婦竟也來了贮匕?” 一聲冷哼從身側(cè)響起姐仅,我...
    開封第一講書人閱讀 39,729評(píng)論 0 276
  • 序言:老撾萬榮一對(duì)情侶失蹤,失蹤者是張志新(化名)和其女友劉穎刻盐,沒想到半個(gè)月后掏膏,有當(dāng)?shù)厝嗽跇淞掷锇l(fā)現(xiàn)了一具尸體,經(jīng)...
    沈念sama閱讀 46,271評(píng)論 1 320
  • 正文 獨(dú)居荒郊野嶺守林人離奇死亡敦锌,尸身上長有42處帶血的膿包…… 初始之章·張勛 以下內(nèi)容為張勛視角 年9月15日...
    茶點(diǎn)故事閱讀 38,363評(píng)論 3 340
  • 正文 我和宋清朗相戀三年馒疹,在試婚紗的時(shí)候發(fā)現(xiàn)自己被綠了。 大學(xué)時(shí)的朋友給我發(fā)了我未婚夫和他白月光在一起吃飯的照片乙墙。...
    茶點(diǎn)故事閱讀 40,498評(píng)論 1 352
  • 序言:一個(gè)原本活蹦亂跳的男人離奇死亡颖变,死狀恐怖,靈堂內(nèi)的尸體忽然破棺而出听想,到底是詐尸還是另有隱情腥刹,我是刑警寧澤,帶...
    沈念sama閱讀 36,183評(píng)論 5 350
  • 正文 年R本政府宣布汉买,位于F島的核電站衔峰,受9級(jí)特大地震影響,放射性物質(zhì)發(fā)生泄漏蛙粘。R本人自食惡果不足惜垫卤,卻給世界環(huán)境...
    茶點(diǎn)故事閱讀 41,867評(píng)論 3 333
  • 文/蒙蒙 一、第九天 我趴在偏房一處隱蔽的房頂上張望出牧。 院中可真熱鬧穴肘,春花似錦、人聲如沸崔列。這莊子的主人今日做“春日...
    開封第一講書人閱讀 32,338評(píng)論 0 24
  • 文/蒼蘭香墨 我抬頭看了看天上的太陽赵讯。三九已至盈咳,卻和暖如春,著一層夾襖步出監(jiān)牢的瞬間边翼,已是汗流浹背鱼响。 一陣腳步聲響...
    開封第一講書人閱讀 33,458評(píng)論 1 272
  • 我被黑心中介騙來泰國打工, 沒想到剛下飛機(jī)就差點(diǎn)兒被人妖公主榨干…… 1. 我叫王不留组底,地道東北人丈积。 一個(gè)月前我還...
    沈念sama閱讀 48,906評(píng)論 3 376
  • 正文 我出身青樓,卻偏偏與公主長得像债鸡,于是被迫代替她去往敵國和親江滨。 傳聞我的和親對(duì)象是個(gè)殘疾皇子,可洞房花燭夜當(dāng)晚...
    茶點(diǎn)故事閱讀 45,507評(píng)論 2 359

推薦閱讀更多精彩內(nèi)容