A Collection of Papers on Adversarial Networks
The First Paper
[Generative Adversarial Nets] (the first paper on GANs)
[Paper] https://arxiv.org/abs/1406.2661
[Code] https://github.com/goodfeli/adversarial
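The paper frames training as a two-player minimax game between a generator G and a discriminator D: min_G max_D E_x[log D(x)] + E_z[log(1 - D(G(z)))]. Below is a minimal PyTorch sketch of that loop on toy data; the MLP sizes, Adam settings, and the non-saturating generator loss are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of the GAN minimax game (arXiv:1406.2661) on toy 2-D data.
# Network sizes and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in "real" data
    z = torch.randn(64, latent_dim)
    fake = G(z)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: non-saturating variant, maximize log D(G(z)).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```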
Unclassified
[Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks]
[Paper] https://arxiv.org/abs/1506.05751
[Code] https://github.com/facebook/eyescream
[Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks] (GANs with convolutional networks) (ICLR)
[Paper] https://arxiv.org/abs/1511.06434
[Code] https://github.com/jacobgil/keras-dcgan
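The DCGAN paper's main recipe is an all-convolutional generator: project and reshape the latent vector, then upsample with fractionally-strided convolutions, batch norm, and ReLU, ending in a tanh output. The sketch below follows that recipe in PyTorch; the channel counts and the 64x64 output size are assumptions for illustration, not taken from the linked keras-dcgan code.

```python
# Sketch of a DCGAN-style generator (project & reshape, fractionally strided
# convolutions, batch norm, ReLU, tanh output). Channel counts and the 64x64
# output resolution are assumptions.
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    def __init__(self, latent_dim=100, channels=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, base * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),   # 8x8
            nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),   # 16x16
            nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),       # 32x32
            nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, channels, 4, 2, 1, bias=False),       # 64x64
            nn.Tanh(),
        )

    def forward(self, z):                      # z: (N, latent_dim)
        return self.net(z.view(z.size(0), -1, 1, 1))

imgs = DCGANGenerator()(torch.randn(8, 100))   # -> (8, 3, 64, 64)
```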
[Adversarial Autoencoders]
[Paper] http://arxiv.org/abs/1511.05644
[Code] https://github.com/musyoku/adversarial-autoencoder
[Generating Images with Perceptual Similarity Metrics based on Deep Networks]
[Paper] https://arxiv.org/pdf/1602.02644v2.pdf
[Generating Images with Recurrent Adversarial Networks]
[Paper] https://arxiv.org/abs/1602.05110
[Code] https://github.com/ofirnachum/sequence_gan
[Generative Visual Manipulation on the Natural Image Manifold]
[Paper] https://people.eecs.berkeley.edu/%7Ejunyanz/projects/gvm/eccv16_gvm.pdf
[Code] https://github.com/junyanz/iGAN
[Generative Adversarial Text to Image Synthesis]
[Paper] https://arxiv.org/abs/1605.05396
[Code] https://github.com/reedscot/icml2016
[Code] https://github.com/paarthneekhara/text-to-image
[Learning What and Where to Draw]
[Paper] http://www.scottreed.info/files/nips2016.pdf
[Code] https://github.com/reedscot/nips2016
[Adversarial Training for Sketch Retrieval]
[Paper] http://link.springer.com/chapter/10.1007/978-3-319-46604-0_55
[Generative Image Modeling using Style and Structure Adversarial Networks]
[Paper] https://arxiv.org/pdf/1603.05631.pdf
[Code] https://github.com/xiaolonw/ss-gan
[Generative Adversarial Networks as Variational Training of Energy Based Models] (ICLR 2017)
[Paper] http://www.mathpubs.com/detail/1611.01799v1/Generative-Adversarial-Networks-as-Variational-Training-of-Energy-Based-Models
[Adversarial Training Methods for Semi-Supervised Text Classification] (Ian Goodfellow's paper)
[Paper] https://arxiv.org/abs/1605.07725
[Notes] https://github.com/dennybritz/deeplearning-papernotes/blob/master/notes/adversarial-text-classification.md
[Learning from Simulated and Unsupervised Images through Adversarial Training] (Apple's paper)
[Paper] https://arxiv.org/abs/1612.07828
[Code] https://github.com/carpedm20/simulated-unsupervised-tensorflow
[Synthesizing the preferred inputs for neurons in neural networks via deep generator networks]
[Paper] https://arxiv.org/pdf/1605.09304v5.pdf
[Code] https://github.com/Evolving-AI-Lab/synthesizing
[SalGAN: Visual Saliency Prediction with Generative Adversarial Networks]
[Paper] https://arxiv.org/abs/1701.01081
[Code] https://github.com/imatge-upc/saliency-salgan-2017
[Adversarial Feature Learning]
[Paper] https://arxiv.org/abs/1605.09782
[Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks]
[Paper] https://junyanz.github.io/CycleGAN/
[Code] https://github.com/junyanz/CycleGAN
Ensemble
[AdaGAN: Boosting Generative Models] (Google Brain)
[Paper] https://arxiv.org/abs/1701.02386
Clustering
[Unsupervised Learning Using Generative Adversarial Training and Clustering] (ICLR)
[Paper] https://openreview.net/forum?id=SJ8BZTjeg?eId=SJ8BZTjeg
[Code] https://github.com/VittalP/UnsupGAN
[Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks] (ICLR)
[Paper] https://arxiv.org/abs/1511.06390
Image Inpainting
[Semantic Image Inpainting with Perceptual and Contextual Losses]
[Paper] https://arxiv.org/abs/1607.07539
[Code] https://github.com/bamos/dcgan-completion.tensorflow
[Context Encoders: Feature Learning by Inpainting]
[Paper] https://arxiv.org/abs/1604.07379
[Code] https://github.com/jazzsaxmafia/Inpainting
[Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks]
[Paper] https://arxiv.org/abs/1611.06430v1
Joint Probability
[Adversarially Learned Inference]
[Paper] https://arxiv.org/abs/1606.00704
[Code] https://github.com/IshmaelBelghazi/ALI
Super-Resolution
[Image super-resolution through deep learning] (face dataset only)
[Code] https://github.com/david-gpu/srez
[Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network] (uses deep residual networks)
[Paper] https://arxiv.org/abs/1609.04802
[Code] https://github.com/leehomyc/Photo-Realistic-Super-Resoluton
[EnhanceGAN]
[Docs] https://medium.com/@richardherbert/faces-from-noise-super-enhancing-8x8-images-with-enhancegan-ebda015bb5e0#.io6pskvin
Disocclusion
[Robust LSTM-Autoencoders for Face De-Occlusion in the Wild]
[Paper] https://arxiv.org/abs/1612.08534
Semantic Segmentation
[Semantic Segmentation using Adversarial Networks] (Soumith's paper)
[Paper] https://arxiv.org/abs/1611.08408
Object Detection
[Perceptual Generative Adversarial Networks for Small Object Detection] (submitted)
[A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection] (CVPR 2017)
[Paper] http://abhinavsh.info/papers/pdfs/adversarial_object_detection.pdf
RNN
[C-RNN-GAN: Continuous recurrent neural networks with adversarial training]
[Paper] https://arxiv.org/abs/1611.09904
[Code] https://github.com/olofmogren/c-rnn-gan
Conditional Adversarial Nets
[Conditional Generative Adversarial Nets]
[Paper] https://arxiv.org/abs/1411.1784
[Code] https://github.com/zhangqianhui/Conditional-Gans
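The conditional GAN idea is to feed side information y to both networks, for example by concatenating a label encoding with the generator's noise and with the discriminator's input. A minimal sketch, assuming one-hot labels and toy MLP sizes:

```python
# Sketch of the conditioning idea in conditional GANs (arXiv:1411.1784):
# both G and D receive the class label, here concatenated as a one-hot vector.
# Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, n_classes, data_dim = 16, 10, 784

G = nn.Sequential(nn.Linear(latent_dim + n_classes, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim + n_classes, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

y = torch.randint(0, n_classes, (32,))
y_onehot = torch.nn.functional.one_hot(y, n_classes).float()

z = torch.randn(32, latent_dim)
fake = G(torch.cat([z, y_onehot], dim=1))          # generator sees (z, y)
score = D(torch.cat([fake, y_onehot], dim=1))      # discriminator sees (x, y)
```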
[InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets]
[Paper] https://arxiv.org/abs/1606.03657
[Code] https://github.com/buriburisuri/supervised_infogan
[Image-to-Image Translation with Conditional Adversarial Networks]
[Paper] https://arxiv.org/pdf/1611.07004v1.pdf
[Code] https://github.com/phillipi/pix2pix
[Code] https://github.com/yenchenlin/pix2pix-tensorflow
[Conditional Image Synthesis with Auxiliary Classifier GANs] (Google Brain, ICLR 2017)
[Paper] https://arxiv.org/abs/1610.09585
[Code] https://github.com/buriburisuri/ac-gan
[Pixel-Level Domain Transfer]
[Paper] https://arxiv.org/pdf/1603.07442v2.pdf
[Code] https://github.com/fxia22/pldtgan
[Invertible Conditional GANs for image editing]
[Paper] https://arxiv.org/abs/1611.06355
[Code] https://github.com/Guim3/IcGAN
[Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space]
[Paper] https://arxiv.org/abs/1612.00005v1
[Code] https://github.com/Evolving-AI-Lab/ppgn
[StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks]
[Paper] https://arxiv.org/pdf/1612.03242v1.pdf
[Code] https://github.com/hanzhanggit/StackGAN
[Unsupervised Image-to-Image Translation with Generative Adversarial Networks]
[Paper] https://arxiv.org/pdf/1701.02676.pdf
[Learning to Discover Cross-Domain Relations with Generative Adversarial Networks]
[Paper] https://arxiv.org/abs/1703.05192
[Code] https://github.com/carpedm20/DiscoGAN-pytorch
Video Prediction
[Deep multi-scale video prediction beyond mean square error] (Yann LeCun's paper)
[Paper] https://arxiv.org/abs/1511.05440
[Code] https://github.com/dyelax/Adversarial_Video_Generation
[Unsupervised Learning for Physical Interaction through Video Prediction] (Ian Goodfellow's paper)
[Paper] https://arxiv.org/abs/1605.07157
[Generating Videos with Scene Dynamics]
[Paper] https://arxiv.org/abs/1609.02612
[Web] http://web.mit.edu/vondrick/tinyvideo/
[Code] https://github.com/cvondrick/videogan
Texture Synthesis and Style Transfer
[Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks] (ECCV 2016)
[Paper] https://arxiv.org/abs/1604.04382
[Code] https://github.com/chuanli11/MGANs
GAN Theory
[Energy-based Generative Adversarial Network] (LeCun's paper)
[Paper] https://arxiv.org/pdf/1609.03126v2.pdf
[Code] https://github.com/buriburisuri/ebgan
[Improved Techniques for Training GANs] (Goodfellow's paper)
[Paper] https://arxiv.org/abs/1606.03498
[Code] https://github.com/openai/improved-gan
[Mode Regularized Generative Adversarial Networks] (Yoshua Bengio, ICLR 2017)
[Paper] https://openreview.net/pdf?id=HJKkY35le
[Improving Generative Adversarial Networks with Denoising Feature Matching] (Yoshua Bengio, ICLR 2017)
[Paper] https://openreview.net/pdf?id=S1X7nhsxl
[Code] https://github.com/hvy/chainer-gan-denoising-feature-matching
[Sampling Generative Networks]
[Paper] https://arxiv.org/abs/1609.04468
[Code] https://github.com/dribnet/plat
[Mode Regularized Generative Adversarial Networks] (Yoshua Bengio's paper)
[Paper] https://arxiv.org/abs/1612.02136
[How to Train GANs]
[Docs] https://github.com/soumith/ganhacks#authors
[Towards Principled Methods for Training Generative Adversarial Networks] (ICLR 2017)
[Paper] http://openreview.net/forum?id=Hk4_qw5xe
[Unrolled Generative Adversarial Networks]
[Paper] https://arxiv.org/abs/1611.02163
[Code] https://github.com/poolio/unrolled_gan
[Least Squares Generative Adversarial Networks]
[Paper] https://arxiv.org/abs/1611.04076
[Code] https://github.com/pfnet-research/chainer-LSGAN
[Wasserstein GAN]
[Paper] https://arxiv.org/abs/1701.07875
[Code] https://github.com/martinarjovsky/WassersteinGAN
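WGAN replaces the log-loss discriminator with a critic trained to estimate a Wasserstein distance: maximize E[f(x_real)] - E[f(x_fake)], clip the critic's weights to keep it approximately Lipschitz, and take several critic steps per generator step. A rough sketch follows; the toy networks and data are assumptions, while RMSprop, lr 5e-5, clipping at 0.01, and n_critic = 5 loosely follow the paper's defaults.

```python
# Sketch of the WGAN (arXiv:1701.07875) training loop: critic without sigmoid,
# weight clipping for the Lipschitz constraint, several critic steps per
# generator step. Toy data and network sizes are assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim, clip = 16, 2, 0.01

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
f = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))  # critic, no sigmoid

opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_f = torch.optim.RMSprop(f.parameters(), lr=5e-5)

for step in range(1000):
    for _ in range(5):                                  # n_critic = 5
        real = torch.randn(64, data_dim) * 0.5 + 2.0    # stand-in "real" data
        fake = G(torch.randn(64, latent_dim)).detach()
        critic_loss = f(fake).mean() - f(real).mean()   # minimize the negated Wasserstein estimate
        opt_f.zero_grad(); critic_loss.backward(); opt_f.step()
        for p in f.parameters():                        # weight clipping
            p.data.clamp_(-clip, clip)
    gen_loss = -f(G(torch.randn(64, latent_dim))).mean()
    opt_g.zero_grad(); gen_loss.backward(); opt_g.step()
```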
[Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities] (closely related to WGAN)
[Paper] https://arxiv.org/abs/1701.06264
[Code] https://github.com/guojunq/lsgan
[Towards Principled Methods for Training Generative Adversarial Networks]
[Paper] https://arxiv.org/abs/1701.04862
3D
[Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling] (NIPS 2016)
[Paper] https://arxiv.org/abs/1610.07584
[Web] http://3dgan.csail.mit.edu/
[Code] https://github.com/zck119/3dgan-release
Face Generation and Editing
[Autoencoding beyond pixels using a learned similarity metric]
[Paper] https://arxiv.org/abs/1512.09300
[Code] https://github.com/andersbll/autoencoding_beyond_pixels
[Coupled Generative Adversarial Networks] (NIPS)
[Paper] http://mingyuliu.net/
[Caffe Code] https://github.com/mingyuliutw/CoGAN
[Tensorflow Code] https://github.com/andrewliao11/CoGAN-tensorflow
[Invertible Conditional GANs for image editing]
[Paper] https://drive.google.com/file/d/0B48XS5sLi1OlRkRIbkZWUmdoQmM/view
[Code] https://github.com/Guim3/IcGAN
[Learning Residual Images for Face Attribute Manipulation]
[Paper] https://arxiv.org/abs/1612.05363
[Neural Photo Editing with Introspective Adversarial Networks] (ICLR 2017)
[Paper] https://arxiv.org/abs/1609.07093
[Code] https://github.com/ajbrock/Neural-Photo-Editor
對于離散分布
[最大似然增強(qiáng)離散生成對抗網(wǎng)絡(luò)]
[紙張] https://arxiv.org/abs/1702.07983v1
[邊界尋求生成對抗網(wǎng)絡(luò)]
[紙張] https://arxiv.org/abs/1702.08431
[GANS-GANSB]的分離元素序列與Gumbel-softmax分布
[紙張] https://arxiv.org/abs/1611.04051
Projects
[cleverhans] (a library for benchmarking vulnerability to adversarial examples)
[Code] https://github.com/openai/cleverhans
[resnet-cppn-gan-tensorflow] (uses residual generative adversarial network and variational auto-encoder techniques to produce high-resolution images)
[Code] https://github.com/hardmaru/resnet-cppn-gan-tensorflow
[HyperGAN] (open-source GAN focused on scale and usability)
[Code] https://github.com/255bits/HyperGAN