CONCLUSIONS
We have shown that CycleGAN can plausibly be used for remote sensing image generation, in particular to add snow cover to snow-free terrain. Although the generated results do not fool the human eye, close inspection of certain regions reveals implanted artifacts; this is a reminder that any such manipulation must be handled with care for its effect on downstream processing. We also introduced quality-assessment methods that can be used to guide when unpaired training such as CycleGAN's should stop. While we studied only same-domain translation (RGB to RGB), we anticipate future cross-domain experiments with CycleGAN or pix2pix; as with the cases discussed above, the potential artifacts these models introduce will require careful analysis.
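The quality-assessment methods mentioned above compare the distribution of generated imagery against real imagery. One standard choice is the Fréchet distance between Gaussian fits of deep feature statistics (the basis of FID); a minimal sketch is below, assuming NumPy/SciPy and pre-extracted feature arrays. The function name and inputs are illustrative, not this paper's exact implementation:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussian fits of two feature sets.

    feats_a, feats_b: arrays of shape (n_samples, feature_dim),
    e.g. activations from a pretrained network on real vs. generated images.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product; discard tiny
    # imaginary components introduced by numerical error.
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Tracking this distance between translated and target-domain images over training epochs gives one quantitative stopping signal for unpaired training, where no per-pixel ground truth is available.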
ACKNOWLEDGMENTS
This work was supported by the Los Alamos National Laboratory Laboratory Directed Research and Development program and the Center for Space and Earth Science. We also thank Descartes Labs for imagery and technical support. Finally, we thank our colleagues for constructive discussions.