Paper Title

Translating multispectral imagery to nighttime imagery via conditional generative adversarial networks

Authors

Huang, Xiao; Xu, Dong; Li, Zhenlong; Wang, Cuizhen

Abstract

Nighttime satellite imagery has been applied in a wide range of fields. However, our limited understanding of how observed light intensity is formed and whether it can be simulated greatly hinders its further application. This study explores the potential of conditional Generative Adversarial Networks (cGAN) in translating multispectral imagery to nighttime imagery. A popular cGAN framework, pix2pix, was adopted and modified to facilitate this translation using gridded training image pairs derived from Landsat 8 and Visible Infrared Imaging Radiometer Suite (VIIRS). The results of this study prove the possibility of multispectral-to-nighttime translation and further indicate that, with the additional social media data, the generated nighttime imagery can be very similar to the ground-truth imagery. This study fills the gap in understanding the composition of satellite observed nighttime light and provides new paradigms to solve the emerging problems in nighttime remote sensing fields, including nighttime series construction, light desaturation, and multi-sensor calibration.
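The translation described above follows the pix2pix formulation, in which the generator is trained against a combined objective: a conditional adversarial term plus an L1 term that keeps the generated nighttime tile close to the ground-truth VIIRS tile. Below is a minimal numpy sketch of that objective under stated assumptions: the function names, the toy 4x4 tiles, and the discriminator probabilities are illustrative, not the paper's implementation, and the L1 weight of 100 is the value commonly used with pix2pix.

```python
import numpy as np

def generator_loss(d_fake_probs, fake_img, real_img, lam=100.0):
    """pix2pix-style generator objective: adversarial term + lambda * L1.

    d_fake_probs: discriminator outputs on (input, generated) pairs, in (0, 1).
    fake_img, real_img: arrays of identical shape (e.g. gridded image tiles).
    lam=100.0 is the L1 weight commonly used with pix2pix (assumption here).
    """
    eps = 1e-8
    # Non-saturating adversarial loss: the generator pushes D toward 1 on fakes.
    adv = -np.mean(np.log(d_fake_probs + eps))
    # L1 term: the translated tile should stay close to the ground-truth tile.
    l1 = np.mean(np.abs(fake_img - real_img))
    return adv + lam * l1

def discriminator_loss(d_real_probs, d_fake_probs):
    """Standard cGAN discriminator loss: real pairs -> 1, fake pairs -> 0."""
    eps = 1e-8
    return (-np.mean(np.log(d_real_probs + eps))
            - np.mean(np.log(1.0 - d_fake_probs + eps)))

# Toy example: a 4x4 "nighttime" tile predicted from a multispectral input.
rng = np.random.default_rng(0)
real = rng.random((4, 4))
fake = real + 0.1          # generator output uniformly 0.1 off ground truth
d_fake = np.full(4, 0.5)   # discriminator undecided on the fake pairs

g_loss = generator_loss(d_fake, fake, real)            # ~0.693 + 100 * 0.1
d_loss = discriminator_loss(np.full(4, 0.9), d_fake)   # ~0.105 + 0.693
```

The large L1 weight is what makes the output track the ground-truth imagery at the pixel level, while the adversarial term supplies the high-frequency realism that plain L1 regression blurs away.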
