Improved SRGAN for Remote Sensing Image Super-Resolution Across Locations and Sensors
Detailed and accurate information on the spatial variation of land cover and land use is a critical component of local ecology and environmental research. For these tasks, high spatial resolution images are required. Considering the trade-off between high spatial and high temporal resolution in remote sensing images, many learning-based models (e.g., convolutional neural networks, sparse coding, Bayesian networks) have been established to improve the spatial resolution of coarse images…
| Published in: | Remote Sensing |
|---|---|
| Main Authors: | Yingfei Xiong, Shanxin Guo, Jinsong Chen, Xinping Deng, Luyi Sun, Xiaorou Zheng, Wenna Xu |
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2020-04-01 |
| Subjects: | super-resolution, SRGAN, model generalization, image downscaling |
| Online Access: | https://www.mdpi.com/2072-4292/12/8/1263 |
| _version_ | 1850411712567574528 |
|---|---|
| author | Yingfei Xiong Shanxin Guo Jinsong Chen Xinping Deng Luyi Sun Xiaorou Zheng Wenna Xu |
| author_facet | Yingfei Xiong Shanxin Guo Jinsong Chen Xinping Deng Luyi Sun Xiaorou Zheng Wenna Xu |
| author_sort | Yingfei Xiong |
| collection | DOAJ |
| container_title | Remote Sensing |
| description | Detailed and accurate information on the spatial variation of land cover and land use is a critical component of local ecology and environmental research. For these tasks, high spatial resolution images are required. Considering the trade-off between high spatial and high temporal resolution in remote sensing images, many learning-based models (e.g., convolutional neural networks, sparse coding, Bayesian networks) have been established to improve the spatial resolution of coarse images in both the computer vision and remote sensing fields. However, the training and testing data for these learning-based methods are usually limited to a certain location and a specific sensor, which limits the ability of the model to generalize across locations and sensors. Recently, generative adversarial nets (GANs), a learning model from the deep learning field, have shown many advantages in capturing high-dimensional nonlinear features over large samples. In this study, we test whether the GAN method, with some modification, can improve generalization across locations and sensors and realize the idea of "train once, apply everywhere and across sensors" for remote sensing images. This work is based on the super-resolution generative adversarial net (SRGAN): we modify the loss function and network structure of the SRGAN and propose the improved SRGAN (ISRGAN), which makes model training more stable and enhances generalization across locations and sensors. In the experiment, the training and testing data were collected from two sensors (Landsat 8 OLI and Chinese GF 1) at different locations (Guangdong and Xinjiang in China). For the cross-location test, the model was trained in Guangdong with Chinese GF 1 (8 m) data and tested with GF 1 data in Xinjiang. For the cross-sensor test, the same model trained in Guangdong with GF 1 was tested on Landsat 8 OLI images in Xinjiang.
The proposed method was compared with the neighbor-embedding (NE) method, the sparse representation method (SCSR), and the SRGAN. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were chosen for the quantitative assessment. The results showed that the ISRGAN (PSNR: 35.816, SSIM: 0.988) is superior to the NE (PSNR: 30.999, SSIM: 0.944) and SCSR (PSNR: 29.423, SSIM: 0.876) methods and to the SRGAN (PSNR: 31.378, SSIM: 0.952) in the cross-location test. A similar result was seen in the cross-sensor test, where the ISRGAN had the best result (PSNR: 38.092, SSIM: 0.988) compared to the NE (PSNR: 35.000, SSIM: 0.982) and SCSR (PSNR: 33.639, SSIM: 0.965) methods and the SRGAN (PSNR: 32.820, SSIM: 0.949). We also tested the accuracy improvement for land cover classification before and after super-resolution by the ISRGAN. The results show that the accuracy of land cover classification after super-resolution improved significantly; in particular, the impervious surface class (roads and buildings with high-resolution texture) improved by 15%. |
| format | Article |
| id | doaj-art-71ffddbe7d2c460ebefcba81e349f2d2 |
| institution | Directory of Open Access Journals |
| issn | 2072-4292 |
| language | English |
| publishDate | 2020-04-01 |
| publisher | MDPI AG |
| record_format | Article |
| spelling | doaj-art-71ffddbe7d2c460ebefcba81e349f2d2; 2025-08-19T22:46:30Z; eng; MDPI AG; Remote Sensing; 2072-4292; 2020-04-01; 12(8):1263; 10.3390/rs12081263; Improved SRGAN for Remote Sensing Image Super-Resolution Across Locations and Sensors; Yingfei Xiong, Shanxin Guo, Jinsong Chen, Xinping Deng, Luyi Sun, Xiaorou Zheng, Wenna Xu (all: Center for Geo-Spatial Information, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China); https://www.mdpi.com/2072-4292/12/8/1263; super-resolution; SRGAN; model generalization; image downscaling |
| spellingShingle | Yingfei Xiong Shanxin Guo Jinsong Chen Xinping Deng Luyi Sun Xiaorou Zheng Wenna Xu Improved SRGAN for Remote Sensing Image Super-Resolution Across Locations and Sensors super-resolution SRGAN model generalization image downscaling |
| title | Improved SRGAN for Remote Sensing Image Super-Resolution Across Locations and Sensors |
| title_full | Improved SRGAN for Remote Sensing Image Super-Resolution Across Locations and Sensors |
| title_fullStr | Improved SRGAN for Remote Sensing Image Super-Resolution Across Locations and Sensors |
| title_full_unstemmed | Improved SRGAN for Remote Sensing Image Super-Resolution Across Locations and Sensors |
| title_short | Improved SRGAN for Remote Sensing Image Super-Resolution Across Locations and Sensors |
| title_sort | improved srgan for remote sensing image super resolution across locations and sensors |
| topic | super-resolution SRGAN model generalization image downscaling |
| url | https://www.mdpi.com/2072-4292/12/8/1263 |
| work_keys_str_mv | AT yingfeixiong improvedsrganforremotesensingimagesuperresolutionacrosslocationsandsensors AT shanxinguo improvedsrganforremotesensingimagesuperresolutionacrosslocationsandsensors AT jinsongchen improvedsrganforremotesensingimagesuperresolutionacrosslocationsandsensors AT xinpingdeng improvedsrganforremotesensingimagesuperresolutionacrosslocationsandsensors AT luyisun improvedsrganforremotesensingimagesuperresolutionacrosslocationsandsensors AT xiaorouzheng improvedsrganforremotesensingimagesuperresolutionacrosslocationsandsensors AT wennaxu improvedsrganforremotesensingimagesuperresolutionacrosslocationsandsensors |
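The abstract scores all methods with PSNR and SSIM. As a minimal, self-contained sketch of how these two metrics are computed (the SSIM here uses global image statistics rather than the usual Gaussian sliding window, so values can differ slightly from windowed implementations; the function names are illustrative, not from the paper):

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    # Peak signal-to-noise ratio between a reference and a test image.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, test, data_range=255.0):
    # Simplified single-window SSIM: the standard formula with C1/C2
    # stabilizers, computed over the whole image at once.
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# Identical images give infinite PSNR and SSIM of exactly 1.
a = np.arange(64, dtype=np.float64).reshape(8, 8)
print(psnr(a, a))                   # inf
print(round(ssim_global(a, a), 3))  # 1.0
```

Higher is better for both: a uniform +1 offset on 8-bit data, for example, yields a PSNR of 20·log10(255) ≈ 48.13 dB.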

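The record says the ISRGAN modifies the SRGAN loss for training stability but does not give the modified form. For background only, the original SRGAN generator objective (a pixel-wise content term plus a small weighted adversarial term) can be sketched as below; the 1e-3 weighting and the −log D(G(LR)) adversarial term follow the original SRGAN formulation, not the ISRGAN variant described in the paper:

```python
import numpy as np

def srgan_generator_loss(sr, hr, disc_prob, adv_weight=1e-3):
    # Original SRGAN-style perceptual objective (background sketch):
    # MSE content loss between super-resolved (sr) and high-res (hr)
    # images, plus a weighted adversarial term -log D(G(LR)), where
    # disc_prob is the discriminator's probability that sr is real.
    content = np.mean((sr - hr) ** 2)
    adversarial = -np.log(np.clip(disc_prob, 1e-8, 1.0))
    return content + adv_weight * float(np.mean(adversarial))

# A perfect reconstruction with a fully-fooled discriminator costs zero.
hr = np.ones((4, 4))
print(srgan_generator_loss(hr, hr, disc_prob=1.0))  # 0.0
```

The clip on `disc_prob` guards against log(0); GAN-stability modifications such as the one the paper proposes typically target exactly this kind of saturation in the adversarial term.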