Super-Resolution of Sentinel-2 RGB Images with Venus Reference Images Using SRResNet CNNs
School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran, Iran
Academic Editor: Luca Lelli

https://doi.org/10.3390/ECRS2023-16863
Abstract:

Super-resolution (SR) is a well-established technique for enhancing the resolution of low-resolution images. In this paper, we introduce a novel approach to the super-resolution of Sentinel-2 10 m RGB images that uses higher-resolution 5 m RGB images from the VENµS satellite as a reference. The proposed method builds on a modified SRResNet network, integrates a perceptual loss based on the VGG network, and applies a learning rate decay strategy for improved performance. The modified SRResNet is designed to capture the patterns and details present in the VENµS images, enabling it to effectively enhance the resolution of the Sentinel-2 images; the VGG-based perceptual loss helps preserve important visual features and maintain overall image quality; and the learning rate decay strategy helps the network converge to an optimal solution by gradually reducing the learning rate during training. The proposed methodology has the potential to benefit applications such as remote sensing, land cover analysis, and environmental monitoring, where high-resolution imagery is crucial for accurate and detailed analysis. We used the SEN2VENµS dataset, which comprises cloud-free surface reflectance patches obtained from Sentinel-2 imagery, each paired with a reference surface reflectance patch acquired at 5 m resolution by the VENµS microsatellite on the same day. To assess the effectiveness of the proposed approach, we evaluate it with the widely used metrics mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM), which provide quantitative measures of the quality and fidelity of the super-resolved images. Experimental results demonstrate improved super-resolution performance compared to existing methods. The proposed method achieved a PSNR of 35.70 and an SSIM of 0.94 on the training dataset, outperforming bicubic interpolation (PSNR 29.53, SSIM 0.92); on the validation dataset, it achieved a PSNR of 40.38 and an SSIM of 0.98, versus 34.26 and 0.94 for bicubic interpolation; and on the test dataset, it achieved a PSNR of 29.82 and an SSIM of 0.90, versus 26.99 and 0.85. The evaluation based on the MSE, PSNR, and SSIM metrics showcases the enhanced visual quality, increased resolution, and improved similarity to the reference VENµS images.
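To make the described training setup concrete, the following is a minimal PyTorch sketch of an SRResNet-style 2x generator with a VGG19 perceptual loss and step learning rate decay. It is not the authors' code: the number of residual blocks, layer widths, the VGG feature layer, the loss weighting, and the decay schedule are all assumptions.

```python
# Hedged sketch (not the authors' implementation): SRResNet-style 2x generator
# (Sentinel-2 10 m -> 5 m) with a VGG19 perceptual loss and step LR decay.
# Requires torch and torchvision >= 0.13; all hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights


class ResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)


class SRResNet2x(nn.Module):
    """SRResNet-style generator; one pixel-shuffle stage gives the 2x factor."""

    def __init__(self, in_channels: int = 3, channels: int = 64, n_blocks: int = 16):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(in_channels, channels, 9, padding=4), nn.PReLU())
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        self.mid = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels))
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2), nn.PReLU()
        )
        self.tail = nn.Conv2d(channels, in_channels, 9, padding=4)

    def forward(self, x):
        head = self.head(x)
        x = self.mid(self.blocks(head)) + head  # global residual connection
        return self.tail(self.upsample(x))


class VGGPerceptualLoss(nn.Module):
    """MSE between VGG19 feature maps of the SR output and the 5 m VENuS reference.
    Inputs are assumed to be 3-channel images scaled to [0, 1]; ImageNet
    normalization is omitted here for brevity."""

    def __init__(self, feature_layer: int = 35):
        super().__init__()
        vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:feature_layer]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()
        self.mse = nn.MSELoss()

    def forward(self, sr, hr):
        return self.mse(self.vgg(sr), self.vgg(hr))


model = SRResNet2x()
perceptual = VGGPerceptualLoss()
pixel_loss = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Learning-rate decay: halve the rate every 20 epochs (schedule is an assumption).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

# One loss evaluation, with sr = model(lr_patch) and hr the 5 m VENuS patch;
# the 6e-3 perceptual weight follows the common SRGAN convention and is an assumption:
# loss = pixel_loss(sr, hr) + 6e-3 * perceptual(sr, hr)
```

A single PixelShuffle stage is used because the Sentinel-2 to VENµS resolution gap (10 m to 5 m) corresponds to a 2x upscaling factor.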
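The reported MSE, PSNR, and SSIM values can be computed per patch with scikit-image. The sketch below assumes the super-resolved and reference patches are co-registered (H, W, 3) float arrays scaled to [0, 1]; the function name and scaling convention are illustrative assumptions, not part of the original work.

```python
# Hedged sketch of the per-patch evaluation (MSE, PSNR, SSIM) with scikit-image.
import numpy as np
from skimage.metrics import (
    mean_squared_error,
    peak_signal_noise_ratio,
    structural_similarity,
)


def evaluate_patch(sr: np.ndarray, hr: np.ndarray) -> dict:
    """sr: super-resolved Sentinel-2 patch, hr: 5 m VENuS reference patch.
    Both are (H, W, 3) float arrays scaled to [0, 1]."""
    return {
        "mse": mean_squared_error(hr, sr),
        "psnr": peak_signal_noise_ratio(hr, sr, data_range=1.0),
        "ssim": structural_similarity(hr, sr, data_range=1.0, channel_axis=-1),
    }
```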

Keywords: Super-resolution, Remote sensing, Sentinel-2, Deep learning, SRResNet, Perceptual loss, VGG network, Learning rate decay