DSpix2pix: A New Dual-Style Controlled Reconstruction Network for Remote Sensing Image Super-Resolution


Bibliographic Details
Published in:Applied Sciences
Main Authors: Zhouyi Wang, Changcheng Wang
Format: Article
Language:English
Published: MDPI AG 2025-01-01
Subjects:
Online Access:https://www.mdpi.com/2076-3417/15/3/1179
Description
Summary: Super-resolution reconstruction is a critical task in remote sensing image classification, and generative adversarial networks (GANs) have emerged as a dominant approach in this field. Traditional generative networks often produce low-quality images at resolutions such as 256 × 256, and current research on single-image super-resolution typically focuses on enhancement factors of two to four (2×–4×), which does not meet practical application demands. Building on the StyleGAN framework, this study introduces a dual-style controlled super-resolution reconstruction network, DSpix2pix. A fixed style vector (Style 1), generated by the StyleGAN-v2 mapping network, is applied to each layer of the generator, while an additional style vector (Style 2), extracted from example images, is injected into the decoder via adaptive instance normalization (AdaIN), balancing the styles in the generated images. DSpix2pix generates high-quality, smoother, noise-reduced, and more realistic super-resolution remote sensing images at 512 × 512 and 1024 × 1024 resolutions. On image-quality metrics such as RMSE, PSNR, SSIM, and LPIPS, it outperforms traditional super-resolution networks such as SRGAN and UNIT, with RMSE consistently exceeding 10. The network excels in 2× and 4× super-resolution tasks, demonstrating potential for remote sensing image interpretation, and shows promising results in 8× tasks.
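The style injection described above can be illustrated with a minimal sketch of adaptive instance normalization (AdaIN), the standard operator the abstract appears to reference: the decoder's content features are normalized per channel and then rescaled with the channel-wise statistics of the style features. This is an assumption-laden illustration (the function name and NumPy formulation are not from the paper), not the authors' implementation.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Standard AdaIN sketch (not the paper's code): normalize the content
    feature map per channel, then shift/scale it to match the style
    feature map's channel-wise mean and standard deviation."""
    # content, style: arrays of shape (C, H, W)
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    # Normalized content carries the structure; style statistics carry the look.
    return s_std * (content - c_mean) / c_std + s_mean

rng = np.random.default_rng(0)
content = rng.normal(size=(3, 8, 8))                    # decoder features
style = rng.normal(loc=2.0, scale=0.5, size=(3, 8, 8))  # example-image features
out = adain(content, style)
```

After the transfer, each channel of `out` matches the style features' mean and (approximately) standard deviation, which is how a style vector extracted from example images can steer every decoder layer.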
ISSN:2076-3417