Blind Super-Resolution Based on Interframe Information Compensation for Satellite Video
| Published in: | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
|---|---|
| Main Authors: | , , , , |
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11130431/ |
| Summary: | Super-resolution (SR) of satellite video has long been a critical research direction in remote sensing video processing and analysis, and blind SR has attracted increasing attention for satellite video with unknown degradation. However, existing blind SR methods focus mainly on accurate blur-kernel estimation while ignoring the importance of interframe information compensation in the time domain. This article therefore focuses on precise temporal information compensation and proposes a blind SR network based on interframe information compensation. First, we propose a multiscale parallel convolution block to ease the alignment of satellite video frames, which is complicated by moving objects of different scales. Second, we propose a hybrid attention-based feature extraction module that effectively extracts both local and global information across video frames; while activating more pixels, it allocates more attention to informative pixels to obtain clean features. Finally, a pyramid space activation module gradually adjusts the clean features through a multilayer iterative pyramid structure, enabling them to better perceive blur and achieve pixel-level fine compensation for frames with unknown degradation. Extensive experiments on real satellite video datasets demonstrate that our method outperforms state-of-the-art non-blind and blind SR methods, both qualitatively and quantitatively. |
| ISSN: | 1939-1404, 2151-1535 |
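The summary's "multiscale parallel convolution block" processes frames with several receptive-field sizes in parallel before fusing the branches. A minimal NumPy sketch of that idea is below; it is not the authors' implementation, and the fixed averaging kernels, the kernel sizes `(3, 5, 7)`, and the sum-based fusion are illustrative assumptions (in the actual network the branch kernels would be learned).

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2D cross-correlation for a single-channel image."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def multiscale_parallel_block(x, sizes=(3, 5, 7)):
    """Toy stand-in for a multiscale parallel convolution block:
    run one convolution per receptive-field size in parallel branches,
    then fuse the branches by averaging."""
    branches = [conv2d_same(x, np.ones((s, s)) / (s * s)) for s in sizes]
    return sum(branches) / len(sizes)
```

Each branch sees a different spatial context, so small and large moving objects each have at least one branch whose receptive field roughly matches their scale, which is the motivation the abstract gives for using this block before frame alignment.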
