Summary: | Due to limited training data, current data-driven algorithms, including deep convolutional networks (DCNs), tend to overfit the training data and cannot be applied directly to new data. Unlike existing methods that try to improve model generalization from limited data, we introduce a learning-based image translation method to generate data that share the characteristics of the target data. Low-resolution panchromatic satellite images are converted into high-resolution color images through interpolation and colorization with the proposed symmetric colorization network (SCN). Experiments on a very-high-resolution (VHR) dataset show that images generated by our SCN exhibit high color fidelity, both quantitatively and qualitatively. Furthermore, we demonstrate that high extraction accuracy is retained when transferring models from aerial to satellite images. For a pre-trained feature pyramid network (FPN), compared to its performance on raw panchromatic images, the interpolated and colorized images improve recall by 305.7% (0.929 vs. 0.229), overall accuracy by 78.2% (0.768 vs. 0.431), F1-score by 132.5% (0.851 vs. 0.366), and Jaccard index by 230.8% (0.741 vs. 0.224).