Abstract

Image stitching is a fundamental task in computer vision, enabling critical applications such as panoramic imaging, augmented reality (AR), and autonomous perception systems. However, existing stitching algorithms exhibit significant performance variations under real-world challenges, including illumination changes, noise interference, and geometric distortions, making reliable quality assessment difficult. To address this challenge, we introduce StitchEval, a comprehensive benchmark framework incorporating both objective and subjective assessment metrics: structural similarity (SSIM), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and human-rated subjective scores (SS). Additionally, we integrate a human-perception-based scoring system to enhance the comprehensiveness of the evaluation. By applying illumination transformations, noise interference, and geometric variations to the original dataset, we systematically analyze the robustness of different stitching algorithms. Furthermore, the proposed evaluation framework and dataset construction methodology are designed to be highly flexible, allowing seamless integration with other datasets to facilitate cross-dataset comparisons and broader benchmarking of stitching algorithms. The insights derived from this study provide a valuable reference for optimizing future stitching methods and improving algorithm adaptability in real-world scenarios.
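To make the objective metrics concrete, the sketch below computes MSE, PSNR, and a simplified SSIM between a reference image and a degraded version. This is an illustrative NumPy implementation under our own assumptions, not the paper's benchmark code: `ssim_global` is a single-window (global-statistics) variant of SSIM rather than the standard sliding-window formulation, and the noisy test image stands in for a perturbed stitching result.

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between two images (float arrays in [0, 1])."""
    return float(np.mean((ref - img) ** 2))

def psnr(ref, img, max_val=1.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(ref, img)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val ** 2 / err)

def ssim_global(ref, img, max_val=1.0):
    """Simplified SSIM computed from global statistics (no sliding window)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # standard stabilizers
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

# Toy example: a random reference patch vs. a noise-perturbed copy,
# mimicking the noise-interference condition applied to the dataset.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0.0, 1.0)

print(f"MSE:  {mse(ref, noisy):.4f}")
print(f"PSNR: {psnr(ref, noisy):.2f} dB")
print(f"SSIM: {ssim_global(ref, noisy):.4f}")
```

In a benchmark setting, these scores would be averaged over all image pairs for each perturbation type (illumination, noise, geometric) and combined with the human-rated subjective scores.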
|