Identification of Structurally Damaged Areas in Airborne Oblique Images Using a Visual-Bag-of-Words Approach

Automatic post-disaster mapping of building damage using remote sensing images is an important and time-critical element of disaster management. The characteristics of remote sensing images available immediately after the disaster are not certain, since they may vary in terms of capturing platform, sensor view, image scale, and scene complexity. Therefore, a generalized method for damage detection that is impervious to the mentioned image characteristics is desirable. This study aims to develop a method to perform grid-level damage classification of remote sensing images by detecting the damage corresponding to debris, rubble piles, and heavy spalling within a defined grid, regardless of the aforementioned image characteristics. The Visual-Bag-of-Words (BoW) is one of the most widely used and proven frameworks for image classification in the field of computer vision. The framework adopts a feature representation strategy that has been shown to be more efficient for image classification, regardless of scale and clutter, than conventional global feature representations. In this study, supervised models using various radiometric descriptors (histogram of gradient orientations (HoG) and Gabor wavelets) and classifiers (SVM, Random Forests, and AdaBoost) were developed for damage classification based on both BoW and conventional global feature representations, and tested with four datasets that vary according to the aforementioned image characteristics. The BoW framework outperformed conventional global feature representation approaches in all scenarios (i.e., for all combinations of feature descriptors, classifiers, and datasets), and produced an average accuracy of approximately 90%. Particularly encouraging was an accuracy improvement of 14 percentage points (from 77% to 91%) produced by BoW over the global representation for the most complex dataset, which was used to test the generalization capability.
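
To make the pipeline summarized above concrete, the following is a minimal, illustrative sketch of a Visual-Bag-of-Words classifier built from dense HoG patch descriptors, a k-means codebook, and an SVM. It is not the authors' implementation; the patch size, codebook size, classifier parameters, and the choice of scikit-image/scikit-learn are assumptions made for the example.

```python
# Minimal, illustrative BoW pipeline: dense HoG patch descriptors -> k-means
# codebook -> per-image visual-word histogram -> SVM classifier. Patch size,
# codebook size, and classifier settings are assumptions for the sketch, not
# values taken from the paper.
import numpy as np
from skimage.feature import hog
from skimage.util import view_as_windows
from sklearn.cluster import KMeans
from sklearn.svm import SVC


def hog_patch_descriptors(image, patch=32, step=16):
    """Compute one HoG descriptor per local patch of a 2D grayscale image."""
    windows = view_as_windows(image, (patch, patch), step=step)
    return np.asarray([
        hog(windows[i, j], orientations=9,
            pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for i in range(windows.shape[0])
        for j in range(windows.shape[1])
    ])


def bow_histogram(descriptors, codebook):
    """Encode a set of local descriptors as a normalized visual-word histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)


def train_bow_classifier(train_images, train_labels, n_words=100):
    """Learn the codebook from all training descriptors, then fit an SVM."""
    per_image = [hog_patch_descriptors(img) for img in train_images]
    codebook = KMeans(n_clusters=n_words, random_state=0).fit(np.vstack(per_image))
    features = np.array([bow_histogram(d, codebook) for d in per_image])
    classifier = SVC(kernel="rbf", C=10.0).fit(features, train_labels)
    return codebook, classifier


def classify(image, codebook, classifier):
    """Predict a damage label (e.g., 'damaged'/'undamaged') for one image region."""
    return classifier.predict([bow_histogram(hog_patch_descriptors(image), codebook)])[0]
```

The study also evaluated Gabor wavelet descriptors and Random Forest and AdaBoost classifiers; in a sketch like this they could be substituted for the HoG descriptor and the SVM without changing the rest of the pipeline.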

Bibliographic Details
Main Authors: Anand Vetrivel, Markus Gerke, Norman Kerle, George Vosselman
Format: Article
Language: English
Published: MDPI AG, 2016-03-01
Series: Remote Sensing
Subjects: damage detection, feature representation, oblique airborne images, supervised learning, texture, UAV, Visual-Bag-of-Words
Online Access: http://www.mdpi.com/2072-4292/8/3/231
id doaj-11ca3d0e8f044afd83d07df9d9fd64c3
record_format Article
doi 10.3390/rs8030231 (Remote Sensing, vol. 8, no. 3, article 231)
affiliation Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Enschede 7500 AE, The Netherlands (all four authors)
collection DOAJ
language English
format Article
sources DOAJ
author Anand Vetrivel
Markus Gerke
Norman Kerle
George Vosselman
title Identification of Structurally Damaged Areas in Airborne Oblique Images Using a Visual-Bag-of-Words Approach
publisher MDPI AG
series Remote Sensing
issn 2072-4292
publishDate 2016-03-01
description Automatic post-disaster mapping of building damage using remote sensing images is an important and time-critical element of disaster management. The characteristics of remote sensing images available immediately after the disaster are not certain, since they may vary in terms of capturing platform, sensor view, image scale, and scene complexity. Therefore, a generalized method for damage detection that is impervious to the mentioned image characteristics is desirable. This study aims to develop a method to perform grid-level damage classification of remote sensing images by detecting the damage corresponding to debris, rubble piles, and heavy spalling within a defined grid, regardless of the aforementioned image characteristics. The Visual-Bag-of-Words (BoW) is one of the most widely used and proven frameworks for image classification in the field of computer vision. The framework adopts a feature representation strategy that has been shown to be more efficient for image classification, regardless of scale and clutter, than conventional global feature representations. In this study, supervised models using various radiometric descriptors (histogram of gradient orientations (HoG) and Gabor wavelets) and classifiers (SVM, Random Forests, and AdaBoost) were developed for damage classification based on both BoW and conventional global feature representations, and tested with four datasets that vary according to the aforementioned image characteristics. The BoW framework outperformed conventional global feature representation approaches in all scenarios (i.e., for all combinations of feature descriptors, classifiers, and datasets), and produced an average accuracy of approximately 90%. Particularly encouraging was an accuracy improvement of 14 percentage points (from 77% to 91%) produced by BoW over the global representation for the most complex dataset, which was used to test the generalization capability.
topic damage detection
feature representation
oblique airborne images
supervised learning
texture
UAV
Visual-Bag-of-Words
url http://www.mdpi.com/2072-4292/8/3/231
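
The grid-level classification described in the record above can likewise be sketched briefly: the image is partitioned into a regular grid and every cell is labeled independently, yielding a per-cell damage map over the scene. The cell size and the classifier callable are assumptions for illustration; `classify_fn` could be, for instance, the hypothetical BoW `classify` helper sketched earlier.

```python
# Illustrative grid-level damage mapping: divide an image into a regular grid
# and classify each cell independently. The 128-pixel cell size is an assumption.
import numpy as np


def grid_damage_map(image, classify_fn, cell=128):
    """Return a 2D array of predicted labels, one per grid cell of `image`."""
    rows, cols = image.shape[0] // cell, image.shape[1] // cell
    labels = np.empty((rows, cols), dtype=object)
    for r in range(rows):
        for c in range(cols):
            patch = image[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            labels[r, c] = classify_fn(patch)  # e.g., "damaged" / "undamaged"
    return labels
```

Cells labeled as damaged would correspond to debris, rubble piles, or heavy spalling within that cell, mirroring the grid-level damage classes described in the abstract.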