Deep Learning-Based Land Cover Classification Using Airborne Lidar and Aerial Imagery

Bibliographic Details
Main Authors: Yang, Chu-Chun, 楊筑鈞
Other Authors: Teo, Tee-Ann
Format: Others
Language: en_US
Published: 2019
Online Access: http://ndltd.ncl.edu.tw/handle/fv26ta
id ndltd-TW-107NCTU5015029
record_format oai_dc
spelling ndltd-TW-107NCTU50150292019-06-27T05:42:50Z http://ndltd.ncl.edu.tw/handle/fv26ta Deep Learning-Based Land Cover Classification Using Airborne Lidar and Aerial Imagery 應用深度學習於空載光達及空載影像之地物分類 Yang, Chu-Chun 楊筑鈞 Master's thesis === National Chiao Tung University === Department of Civil Engineering === 107 === Advisor: Teo, Tee-Ann 張智安 2019 Thesis ; 68 en_US
collection NDLTD
language en_US
format Others
sources NDLTD
description Master's thesis === National Chiao Tung University === Department of Civil Engineering === 107 === Land cover classification has long been a critical task: the distribution of land cover carries information about climate, the environment, and the structure of society. Land cover information can be extracted from spectral features, geometric features, or a data fusion approach; geometric features are mainly derived from airborne lidar, while spectral features are mainly provided by aerial imagery. This research applies supervised deep learning to land cover classification. Deep learning can handle complex scenes by converting the original observations (i.e., lidar points or image pixels) into higher-level abstractions (i.e., features) through a non-linear model. In a traditional classification algorithm, the process consists of data input, feature extraction, and classifier operation, and suitable features must be hand-crafted for a specific task using prior knowledge. In contrast, a deep learning algorithm treats the features as unknown variables and learns feature extraction and classification jointly.

This research analyzes the impact of different input datasets on classification and compares different deep learning algorithms. The contribution is twofold: first, to analyze and compare the influence of raster data versus point cloud data on deep learning-based land cover classification; second, to analyze and compare the influence of spectral versus shape features. Two deep learning networks were adopted: FCN-8s for pixel-based classification and PointNet for point-based classification. FCN-8s takes an interpolated 2D grid as input, whereas PointNet takes a reshaped 2D matrix of point attributes. The inputs for FCN-8s were RGB color information (RGB), a digital surface model (DSM), and an intensity map (I); the inputs for PointNet were XYZ coordinates (XYZ), RGB color information (RGB), and intensity values (I). To compare how spectral and shape features affect the result, four input combinations were designed for each network: RGB, RGBI, RGBDSM, and RGBDSMI for FCN-8s, and XYZ, XYZI, XYZRGB, and XYZRGBI for PointNet. Land cover was classified into three main categories: tree, building, and road.

Two experiments were carried out on the two networks using airborne images and lidar points. The results showed that shape information is more useful than spectral information for land cover classification. Because point-based classification preserves more complete shape information than pixel-based classification, it achieved better results. Moreover, intensity information is particularly helpful for distinguishing roads from the other categories, and DSM information helps correct the relief displacement caused by tall buildings.
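As an illustration of the input combinations described in the abstract, the sketch below shows one possible way to assemble the per-pixel channel stacks for a pixel-based network such as FCN-8s and the per-point feature matrices for a point-based network such as PointNet. The function names, array shapes, random placeholder data, and the use of NumPy are assumptions for demonstration only and are not taken from the thesis.

```python
# Minimal sketch (assumed helper code, not from the thesis): building the
# RGB / RGBI / RGBDSM / RGBDSMI raster stacks and the XYZ / XYZI / XYZRGB /
# XYZRGBI point feature matrices described in the abstract.
import numpy as np


def stack_raster_inputs(rgb, dsm=None, intensity=None):
    """Stack per-pixel channels for a pixel-based network such as FCN-8s.

    rgb:       (H, W, 3) orthophoto colors
    dsm:       (H, W) interpolated surface heights (optional)
    intensity: (H, W) interpolated lidar intensity (optional)
    Returns an (H, W, C) array, e.g. RGB, RGBI, RGBDSM or RGBDSMI.
    """
    channels = [rgb]
    if dsm is not None:
        channels.append(dsm[..., None])        # add a trailing channel axis
    if intensity is not None:
        channels.append(intensity[..., None])
    return np.concatenate(channels, axis=-1)


def stack_point_inputs(xyz, rgb=None, intensity=None):
    """Concatenate per-point attributes for a point-based network such as PointNet.

    xyz:       (N, 3) point coordinates
    rgb:       (N, 3) colors sampled from the aerial image (optional)
    intensity: (N, 1) lidar return intensity (optional)
    Returns an (N, F) feature matrix, e.g. XYZ, XYZI, XYZRGB or XYZRGBI.
    """
    feats = [xyz]
    if rgb is not None:
        feats.append(rgb)
    if intensity is not None:
        feats.append(intensity)
    return np.concatenate(feats, axis=1)


# Placeholder data just to show the shapes of the eight combinations.
H, W, N = 256, 256, 4096
rgb_img, dsm_img, int_img = np.random.rand(H, W, 3), np.random.rand(H, W), np.random.rand(H, W)
xyz, rgb_pts, int_pts = np.random.rand(N, 3), np.random.rand(N, 3), np.random.rand(N, 1)

fcn8s_inputs = {
    "RGB":     stack_raster_inputs(rgb_img),
    "RGBI":    stack_raster_inputs(rgb_img, intensity=int_img),
    "RGBDSM":  stack_raster_inputs(rgb_img, dsm=dsm_img),
    "RGBDSMI": stack_raster_inputs(rgb_img, dsm=dsm_img, intensity=int_img),
}
pointnet_inputs = {
    "XYZ":     stack_point_inputs(xyz),
    "XYZI":    stack_point_inputs(xyz, intensity=int_pts),
    "XYZRGB":  stack_point_inputs(xyz, rgb=rgb_pts),
    "XYZRGBI": stack_point_inputs(xyz, rgb=rgb_pts, intensity=int_pts),
}
```

In the thesis, each of these combinations is fed to the corresponding network, which then predicts one of the three categories (tree, building, road) per pixel or per point.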
author2 Teo, Tee-Ann
author Yang, Chu-Chun
楊筑鈞
title Deep Learning-Based Land Cover Classification Using Airborne Lidar and Aerial Imagery
publishDate 2019
url http://ndltd.ncl.edu.tw/handle/fv26ta