Resampling Methods on Classification Trees
Master's === National Chung Cheng University === Graduate Institute of Mathematical Statistics === 85 === Breiman's (1996) bagging and Freund and Schapire's (1996) boosting are recent resampling approaches to improving the predictive accuracy of classification rules. Both methods combine multi...
Main Author: Wu, Han-Ming (吳漢銘)
Other Authors: Yu-Shan Shih; Wen-Ta Lou; Wen-Hsiang Wei
Format: Others
Language: zh-TW
Published: 1997
Online Access: http://ndltd.ncl.edu.tw/handle/92339876415811667275
id | ndltd-TW-085CCU00477004
record_format | oai_dc
spelling |
ndltd-TW-085CCU004770042015-10-13T12:43:57Z http://ndltd.ncl.edu.tw/handle/92339876415811667275 Resampling Methods on Classification Trees 重抽法則在樹狀分類上之研究 Wu, Han-Ming 吳漢銘 Master's === National Chung Cheng University === Graduate Institute of Mathematical Statistics === 85 === Breiman's (1996) bagging and Freund and Schapire's (1996) boosting are recent resampling approaches to improving the predictive accuracy of classification rules. Both methods combine multiple versions of unstable classifiers, such as classification trees, into a composite classifier. In this paper, we study the applications of both techniques to two tree-structured methods on a collection of datasets. The results show that, on average, both approaches can substantially improve predictive accuracy. But on some datasets containing influential observations, inferior results are obtained. A detection rule for influential points is then proposed on the basis of the boosting algorithm. By removing influential observations from the original learning sample, our results indicate that bagging or boosting improves predictive accuracy. Yu-Shan Shih Wen-Ta Lou Wen-Hsiang Wei 史玉山 樓文達 魏文翔 1997 thesis 45 zh-TW
collection | NDLTD
language | zh-TW
format | Others
sources | NDLTD
description |
Master's === National Chung Cheng University === Graduate Institute of Mathematical Statistics === 85 === Breiman's (1996) bagging and Freund and Schapire's (1996) boosting are recent resampling approaches to improving the predictive accuracy of classification rules. Both methods combine multiple versions of unstable classifiers, such as classification trees, into a composite classifier. In this paper, we study the applications of both techniques to two tree-structured methods on a collection of datasets. The results show that, on average, both approaches can substantially improve predictive accuracy. But on some datasets containing influential observations, inferior results are obtained. A detection rule for influential points is then proposed on the basis of the boosting algorithm. By removing influential observations from the original learning sample, our results indicate that bagging or boosting improves predictive accuracy.
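The abstract's boosting-based detection rule is not spelled out in this record, so the following is a minimal sketch of the general idea only, assuming a hand-rolled discrete AdaBoost over depth-1 trees (stumps): observations that boosting repeatedly fails to classify accumulate large sample weights, which can flag them as potentially influential. The dataset, the three-round setting, and the `adaboost_sample_weights` helper are illustrative assumptions, not the thesis's actual procedure.

```python
# Sketch (not the thesis's exact rule): track AdaBoost sample weights to
# flag potentially influential (e.g. mislabeled) observations.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_sample_weights(X, y, n_rounds=3):
    """Discrete AdaBoost over stumps; returns the final sample weights."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1, random_state=0)
        stump.fit(X, y, sample_weight=w)
        miss = stump.predict(X) != y
        err = np.clip(np.sum(w[miss]), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)        # classifier vote weight
        w = w * np.exp(np.where(miss, alpha, -alpha))  # up-weight mistakes
        w = w / w.sum()
    return w

# Two well-separated clusters plus one mislabeled point inside cluster 0.
X = np.array([[0.0, 0], [0, 1], [1, 0], [1, 1],
              [5, 5], [5, 6], [6, 5], [6, 6],
              [0.5, 0.5]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1])  # last label is deliberately wrong

w = adaboost_sample_weights(X, y)
print("final weights:", np.round(w, 3))
print("suspects (weight above uniform):", np.where(w > 1.0 / len(y))[0])
```

On this toy sample the mislabeled point embedded in cluster 0 keeps getting up-weighted, so its final weight exceeds the uniform 1/n, which is the kind of signal a detection rule could threshold on before re-running bagging or boosting on the cleaned learning sample.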
author2 | Yu-Shan Shih
author_facet | Yu-Shan Shih; Wu, Han-Ming 吳漢銘
author | Wu, Han-Ming 吳漢銘
spellingShingle | Wu, Han-Ming 吳漢銘; Resampling Methods on Classification Trees
author_sort | Wu, Han-Ming
title | Resampling Methods on Classification Trees
title_short | Resampling Methods on Classification Trees
title_full | Resampling Methods on Classification Trees
title_fullStr | Resampling Methods on Classification Trees
title_full_unstemmed | Resampling Methods on Classification Trees
title_sort | resampling methods on classification trees
publishDate | 1997
url | http://ndltd.ncl.edu.tw/handle/92339876415811667275
work_keys_str_mv | AT wuhanming resamplingmethodsonclassificationtrees AT wúhànmíng resamplingmethodsonclassificationtrees AT wuhanming zhòngchōufǎzézàishùzhuàngfēnlèishàngzhīyánjiū AT wúhànmíng zhòngchōufǎzézàishùzhuàngfēnlèishàngzhīyánjiū
_version_ | 1716864636318384128