Low-degree Polynomial Mapping of Data for SVM

Master's === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 97 === Non-linear mapping functions have long been used in SVM to transform data into a higher-dimensional space, allowing the classifier to separate data instances that are not linearly separable. The kernel trick avoids explicitly handling the huge number of features of the mapped data points. However, training and testing with nonlinear kernels on large data sets is often time consuming. Following recent advances in training large linear SVMs (i.e., SVMs without nonlinear kernels), this work proposes a method that strikes a balance between training/testing speed and testing accuracy. We apply a fast training method for linear SVM to the explicitly expanded form of the data under low-degree polynomial mappings. The method retains fast training and testing, yet can achieve testing accuracy close to that of highly nonlinear kernels. Empirical experiments show that the proposed method is useful for certain large-scale data sets.

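As a concrete illustration of the approach described in the abstract, the following sketch explicitly expands each instance under a degree-2 polynomial mapping and then trains a linear SVM on the expanded features. This is a minimal sketch, not the thesis code: it uses scikit-learn's PolynomialFeatures and LinearSVC as stand-ins for a fast linear-SVM solver such as LIBLINEAR, and the synthetic data set and parameter values are illustrative assumptions.

    # Minimal sketch (assumed setup, not the thesis implementation):
    # train a linear SVM on an explicit degree-2 polynomial expansion of the data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.svm import LinearSVC

    # Synthetic data stands in for the large-scale sets used in the thesis.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Explicit low-degree mapping: each instance is replaced by all monomials of its
    # features up to degree 2, so a linear model on the mapped data corresponds to a
    # degree-2 polynomial decision function in the original space.
    mapper = PolynomialFeatures(degree=2, include_bias=False)
    X_train_poly = mapper.fit_transform(X_train)
    X_test_poly = mapper.transform(X_test)

    # A linear SVM on the expanded features; no kernel trick is needed because the
    # mapping has been carried out explicitly.
    clf = LinearSVC(C=1.0, max_iter=10000)
    clf.fit(X_train_poly, y_train)
    print("test accuracy:", clf.score(X_test_poly, y_test))

The expanded dimensionality grows quickly with the mapping degree (quadratically in the number of original features for degree 2), which is why the approach is attractive only for low-degree mappings.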

Bibliographic Details
Main Author: Yin-Wen Chang (張瀠文)
Other Authors: Chih-Jen Lin
Format: Others
Language: en_US
Published: 2009
Online Access: http://ndltd.ncl.edu.tw/handle/85416613440796675619
id ndltd-TW-097NTU05392033
record_format oai_dc
spelling ndltd-TW-097NTU05392033 2016-05-04T04:31:31Z http://ndltd.ncl.edu.tw/handle/85416613440796675619 Low-degree Polynomial Mapping of Data for SVM 低階多項式資料映射與支持向量機 Yin-Wen Chang 張瀠文 Master's === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 97 Chih-Jen Lin 林智仁 2009 學位論文 (thesis) 32 en_US
collection NDLTD
language en_US
format Others
sources NDLTD
description Master's === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 97 === Non-linear mapping functions have long been used in SVM to transform data into a higher-dimensional space, allowing the classifier to separate data instances that are not linearly separable. The kernel trick avoids explicitly handling the huge number of features of the mapped data points. However, training and testing with nonlinear kernels on large data sets is often time consuming. Following recent advances in training large linear SVMs (i.e., SVMs without nonlinear kernels), this work proposes a method that strikes a balance between training/testing speed and testing accuracy. We apply a fast training method for linear SVM to the explicitly expanded form of the data under low-degree polynomial mappings. The method retains fast training and testing, yet can achieve testing accuracy close to that of highly nonlinear kernels. Empirical experiments show that the proposed method is useful for certain large-scale data sets.
author2 Chih-Jen Lin
author Yin-Wen Chang
張瀠文
title Low-degree Polynomial Mapping of Data for SVM
publishDate 2009
url http://ndltd.ncl.edu.tw/handle/85416613440796675619