A Source Domain Extension Method for Inductive Transfer Learning Based on Flipping Output

Transfer learning aims to achieve high accuracy by applying knowledge from source domains, where data collection is easy, to target domains, where data collection is difficult. It has attracted attention in recent years because of its significant potential to extend machine learning to a wide range of real-world problems. However, because the data prepared as the source domain, which serves as the knowledge source for transfer learning, is selected by the user, inappropriate data are often adopted; in such cases, accuracy may be reduced due to “negative transfer.” In this paper, we therefore propose a novel transfer learning method that uses the flipping output technique to provide multiple labels in the source domain. The accuracy of the proposed method is statistically shown to be significantly better than that of a conventional transfer learning method, with an effect size as high as 0.9, indicating high performance.
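
The abstract describes the approach only at a high level. As an illustrative, hypothetical reading of the idea (extending the source domain by flipping a fraction of source labels and training an ensemble on the extended source data plus the target data), a minimal Python sketch follows. The synthetic data, the 20% flip rate, the ensemble size, and the choice of decision-tree base learners are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a flipping-output style source-domain extension
# (all settings below are assumptions, not the paper's actual algorithm).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical data: a large, easily collected source set and a small target set (binary labels).
X_src = rng.normal(size=(500, 5))
y_src = (X_src[:, 0] > 0.0).astype(int)
X_tgt = rng.normal(loc=0.3, size=(40, 5))
y_tgt = (X_tgt[:, 0] > 0.3).astype(int)


def flip_labels(y, flip_rate, rng):
    """Return a copy of y with a randomly chosen fraction of binary labels flipped."""
    y_flipped = y.copy()
    idx = rng.choice(len(y), size=int(flip_rate * len(y)), replace=False)
    y_flipped[idx] = 1 - y_flipped[idx]
    return y_flipped


# Build an ensemble: each member trains on the target data plus a differently
# flipped copy of the source data, so no single labeling of the source set is fully trusted.
ensemble = []
for _ in range(25):
    y_src_flipped = flip_labels(y_src, flip_rate=0.2, rng=rng)  # 20% flip rate chosen arbitrarily
    X_train = np.vstack([X_src, X_tgt])
    y_train = np.concatenate([y_src_flipped, y_tgt])
    ensemble.append(DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train))


def predict(X):
    """Majority vote over the ensemble members."""
    votes = np.mean([clf.predict(X) for clf in ensemble], axis=0)
    return (votes >= 0.5).astype(int)


print("accuracy on target data:", (predict(X_tgt) == y_tgt).mean())
```

Under these assumptions, each ensemble member sees a different perturbation of the source labels, so a majority vote is less likely to be dominated by unsuitable source data, which is one plausible way to mitigate the “negative transfer” the abstract mentions; consult the paper itself for the actual algorithm and its evaluation.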

Bibliographic Details
Main Authors: Yasutake Koishi, Shuichi Ishida, Tatsuo Tabaru, Hiroyuki Miyamoto
Author Affiliations: Yasutake Koishi, Shuichi Ishida, and Tatsuo Tabaru: Advanced Manufacturing Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Saga 841-0052, Japan; Hiroyuki Miyamoto: Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology (Kyutech), Fukuoka 808-0196, Japan
Format: Article
Language: English
Published: MDPI AG, 2019-05-01
Series: Algorithms, Vol. 12, Issue 5, Article 95
ISSN: 1999-4893
DOI: 10.3390/a12050095
Subjects: transfer learning; ensemble learning; data expansion; flipping output
Online Access: https://www.mdpi.com/1999-4893/12/5/95