Decision Tree Integration Using Dynamic Regions of Competence


Bibliographic Details
Main Authors: Jędrzej Biedrzycki, Robert Burduk
Format: Article
Language: English
Published: MDPI AG 2020-10-01
Series: Entropy
Online Access: https://www.mdpi.com/1099-4300/22/10/1129
Description
Summary: A vital aspect of constructing Multiple Classifier Systems is the integration of the base models. For example, the Random Forest approach uses the majority voting rule to fuse the base classifiers obtained by bagging the training dataset. In this paper we propose an algorithm that partitions the feature space according to the decision rules in each node of the base classification model, a decision tree. After the feature space is divided, the centroid of each resulting subspace is determined. These centroids are then used to compute the weights required in the integration phase, which is based on the weighted majority voting rule. The proposal is compared with other Multiple Classifier System approaches. Experiments on multiple open-source benchmark datasets demonstrate the effectiveness of our method. The results are discussed using micro- and macro-averaged classification performance measures.
ISSN: 1099-4300
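
The integration scheme described in the summary can be illustrated with a rough Python sketch. It is only an approximation built on assumptions not stated in this record: it bags decision trees with scikit-learn, treats each tree's leaves as the regions induced by its decision rules, stores the centroid of the training points falling into each region, and weights each tree's vote by the inverse distance from a test sample to the centroid of the region it lands in. The inverse-distance weighting, the DecisionTreeClassifier parameters, and the predict helper are illustrative choices, not the authors' exact method.

    # Rough sketch of region-of-competence weighted voting over a bagged tree ensemble.
    # Assumption: weights = inverse distance to the centroid of the leaf region (not from the paper).
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X, y = load_iris(return_X_y=True)
    n_trees, n_classes = 10, len(np.unique(y))

    trees, centroids = [], []
    for _ in range(n_trees):
        # Bagging: fit each tree on a bootstrap sample of the training data.
        idx = rng.integers(0, len(X), len(X))
        t = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[idx], y[idx])
        trees.append(t)
        # The leaves of the tree partition the feature space; store the centroid
        # of the bootstrap points that fall into each leaf (region).
        leaves = t.apply(X[idx])
        centroids.append({leaf: X[idx][leaves == leaf].mean(axis=0)
                          for leaf in np.unique(leaves)})

    def predict(x):
        # Weighted majority vote: each tree's vote is weighted by the inverse
        # distance from x to the centroid of the leaf region x falls into.
        votes = np.zeros(n_classes)
        for t, cents in zip(trees, centroids):
            leaf = t.apply(x.reshape(1, -1))[0]
            label = t.predict(x.reshape(1, -1))[0]
            weight = 1.0 / (np.linalg.norm(x - cents[leaf]) + 1e-9)
            votes[label] += weight
        return int(np.argmax(votes))

    preds = np.array([predict(x) for x in X])
    print("training accuracy:", (preds == y).mean())

In this sketch the regions (and hence the weights) change with each test sample's location, which is the sense in which the regions of competence are dynamic; the benchmark evaluation and the micro/macro-averaged measures mentioned in the summary are not reproduced here.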