Summary: | Learning from imbalanced data is a challenging task in machine learning and data mining. As an effective and efficient solution, cost-sensitive learning has been widely adopted to address class imbalance learning (CIL) problems. Weighted extreme learning machine (WELM), which is built upon the extreme learning machine (ELM), is a prominent member of the cost-sensitive learning family and can deal effectively with CIL problems. However, it has two main drawbacks: 1) it has high time complexity on large-scale data, since its solution procedure requires a large matrix multiplication, and 2) it lacks flexibility, since it can only tune the training error for each instance, not for each class label. In this paper, we present an alternative to WELM called label-WELM (LW-ELM). Unlike WELM, LW-ELM copes with CIL problems by tuning the training error of each class label. Specifically, the expected output (i.e., the training class label) that corresponds to the minority class is augmented, thereby providing stronger tolerance to training errors on minority-class instances. We design two weight-allocation strategies, both based on the class-imbalance ratio (CIR). In contrast with WELM, LW-ELM is fast and flexible: fast in that it has low time complexity, and flexible in that it can also tackle imbalanced multi-label learning problems, which WELM cannot. Experimental results on binary-class, multiclass, and multi-label data sets with skewed class distributions show the effectiveness and superiority of the proposed LW-ELM algorithm.
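The label-weighting idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the toy data, the hidden-layer size, the regularization constant `C`, and the specific choice of scaling minority-class targets by the CIR are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced binary problem: 90 majority (-1) vs. 10 minority (+1) samples.
X = np.vstack([rng.normal(0.0, 1.0, (90, 2)),
               rng.normal(5.0, 1.0, (10, 2))])
y = np.hstack([-np.ones(90), np.ones(10)])

# Class-imbalance ratio (CIR): majority count over minority count.
cir = (y == -1).sum() / (y == 1).sum()  # 9.0 for this toy data

# Label weighting: augment the expected output of the minority class
# (here, by scaling its targets by the CIR), instead of weighting each
# instance's training error as WELM does.
T = np.where(y == 1, cir * y, y)

# Plain ELM: random hidden layer + regularized least-squares output weights.
n_hidden, C = 50, 1.0
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden activations
beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)

pred = np.sign(H @ beta)
accuracy = (pred == y).mean()
minority_recall = (pred[y == 1] == 1).mean()
```

Because the weighting touches only the target matrix `T`, the ELM solution step is unchanged, which is consistent with the claimed low time complexity relative to WELM's per-instance error weighting.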