On the Interpretability of Machine Learning Models and Experimental Feature Selection in Case of Multicollinear Data
In the field of machine learning, considerable research effort is devoted to the interpretability of models and their decisions. Interpretability, however, often trades off against model quality. Random Forests are among the highest-performing machine learning methods, but they operate as a “black box”...
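The abstract's central issue, multicollinearity muddying feature attribution, can be illustrated with a minimal numpy sketch (illustrative only; the data and setup below are hypothetical, not taken from the paper). Two near-duplicate predictors let a least-squares fit split the true weight between them almost arbitrarily, while their sum stays stable, which is the same ambiguity that afflicts importance scores in tree ensembles:

```python
import numpy as np

# Hypothetical data: x2 is a near copy of x1, i.e. multicollinear.
rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)
y = 3.0 * x1 + rng.normal(scale=0.1, size=n)  # only x1 truly matters

corr = np.corrcoef(x1, x2)[0, 1]              # correlation close to 1.0

# Fit y on both predictors: the true weight of 3.0 can be split
# between x1 and x2 almost arbitrarily; only beta[0] + beta[1] is stable.
X = np.column_stack([x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"corr={corr:.3f}, beta={beta.round(2)}, sum={beta.sum():.2f}")
```

The unstable per-feature split is why redundant-variable handling (e.g. the Boruta approach listed under Similar Items) matters before interpreting feature importances.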
| Main Authors: | Franc Drobnič, Andrej Kos, Matevž Pustišek |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2020-05-01 |
| Series: | Electronics |
| Online Access: | https://www.mdpi.com/2079-9292/9/5/761 |
Similar Items

- Feature Selection with the Boruta Package
  by: Miron B. Kursa, et al.
  Published: (2010-10-01)
- An Interpretable Hand-Crafted Feature-Based Model for Atrial Fibrillation Detection
  by: Rahimeh Rouhi, et al.
  Published: (2021-05-01)
- Construction of Complex Features for Computational Predicting ncRNA-Protein Interaction
  by: Qiguo Dai, et al.
  Published: (2019-02-01)
- Examining characteristics of predictive models with imbalanced big data
  by: Tawfiq Hasanin, et al.
  Published: (2019-07-01)
- The Problem of Redundant Variables in Random Forests
  by: Mariusz Kubus
  Published: (2018-12-01)