Verifying explainability of a deep learning tissue classifier trained on RNA-seq data
Abstract: For complex machine learning (ML) algorithms to gain widespread acceptance in decision making, we must be able to identify the features driving the predictions. Explainability models allow transparency of ML algorithms; however, their reliability within high-dimensional data is unclear. To t...
Main Authors: Melvyn Yap, Rebecca L. Johnston, Helena Foley, Samual MacDonald, Olga Kondrashova, Khoa A. Tran, Katia Nones, Lambros T. Koufariotis, Cameron Bean, John V. Pearson, Maciej Trzaskowski, Nicola Waddell
Format: Article
Language: English
Published: Nature Publishing Group, 2021-01-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-021-81773-9
Similar Items
- Variance explained by whole genome sequence variants in coding and regulatory genome annotations for six dairy traits
  by: Lambros T. Koufariotis, et al.
  Published: (2018-04-01)
- Verifying cuts as a tool for improving a classifier based on a decision tree
  by: Łukasz Dydo, et al.
  Published: (2016-10-01)
- Explainable artificial intelligence for medical image classifiers
  Published: (2021)
- Low-cost, Low-bias and Low-input RNA-seq with High Experimental Verifiability based on Semiconductor Sequencing
  by: Zhibiao Mai, et al.
  Published: (2017-04-01)
- Interpretation of microbiota-based diagnostics by explaining individual classifier decisions
  by: A. Eck, et al.
  Published: (2017-10-01)