Explainable Machine Learning for Scientific Insights and Discoveries

Machine learning methods have been remarkably successful for a wide range of application areas in the extraction of essential information from data. An exciting and relatively recent development is the uptake of machine learning in the natural sciences, where the major goal is to obtain novel scientific insights and discoveries from observational or simulated data. A prerequisite for obtaining a scientific outcome is domain knowledge, which is needed to gain explainability, but also to enhance scientific consistency. In this article, we review explainable machine learning in view of applications in the natural sciences and discuss three core elements that we identified as relevant in this context: transparency, interpretability, and explainability. With respect to these core elements, we provide a survey of recent scientific works that incorporate machine learning and the way that explainable machine learning is used in combination with domain knowledge from the application areas.


Bibliographic Details
Main Authors: Ribana Roscher, Bastian Bohn, Marco F. Duarte, Jochen Garcke
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Subjects: Explainable machine learning; informed machine learning; interpretability; scientific consistency; transparency
Online Access: https://ieeexplore.ieee.org/document/9007737/
id doaj-07abd4e1023e4666a93531d3bcbd6d7f
record_format Article
spelling doaj-07abd4e1023e4666a93531d3bcbd6d7f (indexed 2021-03-30T02:07:06Z)
Journal: IEEE Access, ISSN 2169-3536, vol. 8 (2020-01-01), pp. 42200-42216
DOI: 10.1109/ACCESS.2020.2976199 (IEEE document 9007737)
Title: Explainable Machine Learning for Scientific Insights and Discoveries
Authors and affiliations:
Ribana Roscher, Institute of Geodesy and Geoinformation, University of Bonn, Bonn, Germany
Bastian Bohn, Institute for Numerical Simulation, University of Bonn, Bonn, Germany
Marco F. Duarte, Department of Electrical and Computer Engineering, University of Massachusetts Amherst, Amherst, MA, USA
Jochen Garcke (https://orcid.org/0000-0002-8334-3695), Institute for Numerical Simulation, University of Bonn, Bonn, Germany
collection DOAJ
language English
format Article
sources DOAJ
author Ribana Roscher
Bastian Bohn
Marco F. Duarte
Jochen Garcke
title Explainable Machine Learning for Scientific Insights and Discoveries
publisher IEEE
series IEEE Access
issn 2169-3536
publishDate 2020-01-01
description Machine learning methods have been remarkably successful for a wide range of application areas in the extraction of essential information from data. An exciting and relatively recent development is the uptake of machine learning in the natural sciences, where the major goal is to obtain novel scientific insights and discoveries from observational or simulated data. A prerequisite for obtaining a scientific outcome is domain knowledge, which is needed to gain explainability, but also to enhance scientific consistency. In this article, we review explainable machine learning in view of applications in the natural sciences and discuss three core elements that we identified as relevant in this context: transparency, interpretability, and explainability. With respect to these core elements, we provide a survey of recent scientific works that incorporate machine learning and the way that explainable machine learning is used in combination with domain knowledge from the application areas.
topic Explainable machine learning
informed machine learning
interpretability
scientific consistency
transparency
url https://ieeexplore.ieee.org/document/9007737/