Interpretability for Deep Learning Text Classifiers
The ubiquitous presence of automated decision-making systems whose performance is comparable to that of humans has drawn attention to the need for interpretability of the generated predictions. Whether the goal is predicting the system's behavior when the input changes, building user trust, or...
Main Author: Lucaci, Diana
Other Authors: Inkpen, Diana
Format: Others
Language: en
Published: Université d'Ottawa / University of Ottawa, 2020
Online Access: http://hdl.handle.net/10393/41564
http://dx.doi.org/10.20381/ruor-25786
Similar Items
- Text feature extraction based on deep learning: a review
  by: Hong Liang, et al. Published: (2017-12-01)
- A Customizable Text Classifier for Text Mining
  by: Yun-liang Zhang, et al. Published: (2007-12-01)
- Translating Sentimental Statements Using Deep Learning Techniques
  by: Yin-Fu Huang, et al. Published: (2021-01-01)
- Detection of Loanwords in Angolan Portuguese: A Text Mining Approach
  by: Brazdil, P.B., et al. Published: (2022)
- SicknessMiner: a deep-learning-driven text-mining tool to abridge disease-disease associations
  by: Nícia Rosário-Ferreira, et al. Published: (2021-10-01)