Pat-in-the-Loop: Declarative Knowledge for Controlling Neural Networks

The dazzling success of neural networks over natural language processing systems is creating an urgent need to control their behavior with simpler, more direct declarative rules. In this paper, we propose Pat-in-the-Loop as a model for controlling a specific class of syntax-oriented neural networks by adding declarative rules. In Pat-in-the-Loop, distributed tree encoders make it possible to exploit parse trees in neural networks, heat parse trees visualize the activation of parse trees, and parse subtrees are used as declarative rules in the neural network. Pat-in-the-Loop is thus a model for including the control of a human, whom we generically call Pat, in specific natural language processing (NLP) neural network (NN) systems that exploit syntactic information. A pilot study on question classification showed that declarative rules representing human knowledge, injected by Pat, can be effectively used in these neural networks to ensure correctness, relevance, and cost-effectiveness.

Bibliographic Details
Main Authors: Dario Onorati, Pierfrancesco Tommasino, Leonardo Ranaldi, Francesca Fallucchi, Fabio Massimo Zanzotto
Format: Article
Language: English
Published: MDPI AG 2020-12-01
Series: Future Internet
Subjects:
NLP
machine learning
deep learning
AI
human-in-the-loop
Online Access: https://www.mdpi.com/1999-5903/12/12/218
id doaj-6ace03a84a334f8aa0c25415424f8a6f
record_format Article
spelling doaj-6ace03a84a334f8aa0c25415424f8a6f 2020-12-03T00:01:20Z
Publisher: MDPI AG
Journal: Future Internet, ISSN 1999-5903, Vol. 12, Iss. 12, Article 218, published 2020-12-01
DOI: 10.3390/fi12120218
Title: Pat-in-the-Loop: Declarative Knowledge for Controlling Neural Networks
Authors and affiliations:
Dario Onorati, Department of Enterprise Engineering, University of Rome Tor Vergata, 00133 Roma, Italy
Pierfrancesco Tommasino, Department of Enterprise Engineering, University of Rome Tor Vergata, 00133 Roma, Italy
Leonardo Ranaldi, Department of Innovation and Information Engineering, Guglielmo Marconi University, 00193 Roma, Italy
Francesca Fallucchi, Department of Innovation and Information Engineering, Guglielmo Marconi University, 00193 Roma, Italy
Fabio Massimo Zanzotto, Department of Enterprise Engineering, University of Rome Tor Vergata, 00133 Roma, Italy
Online Access: https://www.mdpi.com/1999-5903/12/12/218
Keywords: NLP, machine learning, deep learning, AI, human-in-the-loop
collection DOAJ
language English
format Article
sources DOAJ
author Dario Onorati
Pierfrancesco Tommasino
Leonardo Ranaldi
Francesca Fallucchi
Fabio Massimo Zanzotto
author_sort Dario Onorati
title Pat-in-the-Loop: Declarative Knowledge for Controlling Neural Networks
title_sort pat-in-the-loop: declarative knowledge for controlling neural networks
publisher MDPI AG
series Future Internet
issn 1999-5903
publishDate 2020-12-01
description The dazzling success of neural networks over natural language processing systems is creating an urgent need to control their behavior with simpler, more direct declarative rules. In this paper, we propose Pat-in-the-Loop as a model for controlling a specific class of syntax-oriented neural networks by adding declarative rules. In Pat-in-the-Loop, distributed tree encoders make it possible to exploit parse trees in neural networks, heat parse trees visualize the activation of parse trees, and parse subtrees are used as declarative rules in the neural network. Pat-in-the-Loop is thus a model for including the control of a human, whom we generically call Pat, in specific natural language processing (NLP) neural network (NN) systems that exploit syntactic information. A pilot study on question classification showed that declarative rules representing human knowledge, injected by Pat, can be effectively used in these neural networks to ensure correctness, relevance, and cost-effectiveness.
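The description names three technical ingredients: a distributed tree encoder that turns a parse tree into a fixed-size vector, a heat parse tree that shows how much each subtree contributes to an activation, and a parse subtree injected by the human (Pat) as a declarative rule. The Python sketch below is only a minimal illustration of those three ideas under simplifying assumptions; it is not the authors' implementation, and the hashing-based embedding, the hand-picked weight vector, the rule weight, and all function names are invented for this example.

# Toy sketch of the three ideas named in the abstract (all assumptions, not the paper's code):
# (1) distributed tree encoder, (2) per-subtree "heat", (3) a human-injected subtree rule.
import hashlib
import math
import random

DIM = 64  # dimensionality of the distributed tree space (arbitrary choice)

def subtree_strings(tree):
    """Enumerate bracketed strings for every subtree of a nested-tuple parse tree.

    A tree is (label, child1, child2, ...); a leaf is a plain string.
    """
    if isinstance(tree, str):
        return [tree]
    children = [subtree_strings(c) for c in tree[1:]]
    here = "(" + tree[0] + " " + " ".join(c[0] for c in children) + ")"
    out = [here]
    for c in children:
        out.extend(c)
    return out

def embed(s, dim=DIM):
    """Deterministically map a subtree string to a pseudo-random unit vector."""
    seed = int(hashlib.md5(s.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def distributed_tree(tree):
    """Sum the embeddings of all subtrees: a toy distributed tree encoder."""
    vec = [0.0] * DIM
    for s in subtree_strings(tree):
        for i, x in enumerate(embed(s)):
            vec[i] += x
    return vec

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical question-classification setup with a single PERSON score.
# The weight vector would normally be learned; here it is just the embedding of a cue subtree.
w_person = embed(subtree_strings(("WHNP", ("WP", "who")))[0])

# Declarative rule injected by Pat: if this subtree is present, boost the PERSON score.
rule_subtree = ("WHNP", ("WP", "who"))
RULE_WEIGHT = 2.0  # strength of the human rule (assumed)

def score_person(tree):
    """Base score plus the bonus contributed by Pat's declarative rule."""
    dt = distributed_tree(tree)
    rule_vec = embed(subtree_strings(rule_subtree)[0])
    return dot(w_person, dt) + RULE_WEIGHT * dot(rule_vec, dt)

question = ("SBARQ",
            ("WHNP", ("WP", "who")),
            ("SQ", ("VBD", "wrote"), ("NP", ("NNP", "Hamlet"))))

# Toy "heat parse tree": contribution of each subtree's embedding to the score.
print("Per-subtree heat (contribution to the PERSON score):")
for s in subtree_strings(question):
    print(f"  {dot(w_person, embed(s)):+.3f}  {s}")
print("PERSON score with Pat's rule:", round(score_person(question), 3))

Running the script prints one heat value per subtree and a final score; the subtree named in Pat's rule dominates the heat list, which is the kind of behavior the abstract attributes to injected declarative rules.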
topic NLP
machine learning
deep learning
AI
human-in-the-loop
url https://www.mdpi.com/1999-5903/12/12/218
work_keys_str_mv AT darioonorati ipatintheloopideclarativeknowledgeforcontrollingneuralnetworks
AT pierfrancescotommasino ipatintheloopideclarativeknowledgeforcontrollingneuralnetworks
AT leonardoranaldi ipatintheloopideclarativeknowledgeforcontrollingneuralnetworks
AT francescafallucchi ipatintheloopideclarativeknowledgeforcontrollingneuralnetworks
AT fabiomassimozanzotto ipatintheloopideclarativeknowledgeforcontrollingneuralnetworks
_version_ 1724401738174169088