Understanding Human Decision Making in an Interactive Landslide Simulator Tool via Reinforcement Learning
Prior research has used an Interactive Landslide Simulator (ILS) tool to investigate human decision making against landslide risks. It has been found that repeated feedback in the ILS tool about damages due to landslides improves human decisions against landslide risks.
Main Authors: Pratik Chaturvedi, Varun Dutt
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-02-01
Series: Frontiers in Psychology
Subjects: decision-making; damage-feedback; interactive landslide simulator; reinforcement learning; expectancy-valence model; prospect-valence-learning model
Online Access: https://www.frontiersin.org/articles/10.3389/fpsyg.2020.499422/full
id: doaj-c62eeb2c58ca4281a797ff705a51c75c
record_format: Article
spelling: doaj-c62eeb2c58ca4281a797ff705a51c75c (indexed 2021-02-10T09:33:21Z). Frontiers Media S.A., Frontiers in Psychology, volume 11, ISSN 1664-1078, published 2021-02-01, DOI 10.3389/fpsyg.2020.499422, article 499422. Understanding Human Decision Making in an Interactive Landslide Simulator Tool via Reinforcement Learning. Pratik Chaturvedi (Applied Cognitive Science Laboratory, Indian Institute of Technology Mandi, Mandi, India; Defence Terrain Research Laboratory, Defence Research and Development Organization, New Delhi, India) and Varun Dutt (Applied Cognitive Science Laboratory, Indian Institute of Technology Mandi, Mandi, India).
collection: DOAJ
language: English
format: Article
sources: DOAJ
author: Pratik Chaturvedi, Varun Dutt
author_facet: Pratik Chaturvedi, Varun Dutt
author_sort: Pratik Chaturvedi
title: Understanding Human Decision Making in an Interactive Landslide Simulator Tool via Reinforcement Learning
publisher: Frontiers Media S.A.
series: Frontiers in Psychology
issn: 1664-1078
publishDate: 2021-02-01
description: Prior research has used an Interactive Landslide Simulator (ILS) tool to investigate human decision making against landslide risks. It has been found that repeated feedback in the ILS tool about damages due to landslides improves human decisions against landslide risks. However, little is known about how theories of learning from feedback (e.g., reinforcement learning) would account for human decisions in the ILS tool. The primary goal of this paper is to account for human decisions in the ILS tool via computational models based upon reinforcement learning and to explore the model mechanisms involved when people make decisions in the ILS tool. Four different reinforcement-learning models were developed and evaluated for their ability to capture human decisions in an experiment involving two conditions in the ILS tool. The parameters of an Expectancy-Valence (EV) model, two Prospect-Valence-Learning models (PVL and PVL-2), a combined EV-PU model, and a random model were calibrated to human decisions in the ILS tool across the two conditions. The different models, with their calibrated parameters, were then generalized to data collected in an experiment involving a new condition in the ILS tool. When generalized to this new condition, the PVL-2 model's parameters from both damage-feedback conditions outperformed all other RL models (including the random model). We highlight the implications of our results for decision making against landslide risks.
topic: decision-making; damage-feedback; interactive landslide simulator; reinforcement learning; expectancy-valence model; prospect-valence-learning model
url: https://www.frontiersin.org/articles/10.3389/fpsyg.2020.499422/full
_version_: 1724275465559998464