A Computational Model of Context-Dependent Encodings During Category Learning

Bibliographic Details
Main Authors: Carvalho, P.F. (Author), Goldstone, R.L. (Author)
Format: Article
Language: English
Published: NLM (Medline) 2022
Subjects:
Online Access: View Fulltext in Publisher
LEADER 02341nam a2200325Ia 4500
001 10.1111-cogs.13128
008 220425s2022 CNT 000 0 eng d
022 |a 1551-6709 (ISSN) 
245 1 0 |a A Computational Model of Context-Dependent Encodings During Category Learning 
260 0 |b NLM (Medline)  |c 2022 
856 |z View Fulltext in Publisher  |u https://doi.org/10.1111/cogs.13128 
520 3 |a Although current exemplar models of category learning are flexible and can capture how different features are emphasized for different categories, they still lack the flexibility to adapt to local changes in category learning, such as the effect of different sequences of study. In this paper, we introduce a new model of category learning, the Sequential Attention Theory Model (SAT-M), in which the encoding of each presented item is influenced not only by its category assignment (global context) as in other exemplar models, but also by how its properties relate to the properties of temporally neighboring items (local context). By fitting SAT-M to data from experiments comparing category learning with different sequences of trials (interleaved vs. blocked), we demonstrate that SAT-M captures the effect of local context and predicts when interleaved or blocked training will result in better testing performance across three different studies. Comparatively, ALCOVE, SUSTAIN, and a version of SAT-M without locally adaptive encoding provided poor fits to the results. Moreover, we evaluated the direct prediction of the model that different sequences of training change what learners encode and determined that the best-fit encoding parameter values match learners' looking times during training. © 2022 The Authors. Cognitive Science published by Wiley Periodicals LLC on behalf of Cognitive Science Society (CSS). 
650 0 4 |a attention 
650 0 4 |a Attention 
650 0 4 |a Category learning models 
650 0 4 |a computer simulation 
650 0 4 |a Computer Simulation 
650 0 4 |a concept formation 
650 0 4 |a Concept Formation 
650 0 4 |a Encoding 
650 0 4 |a human 
650 0 4 |a Humans 
650 0 4 |a Interleaving 
650 0 4 |a learning 
650 0 4 |a Learning 
650 0 4 |a Sequencing 
700 1 |a Carvalho, P.F.  |e author 
700 1 |a Goldstone, R.L.  |e author 
773 |t Cognitive Science