A Deep Learning Knowledge Tracing Model Based on Attention Mechanism


Detailed Description

Bibliographic Details
Journal: Taiyuan Ligong Daxue xuebao (Journal of Taiyuan University of Technology)
Main Authors: Kai ZHOU, Yan QIANG, Jiawen WANG, Mengnan WANG
Format: Article
Language: English
Published: Editorial Office of Journal of Taiyuan University of Technology, 2021-07-01
Subjects:
Online Access: https://tyutjournal.tyut.edu.cn/englishpaper/show-410.html
Other Bibliographic Details
Abstract: In this paper, we propose a knowledge tracing method based on the Transformer structure: we improve the embedded representation of interaction records, design a gate unit suited to the model, and optimize the input processing of the self-attention sublayer to improve the predictive performance of the deep knowledge tracing model. Experimental results on four commonly used public datasets show that, compared with previous methods, the proposed model better reflects learners' mastery of knowledge points and performs better on datasets with large sample sizes.
ISSN:1007-9432
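
The abstract describes a Transformer-style knowledge tracing model in which a self-attention sublayer processes a learner's past interaction records. The sketch below is not the authors' model; it is a minimal, hypothetical illustration of the general idea: each interaction (question id plus correctness) is embedded, and causally masked self-attention lets every interaction attend only to earlier ones, producing a context vector from which a response probability could be read out. All names (`causal_self_attention`, `embed`) and the random readout are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def causal_self_attention(x):
    """Scaled dot-product self-attention over interaction embeddings.
    A causal (upper-triangular) mask keeps each step from seeing the future,
    which is the standard setup for sequence prediction in knowledge tracing."""
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)               # pairwise similarity, scaled
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[mask] = -1e9                          # block attention to future steps
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ x                           # context vectors, shape (T, d)

def embed(question_ids, correct, n_questions):
    """Toy interaction embedding: one-hot question id plus a correctness flag.
    (Real models learn this embedding; the abstract says the paper improves it.)"""
    T = len(question_ids)
    x = np.zeros((T, n_questions + 1))
    x[np.arange(T), question_ids] = 1.0
    x[:, -1] = correct
    return x

# Four interactions: which question was attempted, and whether it was correct.
x = embed([0, 2, 1, 2], correct=[1, 0, 1, 1], n_questions=3)
h = causal_self_attention(x)

# Hypothetical readout: a sigmoid over a random projection stands in for the
# trained prediction head that would estimate the next-response probability.
w = rng.normal(size=h.shape[1])
p = 1.0 / (1.0 + np.exp(-(h @ w)))
```

Because of the causal mask, the first row of `h` equals the first interaction embedding (it can attend only to itself), and each later row mixes in progressively more history.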