Case Study: Students’ Code-Tracing Skills and Calibration of Questions for Computer Adaptive Tests


Full Description

Bibliographic Details
Published in: Applied Sciences
Main Authors: Robert Pinter, Sanja Maravić Čisar, Attila Kovari, Lenke Major, Petar Čisar, Jozsef Katona
Format: Article
Language: English
Published: MDPI AG, 2020-10-01
Online Access: https://www.mdpi.com/2076-3417/10/20/7044
Description
Summary: Computer adaptive testing (CAT) enables the individualization of tests and more accurate determination of a respondent's knowledge level. In CAT, each test participant receives a uniquely tailored set of questions: the number of questions and the difficulty of the next question depend on whether the respondent's previous answer was correct or incorrect. For CAT to work properly, it needs questions with suitably defined difficulty levels. In this work, the authors compare question-difficulty ratings given by experts (teachers) with those given by students. Bachelor students of informatics in their first, second, and third year of studies at Subotica Tech—College of Applied Sciences answered 44 programming questions in a test and estimated the difficulty of each question. Analysis of the correct answers shows that basic programming knowledge, taught in the first year of study, evolves very slowly among senior students. The comparison of difficulty estimates highlights that senior students have a better understanding of basic programming tasks; thus, their difficulty estimates approximate those given by the experts.
ISSN:2076-3417