Optimal Weighting for Exam Composition

A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically, these exams are prepared several days in advance, and generic question scores are used based on a rough approximation of the question difficulty and length. For example, for a recent class taught by the author, there were 30 multiple-choice questions worth 3 points, 15 true/false-with-explanation questions worth 4 points, and 5 analytical exercises worth 10 points. We describe a novel framework in which machine-learning algorithms are used to modify the exam question weights in order to optimize the exam scores, using the overall final score as a proxy for a student’s true ability. We show that significant error reduction can be obtained by our approach over standard weighting schemes: for the final and midterm exam, the mean absolute error of prediction decreases by 90.58% and 97.70%, respectively, for the linear regression approach, resulting in better estimation. We make several new observations regarding the properties of the “good” and “bad” exam questions that can have an impact on the design of improved future evaluation methods.


Bibliographic Details
Main Authors: Sam Ganzfried (Ganzfried Research, Miami Beach, FL 33139, USA); Farzana Yusuf (School of Computing and Information Sciences, Florida International University, Miami, FL 33139, USA)
Format: Article
Language: English
Published: MDPI AG, 2018-03-01
Series: Education Sciences
ISSN: 2227-7102
DOI: 10.3390/educsci8010036
Subjects: intelligent tutoring systems; collaborative learning; student modelling; supervised learning
Online Access: http://www.mdpi.com/2227-7102/8/1/36
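The abstract's core idea — learning per-question weights by regressing students' question scores onto the overall course score, then comparing the resulting mean absolute error against fixed a-priori weights — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the toy data, the helper names, and the use of plain least squares via the normal equations are all assumptions.

```python
# Hypothetical sketch of the re-weighting idea from the abstract: fit weights
# w so that X @ w best predicts the overall course score y (the proxy for
# "true ability"), then compare against generic, fixed weights.

def solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (M[i][n] - sum(M[i][j] * w[j] for j in range(i + 1, n))) / M[i][i]
    return w

def fit_weights(X, y):
    """Ordinary least squares via the normal equations: w = (X^T X)^{-1} X^T y."""
    m, n = len(X), len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(m)) for j in range(n)]
           for i in range(n)]
    Xty = [sum(X[r][i] * y[r] for r in range(m)) for i in range(n)]
    return solve(XtX, Xty)

def mae(X, w, y):
    """Mean absolute error of weighted exam scores against course scores."""
    preds = [sum(wi * xi for wi, xi in zip(w, row)) for row in X]
    return sum(abs(p - t) for p, t in zip(preds, y)) / len(y)

# Invented toy data: 4 students x 3 question types (fraction correct per type).
X = [[0.9, 0.8, 0.5],
     [0.6, 0.9, 0.7],
     [0.4, 0.3, 0.2],
     [0.8, 0.5, 0.9]]
# Overall course scores, the proxy for true ability.
y = [66.0, 55.0, 28.0, 59.0]

standard = [30.0, 40.0, 30.0]  # generic a-priori weights per question type
learned = fit_weights(X, y)

print("standard-weight MAE:", mae(X, standard, y))
print("learned-weight  MAE:", mae(X, learned, y))
```

On this toy data the learned weights drive the prediction error essentially to zero, which mirrors (in an idealized way) the large MAE reductions the abstract reports for the linear regression approach.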