How difficult are exams? A framework for assessing the complexity of introductory programming exams

Student performance on examinations is influenced by the level of difficulty of the questions. It seems reasonable to propose therefore that assessment of the difficulty of exam questions could be used to gauge the level of skills and knowledge expected at the end of a course. This paper reports the results of a study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme. The scheme, devised for exams in introductory programming courses, assesses the complexity of each question using six measures: external domain references, explicitness, linguistic complexity, conceptual complexity, length of code involved in the question and/or answer, and intellectual complexity (Bloom level). We apply the scheme to 20 introductory programming exam papers from five countries, and find substantial variation across the exams for all measures. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. All of the complexity measures correlate with assessment of difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We discuss the implications of these findings for the development of measures to assess learning standards in programming courses.

Bibliographic Details
Main Authors: Sheard, J (Author), Simon (Author), Carbone, A (Contributor), Chinn, D (Author), Clear, Tony (Author), Corney, M (Author), D'Souza, D (Author), Fenwick, J (Author), Harland, J (Author), Laakso, M-J (Author), Teague, D (Author)
Other Authors: Whalley, J (Contributor)
Format: Others
Published: Australian Computer Society (ACS), 2013-02-11T10:31:27Z.
Subjects: CS1
Online Access: Get fulltext
LEADER 02791 am a22003973u 4500
001 5151
042 |a dc 
100 1 0 |a Sheard, J  |e author 
100 1 0 |a Carbone, A  |e contributor 
100 1 0 |a Whalley, J  |e contributor 
700 1 0 |a Simon  |e author 
700 1 0 |a Carbone, A  |e author 
700 1 0 |a Chinn, D  |e author 
700 1 0 |a Clear, Tony  |e author 
700 1 0 |a Corney, M  |e author 
700 1 0 |a D'Souza, D  |e author 
700 1 0 |a Fenwick, J  |e author 
700 1 0 |a Harland, J  |e author 
700 1 0 |a Laakso, M-J  |e author 
700 1 0 |a Teague, D  |e author 
245 0 0 |a How difficult are exams? A framework for assessing the complexity of introductory programming exams 
260 |b Australian Computer Society (ACS),   |c 2013-02-11T10:31:27Z. 
500 |a Australasian Computing Education Research Conference (ACE 2013) held at UniSA, Adelaide, Australia, 2013-01-29 to 2013-02-01, published in: Proceedings of the Fifteenth Australasian Computing Education Research Conference (ACE 2013), vol.136, pp.145 - 154 
500 |a 978-1-921770-21-0 
520 |a Student performance on examinations is influenced by the level of difficulty of the questions. It seems reasonable to propose therefore that assessment of the difficulty of exam questions could be used to gauge the level of skills and knowledge expected at the end of a course. This paper reports the results of a study investigating the difficulty of exam questions using a subjective assessment of difficulty and a purpose-built exam question complexity classification scheme. The scheme, devised for exams in introductory programming courses, assesses the complexity of each question using six measures: external domain references, explicitness, linguistic complexity, conceptual complexity, length of code involved in the question and/or answer, and intellectual complexity (Bloom level). We apply the scheme to 20 introductory programming exam papers from five countries, and find substantial variation across the exams for all measures. Most exams include a mix of questions of low, medium, and high difficulty, although seven of the 20 have no questions of high difficulty. All of the complexity measures correlate with assessment of difficulty, indicating that the difficulty of an exam question relates to each of these more specific measures. We discuss the implications of these findings for the development of measures to assess learning standards in programming courses. 
540 |a OpenAccess 
650 0 4 |a Standards 
650 0 4 |a Quality 
650 0 4 |a Examination papers 
650 0 4 |a CS1 
650 0 4 |a Introductory programming 
650 0 4 |a Assessment 
650 0 4 |a Question complexity 
650 0 4 |a Question difficulty 
655 7 |a Conference Contribution 
856 |z Get fulltext  |u http://hdl.handle.net/10292/5151