An Automatic Assessment System for Marking Programming Exercises with Random Output

Bibliographic Details
Main Authors: Yu-Tzu Wu, 吳育姿
Other Authors: Shang-Hsien Hsieh
Format: Others
Language: zh-TW
Published: 2017
Online Access: http://ndltd.ncl.edu.tw/handle/mkf89k
id ndltd-TW-105NTU05015082
record_format oai_dc
spelling ndltd-TW-105NTU05015082 2019-05-15T23:39:38Z http://ndltd.ncl.edu.tw/handle/mkf89k An Automatic Assessment System for Marking Programming Exercises with Random Output 可處理隨機輸出的自動程式作業批改系統 Yu-Tzu Wu 吳育姿 Master’s === National Taiwan University === Graduate Institute of Civil Engineering === 105 === Implementing automatic assessment tools for laboratory or homework exercises in programming courses is a way to provide timely feedback to students as well as to lower the instructors’ marking burden. However, when exercises involve random output, which cannot be judged against a single gold-standard answer, a typical online judging system often requires system-level source-code modification. This research develops an online judging system as a platform designed to accommodate a variety of assessment specifications. It also introduces a simple procedure for customizing judge scripts, the core of the system, which classifies all textual-output problems into 12 types formed by combining three dimensions: two options for whether a problem requires test-data input, three options for classifying a problem with or without random output, and two options for matching special output symbols. Judge scripts designed around these 12 types handled almost half (45.95%) of the exercises that other online judging systems could not. Because the system preserves all submitted source code, it not only replaces the instructors’ grading work but also enables plagiarism detection for each exercise, freeing instructors from tedious routine tasks so that they can offer further assistance to students in need. During an experiment in a programming course, students could repeatedly correct and re-submit their code until it passed the tests, while instructors could inspect the quality of students’ code after class to improve their pedagogy.
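The 12 problem types arise from the product of the three dimensions described above (2 × 3 × 2 = 12). The following sketch illustrates that combinatorial classification; the option names (`no_test_input`, `partially_random`, etc.) are illustrative assumptions, not the labels used in the thesis.

```python
from itertools import product

# Illustrative option names for the three dimensions of the classification;
# the actual labels in the thesis are not given in this record.
INPUT_OPTIONS = ("no_test_input", "with_test_input")            # 2 options
RANDOM_OPTIONS = ("deterministic", "partially_random", "fully_random")  # 3 options
SYMBOL_OPTIONS = ("plain_text", "special_symbols")              # 2 options

# All 2 x 3 x 2 = 12 problem types, each mapped to its own judge script.
PROBLEM_TYPES = list(product(INPUT_OPTIONS, RANDOM_OPTIONS, SYMBOL_OPTIONS))

def select_judge_script(needs_input: str, randomness: str, symbols: str) -> int:
    """Map a problem's classification to the index of one of the 12 types."""
    key = (needs_input, randomness, symbols)
    if key not in PROBLEM_TYPES:
        raise ValueError(f"unknown problem classification: {key}")
    return PROBLEM_TYPES.index(key)
```

In this scheme each of the 12 indices would correspond to one customized judge script, so adding a new assessment specification only requires writing a script for the matching type.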
To investigate the future potential of the online judging system, this research analyzed correlations between students’ performance and data collected from the system. It was found that repeatedly making a large number of compilation errors is one of the characteristic behaviors of weaker students (max. |r| = 0.52; p < 0.01). Therefore, this research proposes that instructors set a compilation-error threshold to identify weaker students at an early stage so that extra tutoring can be arranged. Shang-Hsien Hsieh 謝尚賢 2017 Thesis (學位論文) ; 99 zh-TW
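The proposed early-warning rule can be sketched as follows; the threshold value and the data layout are illustrative assumptions, since the thesis's actual threshold is not given in this record.

```python
# Hypothetical sketch of the compilation-error threshold proposed above:
# flag students whose accumulated compilation-error count exceeds a limit
# so that extra tutoring can be arranged early. Threshold and student ids
# are illustrative, not values from the thesis.

def flag_at_risk(compile_errors_by_student: dict, threshold: int = 10) -> list:
    """Return ids of students whose compilation-error count exceeds threshold."""
    return sorted(
        student
        for student, errors in compile_errors_by_student.items()
        if errors > threshold
    )

# Example: counts collected by the judging system over several exercises.
counts = {"s01": 3, "s02": 17, "s03": 25}
print(flag_at_risk(counts))  # prints ['s02', 's03']
```

An instructor could run such a check after each lab session, using the submission logs the judging system already preserves.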
collection NDLTD
language zh-TW
format Others
sources NDLTD
author2 Shang-Hsien Hsieh
author_facet Shang-Hsien Hsieh
Yu-Tzu Wu
吳育姿
author Yu-Tzu Wu
吳育姿
spellingShingle Yu-Tzu Wu
吳育姿
An Automatic Assessment System for Marking Programming Exercises with Random Output
author_sort Yu-Tzu Wu
title An Automatic Assessment System for Marking Programming Exercises with Random Output
title_short An Automatic Assessment System for Marking Programming Exercises with Random Output
title_full An Automatic Assessment System for Marking Programming Exercises with Random Output
title_fullStr An Automatic Assessment System for Marking Programming Exercises with Random Output
title_full_unstemmed An Automatic Assessment System for Marking Programming Exercises with Random Output
title_sort automatic assessment system for marking programming exercises with random output
publishDate 2017
url http://ndltd.ncl.edu.tw/handle/mkf89k
work_keys_str_mv AT yutzuwu anautomaticassessmentsystemformarkingprogrammingexerciseswithrandomoutput
AT wúyùzī anautomaticassessmentsystemformarkingprogrammingexerciseswithrandomoutput
AT yutzuwu kěchùlǐsuíjīshūchūdezìdòngchéngshìzuòyèpīgǎixìtǒng
AT wúyùzī kěchùlǐsuíjīshūchūdezìdòngchéngshìzuòyèpīgǎixìtǒng
AT yutzuwu automaticassessmentsystemformarkingprogrammingexerciseswithrandomoutput
AT wúyùzī automaticassessmentsystemformarkingprogrammingexerciseswithrandomoutput
_version_ 1719150982744506368