Factors influencing the generalizability on Mathematics Performance Assessment


Bibliographic Details
Main Author: 林敬修
Other Authors: 張麗麗
Format: Others
Language: zh-TW
Published: 2003
Online Access: http://ndltd.ncl.edu.tw/handle/kvs6pv
id ndltd-TW-091NPTTC328006
record_format oai_dc
spelling ndltd-TW-091NPTTC3280062019-05-15T20:31:58Z http://ndltd.ncl.edu.tw/handle/kvs6pv
Factors influencing the generalizability on Mathematics Performance Assessment
影響國小數學科實作評量信度相關因素之類推性理論分析
林敬修
Master's thesis === National Pingtung Teachers College === Master's Program, Department of Educational Psychology and Counseling === Academic year 91

Abstract

The main purpose of this research was to study the impact of raters' mathematics background, rater training, and task structure on the reliability of mathematics performance assessment. A quasi-experimental study was conducted. Twelve raters from eight elementary schools participated. Based on their mathematics background, the twelve raters were divided into two groups of six (those with and those without a mathematics background). The raters in each group were then assigned to one of three types of rater training: analytic scoring rubrics only, analytic scoring rubrics plus anchors, or a thorough training session with in-depth discussion and practice. After the training, each rater scored students' problem-solving performance on four mathematics items. Spearman rank correlation and generalizability theory were used to study the reliability of the data.

The major findings of the study were as follows:

1. In general, the largest share of variation in mathematics performance assessment scores was due to the person-by-task interaction, the second largest to persons, and the third to tasks.
2. Although rater variation accounted for only a small proportion of the total variance components, rater consistency varied across raters with different mathematics backgrounds and different training.
3. Task structure affected the variance components: components were relatively smaller across items with similar task structure than across items with different task structures. In addition, task structure interacted with raters' mathematics background and training in their effect on rater consistency. For raters without a mathematics background who received only scoring rubrics, or scoring rubrics plus anchors, consistency was low on less structured items; for raters with a mathematics background (regardless of the training received), and for raters without a mathematics background who received the in-depth training session, consistency was high regardless of task structure.
4. Rater consistency on the "communication" dimension was the lowest of the three scoring dimensions (understanding, strategies/procedures, communication).

Based on these findings, suggestions regarding rater training, task structure, model selection, and issues for further research were provided.

Key words: performance assessment, mathematics performance assessment, reliability, generalizability theory.

張麗麗 (advisor) 2003 thesis 237 zh-TW
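The abstract's findings rest on a generalizability (G-study) variance decomposition of scores over persons, tasks, and raters. The thesis itself does not publish its computation, so the following is only a minimal sketch of how variance components and a generalizability coefficient are estimated for a fully crossed person × task × rater random-effects design with one score per cell, via the standard expected-mean-square equations; the function names `g_study` and `g_coefficient` are hypothetical, not from the thesis.

```python
import numpy as np

def g_study(X):
    """Estimate variance components for a fully crossed
    person x task x rater design; X has shape (persons, tasks, raters),
    one observation per cell."""
    n_p, n_t, n_r = X.shape
    m = X.mean()
    mp = X.mean(axis=(1, 2)); mt = X.mean(axis=(0, 2)); mr = X.mean(axis=(0, 1))
    mpt = X.mean(axis=2); mpr = X.mean(axis=1); mtr = X.mean(axis=0)

    # Sums of squares for each main effect and interaction.
    ss = {
        'p':  n_t * n_r * ((mp - m) ** 2).sum(),
        't':  n_p * n_r * ((mt - m) ** 2).sum(),
        'r':  n_p * n_t * ((mr - m) ** 2).sum(),
        'pt': n_r * ((mpt - mp[:, None] - mt[None, :] + m) ** 2).sum(),
        'pr': n_t * ((mpr - mp[:, None] - mr[None, :] + m) ** 2).sum(),
        'tr': n_p * ((mtr - mt[:, None] - mr[None, :] + m) ** 2).sum(),
    }
    resid = (X - mpt[:, :, None] - mpr[:, None, :] - mtr[None, :, :]
             + mp[:, None, None] + mt[None, :, None] + mr[None, None, :] - m)
    ss['ptr'] = (resid ** 2).sum()  # ptr interaction confounded with error

    df = {'p': n_p - 1, 't': n_t - 1, 'r': n_r - 1,
          'pt': (n_p - 1) * (n_t - 1), 'pr': (n_p - 1) * (n_r - 1),
          'tr': (n_t - 1) * (n_r - 1),
          'ptr': (n_p - 1) * (n_t - 1) * (n_r - 1)}
    ms = {k: ss[k] / df[k] for k in ss}

    # Solve the expected-mean-square equations for the components.
    comp = {
        'ptr': ms['ptr'],
        'pt': (ms['pt'] - ms['ptr']) / n_r,
        'pr': (ms['pr'] - ms['ptr']) / n_t,
        'tr': (ms['tr'] - ms['ptr']) / n_p,
        'p':  (ms['p'] - ms['pt'] - ms['pr'] + ms['ptr']) / (n_t * n_r),
        't':  (ms['t'] - ms['pt'] - ms['tr'] + ms['ptr']) / (n_p * n_r),
        'r':  (ms['r'] - ms['pr'] - ms['tr'] + ms['ptr']) / (n_p * n_t),
    }
    # Negative estimates are conventionally truncated to zero.
    return {k: max(v, 0.0) for k, v in comp.items()}

def g_coefficient(comp, n_t, n_r):
    """Generalizability coefficient for relative decisions,
    averaging over n_t tasks and n_r raters."""
    rel_err = comp['pt'] / n_t + comp['pr'] / n_r + comp['ptr'] / (n_t * n_r)
    return comp['p'] / (comp['p'] + rel_err)
```

Under this sketch, finding 1 of the abstract would correspond to `comp['pt']` being the largest estimated component, followed by `comp['p']` and `comp['t']`; finding 2 to `comp['r']`, `comp['pr']`, and `comp['tr']` being small relative to the total.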
collection NDLTD
language zh-TW
format Others
sources NDLTD
author2 張麗麗
author_facet 張麗麗
林敬修
author 林敬修
spellingShingle 林敬修
Factors influencing the generalizability on Mathematics Performance Assessment
author_sort 林敬修
title Factors influencing the generalizability on Mathematics Performance Assessment
title_short Factors influencing the generalizability on Mathematics Performance Assessment
title_full Factors influencing the generalizability on Mathematics Performance Assessment
title_fullStr Factors influencing the generalizability on Mathematics Performance Assessment
title_full_unstemmed Factors influencing the generalizability on Mathematics Performance Assessment
title_sort factors influencing the generalizability on mathematics performance assessment
publishDate 2003
url http://ndltd.ncl.edu.tw/handle/kvs6pv
work_keys_str_mv AT línjìngxiū factorsinfluencingthegeneralizabilityonmathematicsperformanceassessment
AT línjìngxiū yǐngxiǎngguóxiǎoshùxuékēshízuòpíngliàngxìndùxiāngguānyīnsùzhīlèituīxìnglǐlùnfēnxī
_version_ 1719099521094385664