Study on appropriateness of interrater chance-corrected agreement coefficients

Master's thesis === National Taiwan University === Institute of Mathematics === 100 === In behavioural research, one often needs to quantify the agreement between responses given by two (or more) raters or between two (or more) measurement devices. Because a given object can receive different ratings from different raters, interrater reliability becomes an important issue; in particular, investigators want to know whether all raters classify objects in a consistent manner. Cohen (1960) proposed the kappa coefficient, κ, to correct for chance agreement between two raters, and κ is widely used in the literature to quantify agreement among raters on a nominal scale. However, Cohen's kappa has been criticized because it depends on the prevalence or base rate in the particular population under study, which is irrelevant to the raters' ability to classify the latent classes. Gwet (2008) proposed an alternative interrater reliability coefficient, the AC1 statistic γ1. De Mast (2007) suggested an appropriate chance-corrected interrater agreement coefficient, κ*, obtained by correcting for the agreement due to chance. In this thesis, we use asymptotic analysis to evaluate whether κ or γ1 is a consistent estimator of κ* when both raters follow a random-rating model or Gwet's (2008) model, and we compare the performance of κ and γ1 with κ*.
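For two raters classifying objects into q nominal categories, the coefficients discussed in the abstract differ only in how they estimate the chance-agreement term. A minimal sketch of Cohen's κ and Gwet's AC1 (γ1) for a single contingency table; the table values are invented for illustration, and De Mast's κ* is not shown since it depends on a latent-class model of rating ability:

```python
import numpy as np

# Hypothetical 2-rater contingency table for q = 3 nominal categories:
# entry [i, j] = number of objects rater A put in category i and rater B in j.
table = np.array([
    [20,  5,  2],
    [ 3, 15,  4],
    [ 1,  2,  8],
], dtype=float)

n = table.sum()
p = table / n                 # joint proportions
row = p.sum(axis=1)           # rater A's marginal distribution
col = p.sum(axis=0)           # rater B's marginal distribution
p_o = np.trace(p)             # observed proportion of agreement (diagonal)

# Cohen (1960): chance agreement is the product of the two marginals.
p_e_kappa = (row * col).sum()
kappa = (p_o - p_e_kappa) / (1 - p_e_kappa)

# Gwet (2008) AC1: chance agreement built from the averaged marginals pi_k.
q = table.shape[0]
pi = (row + col) / 2
p_e_ac1 = (pi * (1 - pi)).sum() / (q - 1)
ac1 = (p_o - p_e_ac1) / (1 - p_e_ac1)

print(round(kappa, 4), round(ac1, 4))
```

On this table the two corrections give different values from the same observed agreement p_o, which is the kind of discrepancy the thesis studies asymptotically.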


Bibliographic Details
Main Authors: Yan-Ling Kuo, 郭晏伶
Other Authors: Hung Chen
Format: Others
Language: zh-TW
Published: 2012
Online Access: http://ndltd.ncl.edu.tw/handle/86180881787759289329
id ndltd-TW-100NTU05479009
record_format oai_dc
spelling ndltd-TW-100NTU054790092015-10-13T21:45:45Z http://ndltd.ncl.edu.tw/handle/86180881787759289329 Study on appropriateness of interrater chance-corrected agreement coefficients 一致性量測中隨機期望修正量之合理性 Yan-Ling Kuo 郭晏伶 Master's thesis === National Taiwan University === Institute of Mathematics === 100. Advisor: Hung Chen 陳宏. 2012. Thesis, 55 pages. zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's thesis === National Taiwan University === Institute of Mathematics === 100 === In behavioural research, one often needs to quantify the agreement between responses given by two (or more) raters or between two (or more) measurement devices. Because a given object can receive different ratings from different raters, interrater reliability becomes an important issue; in particular, investigators want to know whether all raters classify objects in a consistent manner. Cohen (1960) proposed the kappa coefficient, κ, to correct for chance agreement between two raters, and κ is widely used in the literature to quantify agreement among raters on a nominal scale. However, Cohen's kappa has been criticized because it depends on the prevalence or base rate in the particular population under study, which is irrelevant to the raters' ability to classify the latent classes. Gwet (2008) proposed an alternative interrater reliability coefficient, the AC1 statistic γ1. De Mast (2007) suggested an appropriate chance-corrected interrater agreement coefficient, κ*, obtained by correcting for the agreement due to chance. In this thesis, we use asymptotic analysis to evaluate whether κ or γ1 is a consistent estimator of κ* when both raters follow a random-rating model or Gwet's (2008) model, and we compare the performance of κ and γ1 with κ*.
author2 Hung Chen
author_facet Hung Chen
Yan-Ling Kuo
郭晏伶
author Yan-Ling Kuo
郭晏伶
spellingShingle Yan-Ling Kuo
郭晏伶
Study on appropriateness of interrater chance-corrected agreement coefficients
author_sort Yan-Ling Kuo
title Study on appropriateness of interrater chance-corrected agreement coefficients
title_short Study on appropriateness of interrater chance-corrected agreement coefficients
title_full Study on appropriateness of interrater chance-corrected agreement coefficients
title_fullStr Study on appropriateness of interrater chance-corrected agreement coefficients
title_full_unstemmed Study on appropriateness of interrater chance-corrected agreement coefficients
title_sort study on appropriateness of interrater chance-corrected agreement coefficients
publishDate 2012
url http://ndltd.ncl.edu.tw/handle/86180881787759289329
work_keys_str_mv AT yanlingkuo studyonappropriatenessofinterraterchancecorrectedagreementcoefficients
AT guōyànlíng studyonappropriatenessofinterraterchancecorrectedagreementcoefficients
AT yanlingkuo yīzhìxìngliàngcèzhōngsuíjīqīwàngxiūzhèngliàngzhīhélǐxìng
AT guōyànlíng yīzhìxìngliàngcèzhōngsuíjīqīwàngxiūzhèngliàngzhīhélǐxìng
_version_ 1718068310449324032