A Comparison of IRT Observed Score Kernel Equating and Several Equating Methods

Item response theory (IRT) observed score kernel equating was evaluated and compared with equipercentile equating, IRT observed score equating, and kernel equating by varying sample size and test length. Because IRT-based data simulation might unduly favor the IRT equating methods, pseudo tests and pseudo groups were also constructed so that equating results could be compared with those from the IRT data simulation. Identity equating and the large-sample single-group rule both served as criterion (true) equating, on which the local and global evaluation indices were based. Results show that under the random equivalent groups design, IRT observed score kernel equating is more accurate and stable than the other methods. Under the non-equivalent groups with anchor test design, IRT observed score equating shows the lowest systematic and random errors among the methods. These errors decrease when a shorter test and a larger sample are used, although the effect of sample size is negligible. No clear preference for either data simulation method is found, although the choice still affects equating results. A preference between the two criterion (true) equatings is observed under the random equivalent groups design. Finally, recommendations and further improvements are discussed.
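As a rough illustration of the kernel equating step compared in this study, the sketch below implements Gaussian kernel continuization and the equating function e_Y(x) = F_hY^-1(F_hX(x)) for an equivalent-groups design, following the general form described by von Davier, Holland, and Thayer (2004). The score distributions, fixed bandwidths, and function names are illustrative assumptions for this sketch (the method normally selects bandwidths by minimizing a penalty function) and are not taken from the article's own code.

import numpy as np
from scipy.stats import norm

def continuize(scores, probs, h):
    # Gaussian kernel continuization of a discrete score distribution:
    # F_h(x) = sum_j p_j * Phi((x - a*x_j - (1 - a)*mu) / (a*h)),
    # where a = sqrt(var / (var + h^2)) preserves the mean and variance.
    mu = np.sum(probs * scores)
    var = np.sum(probs * (scores - mu) ** 2)
    a = np.sqrt(var / (var + h ** 2))
    def F(x):
        z = (np.asarray(x, float)[..., None] - a * scores - (1 - a) * mu) / (a * h)
        return np.sum(probs * norm.cdf(z), axis=-1)
    return F

def kernel_equate(x_scores, r, y_scores, s, h_x=0.6, h_y=0.6):
    # Equate form X raw scores to the form Y scale: e_Y(x) = F_hY^-1(F_hX(x)).
    x_scores = np.asarray(x_scores, float)
    y_scores = np.asarray(y_scores, float)
    F_x = continuize(x_scores, np.asarray(r, float), h_x)
    F_y = continuize(y_scores, np.asarray(s, float), h_y)
    grid = np.linspace(y_scores.min() - 3, y_scores.max() + 3, 20001)
    return np.interp(F_x(x_scores), F_y(grid), grid)  # numeric inverse of F_hY

# Toy usage: two 6-point forms with slightly different score distributions.
x_pts = np.arange(6)
y_pts = np.arange(6)
r = np.array([0.05, 0.15, 0.30, 0.30, 0.15, 0.05])
s = np.array([0.08, 0.17, 0.30, 0.28, 0.12, 0.05])
print(kernel_equate(x_pts, r, y_pts, s))  # equated Y-scale values for X = 0..5

IRT observed score kernel equating differs only in where the score probabilities come from: instead of sample relative frequencies, the r and s vectors would be model-implied observed score distributions derived from estimated IRT item parameters before the same continuization and equating steps are applied.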


Bibliographic Details
Main Authors: Shaojie Wang, Minqiang Zhang, Sen You
Affiliations: School of Psychology, South China Normal University, Guangzhou, China; The Chinese Society of Education, Beijing, China
Format: Article
Language: English
Published: Frontiers Media S.A., 2020-03-01
Series: Frontiers in Psychology
ISSN: 1664-1078
DOI: 10.3389/fpsyg.2020.00308
Subjects: item response theory observed score kernel equating; classical test theory; item response theory; data simulation; criterion equating
Online Access: https://www.frontiersin.org/article/10.3389/fpsyg.2020.00308/full