VALUE Research Hub

The Critical Thinking Analytic Rubric (CTAR): Investigating intra-rater and inter-rater reliability of a scoring mechanism for critical thinking performance assessments

Citation

Saxton, E., Belanger, S., & Becker, W. (2012). The Critical Thinking Analytic Rubric (CTAR): Investigating intra-rater and inter-rater reliability of a scoring mechanism for critical thinking performance assessments. Assessing Writing, 17(4), 251–270. https://doi.org/10.1016/j.asw.2012.07.002

Abstract

The purpose of this study was to investigate the intra-rater and inter-rater reliability of the Critical Thinking Analytic Rubric (CTAR). The CTAR is composed of six rubric categories: interpretation, analysis, evaluation, inference, explanation, and disposition. To investigate inter-rater reliability, two trained raters scored four sets of performance-based student work samples derived from a pilot study and a subsequent larger study. The two raters also blindly scored a subset of student work samples a second time to investigate intra-rater reliability. Participants in this study were high school seniors enrolled in a college preparation course. Both raters showed acceptable levels of intra-rater reliability (α ≥ 0.70) in five of the six rubric categories. One rater showed poor consistency (α = 0.56) for the analysis category of the rubric, while the other rater showed excellent consistency (α = 0.91) for the same category, suggesting the need for further training of the former rater. The results of the inter-rater reliability investigation demonstrate acceptable levels of consistency (α ≥ 0.70) in all rubric categories. This investigation demonstrated that the CTAR can be used by raters to score student work samples in a consistent manner.

Themes: Analytic rubric, Critical Thinking, Higher-order cognitive skills, Inter-rater reliability, Intra-rater reliability, Performance-based assessment
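
The abstract reports rater consistency as an α coefficient judged against a 0.70 threshold. As a rough illustration only (the abstract does not specify which α statistic was used; Cronbach's alpha is assumed here, and the rater scores below are invented, not the study's data), a minimal sketch of how such an inter-rater consistency coefficient could be computed for two raters on one rubric category:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_samples, n_raters) score matrix,
    treating each rater as an 'item'."""
    scores = np.asarray(scores, dtype=float)
    n_raters = scores.shape[1]
    rater_variances = scores.var(axis=0, ddof=1)      # variance of each rater's scores
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of summed scores per sample
    return (n_raters / (n_raters - 1)) * (1 - rater_variances.sum() / total_variance)

# Hypothetical scores from two raters on the same eight work samples
rater_a = [4, 5, 3, 6, 2, 5, 4, 3]
rater_b = [4, 4, 3, 6, 3, 5, 4, 2]
alpha = cronbach_alpha(np.column_stack([rater_a, rater_b]))
print(f"inter-rater alpha = {alpha:.2f}")  # alpha >= 0.70 would count as acceptable under the abstract's criterion
```

The same routine, applied to one rater's first and second scorings of the same work samples, would sketch an intra-rater consistency check under the same assumptions.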