VALUE Research Hub

A Second Dystopia in Education: Validity Issues in Authentic Assessment Practices

Citation

Hathcoat, J., Penn, J., Barnes, L., & Comer, J. (2016). A Second Dystopia in Education: Validity Issues in Authentic Assessment Practices. Research in Higher Education, 57(7), 892–912. https://doi.org/10.1007/s11162-016-9407-1

Abstract

Authentic assessments used in response to accountability demands in higher education face at least two threats to validity. First, a lack of interchangeability between assessment tasks introduces bias when aggregate-based scores are used at an institutional level. Second, reliance on written products to capture constructs such as critical thinking (CT) may introduce construct-irrelevant variance if score variance reflects written communication (WC) skill as well as variation in the construct of interest. Two studies investigated these threats to validity. Student written responses to faculty in-class assignments were sampled from general education courses within an institution. Faculty raters trained to use a common rubric then rated the students' written papers. The first study used hierarchical linear modeling to estimate the magnitude of between-assignment variance in CT scores among 343 student-written papers nested within 18 assignments. About 18 % of the total CT variance was attributed to differences in average CT scores across assignments, indicating that assignments were not interchangeable. Approximately 47 % of this between-assignment variance was predicted by the extent to which assignments asked students to demonstrate their own perspective. Thus, aggregating CT scores across students and assignments could bias the scores upward or downward depending on the characteristics of the assignments, particularly perspective-taking. The second study used exploratory factor analysis and squared partial correlations to estimate the magnitude of construct-irrelevant variance in CT scores. Student papers were rated for CT by one group of faculty and for WC by a different group of faculty. Nearly 25 % of the variance in CT scores was attributed to differences in WC scores. Score-based interpretations of CT may therefore need to be delimited if observations are obtained solely through written products.
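The variance-partitioning idea behind the first study can be sketched as an intraclass correlation (ICC): the share of total score variance attributable to differences between assignments. The sketch below uses simulated data, not the study's data, and a simple one-way ANOVA estimator rather than the authors' hierarchical linear model; all numbers (group sizes, means, variances) are hypothetical.

```python
import numpy as np

# Simulate papers nested within assignments (all values hypothetical).
rng = np.random.default_rng(0)
n_assignments, papers_per = 18, 19
true_between_sd, within_sd = 0.5, 1.0

assignment_means = rng.normal(3.0, true_between_sd, n_assignments)
scores = np.concatenate([
    rng.normal(m, within_sd, papers_per) for m in assignment_means
])

# One-way ANOVA variance-components estimator of the ICC.
grand = scores.mean()
group_means = scores.reshape(n_assignments, papers_per).mean(axis=1)
msb = papers_per * ((group_means - grand) ** 2).sum() / (n_assignments - 1)
msw = ((scores - np.repeat(group_means, papers_per)) ** 2).sum() / (
    n_assignments * (papers_per - 1))
sigma2_between = max((msb - msw) / papers_per, 0.0)
icc = sigma2_between / (sigma2_between + msw)
print(f"Estimated between-assignment share of variance: {icc:.2f}")
```

A nonzero ICC means assignments are not interchangeable: the average CT score an institution reports depends on which assignments happened to be sampled.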
Both studies imply a need to gather additional validity evidence in authentic assessment practices before this strategy is widely adopted among institutions of higher education. The authors also address misconceptions about standardization in authentic assessment practices.
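The second study's construct-irrelevant variance estimate can be illustrated with a squared correlation between CT and WC ratings, i.e., the share of CT score variance explained by writing quality. This is a simplified stand-in for the study's squared partial correlations, and the data below are simulated, with a hypothetical contamination coefficient of 0.5.

```python
import numpy as np

# Simulate CT ratings partly driven by writing skill (all values hypothetical).
rng = np.random.default_rng(1)
n = 343
wc = rng.normal(0.0, 1.0, n)            # written-communication ratings
ct = 0.5 * wc + rng.normal(0.0, 1.0, n)  # CT ratings contaminated by WC

r = np.corrcoef(ct, wc)[0, 1]
shared = r ** 2  # proportion of CT variance explained by WC
print(f"Share of CT variance explained by WC: {shared:.2f}")
```

If this shared variance is large, CT scores from written products partly measure writing skill, and interpretations of the CT construct should be delimited accordingly.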

Themes: Alternative assessment (Education), Authentic assessment, Critical Thinking, Educational accountability, Higher Education, Performance assessment, Standardization, Standardized Tests, Task-specificity, Validity, Writing