Citation
Cole, T. L., Cochran, L. F., Troboy, L. K., & Roach, D. W. (2012). Efficiency in Assessment: Can Trained Student Interns Rate Essays as Well as Faculty Members? International Journal for the Scholarship of Teaching and Learning, 6(2). https://eric.ed.gov/?q=%22ethical+reasoning%22+%22scoring+rubrics%22&ff1=dtySince_2009&ff2=eduHigher+Education&id=EJ1135553
Abstract
What are the most efficient and effective methods of measuring outcomes for assurance of learning in higher education? Faculty evaluation of students' written work samples provides the most detailed, actionable data for improving the curriculum; while this approach may be efficacious, it is also labor-intensive. This study examines the merits of outsourcing part of the assessment workload by comparing ratings completed by trained student interns with ratings completed by faculty. Both the faculty and the student interns were trained to use a scoring rubric developed for this assessment to rate undergraduate students' essay responses to an ethical reasoning scenario. Measures of convergent validity, discriminant validity, and source bias showed no significant differences between the student raters and the faculty raters. These findings support the hypothesis that trained student interns can evaluate undergraduate student work samples as well as faculty can.
Themes: Accuracy, Bias, College Faculty, Essays, Graduate Students, Internship Programs, Interrater Reliability, Outsourcing, Questionnaires, Scoring, Scoring Rubrics, Training, Undergraduate Students, Validity