Publications on ePortfolio: Archives of the Research Landscape (PEARL)

E-Portfolio Assessment: A Mixed Methods Study of an Instructional Leadership Program's Assessment System

Citation

Hardin, J., & Wright, V. (2017). E-Portfolio Assessment: A Mixed Methods Study of an Instructional Leadership Program’s Assessment System. Research in the Schools, 24(1), 63–79. http://www.msera.org/publications-rits.html

Abstract

Institutions of higher education face demands to provide evidence of institutional and student achievement. Many institutions utilize standards-based e-portfolio assessment practices to meet these demands. The assessment data derived from the studied program's e-portfolio process were not originally intended to serve as a data source for programmatic change, and research has indicated that such practices are questionable. Thus, the purpose of this study was to examine the standards-based assessments derived from the use of an e-portfolio assessment tool by one instructional leadership program (ILP) at a southeastern university for making programmatic change. In this study, a program evaluation was conducted on one program's assessment practices. An explanatory sequential, mixed methods design was conducted in three phases. In Phase I, quantitative analyses were conducted using assessments of 134 students' performance to determine whether they were predictive of student performance on the Praxis II. In Phase II, the researcher conducted one-on-one, semi-structured interviews with 7 program faculty members to uncover program practices, procedures, policies, and attitudes toward their institutional assessment practices. Phase I data analyses indicated that students' assessments were not predictive of students' performance on the Praxis II. Phase II analyses indicated that faculty members held negative perceptions of their program's assessment practices. The integration of the results in Phase III revealed that faculty members' perceptions greatly impact how they conduct assessments in their courses. In addition, a discrepancy in how faculty members assign assessment scores and a lack of communication among assessors were factors that affected the program's assessment practices.

Category: Empirical, Assessment and Evaluation