The LEAP Challenge Blog
Making Learning Assessment Count in a Time of Limited Resources
Colleges and universities are “not doing enough to use the data they collect to improve teaching and learning.” So noted a recent Chronicle of Higher Education News Blog (7/16/09), quoting Stan Ikenberry, coprincipal investigator for the National Institute for Learning Outcomes Assessment (NILOA), an initiative begun in 2008 to find and detail best practices in assessment across college campuses, regarding new findings from a recently completed NILOA study. Indeed, colleges and universities are not doing enough, but three key issues underlie this assertion, and addressing them can help move assessment out of the sphere of compulsory tasks and into the spotlight of improved learning outcomes.
First, campuses need to do a better job of taking inventory of the various types and forms of assessment already being done. Many campuses have well-established assessment schedules, such as routine administrations of the National Survey of Student Engagement (NSSE) and Higher Education Research Institute surveys, such as the Cooperative Institutional Research Program (CIRP), inventories of student health or alcohol use, and course evaluations. However, because the term “campus community” tends to describe a collection of independently functioning silos rather than a coordinated network, a good deal of data is gathered, stored, and understood only within particular departments. Administrators, student affairs personnel, and faculty need to be more intentional about disseminating, reflecting on, and communicating the data that are already being collected.
Second, assessment is too often done as a post hoc response to a specific stimulus rather than as a prescriptive strategy for developing meaningful educational experiences for students. For example, assessment is done to appease accreditors, to evaluate resources following a crisis, or to improve admission and retention rates. Colleges and universities would be better off thinking more holistically and strategically about assessment. Campuses can do this by spending more time clarifying and implementing clear learning outcomes, evaluating the degree to which established outcomes are being met, and identifying potential pathways for improving achievement of key outcomes. AAC&U’s VALUE project provides one example of this more systemic and strategic approach to assessment. In this way, assessment is done for the long-term health of the institution in the context of students’ learning, campus culture, and strategic planning—rather than as a Band-Aid for a current ailment.
Finally, the misdirection of resources, particularly human resources, with regard to assessment efforts both frustrates the efficacy of this work and discourages productive use of data. The NILOA study indicated that assessment responsibilities are often undertaken by the “equivalent of one full-time employee or less” on campuses; this finding echoes the reality of too many institutions forced to demand more of limited staff at a time of shrinking resources. And when assessment is deployed as a series of ad hoc queries rather than as a substantive, well-defined evaluation protocol addressing institutional goals, time and money are likely casualties. Coherence and communication across campus offices about the substance and meaning of the data being collected help to expose assessment overlaps and gaps. Additionally, collaboration across silos further promotes resource conservation by illuminating what is not working on campuses. In these tough economic times, campuses cannot afford to waste assessment—or to waste resources by not assessing well.