Toolkit Resources: Campus Models & Case Studies
Assessing Learning Outcomes at the University of Cincinnati: Comparing Rubric Assessments to Standardized Tests
Richard Robles uses an analogy to illustrate the differences between standardized tests and rubric-based assessments, two important tools currently used to measure student learning outcomes. "Rubric assessments are like thermometers, whereas standardized tests are like litmus tests," Robles explains. "Both are useful for specific tasks—standardized tests give you an immediate answer about one point in time, whereas with rubrics, like thermometers, you check in periodically and see how changes occur over time." Robles, the assistant director for the First-Year Experience in the University of Cincinnati's honors program, has done a lot of thinking about assessment, as have many of his UC colleagues. That's because the university is taking a serious look at assessing student learning outcomes and trying to determine how best to implement various assessment methods.
Several important factors catalyzed UC's assessment focus. First, the university, a public institution of about 40,000 students in Ohio, needed to comply with the Voluntary System of Accountability (VSA), which requires that participating institutions provide data about graduation rates, tuition, student characteristics, and student learning outcomes, among other measures, in the consistent format developed by its two sponsoring organizations, the Association of Public and Land-grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU). Second, Ohio public institutions are planning a systemwide change from quarters to semesters in fall 2012, providing a timely opportunity for a new focus on assessment as faculty map learning competencies and redesign their courses for the new semester calendar. And finally, UC was accepted in 2008 as a member of the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research, a collaborative body with the goal of advancing knowledge about the effect of electronic portfolio use on student learning outcomes.
Assessing Capstone Seminars
Since 2001, when the university implemented a new general education core program, students have been required to complete a senior capstone experience to earn a baccalaureate degree. The specific capstone requirements are tailored to each program of study, and department faculty assess students' achievement using rubrics designed to measure both learning outcomes specific to the major and outcomes required of all UC students—including critical thinking, knowledge integration, social responsibility, and effective communication. For the past six years, the university has collected online survey data directly from students about how they scored on their faculty-conducted rubric assessments, explains Gisela Escoe, UC vice provost for assessment and student learning. The university now has enough data to look for patterns and trends in how students are scoring and to use the data to suggest program changes and improvements. "The wonderful thing about this approach is that full-time faculty across the university are gathering data about how their students are doing, and since they'll be teaching their courses in the future, they're really invested in rubric assessment—they really care," Escoe says. In one case, the capstone survey data revealed that students weren't doing as well as expected in writing, and faculty from that program adjusted their pedagogy to include more writing assignments and writing assessments throughout the program, not just at the capstone level. As the university prepares to switch from a quarter system to a semester system in two years, faculty members are using the capstone survey data to assist their course redesigns, Escoe says.
Exploring Standardized Tests and Rubric Assessment
In 2008, UC was accepted into the fifth cohort of the Inter/National Coalition for Electronic Portfolio Research (NCEPR). As a major component of its participation, the university planned a "dual pilot" study examining the applicability of electronic portfolio assessment of writing and critical thinking alongside the Collegiate Learning Assessment, a standardized VSA-approved test that measures critical thinking, analytic reasoning, problem solving, and written communication. Robles, Escoe, and their colleague Wayne Hall, UC's vice provost for faculty development and a professor of English, were among the faculty members who conducted the study, which compared students' CLA scores to rubric-assessed e-portfolio scores. The rubrics the UC team used were slightly modified versions of those developed by AAC&U's Valid Assessment of Learning in Undergraduate Education (VALUE) project. The study involved 111 first-year UC honors students who had already been working to create e-portfolios as part of an assignment for their honors seminar. The students took the CLA, a ninety-minute nationally standardized test, during the same week in which faculty members assessed students' e-portfolios using rubrics designed to measure effective communication and critical thinking. In the critical thinking rubric assessment, for example, faculty evaluated student proposals for experiential honors projects that they could potentially complete in upcoming years. The faculty assessors were trained and their rubric assessments "normed" to ensure that interrater reliability was suitably high.
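The article does not say which interrater reliability statistic the UC team used when norming its assessors. As a hedged illustration only, the sketch below computes Cohen's kappa, one common measure of agreement between two raters beyond chance; the rubric scores are invented for the example, not UC data.

```python
# Illustrative sketch: Cohen's kappa for two raters scoring the same
# e-portfolios on a 1-4 rubric scale. All scores below are made up.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters beyond chance (1.0 = perfect)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of each rater's marginal rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Two hypothetical raters scoring ten portfolios; they disagree on one.
a = [3, 4, 2, 3, 3, 1, 4, 2, 3, 4]
b = [3, 4, 2, 3, 2, 1, 4, 2, 3, 4]
print(round(cohens_kappa(a, b), 2))  # → 0.86
```

In practice a norming session would repeat this check across rater pairs (or use a multi-rater statistic) and retrain until the value clears an agreed threshold.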
When the CLA results arrived eight months later, the UC team compared the outcomes of the two assessments. "We found no statistically significant correlation between the CLA scores and the portfolio scores," Escoe says. "In some ways, it's a disappointing finding. If we'd found a correlation, we could tell faculty that the CLA, as an instrument, is measuring the same things that we value and that the CLA can be embedded in a course. But that didn't happen." Many factors may have contributed to the lack of correlation, she says, including that the CLA is timed while the rubric assignments are not, and that the rubric scores were diagnostic and included specific feedback, while the CLA awarded points "in a black box": if a student referred to a specific piece of evidence in a critical-thinking question, he or she simply received one point. In addition, she says, faculty members may have held their honors students to exceptionally high expectations and assessed the e-portfolios with those expectations in mind—leading to results that would not correlate with a computer-scored test. In the end, Escoe says, the two assessments are both useful, but for different purposes: the CLA can provide broad institutional data that satisfies VSA requirements, while rubric-based assessment provides better information to facilitate continuous program improvement.
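At its core, the team's comparison asks whether two sets of paired scores move together. The article does not specify the statistic used; as an assumed illustration, the sketch below computes Pearson's r for paired CLA and rubric scores. The numbers are invented, not the study's data.

```python
# Illustrative sketch: Pearson correlation between paired CLA scores and
# rubric-based e-portfolio scores. All scores below are hypothetical.
import math

def pearson_r(xs, ys):
    """Linear correlation between paired scores, ranging from -1 to 1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical CLA scores and 1-4 rubric scores for eight students.
cla = [1050, 1190, 980, 1220, 1100, 1010, 1150, 1080]
rubric = [2.5, 3.0, 3.5, 2.0, 3.0, 2.5, 2.0, 3.5]
print(round(pearson_r(cla, rubric), 2))
```

A finding like UC's would correspond to an r near zero (with a significance test confirming it is indistinguishable from no association), meaning a student's standing on one instrument tells you little about their standing on the other.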
"When we talk about standardized tests, we always need to investigate how realistic the results are, how they allow for drill-down," Robles says. "The CLA provides scores at the institutional level. It doesn't give me a picture of how I can affect those specific students' learning. So that's where rubric assessment comes in—you can use it to look at data that's compiled over time."
Working across Institutions
The ongoing capstone assessments and the dual-pilot study are important steps in UC's continuing work on assessment, explains Hall. "As part of our longer-range assessment plans, we're now starting to track students throughout the four-year curriculum, not just at the beginning or end," he says. "The students in that first honors cohort are now in their second year. Their portfolios are now more like real learning portfolios, not just a few artifacts, and we want to look at them as they go into their third and fourth years to see what they can tell us about students' whole program of study." Hall and Robles are also looking into the possibility of forming relationships with other schools from NCEPR to exchange student e-portfolios and do a larger study on the value of rubric assessment of student learning. "We're hoping to get a realistic sense of whether assessing student learning using VALUE-type rubrics can really work on a larger level," Hall says.
As UC continues its curricular plans for semester conversion in 2012, the assessment team is working to provide faculty members with support and advice about including rubric-based assessments in new course designs, including instruction on how to fine-tune the AAC&U VALUE rubrics to meet specific needs. "We're really trying to stress that assessment is pedagogy," Hall says. "It's not some nitpicky, onerous administrative add-on. It's what we do as we teach our courses, and it really helps close that assessment loop."