It's Time to Get Serious About the Right Kind of Assessment: A Message for Presidents
The Department of Education will soon release for comment its controversial plan to rate colleges and universities using a series of metrics focused on access, affordability, completion, and job-related outcomes. In the April 25 issue of the Chronicle of Higher Education, Mary Wall, a senior policy advisor for higher education at the Department of Education, was quoted as saying, “We’re really fired up about this proposal...”
Many—perhaps all—of you have trustees who have been saying for a long time, “How do we know if we’re delivering what we say we are delivering to students and the nation?”
Now, you may find they are saying, “At last! With this new federal rating system, we will know how we compare.”
The trustees’ question is at bottom the most reasonable of questions—a question each college and university in America absolutely ought to be able to answer. As fiduciaries, trustees have a responsibility to assure the nation that the institution is succeeding at its educational mission. I have not met a president who doesn’t agree.
At the same time, all the presidents I know want to say, “Wait! By design, the federal ratings plan includes absolutely nothing about student learning. How can we be rated on our effectiveness if the most important outcome we seek is student learning and that’s not a part of it?” And, of course, these presidents are dead right.
For a college or university that seeks to provide a high-quality education, the evidence about what students know and can do with their learning is the crucial question. It is, in fact, a question that the federal government is neither equipped nor authorized to answer. But it is the question that higher education itself needs to answer.
And that, of course, leads to the next question: “Can our faculty actually provide meaningful evidence on the kind of learning that matters in the twenty-first century?” The answer up to now, sadly, is “No.”
Our critics and our trustees alike say, in turn, that this response is unacceptable—and they are right.
But it is also true that not just any kind of learning assessment will suffice. And, amazingly, far too few of AAC&U’s more than 1,300 member presidents know much, or even anything, about AAC&U’s pioneering leadership in this area.
The VALUE Initiative: Using Rubrics to Assess Students’ Own Work is the Key
Currently, with significant funding from philanthropy, AAC&U is working with higher education leaders, faculty, and several state systems to advance a far-reaching change in “what counts as primary evidence” when it comes to assessing students’ learning gains in college. The key innovation is that these faculty-led approaches move students’ own complex college work—projects, writing, research, collaborations, service learning, internships, creative performances, and the like—to the center of the assessment equation. The new approach also underscores the central role of faculty members’ own collaborative judgments about the goals of higher learning and about the rubrics or standards that should be used in evaluating students’ attainment of those goals.
Standardized testing would—in this new approach—become complementary rather than central to national and institutional reporting on students’ gains in learning. The proof of students’ progress would be found, instead, in the evidence of their actual work.
When Institutions Use VALUE Rubrics, Benchmarking Can Happen
At the same time, AAC&U recognizes that it is insufficient for an institution to assess its students in ways that are grounded only in its local curriculum and understandable only within a specific institutional context. In an era when higher education is more important to our future than ever, and when higher education therefore is under more scrutiny than ever, colleges, universities, and community colleges also must provide useful knowledge to the public about goals, standards, accountability practices, and the quality of student learning. The key to the VALUE assessment approach therefore is the creation of common rubrics that can summarize levels of student achievement across different academic fields and institutions, and for particular groups of students.
Recognizing the need for common rubrics that spoke to widely shared goals for liberal education, and with the support of the State Farm Companies Foundation and the US Department of Education’s Fund for the Improvement of Postsecondary Education (FIPSE), AAC&U launched an initiative in 2007 called Valid Assessment of Learning in Undergraduate Education (VALUE) to explore the development of assessment rubrics for AAC&U’s LEAP Essential Learning Outcomes, outcomes that have been strongly endorsed by employers in multiple AAC&U surveys.
There Are VALUE Rubrics for Sixteen Liberal Education Outcomes—Each Essential for Work, Life, and Citizenship
As of now, assessment rubrics for sixteen liberal education outcomes have been developed by teams of faculty and academic professionals from more than 100 campuses across the country—outcomes above and beyond knowledge and competence in specific content fields. Validity studies (an estimate of the extent to which a measure—in this case a rubric scoring—is actually correlated with the underlying trait it seeks to measure) and reliability studies (an estimate of the extent to which multiple raters reach the same conclusion on a rating using a particular rubric) are underway with very positive results so far.
Why do institutions that have participated in the VALUE initiative see it as an improvement over current attempts to assess learning with standardized tests? Because administering standardized tests is costly in time and money, proponents of standardized testing strategies are so far advocating the measurement of too few learning goals—typically just critical thinking and writing. AAC&U, on the other hand, has developed rubrics to assess inquiry and analysis, critical thinking, writing, integrative learning, oral communication, information literacy, problem solving, teamwork, intercultural knowledge, civic engagement, creative thinking, quantitative literacy, lifelong learning, ethical reasoning, global learning, and reading. The many institutions already testing VALUE rubrics argue that reducing the assessment of student and institutional performance to just one or two learning goals disrespects and greatly oversimplifies what students really should learn in college and what institutions should be responsible for teaching. The sixteen VALUE rubrics cover a much fuller spectrum of significant student learning in college and can be used to assess whether students have developed these key capacities in many different content arenas.
This Form of Assessment Is Less Expensive and Far More Useful for Faculty-Led Improvement
In addition, AAC&U’s pilot testing of the time and cost of assessing actual student work, versus administering standardized tests, shows that rubric assessment is less expensive and requires no additional time from students, while providing nuanced developmental feedback to students that standardized test scores do not. For more information, see “Assessing Liberal Education Outcomes Using VALUE Rubrics” (Peer Review, Fall 2011/Winter 2012). Knowing they are going to use a rubric to assess student work, faculty members must “reverse engineer” their courses, thinking carefully about how their assignments are structured. Is the assigned work going to stimulate the kind of learning the rubric describes? Sharing the rubric with students ahead of time gives them a much deeper and more explicit understanding of the growth in higher-order learning skills they are being asked to achieve. Students can see what the college believes is the difference between exceptionally fine and less fine analysis, critical thinking, integrative learning, and so on. In some institutions, students observing a public presentation by another student are also asked to use a rubric to evaluate their co-student’s work, adding another avenue to learning and insight for the student observers. This kind of assessment activity is embedded in the teaching and learning process itself and actually contributes to learning.
Resulting Institutional Scores Are More Accurate and Valid
Institutions also know that a successful institutional assessment system depends on assessing the work of a representative sample of each institution’s students. Fixing a major flaw in standardized testing strategies, AAC&U’s approach ensures that student work will be obtained from virtually every student in the sample, reducing “non-response” bias essentially to zero. This is possible because students must complete the work needed for rubric scoring as part of their classroom assignments, whereas institutions currently find it very difficult to get students to take, and treat seriously, standardized tests that are not part of required coursework. So even when the sample of students to be tested has been selected to be representative, test-completion rates are almost always very low, invalidating generalizations back to the population.
The new approach also involves students themselves in the intentional project of reporting, integrating, and demonstrating their cumulative gains in college. It gives students focus and skills as they work toward becoming self-directed learners. It is, in fact, another form of active learning.
A Pace-Setting National Study Is Beginning This Year
Thousands of institutions are already using the VALUE rubrics, but until now, there has been no organized way to benchmark and compare results. This year, with financial support and assistance from the Bill & Melinda Gates Foundation and in partnership with the State Higher Education Executive Officers Association, AAC&U has embarked on an extensive “proof of concept at scale” to learn how institutional student learning assessment using VALUE rubrics can be scaled up and sized appropriately to an institution’s mission and resources. This pilot testing of the approach is being organized through a Multi-State Collaborative to Advance Learning Outcomes Assessment. Very importantly, the Gates grant is funding the creation of a national database into which institutions can deposit actual student work products for scoring using appropriate VALUE rubrics. AAC&U will own this database, which will operate much the way that the National Survey of Student Engagement operates: deposited student work will come from representative samples of students; reports back to the institution will be structured to be useful to faculty and students engaged in a cycle of continuous quality improvement; institutions will control who can see institutionally identifiable data; and AAC&U will release only summary reports based on data aggregated across types of institutions.
VALUE Will Become the National Standard
We at AAC&U, along with the growing number of member-institution faculty and leaders exploring this assessment concept for their institutions, are absolutely confident that, over time, a VALUE rubric-based system of learning outcomes assessment will provide what we and the nation need: continuous improvement in student and institutional performance, and the evidence of student learning that those who finance and subsidize American higher education (families, government, and charitable donors) legitimately deserve. If you and your institution are not already involved in exploring this AAC&U rubric-based form of assessment, or in testing it as part of our proof-of-concept-at-scale efforts, my plea is that you start now to learn about it and position your institution to take advantage of it when AAC&U is able to make it broadly available. See AAC&U’s VALUE web page for more information.
Colleges and universities are rightly resisting the federal ratings plan and other equally oversimplified and flawed attempts to measure college outcomes. I hope you continue to do so. AAC&U’s approach, on the other hand, goes with the grain of what we do, not across or against it. It begins with, and respects, the work we ask our students to undertake and the efforts of faculty engaged in what is for most a vocation, not just a job. But it then goes beyond that, giving us what we need to revise and fine-tune what we do so that our students achieve more and more, while also serving as a respectful and sensitive system for holding us accountable.
Let’s not let this opportunity to raise our standards for truly meaningful assessment—and reporting—of students' accomplishments in college slip away!
Daniel F. Sullivan, President Emeritus, St. Lawrence University; Senior Advisor to the AAC&U President; and Chair, AAC&U Presidents’ Trust