Peer Review, Winter/Spring 2002
Measuring the Difference College Makes:
The RAND/CAE Value Added Assessment Initiative
By Roger Benjamin, president, RAND Corporation's
Council for Aid to Education, and Richard H. Hersh,
president, Trinity College, and senior fellow, RAND
Corporation's Council for Aid to Education
In the fall of 2000, the RAND Corporation's Council for Aid
to Education (CAE) embarked on the national Value Added Assessment
Initiative (VAAI), a long-term project to assess the quality
of undergraduate education in the United States by measuring
its impact on students. With initial funding from a consortium
of major foundations, the VAAI involves the continuum of higher
education, from community colleges to doctoral-degree-granting
private and state colleges and universities. The objective
is to create a model and an incentive for the continuous improvement
of higher education as well as to create measures of quality
that all the major stakeholders-university administrators,
faculty, students, parents, employers, and policymakers-can
use as part of their evaluation of the quality of academic
programs nationwide. This issue of Peer Review presents
an interim report on the progress of this project.
Logically, student outcomes assessment should be the central
component of any effort to measure the quality of an institution
or program. Yet most evaluations of quality are based solely
on student and alumni surveys, tabulations of actuarial data
such as graduation rates, peer review accreditation, "reputation"
rankings, institutional resources, and the admissions selectivity
of the student body. In lieu of any systematic, direct measure
of student learning produced by higher education itself, prospective
students and their parents rely on rating and ranking systems,
such as those published in college guidebooks and U.S. News
& World Report, as surrogate measures of quality. Such
ratings and rankings depend mainly on "input"
variables, such as endowment dollars and SAT scores. These
indicators do not directly measure the knowledge,
skills, and abilities that students develop in college and
thus do a great injustice to the quality, complexity, and
diversity of higher education in the United States. The development
of direct measures of student learning is the missing but
essential ingredient needed to improve the quality of American
higher education.
Value Added Assessment: An Important Educational Metric
Excellence and quality should be determined by the degree
to which an institution develops the abilities of its students.
Such a "value added" metric would better inform decisions
concerned with access, productivity, and quality. In the literature
on higher education, the term "value added" often refers either
to the value of having a college degree-in terms of income,
job, and life satisfaction (Krueger 2000)-or to the benefits
derived from alternative programs, courses of study, and experiences
within an institution (Astin 1993).
The VAAI focuses on a third definition, which has to do with
the institution as a whole. What difference does the institution
make for its students? Is it more effective in making a difference
now than in the past? Is it more effective than other similarly
situated schools after controlling for the admissions scores
and other relevant attributes of its incoming students? Measuring
such value requires assessing what students know and can do
as they begin college and assessing them again during and
after (including many years after) they have had the full
benefit of their college education. Value added is the difference
a college makes in their education. Value added assessment
is appropriate for the variety of higher education institutional
missions-including those of community colleges, which account
for close to 40 percent of all undergraduate enrollment. In
addition, given that students increasingly begin in one institution
and finish in another, it may also provide a benchmark against
which to assess appropriate program placement and transfer
credits within and/or across institutions.
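The before-and-after comparison described above, in which exit performance is judged after controlling for entering students' admissions scores, can be illustrated with a simple residual-gain calculation. The sketch below is an illustrative simplification, not the VAAI's actual methodology; the institutions and scores are invented.

```python
# Illustrative residual-gain "value added" sketch (not the VAAI's
# actual methodology). Exit scores are regressed on entry scores
# across institutions; an institution's value added is estimated
# as its residual: how far its exit score sits above or below the
# level predicted from its entering students.

# Hypothetical (entry_score, exit_score) institutional means.
data = {
    "College A": (50.0, 62.0),
    "College B": (70.0, 78.0),
    "College C": (60.0, 74.0),
    "College D": (80.0, 86.0),
}

def fit_line(xs, ys):
    """Ordinary least squares with a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

entry = [pre for pre, _ in data.values()]
exits = [post for _, post in data.values()]
slope, intercept = fit_line(entry, exits)

for name, (pre, post) in data.items():
    predicted = slope * pre + intercept
    # Positive residual: students gained more than their entering
    # scores alone would predict.
    print(f"{name}: value added = {post - predicted:+.1f}")
```

On these made-up numbers, College C shows the largest positive residual: its students start in the middle of the pack but finish further ahead than their entry scores predict. Real value-added models are far more elaborate (student-level data, multiple controls, longitudinal designs), but the underlying logic of "observed gain relative to predicted gain" is the same.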
The Challenges of Value Added Assessment
If value added assessment is so useful, why has it not been
the standard practice? As Doug Bennett, president of Earlham
College, explains in a recent issue of Liberal Education
(2001), the reason is that value added is very difficult to
measure. First, Bennett suggests, value has many dimensions.
No college or university is trying to develop only a single
capability in students. A campus may be doing well with one
learning dimension but less well with another. And some dimensions
are easier to measure than others. Second, students are different.
Everyone learns some subjects or capabilities more easily
than others. Is a college or university doing a better job
of educating some students because of this fact? Third, institutions
are different. How does one compare the quality of institutions
with differing missions (e.g., a research university focused
on graduate study compared to a small liberal arts college)?
Fourth, effects have many sources, and effects unfold. Even
if students are full-time at a single institution, how can
we tell what contributions that campus has made, as opposed,
for example, to the contributions of a part-time job or their
church? Moreover, the contributions to learning may not be
realized in the short-term; they may be felt only years later.
Fifth, the most important effects may be transformative. Because
a liberal education seeks to develop a unique person, in command
of all the capabilities within her/his potential, the most
important effects may be uniquely combined and transformative
of that person as a whole-a very difficult thing to assess.
Finally, measuring value added is expensive. The kinds of
assessment the VAAI proposes (e.g., using writing samples
rather than multiple-choice questions and administering tests
that measure critical and imaginative thinking, all in pre-
and post-test formats) are far more costly than conventional
testing in individual classrooms.
Because of these challenges, as well as other barriers
to assessment, the VAAI is focused specifically on assessing
a few selected student outcomes of "common" undergraduate
education in America through a research strategy (discussed
in this issue) that takes them into account. By "common" we
mean those educational purposes, objectives, and core attributes
most observers usually include under the rubric of "liberal
education," such as writing, higher-order thinking, problem
solving, and quantitative reasoning. The VAAI is informed by
but goes beyond current research literature, which is rich
with correlational and student self-report studies suggesting-but
unable to prove-the hypothesized causal teaching/learning
relationships claimed for liberal education curricula and pedagogy.
Now, at the end of the first year of a two-year feasibility
study, we are developing a variety of prototype measures that
can help assess the "value added" of selected competencies,
skills, and values gained by individual students as a consequence
of liberal education at a particular college, university,
or online provider. These measures will be tested on fifteen
campuses this spring. The results of this testing will help
in designing a five-year longitudinal study that will follow
individual students from college entrance to degree completion.
Such measures are being developed in concert with faculty
and administrators throughout the country.
Institutional and Public Policy Benefits of Value Added Assessment
First and foremost, value added assessment should have as
its goal the continuous improvement of curricula, pedagogy,
admissions, certification, and retention within an institution.
The initial focus of the VAAI, however, will be the institution
as a whole rather than its individual programs.
Value added assessments could also provide diagnostic feedback
to both students and faculty within programs and majors and
catalyze improvement efforts. Timely and appropriate assessment
provides feedback to students to improve their learning in
much the same way that doctors' and coaches' assessments help
patients and athletes to improve. In this sense, assessment
should be an inextricable part of the teaching/learning process.
Certainly, some students do not learn because they have not
taken responsibility for their own effort, and for them assessment
will have obvious consequences. The educational point, however,
is that if assessment shows large numbers of students aggregated
by gender, race/ethnicity, socioeconomic status, or other
criteria not doing as well as expected, there is a faculty
and institutional responsibility to investigate the reasons
for this and, where appropriate, make changes in courses,
programs, and teaching. Moreover, the development of effective
measures of the value added to student performance would create
an additional source of data on teaching effectiveness that
goes beyond current student evaluations of courses.
In addition to the benefits value added assessment could
bring to campus-based improvement efforts, this form of assessment
could also assist those responsible for developing enlightened
policies that will support essential higher education reforms.
Governors and legislators set goals for their higher education
sector based on their perceptions of their workforce needs,
socioeconomic inequality problems, enrollment pressures, and
budgetary constraints. State policies intended to ensure quality,
productivity, and accountability would be enhanced if informed
by a common metric. Value added direct assessment of student
learning best serves that purpose. Anything short of this
systemic approach will leave academic leaders and governors
without a basis for determining the effectiveness, costs,
or benefits of their own policies.
The focus on the state's role in higher education policy,
however, is only one-half of the equation; a focus on public
and private higher education institutions is the other half.
We must connect these halves by linking state policy in higher
education to the institutions that, in the end, carry out
such policies. Unless we develop a research and policy design
logic that combines both institutional and state levels of
analysis, it is highly unlikely that the goals of state-level
policymakers and analysts will be achieved. The concern for
accountability, most often associated with state demands on
public colleges and universities, can unintentionally promote
bad educational policy. For example, state funding is often
predicated on full-time equivalent (FTE) enrollment. This
body-count mentality frames issues such as retention in terms
of maintaining enrollment rather than educational quality.
And policies to provide
smaller classes, technology for teaching, or more effective
advising, for example, might better be judged by their direct
impact on students.
The problem is that each educational policy issue, such as
access, retention, true costs of instruction, and quality-whether
debated inside or outside the academy-is too often treated
in isolation. For example, the benefits of cost reduction
ideas need to be evaluated against an appropriate benchmark;
the logical candidate is the quality of student learning outcomes.
And while the public discourse in higher education is understandably
focused on improving access, it surely is critically important
to ask the question: Access to what kind and level of quality
of undergraduate education? Access is a hollow promise, indeed,
if the quality of educational programming and teaching is inadequate.
Finally, value added data, coupled with the necessary clarification
of institutional expectations for students, can provide clearer
signals to the K-12 system. Seventy percent of high school
graduates move on to some form of higher education. The creation
of a seamless K-16 system requires that much more attention
be paid to the criteria for higher education admissions and
graduation. Much has been written recently about the "wasted"
senior year of high school and the universal complaint that
too many students come to colleges and universities severely
under-prepared for college-level work. A value added approach
to direct assessment of student learning (controlling for
the resources the student brings to higher education) requires
clear specification of student performance and thus sets the
stage for linking high school graduation requirements to college
entrance and exit standards that could then be monitored over
time. States with strong and sophisticated K-12 testing regimes,
such as Massachusetts and Connecticut, are now in a good position
to articulate such standards as a K-16 system.
Barriers to Value Added Assessment
It is crucial to emphasize that the culture of higher education
is unique. It is simply not sufficient to import from K-12
or industry the rhetoric of assessment and efficiency. The
nature of teaching, learning, and scholarship, in the context
of college and university cultures, requires an assessment
system designed specifically for those environments. Moreover,
states must understand that real investments in their institutions
are required in order to provide the time, energy, and resources
necessary for such an endeavor. An assessment system cannot
be handed down to higher education from above; it must be
a faculty- and institution- driven initiative.
Generally, academic culture does not value systemic cumulative
assessment of undergraduate learning. Currently, the metric
most commonly used to gauge faculty performance (promotion,
tenure, and merit raises) is a system of qualitative and quantitative
measures that emphasizes research productivity. To date, the
primary initiative for assessment of educational quality has
come mostly from outside academe-from state and local boards
of education, corporations, state legislatures, governors,
and market-oriented online educators-through calls for increased
accountability standards. States have garnered the most headlines
in this regard with their K-12 school reform priorities, explicit
state-wide standards, and so called "high-stakes testing."
Such assessment, however, is difficult. It requires political
and educational consensus about what is worth learning, developing
valid and reliable assessment measures, constructing efficacious
curricula, improving instruction, providing appropriate reward
and incentive systems, and offering the financial resources
and time for the development and sustenance of a comprehensive,
systemic assessment program. These are equally salient issues
for higher education, whose history and culture make it especially
resistant to assessment.
The academy has observed the problems states are having with
assessment of K-12 education: the tendency to reduce testing
to what is easily measured; inappropriate coaching or even
cheating on the part of teachers and schools; narrowing the
curriculum to just what is tested; and confusing assessment
designed for diagnostic purposes with the politics and economics
of holding individual schools accountable. These are serious
issues and have reinforced the usual questioning by higher
education of the value of the entire assessment enterprise.
We argue that the selection of appropriate tests can, in fact,
have a positive impact on learning (see Klein, "The Educational
Impact of Assessment Policies," in this issue).
Finally, a prior history of intermittent but inappropriate
federal and state administrative intrusion into curriculum
raises a legitimate concern by faculty about the undermining
of "academic freedom." For example, the attempted federal
directive on accountability that was to require the creation
of State Post-Secondary Review Entities in all fifty states
was strongly rejected by the states. And recently, the New
York Board of Regents mandated a core curriculum for the State
University of New York. Assessment beyond individual course
grading, say professors, is just the first step down the
slippery slope of external intrusion.
Unless the academy constructs an educationally efficacious
assessment system, one may well be imposed from outside. Ultimately,
value added assessment will succeed only if it is "owned"
and constructed by faculty. Boards of Trustees and state systems
must support such leadership. And higher education leaders
must develop productive collaboration among faculty, boards,
and policy leaders.
Assessment of value added requires a radical cultural shift
within higher education, along with a great deal of time, effort,
cooperation, risk-taking, and funding. It demands more skill,
more trust, and more safeguards than are currently in place.
It is, however, an investment with a potentially large payoff
because, for the first time, many proposed changes would be
evaluated against their positive or negative impact on student
learning.
Clearly the assessment of value added is needed and poses
significant challenges. Nevertheless, we believe that the
questions the VAAI is raising are important and that the answers
would be most useful in informing institutional and public
policy. What matters in higher education? What differences
do different roads taken by colleges and universities make?
How might states break through the barriers to educational
quality in an era of increasing demand for excellence in higher
education and increasing distrust of government to provide
what is now understood to be a social and economic necessity
for all citizens? Can we answer these questions more cogently,
wisely, and systematically than we do now by anecdote? The
VAAI is an attempt to bring such questions to the fore and
provide a protocol for deriving the best possible answers.
References
Astin, Alexander W. 1993. What matters in college? Four
critical years revisited. San Francisco: Jossey-Bass.
Bennett, Douglas. 2001. Assessing quality in higher education.
Liberal Education.
Krueger, Alan B. 2000. Education matters. Northampton,
MA: Edward Elgar.