The VALUE Project Overview
As part of the Association of American Colleges and Universities’ Liberal Education and America’s Promise (LEAP) initiative, the Valid Assessment of Learning in Undergraduate Education (VALUE) project contributes to the national dialogue on assessment of college student learning. VALUE builds on a philosophy of learning assessment that privileges multiple expert judgments of the quality of student work over reliance on standardized tests administered to samples of students outside their required courses. The project is an effort to focus the national conversation about student learning on the set of essential learning outcomes that faculty, employers, and community leaders say are critical for personal, social, career, and professional success in this century and this global environment. The assessment approaches that VALUE advances are based on the shared understanding of faculty and academic professionals on campuses from across the country.
VALUE assumes that
- to achieve a high-quality education for all students, valid assessment data are needed to guide planning, teaching, and improvement, and the work students do in their courses and cocurriculum is the best representation of their learning;
- colleges and universities seek to foster and assess numerous essential learning outcomes beyond those addressed by currently available standardized tests;
- learning develops over time and should become more complex and sophisticated as students move through their curricular and cocurricular educational pathways within and among institutions toward a degree;
- good practice in assessment requires multiple assessments, over time;
- well-planned electronic portfolios provide opportunities to collect data from multiple assessments across a broad range of learning outcomes and modes for expressing learning while guiding student learning and building reflective self-assessment capabilities;
- assessment of the student work in e-portfolios can inform programs and institutions on their progress in achieving expected goals and also provide faculty with necessary information to improve courses and pedagogy.
VALUE’s work is guided by a national advisory board composed of recognized researchers and campus leaders knowledgeable about the research and evidence on student achievement of key learning outcomes and about best practices currently used on campuses to achieve and measure student progress. VALUE focuses on developing rubrics for most of the essential learning outcomes that articulate shared expectations for student performance. Achievement and assessment of these outcomes are demonstrated in the context of the required college curriculum (and cocurriculum) and include models for e-portfolios and rubrics describing ascending levels of accomplishment (basic, proficient, advanced, etc.).
Learning Outcomes for the Development of Metarubrics
The essential learning outcomes addressed in the project are:
Intellectual and Practical Skills
- Inquiry and analysis
- Critical thinking
- Creative thinking
- Written communication
- Oral communication
- Quantitative literacy
- Information literacy
- Problem solving
Personal and Social Responsibility
- Civic knowledge and engagement—local and global
- Intercultural knowledge and competence
- Ethical reasoning
- Foundations and skills for lifelong learning
- Integrative learning
VALUE Leadership Campuses
The VALUE project selected twelve leadership campuses to participate, based on established student e-portfolio use to assess student learning. While selected campuses use e-portfolios in different ways and in different places in the curriculum, each VALUE leadership campus uses e-portfolio systems in which students collect coursework and related activities in their curricular and cocurricular lives. Upon acceptance into the project, these institutions agreed to test the rubrics developed through VALUE on student e-portfolios on their respective campuses to determine the usefulness of the rubrics in assessing student learning across the breadth of essential outcomes. In addition, each leadership campus agreed to provide faculty feedback on the usefulness, problems, and advantages of each rubric they tested.
VALUE Partner Campuses
As the rubric development process proceeded and leadership campuses tested the rubrics, other campuses became aware of the project and began requesting permission to use the rubrics on their campuses. While many of these campuses did not use e-portfolios, they did have collections of student work on which they wished to test the rubrics and provide the project with feedback. As a result of sharing rubrics with this second set of institutions, VALUE now has seventy partner campuses.
VALUE Alternative to Tests
There are no standardized tests for many of the essential outcomes of an undergraduate education. Existing tests are based on typically nonrandom samples of students at one or two points in time, are of limited use to faculty and programs for improving their practices, and are of no use to students for assessing their own learning strengths and weaknesses. VALUE argues that, as an academic community, we possess a set of shared expectations for learning for all of the essential outcomes, general agreement on what the basic criteria are, and a shared understanding of what progressively more sophisticated demonstration of student learning looks like.
As part of the VALUE project, teams of faculty and other academic professionals have been gathering, analyzing, synthesizing, and drafting rubrics (and related materials) to create what we are calling “metarubrics,” or shared expectations for learning that correlate to fourteen of the AAC&U Essential Learning Outcomes. Rubrics are simply statements of the key criteria or characteristics of a particular learning outcome, together with descriptions of what demonstrated performance for each criterion looks like at four levels, displayed in a one-page table (see table 1). The VALUE rubrics are “meta” in the sense that they synthesize the common criteria and performance levels gleaned from numerous individual campus rubrics into general rubric tables for each essential learning outcome. Each metarubric contains the key criteria most often found in the many campus rubrics collected and represents a carefully considered summary of criteria widely considered critical to judging the quality of student work in each outcome area.
The rubric development process is a proof of concept. The claim is that faculty and other academic and student personnel professionals do have fundamental, commonly held expectations for student learning, regardless of type of institution, disciplinary background, part of the country, or public or private college status. Further, these commonly shared expectations for learning can also be articulated for developmentally more-challenging levels of performance or demonstration.
The process of reviewing collections of existing rubrics, joined with faculty expertise across the range of outcomes, has uncovered the extent to which there are similarities among campuses on core learning expectations. By identifying outcomes in terms of expectations for demonstrated student learning among disparate campuses, a valuable basis for comparing levels of learning through the curriculum and cocurriculum is emerging. This will be especially useful as students, parents, employers, and policy makers seek valid representations of student academic accomplishment, especially when the expected learning can be accompanied by examples of actual student work that tangibly demonstrate the learning.
The rubric teams have been developing rubrics for each outcome since spring 2008. By late spring, three rubrics had been drafted. Those three rubrics were then pilot tested by faculty on some of the leadership campuses. Feedback from the first round of testing was used by the respective teams to engage in a second round of drafting and redrafting the rubrics. By fall 2008, drafts of the rubrics articulating the fourteen essential learning outcomes were in place. In early 2009, the new rubrics were piloted on both leadership and partner campuses across the country. Currently, the second-round feedback is being used by the rubric development teams to redraft the rubrics once again. In late spring, the rubrics will undergo a third round of campus testing. A final “tweaking” of the rubrics will occur in early summer. Finally, the VALUE rubrics will be released for general use in summer 2009.
Table 1: A draft VALUE project rubric used to assess students’ critical thinking.
| Criterion | 4 | 3 | 2 | 1 |
| --- | --- | --- | --- | --- |
| Explanation of issues | Problem/issue relevant to situation in context clearly stated | Problem/issue relevant to situation stated and partially described | Problem/issue relevant to situation stated | Problem/issue relevant to a different situation identified |
| Investigation of evidence | Position is established with evidence. Source selection reflects some exploration across disciplines and integrates multiple media modes; veracity of sources is challenged and mostly balanced. Source summaries and attribution deepen the position, not just decorate it. | Position is supported by evidence, though selective (cherry-picked), inconsistently aligned, narrow in scope, and limited to one or two modes. Examination of source quality shows some balance; attribution (citations) documents and adds authority to the position. | Position strengthened by supporting evidence, though sources are limited or convenient (assigned sources and personal stories only) and in a single mode (text, audio, graphs, video, etc.). Source use repeats information and omits contrary evidence. Attribution merely lists references, decorates. | Position is unsubstantiated, random. Limited evidence of exploration (curiosity) or awareness of the need for information search, selection, source evaluation, and source attribution (citations). |
| | Position qualified by considerations of experiences, circumstances, conditions, and environment that influence perspectives and the implications of those perspectives. | Position presented with recognition of contextual sources of bias, assumptions, and possible implications of bias. | Position presented tentatively, with emerging awareness of own and others’ biases and of ethical, political, and historical sources and implications of bias. | Position presented in absolutes, with little recognition of own personal and cultural bias and little recognition of ethical, political, historical, or other considerations. |
| Own perspective, hypothesis, or position | A reasonable, clear position or hypothesis, stated or implied, demonstrates some complexity of thought. It also acknowledges, refutes, synthesizes, or extends some other perspectives appropriately. | A reasonable, clear position or hypothesis is stated or implied. Important objections and/or alternate perspectives are considered with some thought. | Position or hypothesis is clear, whether stated or implied, with at least one other perspective acknowledged. | Work contains a discernible position or hypothesis that reflects the student’s perspective. |
| | Conclusions are based on a synthesis of evidence from various sources. Inferences about causal consequences are supported by evidence that has been evaluated from disparate viewpoints. Analysis of implications indicates some awareness of ambiguity. | Conclusions and evidence are relatively obvious, with synthesis drawn from selected (cherry-picked) evidence. Assertions of cause are supported mostly by opinion and are also selective. Considerations of consequences are timid or obvious and easy. | Conclusions are weakly supported by evidence, with only emerging synthesis. Assertions of cause are doubtful. Considerations of consequences are narrow, exaggerated, or dichotomous. | Conclusions are not supported by the evidence or repeat the evidence without synthesis or elaboration; tendency to confuse correlation and cause. Considerations of consequences are sketchy, drawn in absolutes, or absent. |
Created by a team of faculty from higher education institutions across the United States
E-portfolios as the Mode for Presenting Student Work
E-portfolios were chosen as the medium for collecting and displaying student work for three primary reasons: (1) there were sufficient numbers of campuses using e-portfolios for assessment of learning to represent multiple sectors and types of institutions; (2) it would be easier to share student work among campuses, faculty teams, and evaluators digitally than to transport groups of people; and (3) e-portfolios allowed learning to be presented using a broad range of media to capture the multiple ways in which we learn and can demonstrate our learning. E-portfolios provide both a transparent and portable medium for showcasing the broad range of complex ways students are asked to demonstrate their knowledge and abilities for purposes such as graduate school and job applications, as well as to benchmark achievement among peer institutions. To better ensure that judgments about student learning actually reflect the learning that occurs on our campuses, the student artifacts should be drawn primarily from the work students complete through their required curriculum and cocurriculum.
The e-portfolio is an ideal format for collecting evidence of student learning, especially for those outcomes not amenable to or appropriate for standardized measurement. Additionally, e-portfolios can facilitate student reflection upon and engagement with learning across multiyear degree programs, across different institutions, and across diverse learning styles while helping students to set and achieve personal learning goals.
The rubric development teams endeavored to craft language that would not be text bound, but open to use for learning performances that were graphical, oral, video, digital, etc. VALUE rubrics incorporate both the research on learning outcomes and the reality of today’s college students, who work in a learning environment that includes technological, social, and extracampus experiences in addition to traditional classroom learning.
A Final Piece of the Project
Since it is important that the rubrics and the e-portfolio collections of student work serve both campus assessment and noncampus accountability purposes, VALUE will engage a set of national panels during the summer of 2009 to review the rubrics, use the rubrics to assess student e-portfolios, and provide feedback on the usefulness of the rubrics and the student e-portfolio. Three national panels will be formed:
- Panel One—A panel of faculty who are familiar with rubrics and e-portfolios, but who have not been involved in the VALUE project,
- Panel Two—A panel of faculty who are neither familiar with rubrics nor e-portfolio usage, and
- Panel Three—A panel of employers, policy makers, parents, and community leaders.
Each panel will use the same rubrics to assess the same set of student e-portfolios. The results of their reviews and their feedback will be used for the last “tweaking” of the rubrics, and as an initial indicator of the rubrics’ ability to communicate similar meaning about quality of learning to very differently positioned sets of people.
The VALUE rubrics are meant both to capture the foundations of a nationally shared set of meanings around student learning and to be useful at both general institutional and programmatic levels. The VALUE rubrics, as written, must then be translated by individual campuses into the language, context, and mission of their institution. Programs and majors will have to translate the rubrics into the conceptual and academic constructs of their particular area or discipline. Individual faculty will have to translate the rubrics into the meaning of their assignments and course materials in order for the rubrics to be used effectively to assess student assignments.
However, as institutional versions of the rubrics are mapped onto the VALUE rubric criteria and performance levels, each level of the institution—individual faculty, disciplines, programs—can have confidence that their assessments are not idiosyncratic, but rather are made within a nationally shared understanding of expected learning and its quality. This translation into the local parlance allows the work of students and faculty on specific assignments in specific courses not only to serve the purposes of assigning grades and performance indicators in a course, but also to be sampled and/or aggregated, along with its assessment, for program review and ultimately for institution-level assessment. Through this translation process, the rubrics become useful to faculty and students on the ground on a day-to-day basis for moving through a course of study. Through aggregating and sampling, the exact same work can also be used to provide a macro review of student learning without having to start anew or devise separate modes of gathering assessment data. Multiple purposes and needs can be met through shared, layered, and textured rubrics, facilitating both formative assessment for learning and assessment for accountability reporting.
Through use of these rubrics—which set up explicit expectations for learning—students will develop the ability to reflect on their learning and assess their progress, their strengths, and their weaknesses as they move along their educational pathways.
As stated earlier, VALUE is a first step, a proof of concept. At a point that is two-thirds of the way through the project, the evidence suggests that we can talk about a shared understanding of learning across a broad range of outcomes and at increasingly more challenging levels of performance. We are learning that assessment of student learning can be rigorous, effective, useful, and efficient. We do not need to create episodic, artificial tests to demonstrate the effectiveness of our colleagues, our institutions, or our students. There is integrity and validity in portfolio assessment that can lead to rich evidence of student learning for accountability demands, and at the same time encourage improvements in teaching and learning for faculty and staff. Perhaps most important, this process can allow students to develop their own abilities to engage in self-assessment and meaning making. §
The VALUE project, supported by grants from the State Farm Companies Foundation and the Fund for the Improvement of Postsecondary Education, runs May 2007 through April 2010.