Frequently Asked Questions about the VALUE/MSC Project Demonstration Year

What does VALUE stand for?

Did AAC&U create the VALUE Rubrics?

What is the Multi-State Collaborative to Advance Learning Outcomes Assessment (MSC)?

How did the MSC evolve and what was the Pilot Year?

What did we learn from the Pilot Year?

What was the Demonstration Year?

Who participated in the Demonstration Year?

Are the Demonstration Year results generalizable?

What does it mean to evaluate student work using a VALUE Rubric?

What about standardized tests? Are you saying they are ineffective at measuring learning? 

What are the Student Learning Outcomes being assessed?

How will the results from the Demonstration Year be used?

If a university or college decides it wants to improve the proficiency of all undergraduate students, how can it use VALUE Rubrics to do that?

Is this system designed to judge publicly the effectiveness of individual faculty members?


What does VALUE stand for?
It stands for Valid Assessment of Learning in Undergraduate Education.

Did AAC&U create the VALUE Rubrics?
In 2009, AAC&U initiated and oversaw a process that brought together teams of faculty experts and other educational professionals from member institutions to envision, draft, and refine the 16 VALUE Rubrics. Through review of the literature, extant rubrics, and their own experience, the rubric teams identified broadly shared criteria for the critical dimensions of achievement in each student learning proficiency. The rubrics were field-tested by faculty on over 150 campuses across the country.

AAC&U conducted additional validity and reliability testing to ensure the rubrics measure what they are intended to measure. AAC&U staff and fellows maintain the VALUE website at www.aacu.org, offer workshops on the use of the rubrics, and have produced several publications containing data analysis and case studies.

What is the Multi-State Collaborative to Advance Learning Outcomes Assessment (MSC)?
The MSC is the current centerpiece of AAC&U’s ongoing VALUE initiative. With the active support of the State Higher Education Executive Officers association (SHEEO), twelve states—Connecticut, Hawaii, Indiana, Kentucky, Maine, Massachusetts, Minnesota, Missouri, Oregon, Rhode Island, Texas, and Utah—agreed to collaborate during the Demonstration Year (2015-2016) in cross-state and cross-institutional efforts to document student achievement without using standardized tests and without requiring students to do any additional work or testing outside their regular curricular requirements. This model is rooted in campus/system collaboration and in faculty curriculum development, teaching activity, and assessment of authentic student work. It is based on the use of the Essential Learning Outcomes and associated VALUE Rubrics developed by faculty members under the auspices of AAC&U’s LEAP Initiative.

How did the MSC evolve and what was the Pilot Year?
A broadly collaborative multi-campus leadership group in Massachusetts worked to conceptualize a model for state-system learning outcomes assessment based on the LEAP Essential Learning Outcomes and the VALUE Rubrics. That group reached out to the State Higher Education Executive Officers association (SHEEO) to help expand the effort to assess learning outcomes and voluntarily share results. The initiative’s Pilot Year (2014-2015) was a major undertaking to test whether participating institutions and states could develop the capacity to use the protocols and guidelines that had been developed for identifying, sampling, collecting, uploading, and scoring student artifacts with the VALUE Rubrics, and for reporting the results. Nine states and approximately 60 institutions actively participated in the Pilot Year, collecting more than 7,000 samples of student work scored by 126 faculty members trained to evaluate achievement in three important learning outcomes: critical thinking, quantitative literacy, and written communication. A press release on the initial findings, limitations, and successes is available at http://sheeo.org/msc-pilot-study-results

What did we learn from the Pilot Year?
The Pilot Year successfully demonstrated that:

  • A wide array of institutions can develop sampling plans to provide reliable samples of student work from across a variety of departments in order to demonstrate achievement of key cross-cutting learning outcomes.
  • Faculty can effectively use common rubrics to evaluate student work products—even those produced for courses outside their area of expertise.
  • Following training, faculty members can produce reliable results using a rubric-based assessment approach. More than one third of the student work products were double-scored to establish inter-rater reliability evidence.
  • Faculty report that the VALUE Rubrics used in the study encompass key elements of each learning outcome studied and were very useful both for assessing student work and for improving assignments.
  • A web-based platform can create an easily usable framework for uploading student work products and facilitating their assessment.
  • A common rubric-based assessment approach can generate actionable data about student achievement on specific key dimensions of these important learning outcomes.

What was the Demonstration Year?
The Demonstration Year (2015-2016) was designed to advance our understanding of the feasibility and sustainability of a common statewide model of assessment using actual student work. All nine states from the Pilot Year, plus three additional states—Hawaii, Maine, and Texas—agreed to continue to engage with the methodologies developed for sampling and collecting student work, including examining whether a representative sample of student work could be created at the campus, state, and multi-state levels with an appropriate degree of randomization. The Demonstration Year continued to evaluate the ability to produce useful assessment data for institutional use, to organize aggregated data for interstate comparison by sector, and to measure student learning using the VALUE Rubrics. Finally, the Demonstration Year continued to test the reliability of using the VALUE Rubrics in the assessment of student work.

Who participated in the Demonstration Year?
A total of 67 institutions uploaded student work products in the Demonstration Year, including 19 two-year institutions and 29 four-year institutions in the MSC, plus another 19 institutions participating through other VALUE initiatives. The total number of student work products collected was over 11,000, including more than 8,300 from MSC institutions. More than 175 campus professionals, predominantly faculty from a wide range of disciplines, were trained to score student work using six different VALUE Rubrics: Civic Engagement, Critical Thinking, Ethical Reasoning, Intercultural Knowledge and Competence, Quantitative Literacy, and Written Communication. Over one third of the student work products were double-scored.

Are the Demonstration Year results generalizable?
While the findings from the Demonstration Year are not generalizable across the entire population of students in the participating states or nationally, the study found some clear patterns in students’ achievement levels within the cohort of participating institutions. On the rubrics’ 4-0 rating scale, much higher percentages of student work products were rated at the “3” or “4” level at four-year institutions than at two-year institutions in the project. At the same time, significant numbers of students nearing degree completion at two-year institutions demonstrated high or very high levels of achievement on key outcomes.

What does it mean to evaluate student work using a VALUE Rubric?
The VALUE Rubrics are 16 templates, each of which helps evaluators assess the level of proficiency represented in a student work product (paper, performance, community service project, etc.).

Each rubric addresses five to six key criteria for a proficiency (e.g., quantitative literacy). For each criterion, the evaluator chooses from among four descriptors (a capstone or highest level, two milestone levels, and a benchmark level) the one that best matches the level of proficiency the student’s work demonstrates.

The VALUE Rubrics are aligned with the Degree Qualifications Profile (DQP) and AAC&U’s LEAP Essential Learning Outcomes proficiencies for achievement across the associate and baccalaureate levels.
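
To make that scoring structure concrete, here is a minimal illustrative sketch in Python of how one evaluator's scores for a single work product might be recorded. It is not part of AAC&U's or the MSC's actual tooling, and the criterion names are hypothetical rather than the official VALUE rubric wording; it simply assumes the 4-0 scale described above (4 = capstone, 3 and 2 = milestones, 1 = benchmark, 0 = does not meet the benchmark).

    # Illustrative sketch only; criterion names are hypothetical and do not
    # reproduce the official VALUE Quantitative Literacy rubric wording.
    from dataclasses import dataclass

    # The 4-0 scale described above.
    LEVELS = {4: "Capstone", 3: "Milestone", 2: "Milestone",
              1: "Benchmark", 0: "Does not meet benchmark"}

    @dataclass
    class RubricScore:
        """One evaluator's scores for one student work product."""
        artifact_id: str
        rubric: str       # e.g., "Quantitative Literacy"
        scores: dict      # criterion name -> level (0-4)

        def validate(self):
            for criterion, level in self.scores.items():
                if level not in LEVELS:
                    raise ValueError(f"{criterion}: {level} is not on the 4-0 scale")

    # Hypothetical example: five criteria scored for one work product.
    example = RubricScore(
        artifact_id="artifact-001",
        rubric="Quantitative Literacy",
        scores={"Interpretation": 3, "Representation": 2, "Calculation": 4,
                "Application": 3, "Communication": 2},
    )
    example.validate()

Keeping a separate level for each criterion, rather than a single overall score, is what allows results to be aggregated and reported by rubric dimension, as noted elsewhere in this FAQ.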

What about standardized tests? Are you saying they are ineffective at measuring learning? 
A standardized test such as the Collegiate Learning Assessment (CLA) takes a snapshot of a sample of students at a particular time and is voluntary, so it measures some learning of some students. Because these tests are almost always taken by volunteers and carry no consequences, research shows that students are not motivated to do their best work on them. Moreover, good psychometric practice rejects the idea of using any single measure as a proxy either for individual student proficiency or for institutional evaluation. Finally, because it is disconnected from specific curricula, information from a particular test provides little help for students or faculty in identifying specific areas on which to focus their own efforts to achieve higher levels of mastery.

The VALUE Rubrics were developed to answer the need to measure the development and application over time of the essential learning proficiencies that college graduates need in order to be productive in work and in citizenship. By assessing students’ most motivated, best work, done within their curricula, evaluators get a fuller picture of how much a student’s knowledge and skills have grown and matured during college.

What are the Student Learning Outcomes being assessed?
During the Demonstration Year, the MSC collected student work (artifacts) for the LEAP Essential Learning Outcomes of Critical Thinking, Quantitative Literacy, and Written Communication. Institutions that collected the benchmark number of artifacts for each of these three outcomes also had the option of collecting student work for Civic Engagement or Intercultural Knowledge and Competence.

Critical Thinking (CT)—a “habit of mind” characterized by the comprehensive exploration of issues, ideas, artifacts, and events before accepting or formulating an opinion or conclusion.

Quantitative Literacy (QL)—also known as Numeracy or Quantitative Reasoning—is a “habit of mind” which includes competency and comfort in working with numerical data. Individuals with strong QL skills possess the ability to reason and solve quantitative problems from a wide array of authentic contexts and everyday life situations. They understand and can create sophisticated arguments supported by quantitative evidence and they can clearly communicate those arguments in a variety of formats (using words, tables, graphs, mathematical equations, etc., as appropriate).

Written Communication (WC)—the development and expression of ideas in writing. Written communication involves learning to work in many genres and styles. It can involve working with many different writing technologies, and mixing texts, data, and images. Written communication abilities develop through iterative experiences across the curriculum.

How will the results from the Demonstration Year be used?
Institutions may use results however they choose. Assessment results from the Demonstration Year will be aggregated and reported by segment (two-year and four-year) for all dimensions of the rubric associated with each learning outcome. Demonstration Year analysis will examine trends identified in the data, as well as the feasibility, scalability, validity, and reliability of the MSC model.

If a university or college decides it wants to improve the proficiency of all undergraduate students, how can it use VALUE Rubrics to do that?
An institution can undertake a study focusing on key proficiencies. It can decide, for example, to measure the development of students’ critical thinking and written communication through the general education curriculum. A team of faculty members and others can assess authentic, problem-centered student work at the beginning, middle, and end of that series of courses, measuring the aggregate improvement in those two skills over time. If institutional leaders and faculty decide the level of development is lower than expected, they can target where interventions are needed, such as modifying assignments to elicit specific learning improvements or adding evidence-based, high-impact teaching and learning practices to courses and assignments, and then assess the learning again after those changes take place.

Is this system designed to judge publicly the effectiveness of individual faculty members?
VALUE has one goal: to help all students achieve the levels of proficiency necessary for success. It takes faculty and programs working collectively to help students achieve high levels of demonstrated accomplishment. As an institution gathers solid evidence of which teaching and learning practices consistently lead to the required proficiency, faculty will be more likely to adopt those evidence-based practices. The process of continuous improvement built into the VALUE project, in other words, is based on carrots, not sticks. AAC&U's ongoing initiative with the Multi-State Collaborative (MSC) and select private institutions is developing a process for establishing nationwide benchmarks for learning based on the VALUE Rubrics, using student work collected from two- and four-year campuses across the country and scored by faculty, to create a landscape of learning across all of the Essential Learning Outcomes.