Peer Review

Quality Collaborative to Assess Quantitative Reasoning: Adapting the LEAP VALUE Rubric and the DQP

Quantitative skills have been consistently highlighted as among the critical outcomes of a strong undergraduate education. The Association of American Colleges and Universities' (AAC&U) Liberal Education and America's Promise (LEAP) initiative identifies quantitative literacy as one of six "Intellectual and Practical Skills" within its broader list of Essential Learning Outcomes (ELOs) for a liberal education (National Leadership Council for LEAP 2007). More recently, the Lumina Foundation Degree Qualifications Profile (DQP) has included Quantitative Fluency as one of the proficiencies within the "Intellectual Skills" area in both the original version and DQP 2.0 (Lumina Foundation 2011, 2014). Furthermore, AAC&U has surveyed employers across the nation, finding that 55 percent believe colleges and universities should place more emphasis on students' ability to work with numbers and understand statistics, and 81 percent believe more emphasis should be placed on the ability to analyze and solve complex problems (Hart Research Associates 2013).

While the extent to which employers considered analyzing and solving complex problems as related to quantitative literacy or fluency is unclear, this emphasis from employers suggests that our institutions should prioritize the ability to apply quantitative skills to solving problems—quantitative reasoning. The combination of understanding abstract principles and processes of mathematics, and quantitative reasoning as the practice of applying those principles and processes in real-world contexts, sometimes referred to as "the two maths," must be a critical area of emphasis within higher education (Steen 2004).

In 2012, Fitchburg State University (Fitchburg State) and Mount Wachusett Community College (MWCC) were selected to form a two- and four-year dyad as part of AAC&U's nine-state Quality Collaboratives (QC) project. There was substantial overlap between MWCC's Quantitative Reasoning and Scientific Modes of Inquiry learning outcome, Fitchburg State's Problem Solving through Quantitative Reasoning outcome, the LEAP Quantitative Literacy ELO, and the DQP Quantitative Fluency competency, so we selected quantitative reasoning (QR) as one of four areas of focus (along with civic engagement, information literacy, and written communication) for our project. The AAC&U Quality Collaboratives initiative, funded by Lumina Foundation and the William and Flora Hewlett Foundation, has supported efforts to develop practices and strategies for assessing DQP proficiencies as the basis for transfer between two- and four-year institutions.

Our project has brought together four teams of faculty and staff Assessment Scholars—one for each focus area—with equal representation from each campus to perform the following work: develop rubrics for assessing each of the four learning outcome areas, pilot those rubrics in the assessment of student artifacts, develop strategies for engaging faculty in the assessment process, and consider how the assessment data could be used to inform student transfer policies. Through this process, the QR team reaffirmed our institutional commitments to having students from both institutions engage in the application of quantitative skills to complex problems, while grappling with the challenges of designing appropriate assessments in comparable ways across our two campuses.

Modifying the Quantitative Literacy LEAP VALUE Rubric

The QR team began its work by sharing the two institutions' existing rubrics and comparing them to the AAC&U LEAP Valid Assessment of Learning in Undergraduate Education (VALUE) rubric for quantitative literacy (http://www.aacu.org/value/rubrics/quantitative-literacy) and the DQP Quantitative Fluency competency. Fitchburg State faculty had recently created a modified LEAP VALUE rubric for assessing problem solving through quantitative reasoning in our general education curriculum, and the Quantitative Reasoning and Scientific Modes of Inquiry rubric in use at MWCC was also discussed. The team agreed that the LEAP VALUE rubric was a good starting point but wanted to incorporate elements from both institutions' rubrics and to clarify the categories within the rubric: the rubric criteria. Modifications to the LEAP VALUE rubric included renaming criteria, reordering the criteria to match the flow we expected student work to display, and removing the Communication criterion, which we felt duplicated the criterion we eventually labeled Judgments/Conclusions (table 1). In addition, a criterion was added (from the Fitchburg State rubric) addressing students' ability to apply content knowledge, methods, and/or results to a new situation; the team felt this criterion spoke to an important feature of quantitative reasoning that was absent from the LEAP VALUE rubric. A clearer distinction was also made between students' ability to describe patterns in data (Interpretation/Description) and their ability to make inferences based on the data (Judgments/Conclusions). Finally, the language of the rubric criteria and their performance descriptors was modified to reflect how the team expected to apply the rubric to our students' work.

Table 1. Comparing the QC Quantitative Reasoning Rubric to the Unmodified LEAP VALUE Rubric

Modified LEAP VALUE criterion: Calculation
Original LEAP VALUE criterion: Calculation
Changes: Appears as the third criterion in the VALUE rubric, but has been moved to the first criterion in our rubric, with minor changes in the performance descriptors.

Modified LEAP VALUE criterion: Representation
  • To math: the ability to convert relevant information into various mathematical forms (e.g., equations, graphs, diagrams, tables)
Original LEAP VALUE criterion: Representation
  • Ability to convert relevant information into various mathematical forms (e.g., equations, graphs, diagrams, tables, words)
Changes: The criterion description was changed slightly, and the performance descriptors were also modified slightly.

Modified LEAP VALUE criterion: Interpretation/Description
  • From math: the ability to explain information presented in mathematical forms (e.g., equations, graphs, diagrams, tables, words)
Original LEAP VALUE criterion: Interpretation
  • Ability to explain information presented in mathematical forms (e.g., equations, graphs, diagrams, tables, words)
Changes: Appears as the first criterion in the VALUE rubric, but was moved down to the third criterion in the modified rubric. Minor changes were made to the performance descriptors.

Modified LEAP VALUE criterion: Judgments/Conclusions
  • Ability to make judgments and draw appropriate conclusions based on the quantitative analysis of data, while recognizing the limits of this analysis
Original LEAP VALUE criterion: Application/Analysis
  • Ability to make judgments and draw appropriate conclusions based on the quantitative analysis of data, while recognizing the limits of this analysis
Changes: The name of this criterion was changed, and minor changes were made to the performance descriptors.

Modified LEAP VALUE criterion: Applies content knowledge, methods, and/or results to new situations
Original LEAP VALUE criterion: Communication
  • Expressing quantitative evidence in support of the argument or purpose of the work (in terms of what evidence is used and how it is formatted, presented, and contextualized)
Changes: This criterion is new, replacing Communication. At the capstone level, students make accurate and comprehensive conclusions about a new situation using information previously learned in another context.

Modified LEAP VALUE criterion: Assumptions
  • Ability to make and evaluate important assumptions in estimation, modeling, and data analysis
Original LEAP VALUE criterion: Assumptions
  • Ability to make and evaluate important assumptions in estimation, modeling, and data analysis
Changes: This criterion is unchanged, but it was moved so that it is the last criterion in the modified rubric.

These efforts led to a working draft of the rubric, which we used to evaluate student work from a variety of courses at both institutions. This assessment resulted in additional minor modifications to the rubric and, most importantly, allowed us to norm as a group. The success of this project rested on the wide range of experience in our group, which included representatives from both institutions: adjunct faculty, full-time faculty (some tenured, some not), a lab technician, and a campus administrator. While we all had similar conceptions of QR, we brought different disciplinary perspectives to the table, and those nuances shaped both the development of the rubric and its application to student work. Our collaborative model encouraged us to adopt a broad definition of QR that encompasses the criteria of the LEAP VALUE rubric as well as a student's ability to extend and apply their reasoning to new situations.

Comparing the Rubric to the DQP Quantitative Fluency Competency

Entering the second year of the project, the QR team was charged with comparing the modified rubric to the DQP competencies for quantitative fluency. The DQP provides reference points for learning outcomes at the associate's, bachelor's, and master's levels, demonstrating a progression of learning as a student advances through the levels. The intent of this task was to determine areas of overlap or disparity between the modified rubric and the DQP, and to craft DQP-like statements that could comprehensively reflect our expectations of a quantitatively fluent person at the associate level (table 2).

Our main criticism of the DQP 1.0 competency (Lumina Foundation 2011) was that it focused entirely on a student's ability to perform and explain calculations. It lacked any expectation of representing or describing data, making judgments or drawing conclusions based on the quantitative analysis of data, applying concepts to new situations, or stating assumptions. These are important skills that students should be practicing at the two-year mark and refining at the four-year mark. Our team developed a revised, DQP-like statement (table 2) that was better aligned with our modified QR rubric and more accurately reflected our expectations of a quantitatively fluent person at the associate level, which is often the point of transfer to the four-year institution.

Table 2. Comparing the DQP Reference Points to the MWCC/Fitchburg State QC Dyad's DQP-Like Statements for the Associate Level

DQP 1.0:
  • Presents accurate calculations and symbolic operations, and explains how such calculations and operations are used in either his or her specific field of study or in interpreting social and economic trends.

DQP 2.0:
  • Presents accurate interpretations of quantitative information on political, economic, health-related, or technological topics and explains how both calculations and symbolic operations are used in those offerings.
  • Creates and explains graphs or other visual depictions of trends, relationships, or changes in status.

MWCC/Fitchburg State QC Dyad's DQP-like statements:
  • Students interpret descriptions of situations and use the interpretations to develop appropriate quantitative solution strategies.
  • Within these solutions, students should make effective choices about which calculations to complete, successfully complete those calculations, and convert information into mathematical forms.
  • Students then draw conclusions from the results of quantitative analysis, including in novel situations, and reflect on any assumptions they made in completing their work.

Since our first review of the DQP, Lumina has published a revised version, DQP 2.0 (Lumina Foundation 2014). The quantitative fluency statement in DQP 2.0 improves upon the original in several ways. First, it expands the qualification to include "creates and explains graphs or other visual depictions of trends" (Lumina Foundation 2014). There is now also an expectation of "accurate interpretations of quantitative information" (Lumina Foundation 2014). However, while students are expected to interpret quantitative information, they are not necessarily expected to draw conclusions from it, and the skills of applying concepts to new situations and stating or discussing assumptions are still absent. While the revised DQP statement is an improvement on the original, it does not go far enough, in our judgment, in emphasizing the quantitative skills and abilities at the associate level needed to tackle complex problems. Reviewing and revising the DQP allowed us to reaffirm our commitment to the quantitative problem-solving skills reflected in our modified QR rubric.

Assessing Student Artifacts with the Modified QR Rubric

In the first year of the Fitchburg State/MWCC dyad, multiple student artifacts from both institutions were collected and assessed by the QR team, including artifacts from two statistics courses, an environmental science exam, and a chemistry exam. In the second year of the project, the QR team focused on collecting artifacts from high-demand fields and disciplines and on locating assignment prompts that explicitly required students to engage in the more difficult rubric areas, such as Application and Assumptions. In total, three sets of student artifacts were collected in year two: a biology lab report from MWCC, a biology lab report from Fitchburg State, and a nutrition analysis assignment from Fitchburg State. In both years of the project, each artifact was independently scored on each rubric criterion (1–4 or NA) by at least two assessors using our Quantitative Reasoning rubric in the TK20 assessment management system.

In each set of first-year artifacts, only one or two rubric criteria could be scored by assessors because the remaining criteria were not observable in the student work. The most consistently missing criteria were Judgments/Conclusions, Application of Knowledge, and Statement of Assumptions: in all four sets of artifacts, more than 60 percent of the artifacts did not demonstrate these quantitative reasoning categories, and Application and Assumptions were absent in 100 percent of the artifacts. Demonstration of the remaining rubric criteria of Calculation, Representation, and Interpretation/Description was highly variable and depended on the nature of the assignment.

In the second-year collection, there was substantial improvement in the number of artifacts that showed evidence of Interpretation/Description, with 88–98 percent of artifacts being scorable for this criterion (table 3). In the categories of Application and Assumptions, however, there was still little to no evidence to score. There were no obvious trends or patterns indicating areas of student weakness in the mean scores for Calculation, Representation, Interpretation/Description, and Judgments/Conclusions, and each assignment's mean scores reached milestone proficiency (>2) for every criterion that could be scored.

Table 3. The Mean Scores of Student Artifacts that Showed Evidence of a QR Criterion*

Criteria                      Bio 109 Labs (n=24)   Respiration Labs (n=20)   Nutrition Analyses (n=18)
Calculation                   NA                    3.3                       2.5
Representation                2.4                   2.9                       2.0
Interpretation/Description    2.4                   2.3                       2.4
Judgments/Conclusions         2.3                   2.3                       2.0
Applies to New Situations     NA                    NA                        NA
Assumptions                   NA                    NA                        NA
*Mean scores (on a scale of 1–4) for three different assessments of QR. NA indicates 94–100 percent of assessors did not provide a score for that criterion. All other criteria were scored by at least 80 percent of assessors.
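
For readers who tally rubric scores in a spreadsheet or script rather than in TK20, the convention behind table 3 can be expressed in a few lines of code. The Python sketch below is purely illustrative: the scores, criterion list, and variable names are hypothetical, and it is not the dyad's actual workflow. It computes, for each criterion, the mean of the scores assessors were able to give and reports NA when no artifact could be scored, mirroring the footnote above.

    # Minimal, hypothetical sketch of the table 3 convention (not the dyad's
    # actual TK20 workflow). Scores are 1-4; None marks a criterion an assessor
    # could not observe in the artifact ("NA").
    from statistics import mean

    # Hypothetical scores for one assignment set: one dict per artifact.
    artifact_scores = [
        {"Calculation": 3, "Representation": 2, "Assumptions": None},
        {"Calculation": 4, "Representation": 3, "Assumptions": None},
        {"Calculation": 3, "Representation": None, "Assumptions": None},
    ]

    for criterion in ["Calculation", "Representation", "Assumptions"]:
        scored = [a[criterion] for a in artifact_scores if a[criterion] is not None]
        pct = 100 * len(scored) / len(artifact_scores)
        if scored:
            print(f"{criterion}: mean {mean(scored):.1f} ({pct:.0f}% of artifacts scored)")
        else:
            print(f"{criterion}: NA (no artifacts could be scored)")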

 

To open up discussion of these data, the QR team used a modified "ATLAS: Looking at Data" protocol (School Reform Initiative 2007). In this protocol, participants were asked first to make purely descriptive statements about what they saw in the data tables and then to make claims about what the data suggest (i.e., to make sense of the data in as many ways as possible), often using the descriptions from the first phase as evidence in support of these interpretations. The conversation then moved into three layers of implications: for classroom practice, for assessment practice, and, finally, for transfer policies between our institutions. This approach allowed interpretations to flow into meaningful conversations and reflections about how material is taught, how we assess student learning, and how we develop and think about transfer issues related to assessment.

Looking more closely at the data through the lens of our institutional assessment practice, we noticed some common issues. When using the rubric in both the first and second years of the project, it was a challenge to find a single assignment that would provide evidence of student learning on all (or even most) of the criteria on the rubric. Our discussions revealed that although the second-year assignments improved on this problem by providing opportunities for students to engage in the application of content to new situations and to identify any assumptions made, students did not demonstrate these skills because the assignments did not explicitly prompt them to do so.

What emerged from this data analysis was the need for more carefully constructed assignment prompts that explicitly require these higher-order tasks. However, the team also found that assignments addressing the Analysis, Judgment, and Assumptions portions of the rubric were often project-based assignments in which the student did not explicitly show the method of calculation, as it was often performed by a piece of software. Here the QR team uncovered a challenge common to those who assess QR: how to assess both calculation skills and conceptual understanding when looking at a completed piece of student work. The team continued to emphasize a shared commitment to the quantitative problem-solving skills reflected in the rubric, while recognizing the challenges of soliciting appropriate assignments of these types from faculty.

Faculty Engagement and Assignment Design

In light of the limitations in the types and breadth of assignments the team was able to collect, one of the key goals of the second year of our work was to develop strategies to reach out to faculty beyond the QR team and to improve faculty involvement in assessment across our campuses. As discussed earlier, the QR team's initial push to identify sets of comparable courses on both campuses to use in our assessment produced limited success, as the rubric criteria of Application and Assumptions were not assessable in any of the work we had reviewed. As a result of this discovery, the QR team turned to lab reports, which would allow most, if not all, of the rubric criteria to be assessed within the same assignment.

To this end, a sample lab report assignment was annotated to communicate to faculty what an appropriate prompt might look like and how to apply the QR rubric to student work. Annotations were provided throughout the assignment to identify where the various rubric criteria were being assessed. For example, when students were prompted to "create a table that shows the subject descriptive data," the annotation noted this as a way to assess the Representation criterion because "the table requires students to determine that they must first calculate average values for the two groups, and then to organize the relevant information into a clear table with appropriate labels." In this way, the annotated lab report assignment provides a guide for using the modified QR rubric, and faculty may also choose to use it to develop assignments that align with the rubric. The QR team also developed an e-mail template to be used when requesting student work from faculty for assessment. The e-mail explains the purpose of the assessments, describes what appropriate assignments should include, and asks what type of feedback the faculty member would like to receive from the assessments.

As the dyad moved into the third year of work, we shifted our attention to assignment design. We brought new faculty members into the project and turned to another protocol to help focus conversations about assignment prompts: a modified Charette developed by the National Institute for Learning Outcomes Assessment (School Reform Initiative 2007). First, using both the Lumina DQP 2.0 and the modified QR rubric, participants gave each other feedback on whether each of the selected assignment prompts matched the performance criteria. Second, they addressed the role their assignments play in their courses and reflected on each assignment's strengths and weaknesses to locate what was working well and what needed attention.

After this, each member of the group had the opportunity to have their assignment discussed using the Charette protocol. Participants gave the assignment designer feedback on how well the assignment was suited to assessing student work and learning on the DQP proficiencies, and on how the assignment might appear from a student's perspective. At the end of the sessions, each faculty member had feedback on their assignment from numerous perspectives. This feedback will be used to further revise the assignment prompts over the summer, with the goal of producing assignments that effectively measure the criteria of the QR rubric. As we move forward with shared methods for QR assessment on our campuses, we are continuing to focus on collaborative faculty professional development in the area of assignment design to ensure we can assess the higher-order quantitative problem-solving skills we have stressed through our rubric.

Conclusions and Implications for Transfer

Our work developing common rubrics, stating shared goals for quantitative reasoning expressed as DQP-like statements, and creating a culture of collaborative faculty professional development in the area of assignment design provides a strong foundation for setting expectations for transfer students. Each institution is undergoing a review of its own general education curriculum, and the initial discussions have been informed by the work of our Assessment Scholars. The two institutions are also planning shared professional development days in the future to ensure ongoing alignment of our assignments and student learning expectations. While the LEAP quantitative literacy ELO, VALUE rubric, and DQP Quantitative Fluency competency each provided a necessary tool to frame our cross-institutional discussions, they alone were not sufficient to allow the two institutions to develop our shared vision for Quantitative Reasoning.

By bringing faculty and staff from our two institutions together with a focus on student learning, we expanded our understanding of quantitative reasoning beyond the mechanical translation and computation of mathematical information to embrace an expectation that students will make judgments about the processes and assumptions involved in translation and computation, draw conclusions about what knowledge has been gained from those processes, and be able to apply those processes effectively in novel situations. By emphasizing these student learning outcomes, we hope to better prepare students for successful transfer from MWCC to Fitchburg State and for careers in which they will increasingly need to apply quantitative skills to analyze and solve complex problems.

References

Hart Research Associates. 2013. It Takes More than a Major: Employer Priorities for College Learning and Student Success. Washington, DC: Association of American Colleges and Universities. http://www.aacu.org/leap/documents/2013_EmployerSurvey.pdf.

Lumina Foundation. 2011. The Degree Qualifications Profile. Indianapolis, IN: Lumina Foundation. http://www.luminafoundation.org/publications/The_Degree_Qualifications_Profile.pdf.

Lumina Foundation. 2014. The Degree Qualifications Profile 2.0. Indianapolis, IN: Lumina Foundation. http://www.luminafoundation.org/publications/DQP/DQP2.0-draft.pdf.

National Leadership Council for Liberal Education and America's Promise. 2007. College Learning for the New Global Century. Washington, DC: Association of American Colleges and Universities.

School Reform Initiative. 2007. Resource and Protocol Book - ver. 3.0. Denver, CO: School Reform Initiative.

Steen, Lynn Arthur. 2004. "Everything I Needed to Know about Averages . . . I Learned in College." Peer Review 6 (4): 4–8.


Jennifer Berg, assistant professor of mathematics, Fitchburg State University
Lisa M. Grimm, assistant professor and graduate program chair of biology, Fitchburg State University
Danielle Wigmore, associate professor of exercise and sports science, Fitchburg State University
Christopher K. Cratsley, director of assessment, Fitchburg State University
Ruth C. Slotnick, former director of articulation and learning assessment, Mount Wachusett Community College; director of assessment, Bridgewater State University
Susan Taylor, professor and chair of computer information systems, Mount Wachusett Community College

 
