Making the VALUE Initiative Work for Us

Our participation in the Multi-State Collaborative (MSC), led by the Association of American Colleges and Universities’ VALUE (Valid Assessment of Learning in Undergraduate Education) initiative and the State Higher Education Executive Officers Association (SHEEO), was a natural progression of our efforts to enhance the collection and use of evidence to inform improvements to our undergraduate students’ learning experiences. Indeed, the University of Massachusetts (UMass) Amherst’s strategic plan, Innovation and Impact, calls for the university to promote a “culture of evidence” by demonstrating meaningful accountability, building institutional information resources, and embracing student learning outcomes assessment (University of Massachusetts Amherst 2013, 6). Given these institutional priorities, four aspects of the VALUE initiative were of interest to us: using actual student work from our courses, using rubrics developed by teams of faculty, asking faculty from our own and other institutions to score the work, and emphasizing the formative effects of assessment on learning while also working to develop a state-based and national reporting mechanism for student performance.

We joined VALUE to study the extent to which (and ways in which) the initiative could further our institutional priorities for building assessment through the increased involvement and expertise of university faculty. We pursued this goal not only by conducting on-campus scoring of students’ work in addition to submitting the work to the MSC for centralized scoring, but also through qualitative inquiry (e.g., focus groups, interviews, and surveys) with participating instructors. The focus of our inquiry was to understand the usefulness of the project to university faculty and to learn how to improve our participation both in VALUE and in student learning assessment more generally on campus. Through the participation of faculty on our campus, we are advancing our campus-based assessment efforts and our overarching goal for assessment: to use valid and systematic evidence to foster reflection and inform action on student learning, pedagogy, and curriculum.

The Process

In preparation for participation in the MSC’s 2015–16 cohort, we solicited student work through a broad call to faculty, asking them to judge whether they had an assignment that fit the five criteria of the Critical Thinking VALUE Rubric: explanation of issues; evidence; influence of context and assumptions; the student's position (perspective, thesis/hypothesis); and conclusions and related outcomes (implications and consequences) (Association of American Colleges and Universities, n.d.). Our only other condition, based on the requirements of the initiative, was that the work come from students who had completed at least 75 percent of the credits required for graduation.

In addition to submitting work for national scoring, a team of UMass Amherst faculty scored the same work using the Critical Thinking VALUE Rubric, both to see how our scoring compared with the external national scoring and, equally important, to develop a cadre of faculty experienced with the process and able to help evaluate it. Scorers participated in a full-day norming session and then scored student work online, with additional feedback provided by the leader of the norming session in the early stages of scoring.

Once the scoring was completed and results were available, we held a follow-up meeting with all scorers. We organized this session as an informal focus group with specific prompts regarding their views on the overall process, the rubric as it defines critical thinking, the rubric’s fit with the student work assessed, and the potential of this approach for future campus assessments. In addition to this focused conversation with scorers, we also interviewed a set of instructors who had submitted student work for the assessment. In advance of each interview, we sent each instructor the rubric and both a high- and low-scoring paper from their course, asking them to score the work using the rubric. In the interview, we shared the external and internal scores to discuss how their own assessment fit with the external scoring and how they perceived the alignment of the rubric with the assignment from their courses. We also discussed the critical thinking criteria more broadly and how those criteria fit with their own definition and their discipline’s conception of critical thinking relevant to undergraduate student work.

Our conversations with scorers led us to examine the fit of the rubric to the student work submitted, as the scorers indicated a concern that the broad range of assignments represented might affect scoring in a manner that said less about what students could do than about what the assignments asked for. Both qualitative and quantitative analyses of the assignments and student work confirmed this concern. That is, we found that the assignments varied greatly in the kinds of critical thinking they called for, including some that were not well suited to the rubric. Further, a statistical analysis showed significant correlations between average scores and both the length of the student work artifact and the number of external sources the artifact cited (University of Massachusetts Amherst 2017).
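For readers who want a concrete sense of that kind of check, the sketch below shows one way such correlations could be computed. It is purely illustrative: it assumes the artifact-level data (average rubric score, page count, number of cited sources) have already been tabulated, and the column names and values are hypothetical rather than drawn from the actual UMass Amherst dataset.

```python
# Illustrative sketch only: correlating average rubric scores with artifact
# features (paper length and number of cited sources). All values are made up.
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical table: one row per student work artifact.
artifacts = pd.DataFrame({
    "avg_score": [2.1, 3.4, 2.8, 3.9, 1.7, 3.1],   # mean score across rubric criteria
    "page_count": [6, 14, 10, 18, 4, 12],           # length of the paper in pages
    "num_sources": [2, 9, 5, 12, 1, 7],             # external sources cited
})

# Pearson correlation of the average score with each artifact feature.
for feature in ("page_count", "num_sources"):
    r, p = pearsonr(artifacts["avg_score"], artifacts[feature])
    print(f"avg_score vs. {feature}: r = {r:.2f}, p = {p:.3f}")
```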

In our second year of MSC participation (2016–17), we followed many of the same steps for our formative evaluation. We also administered a survey to all scorers to collect their feedback, met with them again for an informal conversation based on the survey, and conducted interviews with selected instructors who submitted work. The findings and observations shared here are drawn from these various sources.

The Value of Participating in the Process

Both years, faculty reported that they found participating in the process worthwhile, particularly for fostering reflection on their teaching and assessment more generally. Scorers commented on the value of both reading a wide range of student work and participating in discussions with colleagues about that work during the norming sessions. As one scorer commented, “I greatly enjoyed seeing work from other disciplines and hearing from faculty across the university.” Others pointed to how the norming discussions and scoring prompted their thinking about what they value when they assess student work: “The experience of discussing assessment of critical thinking with faculty from a range of disciplines has been very useful. I’ve learned from hearing others describe what they look for in student writing and their rationale for assigning certain rubric scores.”

Faculty also saw merit in a departmental approach, saying it could benefit their departments to review and score student work from their own majors as a way to develop a shared understanding of both departmental expectations and their success in helping students achieve those expectations.

In fall 2017, one department used the Critical Thinking VALUE Rubric in an assessment of student work from its capstone course. Other departments are reviewing the Written Communication and Problem Solving VALUE rubrics as possible tools for their assessment efforts; in at least one case, these reviews were catalyzed by faculty who had served as scorers and submitters for the VALUE initiative.

Impact on Pedagogy

Both the scorers and faculty who were interviewed talked about how participating in the assessment process and interviews prompted reflection on their own methods for fostering critical thinking. For example, one remarked, “This experience has helped me think about what students can produce and what they need to help them produce a solid paper.” Others commented that it helped sharpen their own conception of critical thinking. As one faculty member said in an interview, “We all talk about critical thinking, but neither our students nor we have a real definition of it.” A scorer indicated in a survey response, “This process has given me a language for defining critical thinking and has helped me to separate the evaluation of critical thinking from the evaluation of writing.”

Not only did scorers and instructors find the rubric helpful to their own thinking about what they meant by the term “critical thinking,” but they could also see the value of the tool in helping to communicate their expectations to students. In a survey response, one scorer wrote, “The rubric is also a useful starting point for faculty to communicate with students about their critical thinking, particularly in gen ed classes and writing classes.” In fact, following each year of participation in the MSC, a few scorers and instructors talked about using the rubric or a revised version of it in their courses, both for explaining what is entailed in critical thinking and for evaluating student work. One saw it as a tool for designing assignments, commenting, “I would use the rubric to break down components of critical thinking that could then be the focus across different assignments.”

Revisions to the Assessment Process and Rubric

Based on what we learned after the first year of participating in the MSC, we decided to develop more precise guidelines, in addition to those outlined by the national project, for the artifacts we would submit in the second year. To that end, when we invited faculty to participate, we specified the following criteria for student work:

  1. The work comes from an advanced course within the student’s major.
  2. The paper is at least eight pages long and preferably no more than twenty.
  3. The work is a final major paper for the course, preferably one students have had the opportunity to revise before final submission.
  4. The work uses primary or secondary sources.
  5. The work is appropriate for assessment using the criteria identified in the rubric.

Feedback from both scorers and faculty who submitted student work pointed to the need to revise the VALUE rubric, both to clarify certain aspects and to better align it with our local values and student learning goals. For example, faculty were troubled by the stipulation that a high score for the “evidence” criterion required that “viewpoints of experts are questioned thoroughly,” as it seemed to imply that such questioning is always appropriate. As one instructor commented, “I don’t want [students] to challenge expertise when it’s not called for. I want them to think for themselves.” For this reason, we eliminated this stipulation, feeling that evaluating sources and considering others’ points of view, as addressed elsewhere in the rubric, sufficiently covered the intention of this descriptor.

A major revision we made was to delete the criterion “conclusions and related outcomes” because scorers reported having difficulty distinguishing it from “student’s position” in some student work, and because it seemed tailored more to some genres than others. On the other hand, scorers were concerned that overall logical coherence was not addressed as an important aspect of critical thinking. As one scorer said, “There is something holistic missing about the coherence of the whole piece, . . . the logical train of thought of the whole.” For this reason, we added “logical coherence” as a criterion.

We used our revised rubric for our on-campus scoring in 2016–17, which means, of course, that we cannot easily compare our faculty scoring with the national scores beyond the first year of our participation. However, we felt it was more important to be responsive to faculty feedback and make changes that would better fit our context than to ask faculty to use a rubric they found difficult to manage.

The scorer survey responses indicate that our revisions were well received. In response to a question about how well the revised rubric worked for scoring student work, on a five-point scale from “not well at all” to “very well,” all respondents said it worked “fairly” or “very well.” One added, “The rubric worked well with most papers—much better than last year!” Asked about the effectiveness of “the assessment process overall (calibration session, online system, timeline, clarity of purpose, etc.),” the scorers were even more positive, with two-thirds judging it to be “very effective” and the other third “effective.” One termed the calibration sessions “extremely helpful” and another noted that they were “more helpful than last year.” These responses underscored the value of the input we received from scorers and instructors in shaping the revisions we made.

Looking Forward

Our formative evaluation of our first two years of participation in the VALUE initiative demonstrated clear benefits to the individual faculty members who participated. The opportunity to review student work from across disciplines and engage in focused conversations about critical thinking with each other has helped inform their own teaching and communication with students.

Our survey results offer further insights into the potential value of the assessment process. We asked the scorers to indicate the extent to which they felt the assessment process they participated in could be useful at the university, school/college, and department levels. They could see the value of university-level assessment, with six of the twelve scorers indicating it had “great potential” for university-wide assessment. They were even more inclined to see its value at the department level, with nine of the twelve respondents indicating that it had “great potential” for department-based assessment. One survey respondent wrote:

I learned from working with colleagues and assessment. . . . It did reinforce the belief that this is an important area to continue improving. . . . It would be outstanding to have this happen department-wide, because faculty could learn from each other and be more consistent with elements.

What is particularly promising is the extent to which the process has potential both for cross-disciplinary, university-wide assessment efforts and for more focused departmental assessment needs. The departmental efforts are supported and reinforced by the campus’s enhanced program-based assessment plan (the Educational Effectiveness Plan), which streamlines and regularizes departmental planning, budgeting, and assessment into one coordinated process for improving the undergraduate experience (University of Massachusetts Amherst 2018).

With two years under our belt, in 2017–18, we created a hybrid rubric that includes key aspects of both critical thinking and written communication and used it with good results. We also collaborated with the University Writing Program and included a selection of writing by first-year students to expand our understanding of our students’ skills at two key points and to test the applicability of the rubric beyond upper-level capstone work. As we enter our fourth year of participation in VALUE Institute assessments, we continue to fine-tune both the rubric and process. Still, we have increased confidence in using the VALUE approach to inform conversations with faculty about undergraduate student performance and what our results might suggest for changes at the university, department, and course levels. In addition, our campus-developed rubric is emerging as a useful tool for departments to use in their own assessment efforts. We have also been able to build a cadre of faculty with assessment experience who can work with the Office of Academic Planning and Assessment and their own departments and colleagues to build thoughtful assessment approaches that augment the evidence-based inquiry they are already conducting.

Our participation in the VALUE initiative reinforces the student learning assessment outcomes we want to communicate to faculty, students, and the administration. These coincide with the goals of the VALUE initiative: using student work, encouraging faculty participation in determining what criteria will be used for assessment and how the assessment will be conducted, and using the faculty conversations and assessment results to inform and improve student learning and the student experience on campus.

In an interview with a faculty member who submitted student work, we asked how the VALUE assessment process compared with the assessment work her department conducts for their external disciplinary accreditation. She said, “The [professional accreditation] assessment process is not very interesting. This seems more interesting.”

Assessment activities that faculty view as engaging and interesting and that promote self-reflection on teaching have great value to any campus. The faculty on our campus who have invested their time and effort in the process have experienced individual benefits to their teaching and see the potential for realizing the university’s larger goal of enhanced student learning assessment. We look forward to what our future involvement might hold.

References

Association of American Colleges and Universities. n.d. “VALUE Rubrics.” www.aacu.org/value-rubrics.

University of Massachusetts Amherst. 2013. Innovation and Impact: Renewing the Promise of the Public Research University. Amherst, MA: University of Massachusetts Amherst Joint Task Force on Strategic Oversight. www.umass.edu/chancellor/sites/default/files/pdf/jtfso-phase-i-report2.pdf.

University of Massachusetts Amherst. 2017. Report on UMass Amherst’s Participation in the Pilot Valid Assessment of Learning in Undergraduate Education (VALUE) Initiative, 2015–2016. Amherst, MA: Office of Academic Planning and Assessment. http://www.umass.edu/oapa/sites/default/files/pdf/learning_outcomes/final_umass_amherst_value_initiative_participation_report_2016.pdf.

University of Massachusetts Amherst. 2018. “Educational Effectiveness Plan.” https://www.umass.edu/oapa/program-assessment/academic-department-assessment/educational-effectiveness-plan-eep.


Martha L. A. Stassen, Associate Provost for Assessment and Educational Effectiveness; and Anne J. Herrington, Distinguished Professor of English Emerita and Faculty Fellow, Office of Academic Planning and Assessment, both of University of Massachusetts Amherst
