
The Value of Community Building: One Center's Story of How the VALUE Rubrics Provided Common Ground

In an environment focused on research and overhead dollars, it is easy to lose sight of the main purpose of a university, which is to educate students—both as scholars within the disciplines and as citizens within a larger global community. The latter half of that mission has prompted Michigan State University (MSU) to bring attention to the importance of liberal education at Research I institutions. Although their structures and funding sources differ, Research I institutions and small liberal arts colleges share the same goal of helping students master the knowledge and skills that will enable them to become informed citizens who are able to contribute effectively to our democratic society. But how can this transformation be achieved, and what metrics can we use to define success?

To answer these questions, institutions must first identify what students are expected to gain from taking coursework and participating in academic life. These student learning outcomes may represent changes in how students think, feel, perceive, or even act as they learn and undergo various experiences during their college years. Institutions vary in how clearly they articulate expected student learning outcomes, ranging from completely implicit, and therefore unarticulated, expectations to completely explicit lists of the ways in which students should show improvement by the end of their undergraduate tenure. Once learning outcomes have been established, however, the institution must evaluate the extent to which students are making progress in achieving them.

Backward design

According to Keeling and Hersh (2011), the next step after reaching institutional consensus on student learning goals is to link those goals to the general education curriculum. But this sequencing obscures one of the trickiest issues in higher education: how to assess student learning. One way of evaluating learning outcomes at the institutional level is by applying the backward design method, which is more commonly used in instructional development (Wiggins and McTighe 1998). The first step in backward design is to identify desired results—in this case, student learning outcomes. The second step is to determine what constitutes acceptable evidence that the learning outcomes have been achieved by students. This step is accomplished by using newly created or preexisting assessment methods that align directly with the student learning outcomes. For example, if the desired outcome is the ability to evaluate and justify whether an article is scientific or not, then, as part of the assessment, the student should be asked to evaluate and justify whether or not an article is scientific. This example may seem obvious, but it illustrates the optimal level of alignment between an assessment method and the related outcome.

The third step in the backward design process is to ensure proper alignment between instruction and curricula, a step that should result in improved student performance on the assessments. To continue with the example used above, this may include an activity in which students practice examining various types of articles with the goal of discerning the characteristics of scientific articles as compared to nonscientific articles. Students should also be given opportunities to debate their conclusions with others and be required to provide supporting reasoning to justify their decisions. At the end of these three steps, an evaluation of student learning should be conducted. This evaluation should then be followed by the refinement of any of the steps, as needed, to improve alignment and student learning outcomes in future iterations. Just as this process can be followed in an individual class, so too can it be applied at the institutional level.

The alignment of institutional goals and curricula should also involve the alignment of course-level goals and the assessment of student learning, both within individual classrooms and across curricula. The gap between student learning goals and curricula can be bridged using the rubrics that the Association of American Colleges and Universities (AAC&U) developed through its Valid Assessment of Learning in Undergraduate Education (VALUE) project (see www.aacu.org/value/rubrics). Institution-wide learning outcomes, as well as those at the level of the individual course, can be readily aligned with some or all of the outcomes that the VALUE rubrics were designed to assess (Rhodes 2010). These are consensus liberal learning outcomes that have emerged at institutions of all sizes and types nationwide. Some are associated with fundamental skills, such as reading, writing, and mathematics (or quantitative literacy). Others are more nuanced and may not be targeted across all levels of education, and yet graduates entering the workforce are generally expected to have developed them (e.g., creative thinking, ethical reasoning, problem solving, and teamwork). The full set of student learning outcomes should be addressed within and across curricula, rather than isolated within any single course, discipline, or program.

Liberal learning at MSU

MSU recently adopted a set of five liberal learning goals: analytical thinking, cultural understanding, effective citizenship, effective communication, and integrated reasoning. Over the past year, MSU faculty and staff have developed rubrics to assess student progress on each of these goals. The rubrics are currently being evaluated by focus groups composed of faculty and staff from all colleges and instructional resource areas on campus in order to determine how they can be operationalized within and across units. The five liberal learning goals have also been aligned with a previously adopted set of global competencies.

In its approach to general education, MSU is unique among Research I institutions. In 1992, MSU created centers for integrative studies in three areas: arts and humanities, general science, and social science. Because these three centers share primary responsibility for general education and undergraduate liberal learning, and therefore face close scrutiny by institutional accreditors, the associate provost for undergraduate education has requested that they pay particular attention to the assessment of the university’s new liberal learning goals. Accordingly, the goals have been embedded into the syllabi, course materials, and curricula of all three centers. The centers have also begun to evaluate the effectiveness of their curricula and of general education, more broadly. The assessments implemented as part of these efforts are aligned with the particular goals of each center and with the university’s liberal learning goals, with implicit consideration of the global competencies.

Over the past two years, due to an influx of resources and expertise from the College of Natural Science, the Center for Integrative Studies in General Science has emerged as a trailblazer with regard to the large-scale programmatic assessment of the liberal learning goals. An important element of the center’s success has been the collaborative work of an affiliated faculty learning community. While the center has several of its own full-time faculty and staff members, most of the faculty and graduate assistants who teach the courses offered by the center come from departments in either the College of Natural Science or the College of Agriculture and Natural Resources. The lecture and laboratory courses are led by graduate students, postdoctoral scholars, and faculty—including non-tenure-track, tenure-track, and tenured members.

The center offers both online and campus-based courses, as well as international study abroad and United States–based study away experiences. In addition to location, several other factors account for the wide variation among sections of the same course. These include the degree to which the sciences are integrated, the level of student-centeredness in an individual instructor’s teaching approach, the length of the term, and the instructor’s level of experience. This variety of experiences and diversity of expertise have proven to be a boon for discussions of student learning assessment.

Because the center’s instructors are not required to meet regularly, there is a spatial and temporal disconnect among the instructors of courses offered for non-science majors. This challenge has been addressed through the creation of a faculty learning community, a professional development venue through which like-minded faculty convene to discuss common interests (Cox and Richlin 2004). Generally, such learning communities are led by one or more faculty facilitators, but all members have equal say in choosing the topics to be discussed and the training to be pursued. Cosponsored by the College of Natural Science and the Office of Faculty and Organizational Development, the center’s faculty learning community convenes monthly throughout the academic year to discuss programmatic evaluation efforts, goals related to the desired student outcomes for general education, challenges and solutions related to teaching and learning, and new initiatives to improve teaching and learning within the courses offered by the center. The meetings have helped provide the framework and common ground needed for a diverse group of faculty to engage as active learners and participants in shared dialogue.

One center’s road to assessment and community

The Center for Integrative Studies in General Science’s faculty learning community was primed to respond when AAC&U contacted MSU in the spring of 2012 and invited its scientists to help evaluate the rubric for global learning that was then being developed as part of the association’s VALUE project. That spring, participants reviewed the VALUE rubric for global learning individually, using an online survey, and as a group, during a face-to-face meeting of the faculty learning community. Prior to the meeting, the center’s assessment team conducted a thematic content analysis of participant responses to the online survey. This analysis was then used to focus the group discussion. During the meeting, the common themes identified by the content analysis proved useful for stimulating further discussion and generating questions. The meeting also provided an opportunity for faculty to discuss the types of student assignments they had used to evaluate the VALUE rubric.

As a professional development opportunity, the process of evaluating the VALUE rubric provided training in how to use rubrics and, at least potentially, how to incorporate them into current teaching and assessment practices. Faculty participants focused on providing collaborative, iterative feedback for assessment and improvement, including active discussion of the rubric’s strengths and weaknesses, and explored ways to align individual course goals with the rubric—and ways to communicate these goals to students. They also shared effective, innovative instructional activities directly related to the goals of the rubric for use across courses.

In addition to providing AAC&U with feedback that was used to inform the subsequent revision of the VALUE rubric for global learning, the evaluation process provided an opportunity for the center’s faculty to share ideas and resources across their own community of practice—and, therefore, across disciplinary boundaries. The group discussed the center’s next steps in adopting the global learning rubric, or other rubrics, for use in classes. Before participating in the rubric review process, many of the center’s faculty were unfamiliar with rubrics. These faculty in particular gained valuable training in the creation and use of rubrics for their own courses, and all participants came to a better understanding of the use of rubrics as a way to measure and improve instructional efficacy. Engagement as a community, rather than as individual faculty members, ultimately resulted in deeper understanding.

Reflection on rubrics

By engaging with the VALUE rubric, members of the faculty learning community were able to consider the metacognitive aspects of their own teaching, including consideration of where their instruction fits into the broader context of general education science training at MSU. Specifically, they addressed whether the student learning goals included in the VALUE rubric for global learning aligned with their courses and with the center’s curriculum. Faculty were able to recognize both unarticulated alignment with broad institutional goals and disconnects between practices and expectations. Evaluating the VALUE rubric was particularly useful for faculty teaching study away or study abroad courses, since these courses inherently seek to expand students’ global perspectives. Those faculty were especially interested to see how their students would perform on assessments using the global learning rubric and how they could align their course goals more closely with those included as part of the rubric. When participants identified aspects of the VALUE rubric that aligned with goals of their courses, they were able to discuss curricular interventions that they currently use and that could be implemented across courses. The group was also able to generate instructional innovations that would advance the shared goals.

One assignment developed by a member of the faculty learning community asks each student to identify and describe an environmental problem within their region and then to search for a comparable issue in a different country. The students are then asked to compare and contrast the likely efficacy of potential solutions in both locations. After completing the assignment, the students are given the VALUE rubric for global learning and asked to highlight the learning goals of the assignment and to identify their own levels of competency within each of the goals. This assignment addresses issues related to global citizenship and forces students to think about how they view the world and their role in it. In their self-assessments, students commonly rate themselves higher on the VALUE rubric than instructors rate their written responses. But such misalignments can be easily identified and then remediated through a feedback loop.

In a related development, the faculty learning community asked technology and assessment specialists to develop an online version of the rubric using the MSU-specific course management software. This online rubric will be made available to all center faculty who wish to use it in their courses. Results from these online assessments could be used in conjunction with programmatic survey data and course-specific student data to evaluate learning outcomes more holistically at the course, center, or institutional level.

A model for success

The integration of institution-specific goals for student learning with those specified by the VALUE rubric for global learning won broad support at MSU. The importance of evaluating student learning has been communicated across all levels of the institution, and assessment is fundamentally aligned with a core set of learning outcomes embraced throughout the university. The development of MSU’s liberal learning goals and global competencies, along with their respective rubrics, and the adaptation of the VALUE rubrics have set the stage for the institution-wide evaluation of curricula and student learning outcomes.

The success of the effort is due in large part to buy-in from the faculty who volunteered to participate in the faculty learning community, as well as to the financial and other support, such as letters of recognition and participation, provided by the dean of the College of Natural Science and the associate provost for undergraduate education. These administrators have also provided resources to support the assessment of student learning outcomes campus-wide, which has subsequently led to further growth in the community of practice. Through its intentional efforts to maintain institutional focus on the goal of providing undergraduate students with a liberal education, MSU can serve as a model for other Research I institutions.

References

Cox, M. D., and L. Richlin, eds. 2004. Building Faculty Learning Communities. New Directions for Teaching and Learning, no. 97. San Francisco: Jossey-Bass.

Keeling, R. P., and R. H. Hersh. 2011. We’re Losing Our Minds: Rethinking American Higher Education. New York: Palgrave Macmillan.

Rhodes, T. L., ed. 2010. Assessing Outcomes and Improving Achievement: Tips and Tools for Using Rubrics. Washington, DC: Association of American Colleges and Universities.

Wiggins, G. P., and J. McTighe. 1998. Understanding by Design. Alexandria, VA: Association for Supervision and Curriculum Development.


Sarah Jardeleza is assistant professor and associate director of educational research in the Center for Integrative Studies in General Science; April Cognato is assistant professor of zoology; Michael Gottfried is associate professor of geological sciences; Ryan Kimbirauskas is academic specialist in the Center for Integrative Studies in General Science; Julie Libarkin is associate professor of geological sciences and director of educational research in the Center for Integrative Studies in General Science; Rachel Olson is graduate assistant in the Department of Entomology; Gabriel Ording is associate professor of entomology and director of the Center for Integrative Studies in General Science; Jennifer Owen is assistant professor of fisheries and wildlife and large animal clinical sciences; Pamela Rasmussen is assistant professor of zoology; Jon Stoltzfus is assistant professor of biochemistry and molecular biology; and Stephen Thomas is assistant professor of zoology and associate director of the Center for Integrative Studies in General Science—all at Michigan State University.


