
Engaging Assessment: Applying Civic Values to Evaluation

At the 2015 Annual Meeting of the Association of American Colleges and Universities in Washington, DC, we asked participants in our session on approaches to assessing civically engaged student learning, “Would you describe your own assessment initiatives as involving meaningful effort, offering staff and faculty rich feedback about your work, and providing opportunities for you to engage across differences as you reflect on the people your students are becoming?”

Not a single person in a room of more than fifty raised a hand. This did not surprise us. We were testing a bold idea: that as faculty and staff charged with assessing community-engaged student learning, we can let the values that underpin our civic engagement work shape our assessment efforts as well. This practice, we have found, is an effective tool for institutional cultural change, allowing us to reclaim assessment as a generative, inquiry-based cycle whose processes—and not only products—further our center’s mission.

From Frustration to Meaning-Making

At the University of Richmond’s Bonner Center for Civic Engagement (BCCE), we have been exploring the idea that the values that guide our efforts to support engaged student learning can also guide our assessment practices. This theme is also emerging in national conversations. At an Imagining America (IA) preconference meeting in 2015, for example, Patti Clayton and members of IA’s Assessing the Practices of Public Scholarship research group (of which Sylvia Gale is a part) invited participants to explore what it would mean to “walk the talk” in assessment of community engagement, considering “how and why we might best enact the values of community engagement in assessment practices.”

Yet the frustrations that surface as we talk with colleagues around the country make it clear that many of us still work in assessment cultures that do not feel like our own because they do not emerge from our own questions, interests, and commitments. We can distill these frustrations into three common complaints:

  • Assessment is about collection, not reflection. Assessment is a routine activity, not a creative one. We collect information, but we rarely make meaning of the data in ways that bring fresh understanding to our work.
  • Assessment is something done to us rather than by us. We are responding to external requests rather than pursuing our own inquiry about our work and its impact.
  • Assessment is something experts do best. Assessment relies on specialized knowledge; therefore, doing it well demands that we outsource it when possible. 

These complaints echo the kinds of barriers others have observed in their work on effective outcomes assessment (see, for example, Banta and Blaich 2011). Such frustrations point, in part, to gaps between the predominant assessment tools at our disposal and the complex, iterative, community-engaged learning we are trying to measure (including the learning that occurs through community-engaged signature work). As David Scobey writes, “I have rarely seen evaluative tools that do justice to my experience or that of my students,” an experience that he describes as rooted in “meaning-making and reflection,” which in turn “nurture[s] the student’s capacity for self-making and engagement (ethical, civic, vocational) in the world” (2009, emphasis in original). This critique is especially sharp coming from academic humanists like Scobey, yet it resonates deeply with our own experiences.

Eager to disentangle ourselves from the frustrations named above, we began several years ago to create assessment processes that resonated with our center’s four values: lifelong learning, collaboration, full participation, and intentionality. These values originally emerged from careful consideration of what guides our programs and partnerships, and they have come to define the processes we employ to build relationships, make decisions, and do our work. Attending to these values in our assessment efforts has allowed us to invite our staff, faculty, and partners into the very culture of authentic, collaborative, purposeful inquiry that we want to cultivate in, for, and with our students. 

Collecting and Reflecting

Our most significant means of cultivating a culture of inquiry related to community-engaged student learning is a method we call the “data lab.” In a data lab, stakeholders in a program, class, or shared experience gather to look carefully at artifacts (data) that emerge from their collaborations. The artifacts originate in classes our center supports or programs we administer, and may include reflections written by Bonner Scholars (who have committed to a four-year program of sustained community engagement), student papers drafted in connection with community-based learning experiences, and site evaluation surveys. We organize the data in stations, and participants cycle through each station, working alone or in groups to complete a visual or verbal interpretive experiment as they interact with the collected artifacts. The data lab helps us to deepen our understanding of student learning across our programs, and in turn to develop and refine those programs using evidence. Yet its fundamental goal is different: to build a culture of inquiry among our colleagues and allies, in part by opening dialogue about foundational concepts relevant to our work and by engaging our entire team in the assessment process.

We conclude each data lab by asking two simple questions: What are we learning about [focus of the data lab] from this data? and What else do we wish we knew? The first question unifies our inquiry and keeps us from slipping into a critique of the specific program or initiative that produced that data lab’s artifacts. The second question reveals important gaps in our data collection processes and, perhaps most importantly, points us toward future directions for inquiry. This question about what’s missing has been particularly fruitful. For example, in a recent data lab, we examined end-of-year surveys in which students reflected on the skills they were learning through civic engagement. Our analysis led us to ask, “How are students using their skills to build the capacity of our nonprofit partners?” We have since modified the Bonner Foundation’s capacity-building survey, completed by Bonner Scholars at the end of the year, to capture more nuanced answers to that question.

Assessment “Experts” at Play

Data labs are generative. But they are also playful. In fact, we have learned that creative interaction with the data is critical to a data lab’s success. While it is tempting to simply let participants discuss artifacts in familiar ways, we instead strive to identify distinct themes for data labs and surprising protocols or instructions for each data “station.” Creativity matters, we’ve found, because it freshens people’s relationships with the question, “What are our students learning?” The playfulness that characterizes a data lab moves staff away from sensitivity about the success or shortcomings of their own programs, and toward inquiry about the bigger picture of our work.

In our most recent data lab, for example, which focused on students’ understandings of their own identities and the identities of others, our colleagues entered the room and walked through the magical gates of Wonderland—a carnival that featured four stations: a Ferris wheel, bumper cars, a duck-shooting gallery, and an opportunity to design your own ride. Each station featured one type of artifact and instructions for analyzing it. For instance, at the design-your-own-ride station, data lab participants read the following instructions:

The artifacts are three Presentation of Learning (POL) videos. [Note: POLs are ten-minute presentations in which senior Bonner Scholars respond to the question, “How has civic engagement affected you?”]

Your instructions are to watch one POL. When you are done watching, draw a carnival ride that demonstrates what the student learned about his/her identity and the identities of others through civic engagement. Use labels and comments to explain how the ride’s design represents the student’s understanding of identity. Name the ride for bonus points! 

This exercise was useful in two ways. First, it forced us to slow our individual processing of the information in order to imagine and draw a representation of a student’s understanding of identity (see fig. 1 for an example). Second, the accumulation of these images allowed us, together, to connect and synthesize our examinations of individual artifacts in a way that transformed our larger understanding. While this was fun, it was not easy. For BCCE staff, who work across three locations and do not see each other daily, data labs spur critical discourse and curiosity about student learning in a structured, generative, and at times complicated way.

Figure 1. A carnival ride drawn by a data lab participant to represent a student’s understanding of identity.

At the end of the Wonderland data lab, we considered, as a group, our two customary data lab questions—in this case, What are we learning about students’ identity development? and What else do we wish we knew about students’ identity development? Though the data lab focused on an established BCCE student learning objective, the discussion that ensued made it uncomfortably clear that, even among ourselves, we had multiple understandings of the concept of identity. Our questions at the conclusion of the data lab highlight how complicated the foundational concepts of our work can be:

  • How does the center define identity when we work with students? 
  • When we discuss identity, what is important? How do we talk about it? Do we reinforce frameworks, or do we teach students to look beyond them?
  • How do students’ majors or courses of study—or their career searches—influence their thinking and understanding of identity?

The data lab method enables us to see data in new ways and allows our own questions to emerge so that we become fully invested. We left Wonderland with a more nuanced and complicated understanding of ourselves, each other, and our students’ experiences—but also, perhaps most importantly, with questions to guide our continued shared inquiry.

Where Is the Answer?

After visiting Wonderland, a new staff member confided, “It was great and I learned a lot. But I don’t understand—what is the answer?” The culture in which assessment means checking for right answers is entrenched, and it has, ironically, robbed many of us of opportunities to learn about and from our own work. When we treat assessment as being primarily about finding out whether or not students learned what we wanted them to learn, we do not do justice to our students’ meaning-making experiences, or to our own. By emphasizing the data lab, held two to three times a year, as a cornerstone of our assessment cycle, we are not rejecting more conventional assessment measures. But we are shifting the paradigm for what assessment of engaged learning can involve, and we are challenging the notion that any one person can “do” assessment alone.

Our data labs are part of a larger assessment ecosystem that includes the traditional report we complete for our campus’s Office of Institutional Effectiveness. But instead of merely collecting data for that report, we first collect and share data internally, and we reflect and learn together. We believe that our mission is furthered more by this emphasis on process than by a completed assessment product alone. In the short term, embracing the messiness of the data we collect, and using data labs to inspire open-ended conversations, makes us better at giving students space for more authentic reflection. For example, we now lead seniors working on their Presentations of Learning through a multistep reflection process that includes a weekend retreat. In the longer term, having been to Wonderland together with our colleagues, we can never “unhear” what we heard about the complexity of identity, and the complexity of our roles within the university. These complexities will thread their way into our conversations as we visit our community partners, plan new community-based learning classes with faculty, reflect with students, participate in working groups for the university’s new strategic plan, and, of course, develop our next evaluation plan.

What we are describing is a culture of assessment that mirrors and models the kind of learning we want students to do—learning that is intentional, so we own the questions being asked; learning that is collaborative, because shared inquiry changes cultures; learning that values full participation so that multiple perspectives are heard; and learning that is generative and points, always, to new questions to pursue.

References

Banta, Trudy W., and Charles Blaich. 2011. “Closing the Assessment Loop.” Change 43 (1): 22–27.

Clayton, Patti, Joe Bandy, Sylvia Gale, Pam Korza, Lisa Lee, and Stephani Woodson. 2015. “Reimagining Assessment.” Abstract for preconference session presented at Imagining America’s National Conference, Baltimore, Maryland, September.

Scobey, David. 2009. “Meanings and Metrics.” Inside Higher Ed, March 19. https://www.insidehighered.com/views/2009/03/19/scobey.


Terry Dolson is manager of community-based learning in the Bonner Center for Civic Engagement at the University of Richmond; Bryan Figura is director of the Bonner Scholars program in the Bonner Center for Civic Engagement at the University of Richmond; Sylvia Gale is director of the Bonner Center for Civic Engagement at the University of Richmond.
