
At Texas A&M, faculty members who have had success with assessment encourage their colleagues to get involved. (Photo courtesy Texas A&M)

Taking Assessment University-Wide at Texas A&M

March 2011

Texas A&M University has a pretty simple philosophy when it comes to assessment: Make it useful, make it flexible, and "play with the people who want to play," says Ryan McLawhon, Texas A&M's assistant director of institutional assessment. While this laid-back approach might seem like an oversimplification of one of the most relevant (and fraught) issues in higher education today, it's worked remarkably well at Texas A&M, which has managed to make assessment a regular university-wide activity—despite a student body of almost 50,000 and more than 2,800 faculty members. The university's Office of Institutional Assessment assists faculty members and administrators in assessing student learning outcomes and programmatic effectiveness, and in the past five years, it has introduced and carried out a variety of assessment activities ranging from department-level reviews to large-scale critical thinking assessments across many colleges.

Emphasizing the Positive

Prior to 2006, there had been many discrete efforts to assess student learning at Texas A&M, but nothing lasting. The university was starting to think about its Southern Association of Colleges and Schools (SACS) accreditation, and knew assessment should be a significant part of its preparation. At the time, Pamela Matthews, Texas A&M's associate provost for undergraduate studies, was associate dean in the College of Liberal Arts. "I got really excited about the possibilities for assessment," Matthews says. "I talked to Loraine Phillips, the director of assessment at the university, and I said, 'Let's not just do this—let's make it fun.' A big part of our work on assessment is simply demystifying the process, giving examples of what assessment looks like and how it works. Humor really helps, with frequent reminders that assessment is for them—for the faculty."

Preparation for SACS accreditation provided an opportunity to see what was working and what was not, McLawhon says, but it wasn't the only driver of assessment. "A lot of our motivation is to get more strategic. Being a large research institution, we have a large voice. We've been speaking out about what we believe is quality assessment of general education, and it's caused us to look inward and back up the things we say we believe in," he explains. One of the university's main beliefs is that assessment should be useful to the faculty. By emphasizing this utility, Matthews has been able to get faculty members from many fields on board with assessment. "If you do program-level assessment well, it will give you an awful lot of data about your program," she says.

Another core belief about assessment at Texas A&M is that the results should be easily transferable to curricular improvement efforts. While there are two main reasons that institutions assess programs—accountability and improvement—focusing on the improvement function encourages greater faculty involvement. For this reason, Texas A&M prefers assessment methods like Tennessee Tech's Critical Thinking Assessment Test (CAT), which is scored by trained faculty scorers using rubrics, over tests like the Collegiate Learning Assessment, a standardized test that is scored by computer. The CAT also provides department-level reports of results, allowing faculty to more easily close the assessment loop by making program changes.

Finally, Matthews, McLawhon, and their colleagues in the Office of Institutional Assessment believe that assessment need not be rigidly regulated. Texas A&M's large size means that its departmental assessment efforts are largely decentralized, and will look different depending on who is conducting them. The English department, for example, recently decided to focus on assessing whether inquiry-based classes would help students learn how to write and how to understand humanities research. So department faculty turned all freshman composition courses into inquiry-based courses. One assignment requires students to write essays advancing arguments that stem from research questions of the students' choosing. The department will assess whether these types of assignments help students become better writers. "We have a pretty flexible assessment philosophy," Matthews says. "If it works for you and it's something you want to know about, write up a plan, get some feedback, refine it, and then put it into practice. The flexibility and variability are things we don't just tolerate, but instead try to encourage and foster."

Aligning Assessments with Learning Outcomes

One of the Office of Institutional Assessment's main goals is to measure the extent to which university programs align with the institution's seven undergraduate learning outcomes. These learning outcomes, which were approved in January 2010 after more than a year of collaborative, campus-wide work, include effective communication, personal and social responsibility, and preparation for lifelong learning, among others. The fifteen to twenty assessment efforts conducted each year are divided informally into three "tiers" that include both direct and indirect methods, McLawhon explains. The first tier includes direct measures of learning outcomes applied to large numbers of students. An example is the CAT, which is administered on a three-year rotation across Texas A&M's ten colleges, so three or four colleges participate each year. Similarly, a writing assessment project (WAP) conducted in cooperation with the University Writing Center gathers five hundred upper-level papers from capstone courses each year. The papers are assessed by faculty raters using rubrics, and inter-rater reliability is measured. The WAP also runs on a three-year rotation.

Tier 2 assessments include both direct and indirect measures of student learning, administered to large samples. Examples include the Global Perspectives Inventory, which measures global competence, and the National Survey of Student Engagement. Tier 3 assessments involve indirect measures with smaller samples, such as counts of students who study abroad, join student organizations, or participate in co-op and internship programs. Some smaller direct assessments are also included in Tier 3, like surveys of employers who host co-op students, or pilots of assessments that will eventually fall under the Tier 1 guidelines. "In the past, assessment was something you did to appease your accreditors," McLawhon says. "We're trying to change that. We've thought deeply about the outcomes we want students to achieve, and we're letting the outcomes drive the measures."

Working across Colleges and Departments

While there are challenges to creating momentum for assessment at a large institution, there are also benefits. One of those benefits is that although many faculty will opt out of assessment, the sheer number of faculty means that a significant number will still opt in. One approach that's worked well for Texas A&M is the designation of assessment liaisons for each of the university's ten colleges. Assessment liaisons work with department heads or assessment representatives from each department within the college to provide assistance in conceptualizing, carrying out, and analyzing assessment projects. "The liaisons already have relationships with people in the departments and are really able to encourage assessment," McLawhon says. Departmental faculty benefit from program-level assessment because it provides data that's purely institutional—not reported to Texas A&M's accreditors or other national entities. "Knowing it's department-only data provides a lot of incentive for faculty to get involved," McLawhon explains. "They want to know how their students are doing."

Fran Gelwick, an associate professor in the Department of Wildlife and Fisheries Sciences, has become the assessment expert for her department, which is part of the College of Agricultural and Life Sciences, the largest college of its kind in the nation. Because the department has more than forty faculty members, many of whom hold interdisciplinary appointments, Gelwick hasn't had an easy job. Starting in 2009, she systematically talked to faculty members in the department about what assessment they were already doing and how it might be tied to larger efforts. "Our assessment story is similar to a lot of others'," she says. "We're doing things that we had been doing for quite a while, but we hadn't been specifically quantifying them." Faculty members doing research funded by the National Science Foundation, for example, were skilled at writing reports assessing their projects' effectiveness, and it wasn't a stretch for them to turn the same methods inward toward their departmental work. "I really tried to make faculty understand that we're looking at the program level," Gelwick says. "I'm not trying to grade faculty members!"

Gelwick encouraged faculty members in her department to make curriculum maps showing which of the department's disciplinary learning outcomes—like self-directed learning, effective communication, and collaborative learning—were being assessed at the course level, and how those data might be aggregated at the program or departmental level. A curriculum map about writing, for example, revealed to Gelwick that undergraduate students were getting most of their writing practice in the department's upper-level courses. "This was a big 'aha!' experience, because it showed that we needed more writing early on," she explains. "We now have three new hires who are already using rubrics and assessing undergraduate writing. We're also closing that assessment loop. The temptation before was to say, 'The assessment review is done, we're finished.' But we've changed that quite a bit lately."

For the future, Matthews and McLawhon hope to bring some of Texas A&M's academic divisions into existing projects conducted by the university's Student Life Studies department that assess aspects of student life outside the classroom—for example, how students grow as leaders while participating in cocurricular activities. They also hope to develop a more intentional marketing campaign to help faculty members understand the benefits of assessing learning outcomes. For now, though, they'll continue doing what has worked well: "We're very patient and very deliberate," Matthews says. "But good assessment is like our own pyramid scheme—people draw others in."

Institution: Texas A&M University