Liberal Education

Assessing the Assessment Decade

The North Central Association of Colleges and Schools has stated concisely that "Programs to assess student learning should emerge from, and be sustained by, a faculty and administrative commitment to excellent teaching and learning" (NCA 2000, 32). But excellence seems to represent a moving target. As Winona State University Assessment Director Susan Hatfield (1999) points out, the validation of excellence in higher education has shifted from an earlier emphasis on inputs and processes to a more recent focus on outcomes. This fundamental change is only one of several imbalances in the practice of assessment that cry out for equilibrium.

In "A Matter of Choices," Palomba and Banta (1999, 331) write that assessment can be conducted in various legitimate ways. "As such, the process of planning and implementing assessment programs requires many choices," between philosophically different alternatives. However, these pairs of alternatives need not be seen as mutually exclusive; in fact, they should complement each other in striking "a balance that works." They discuss three critical sets of choices that institutions must face in quest of balanced assessment:

  • improvement versus accountability as motivations for assessment
  • quantitative versus qualitative means of assessment
  • course-based versus non-course models of assessment.

Half of my time is spent as a professor of political science at West Liberty State College, where I served for three years as co-chair of its College Assessment Committee, a role that exposed me to many of the asymmetries found in assessment practice. My instincts as an instructor tell me that Palomba and Banta are right to advocate equilibrium, or homeostasis, with respect to these three choices. I would go even further: gross imbalances bespeak pathological symptoms in academe.

Looking around at current practices at my home institution, at the other institutions in West Virginia, and nationally (as recounted in books, a major research survey, and journals), I see a system rife with disequilibrium concerning these vital issues. That is, I see a system motivated more by accountability than by desired improvement, employing quantitative techniques far in excess of qualitative ones, and conceptualizing the enterprise chiefly as non-course-based. What troubles me about the status quo is that it reveals a profound disconnect between a) the inclusive theory of assessment and b) the exclusive practice of assessment.

The assessment movement practically owned the decade of the nineties in higher education. However, the "assessment of assessment" undertaken in a recent survey of 1,393 institutions, conducted by the National Center for Postsecondary Improvement (NCPI), chronicles disturbingly unimpressive results (Peterson and Augustine 2000). In the first major study to ask exactly what institutions do with the extensive data that previous studies say are being gathered on campuses, the NCPI authors want to know whether assessment data are used profitably; the assessment literature itself posits that student assessment should not become an end in itself but should serve as a means to improve education. The NCPI's baseline conclusion is that "student assessment has only a marginal influence on academic decision making" (21). Among the many valid questions raised by this research are descriptive and prescriptive ones about the nature of the faculty role in gathering and using assessment data.

Faculty role
Leading institutional research (IR) professionals trumpet the axiom that assessment works best when faculty-driven, and Palomba and Banta underscore the point when they argue that "faculty members' voices are absolutely essential in framing the questions and areas of inquiry that are at the heart of assessment" (1999, 10); but current practice almost seems to mock this proposition. Another team asserts that "it is fact that most faculty still have not considered the assessment of student outcomes seriously" (Banta, Lund, Black, and Oblander 1996, xvii). The NCPI study concurs (Peterson and Augustine 2000), reporting that only 24 percent of institutions say faculty members involved in governance are very supportive of assessment activities. An earlier Middle States Association survey (MSA 1996) found that fear of the unknown and heavy workloads contribute to pervasive faculty resistance to assessment.

Many professors actively engaged in assessment have expressed thoughtful criticisms of the current modus operandi. In particular, instructors lack confidence in assessment's relevance (applicability to classroom teaching and learning), validity (truly measuring learning outcomes), proportionality (institutional benefits commensurate with the effort devoted to assessment), and significance (answering the question that comes naturally to academics: So what?). Addressing these concerns is essential to the movement's goal of developing an assessment culture on campus. My own experience leads me to hypothesize that many faculty involved in assessment have failed to prioritize it above competing agendas. And what results when professors relegate assessment to such second-class citizenship? Initiative for assessment defaults to IR professionals, who typically are not teachers.

For those professors truly infected by the virus of skepticism, one antidote consists of a healthy dose of qualitative methods, or soft data. Assessment's practitioners have clung to quantification, a syndrome critics call the "data lust" fallacy. The 1999 NCPI national survey found that the norm consists of institutions using "easily quantifiable indicators of student progress and making only limited use of innovative qualitative methods" (Marchese 1999, 54). Yet it strikes me as naive for IR specialists to expect overreliance on empiricism to capture either the hearts or the minds of skeptical instructors.

Qualitative methods
One pair of advocates for greater reliance on qualitative assessment believes that a pervasive myth needs to be disputed: the assumption that, because qualitative methods communicate in words rather than numbers, they are less rigorous. The authors contend, however, that "These methods, when applied with precision, take more time, greater resources, and certainly as much analytical ability as quantitative measures" (Upcraft and Schuh 1996, 52). Another observer finds that the flexibility of qualitative techniques allows them to operate in a more natural setting and "permit the evaluator to study selected issues in depth and detail" (Patton 1990). An underlying reason why assessment features quantification may be that numbers are more easily processed by state legislators and external governors--the influential individuals applying pressure for institutional accountability.

Once soft data have been added to campus assessment, another antidote for the skepticism infecting some faculty is an equally strong dose of course-related process and content. Put simply, process relates to the heuristic "how" of teaching and learning; content refers to the heuristic "what." These topics embrace what faculty know and care about, and they can be expressed in language congenial to the professoriate. The typical approach of using standardized tests to measure student outcomes in areas such as mathematics, writing skills, critical thinking, and computer literacy is useful but insufficient. Free-standing outcomes testing offers only an amorphous feedback loop back to the classroom.

Practitioners relying exclusively on outcomes testing exhibit something of the myopia lampooned by Plato in his Allegory of the Cave. Plato's mythic prisoner, chained in a manner allowing him to see only shadows of life on the cave wall--not life itself--parallels those willing to settle for shadows of the educational process as opposed to genuine education. The 1999 NCPI research supports this line of reasoning, finding that "relatively few links exist" between measures of student assessment and the faculty's classroom responsibilities. Germane to this gap is Palomba and Banta's assertion that "integrating assessment activities into the classroom and drawing on what faculty are already doing increases faculty involvement" (1999, 65). Emulating best practices rather than worst practices is axiomatic, and an NCA assessment consultant recently praised Winona State University for the incentives devised there to foster faculty participation in assessment activities (López 2000). Not coincidentally, the half-time director of assessment at Winona State, Susan Hatfield, spends the other half of her time teaching in the communications department.

Therefore, pedagogical process and content pertinent to the faculty mindset ought to be blended liberally into the assessment mix. But too seldom does this happen. A well-known advocate of Classroom Assessment Techniques (CATs) contends that the one-minute paper (now used in over 400 courses at Harvard) provides valuable feedback from student to instructor, quickly and efficiently, making it a technique worth emulating (Cross 1998). One program steeped in CATs operates at Raymond Walters College of the University of Cincinnati and uses the course grading process for both departmental and general education assessment. Notably, the mind behind assessment at Raymond Walters is a chemistry professor, Janice Denton, who splits her time between the classroom and administering assessment. Her consultancy at my home institution impressed me as replete with creative ideas. However, direct results there elude detection. I sense that the key players (department chairs) accept many of Denton's ideas but do not know how to apply the concepts to their own bailiwicks. Because I believe that a rigorous course syllabus can provide concrete hooks grounding assessment in the classroom experience--hooks department chairs surely understand and ought to value--I have begun conducting seminars there on the model syllabus as an assessment tool.

The syllabus
The other half of my time is spent at West Virginia University, where I serve as co-director of a statewide international studies consortium (FACDIS) that includes all twenty of West Virginia's public and private institutions. This role has given me an appreciation for the ability of rigorous course syllabi to enhance both faculty and course development. For two decades, FACDIS has relied on improving course syllabi as its principal means of holding faculty accountable. The consortium involves 375 faculty from more than fifteen disciplines in projects supported by a combination of state funds and $1.5 million in competitive external grants, and it has received two prestigious national awards in the process.

The vital resource of an exemplary course syllabus can link assessment to the classroom, and it can also generate innovative soft data germane to pedagogical process and content. A recent article develops the case for more sophisticated course syllabi (Strada 2000). Just as the last thing a fish would notice is water, academics tend to overlook the value of a comprehensive course syllabus. It seems too prosaic for some higher education professionals to take seriously. But despite operating largely in obscurity, a nascent body of literature appreciative of the syllabus's diverse contributions is beginning to emerge (Altman and Cashin 1992; Birdsall 1989; Grunert 1997). One of the most ambitious examinations of the syllabus considers course content, course structure, mutual obligations, and procedural information basic necessities, but it also advocates a truly "reflective exercise" serious enough to improve courses by clarifying hidden beliefs and assumptions as part of a well-developed philosophical rationale for the course (Grunert 1997). Ideally, I look for some aspect of a professor's academic soul to shine through the pages of a thoughtful syllabus.

Benefits of good syllabi
The potential benefits of creating more complex syllabi fall into three categories. First and foremost, good syllabi enable student learning by improving the way courses are taught. This benefit seems self-evident to veteran instructors who have worked to improve a syllabus; they know how it adds efficiency to organizing the course, saves time in future semesters, and establishes a paper trail highlighting the good things they already do in the classroom.

Such intuitive insights are bolstered by a study of commonalities among Carnegie Professors of the Year recognized by the Council for Advancement and Support of Education (CASE). University of Georgia management professor John Lough conceived the idea of dissecting the behavior of CASE Professors of the Year to see what makes them tick--a form of best-practices benchmarking. The universal common denominator Lough cites is that "Their syllabi are written with rather detailed precision. Clearly stated course objectives and requirements are a hallmark. They employ a precise, day-by-day schedule showing specific reading assignments as well as all other significant requirements and due dates" (Lough 1996, 196).

Closely related to energizing teaching and learning is a second benefit of sophisticated syllabi, one that remains more opaque to academic eyes: their use in faculty evaluation. A recent book purporting to explain comprehensively the duties of department chairs fails to include the word syllabus in its index, and I could not locate the "s" word anywhere in the book's 279 pages (Leaming 1998). An elegant syllabus includes lesson plans, which provide the only true road map of what is really being taught in a course and how it is being taught. Yet the very mention of lesson plans is summarily dismissed by too many higher education faculty and administrators as pertinent only to secondary schools (and therefore beneath us).

Yet my experience tells me that, because the process is cumulative, lesson plans help to establish an upward course trajectory from semester to semester: one no longer backslides by forgetting something effective done five years ago or by failing to ground a trial balloon that didn't fly last time out. In the one course that I teach every semester, I revise lesson plans immediately after class. In this way, they evolve much as a script does when its author keeps pecking away at it.

Precise lesson plans also provide something of a pedagogical insurance policy for institutions that find themselves with aging faculty. If illness strikes, good lesson plans help to protect the academic integrity of what transpires in the professor's absence. Furthermore, since the comprehensive syllabus and its lesson plans are underappreciated, it is not surprising that academic administrators rarely grasp the syllabus's pertinence to promotion and tenure decisions.

Completely absent from the extensive assessment literature is any hint that the exemplary course syllabus is a player on the academic stage. This is unfortunate, because a fine syllabus contains what is tantamount to the DNA code for an endangered species: qualitative assessment that is creative and relevant to curricula. Curricular structures matter, and the solid planning embodied in worthy syllabi yields dividends that help bolster curricular integrity. More important, dense syllabi allow us to forge substantive links among the three curricular levels of the academy, which researcher Robert Diamond says currently proceed in random directions: individual courses, programs of study at the departmental level, and general education programs at the institutional level. The disconcerting result, claims Diamond, is that most free-wheeling curricula "do not produce the results that we intend" (1998, 2). Another higher education analyst similarly bemoans this curricular randomness, suggesting that "institutions tend to frame policies at the global level, leaving the specifics of learning to disciplines comprised of single courses, and those disciplines seldom have the necessary resources" (Donald 1997, 169).

Linking these curricular levels in meaningful ways requires holding faculty accountable without violating their sense of academic freedom--which may happen if they are told what they should teach (content) or how they should teach it (process). Only sophisticated syllabi provide detailed and accurate snapshots of how content and process come to life in the classroom. Only thoughtful syllabi afford instructors the breathing space to reveal their pedagogical essence, thus facilitating scrutiny without rigid or heavy-handed directives. Only serious syllabi provide extensive soft data to augment the hard data routinely generated to satisfy the demands for curricular accountability emanating from oversight bodies.

I am passionate about the virtues of solid syllabi because I have seen them bear fruit in the efforts of the FACDIS consortium and in my own classroom. However, while sophisticated course syllabi can legitimately serve either faculty evaluation or college assessment, it is a cardinal principle in the assessment literature that the two processes should not overlap at any given institution, lest a conflict of interest arise between assessment and faculty evaluation.

A place for creativity
IR professionals can help the course syllabus emerge as the fulcrum linking the three curricular levels of the academy. To do so, they would benefit from insights gleaned from educational psychologist Robert Sternberg (1995), who attacks standardized testing (the norm in educational assessment) for its failure to incorporate the crucial element of creativity. Thirty-two years as a teaching professor in higher education have convinced me that the value of creativity in solving academia's problems remains underappreciated.

The academy loves science but mistrusts experiential insight. Consequently, higher education tends to undervalue creativity. In 2000, the president of my home institution presented a keynote address to the Association for Institutional Research, challenging IR people to think more creatively. Accordingly, I recommend balancing assessment with more soft data, greater concern for the improvement of instruction, and the development of course-based efforts. Fortunately, the sophisticated course syllabus can be employed to realize each of these worthy ends more comprehensively than the portfolios and capstone courses usually cited in the literature as exemplars of creative assessment.

Summing up
The institutional research literature's best-case scenario--that assessment efforts be faculty-driven--makes good abstract sense. However, in the real world of widespread faculty skepticism about assessment, wisdom counsels that IR professionals nurture faculty support more creatively, preferably where faculty live: in and around the classroom. The common polemical cement housing both administrators and faculty is still damp enough to preclude predicting the future with any certainty.

Four scenarios seem plausible for the next decade: 1) assessment as faculty-driven; 2) assessment as faculty-supported; 3) assessment as faculty-tolerated; and 4) assessment as faculty-denigrated. In my view, the first option is an ideal type that will occur only rarely, under special circumstances. The second option is feasible if assessment practitioners endeavor to engage the concerns of relevance, validity, proportionality, and significance that rankle the professoriate. I see the third option, the status quo, as likely to continue unless all parties involved begin thinking more creatively and more critically. The fourth option, the worst-case scenario, should not be discounted as impossible. Realistic faculty know that the age of accountability will not soon disappear, but unless assessment is constructively linked to the courses they teach, even their acquiescence cannot be taken for granted.

The North Central Association's extensive 1999 review of a decade of assessment concludes somberly (much like the NCPI) that "where key faculty have not claimed ownership, or participated wholeheartedly and in large numbers, institutions have had great difficulty in launching and developing their assessment programs" (López 1999, 9). This comprehensive NCA document places a great deal of emphasis on the gravity of opposition from "faculty leaders" (as opposed to rank-and-file faculty). The corrosive scenario of influential senior faculty speaking out against assessment is something that "institutions are reluctant to bring up in conversation or written documents," but if not carefully defused, it can become the "most persistent and deleterious" of all the obstacles to successful assessment (11).

As a coda, it appears that the assessment literature is unaware of another relevant resource. If administrators and faculty hail from analytically distinct planets, their differences can be bridged innovatively by those few "split personalities," like Janice Denton (Raymond Walters College) and Susan Hatfield (Winona State University), who teach half-time and hold academic rank while running exemplary assessment programs as their alter ego. Having engaged in similar 50/50 time structuring for twenty-two years, I call this situation the Lokai role (for Lou Antonio's character in the original Star Trek). Lokai is black on his left side and white on his right side, exactly the opposite of a rival race colored white on the left side and black on the right side. To Star Trek's audience, Lokai and his enemy seem barely distinguishable, but to the protagonists they might as well come from different planets. Some risks may exist for people who play Lokai roles on campus, but those are personal risks. Institutions with individuals performing Lokai roles can use these human resources to humanize communication between faculty and administrators, thereby breathing life into what is often a moribund endeavor.


Michael J. Strada is professor of political science at West Liberty State College and visiting professor at West Virginia University.


Works Cited

Altman, H., and W. Cashin. 1992. Writing a syllabus. Center for Faculty Evaluation and Development, Kansas State University.

Banta, T., J. Lund, K. Black, and F. Oblander, eds. 1996. Assessment in practice: Putting principles to work on college campuses. San Francisco: Jossey-Bass.

Birdsall, M. 1989. Writing, designing, and using a course syllabus. Office for Effective Teaching, Northeastern University.

Cross, K. P. 1998. Classroom research: Implementing the scholarship of teaching. In T. Angelo, ed., Classroom assessment and research: An update on uses, approaches, and research findings. San Francisco: Jossey-Bass, 5-22.

Diamond, R. 1998. Designing and assessing courses and curricula: A practical guide. San Francisco: Jossey-Bass.

Donald, J. and G. Erlandson, eds. 1997. Improving the environment for learning: Academic leaders talk about what works. San Francisco: Jossey-Bass.

FLAG Web site: The field-tested learning assessment guide. http://www.wcer.wisc.edu/cll/flag/

Grunert, J. 1997. The course syllabus: A learning-centered approach. Bolton, Mass.: Anker.

Hatfield, S. R. 1999. Best practices in assessment: Building an assessment culture. Presentation at Minnesota State Colleges and Universities (MNSCU) Assessment Day Workshop. St. Paul, MN.

Leaming, D. 1998. Academic leadership: A practical guide to chairing the department. Bolton, Mass.: Anker.

López, C. 1999. A decade of assessing student learning: What we have learned; what's next? Paper presented at the 104th Annual Meeting of the North Central Association Commission on Institutions of Higher Education.

----. 2000. The faculty role in assessment: Using the levels of implementation to improve student learning. Workshop presentation, Fairmont, WV.

Lough, J. R. 1996. The Carnegie Professors of the Year: Models for teaching success. In J. K. Roth, ed., Inspiring teaching: Carnegie Professors of the Year speak. Bolton, Mass.: Anker, 212-25.

Marchese, T. 1999. Revolution or evolution? Gauging the impact of institutional student assessment strategies. Change, September/October, 53-58.

Middle States Association of Colleges and Schools. 1996. Framework for outcomes assessment. Philadelphia: Middle States Association.

North Central Association of Colleges and Schools. 2000. Assessment of student academic achievement: Levels of implementation. Addendum to the Handbook of Accreditation.

Palomba, C. and T. Banta. 1999. Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.

Patton, M.Q. 1990. Qualitative evaluation and research methods. 2nd ed. Newbury Park, CA: Sage.

Peterson, M. and C. Augustine. 2000. Organizational practices enhancing the influence of student assessment information in academic decisions. Research in Higher Education 41 (1): 1-47.

Sternberg, R. 1995. Defying the crowd: Cultivating creativity in a culture of conformity. New York: Free Press.

Strada, M. 2000. The case for sophisticated course syllabi. In D. Lieberman, ed., To improve the academy. Bolton, Mass.: Anker.

Upcraft, M. and J. Schuh. 1996. Assessment in student affairs: A guide for practitioners. San Francisco: Jossey-Bass.


To respond to this article, e-mail: liberaled@aacu.org, with author's name on the subject line.
