Peer Review, Summer 2004, Vol. 6, No. 4

Everything I Needed to Know about Averages... I Learned in College

Several months ago, the conservative-leaning American Council of Trustees and Alumni (ACTA) excoriated America's leading colleges and universities with a report documenting the "failure of general education" (ACTA 2004). Among many cited shortcomings, one--emphasized in bold face in the opening paragraph--is that "mathematics is no longer required at 62% of the examined institutions."

Much could be said about the educational merits of traditional core curricula or the political agendas served by debates about the core. But that is not what I found most interesting about this report. Rather, it was the messages hidden in the fine print. There, in the endnotes, lie intriguing clues about collegiate mathematics--both about its place in general education and its role in the ACTA study.

First, many colleges and universities call this core requirement not "mathematics" but "quantitative reasoning," although variations abound: "quantitative or formal reasoning," "mathematical thinking," "mathematical and logical analysis," "quantitative and deductive sciences," "formal reasoning and analysis," or "quantitative and deductive reasoning." All of these stress the processes of mathematics (reasoning, deduction, analysis) rather than its components (algebra, geometry, statistics, calculus).

Second, these requirements are often fulfilled with courses that help students build connections between mathematics and other subjects, courses that reveal how quantitative reasoning is used across the entire spectrum of collegiate studies:

  • Counting People;
  • Economics and the Environment;
  • Health Economics;
  • Introduction to Energy Sources;
  • Introduction to Population Studies;
  • Language and Formal Reasoning;
  • Limnology: Freshwater Ecology;
  • Maps, Visualization, and Geographical Reasoning;
  • Practical Physics: How Things Work;
  • Quantifying Judgments of Human Behavior.

Here's what caught my attention: in every case where colleges allowed students to fulfill a quantitative reasoning requirement with courses such as these, the ACTA study judged the institutions as not including "mathematics" in their core curricula. These colleges wound up on the 62 percent blacklist. But colleges that required a course in college algebra--whose pièce de résistance is the manipulation of negative fractional exponents--were checked off for having a suitable "mathematics" core requirement.

Quantitative Literacy

This ACTA analysis demonstrates the presence of "two mathematics" (see Bernard Madison's article in this issue). One is an abstract, deductive discipline created by the Greeks, refined through the centuries, and employed in every corner of science, technology, and engineering. The other is a practical, robust habit of mind anchored in data, nourished by computers, and employed in every aspect of an alert, informed life. This is what these many colleges call "quantitative reasoning," what many other countries call "numeracy," or what I'll call "quantitative literacy" (or QL for short).

Although clearly related, quantitative literacy and mathematics are not the same. Whereas mathematics rises above context, QL is anchored in context. Whereas the objects of mathematical study are ideals (in the Platonic sense), the objects of QL are data, generally measurements retrieved from some computer's data warehouse. Because quantitative reasoning relies on concepts first introduced in middle school--averages, percentages, graphs--many believe that QL is just watered down mathematics (and thus should not satisfy a "mathematics" requirement). Some academics, typically mathematicians, argue that students should complete QL by the end of high school; in this view, it is not a central (or even proper) responsibility of higher education. Others, typically not mathematicians, argue that QL is too important to be left to mathematicians, whose training inclines them more toward Platonism than earthly practicality.

The issue of the core curriculum raised in the ACTA study is exactly the central issue for quantitative literacy. Whereas college-level mathematics typically serves preprofessional purposes (as a prerequisite for particular courses), quantitative literacy is essential for all graduates' personal and civic responsibilities. College-level quantitative literacy is inextricably connected to virtually all areas of undergraduate study.

Understanding compound interest is a trite staple of QL expectations, but it is nonetheless a good example whose significance is not truly manifest until students are of college age. Only when students become responsible for their own loans do the formal calculations they may have learned in eighth grade become personally meaningful. (Few adults realize the extraordinary difference even a quarter-percent change in interest rates can make on payoff time for a fixed payment loan.) More generally, it is in college where many students study historical events and first become personally engaged in social and political causes whose roots often lie just beneath the surface in the financial conditions of individuals or states. The habit of thinking quantitatively--even more, of seeking quantitative evidence--requires repeated practice in many different contexts. For that reason, many colleges have replaced course requirements (whether in mathematics or QL) with programs of "QL across the curriculum."
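The point about interest rates is easy to verify. The short sketch below simulates a fixed-payment loan month by month (the principal, payment, and rates are invented for illustration) and shows how a quarter-point rate increase stretches the payoff time:

```python
def months_to_payoff(principal, annual_rate, payment):
    """Simulate a fixed-payment loan with monthly compounding.

    Each month, interest accrues on the balance and the fixed payment
    is subtracted; returns the number of months until the balance
    reaches zero.
    """
    monthly_rate = annual_rate / 12
    balance = principal
    months = 0
    while balance > 0:
        balance = balance * (1 + monthly_rate) - payment
        months += 1
    return months

# Hypothetical loan: $200,000 repaid at a fixed $1,300 per month.
base = months_to_payoff(200_000, 0.0600, 1300)     # 6.00% annual rate
quarter_more = months_to_payoff(200_000, 0.0625, 1300)  # just 0.25% higher

print(base, quarter_more)  # the quarter point adds over a year of payments
```

With these numbers the payoff takes roughly two dozen years either way, but the quarter-point difference alone adds well over a year of monthly payments--exactly the kind of consequence that formal eighth-grade calculations never make vivid.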

Less obvious, perhaps, than compound interest are the many examples of public policy issues requiring voters' attention that depend significantly on subtle quantitative reasoning. I'm not referring to obvious, although nonetheless complex, issues such as projecting future deficits or counting votes accurately, but to situations where quantitative traps lie hidden beneath routine calculations of percentages and averages. I offer a few examples from issues in public education; similar examples abound in every area of public policy.

Percentages

Major problems beset public education, leading to significant gaps in performance and to high dropout rates. Measuring the gap between expectations and accomplishment is a complex, multidimensional challenge that every parent recognizes as a task requiring judgment and interpretation. But measuring dropout rates seems simple: just apply the formula for percentages everyone learned in the seventh grade. Here's one result, as reported by the New York Times on August 13, 2003, under the headline "The ‘Zero Dropout' Miracle" (Winerip 2003):

Robert Kimball, an assistant principal at Sharpstown High School [in Houston], sat smack in the middle of the "Texas miracle." His poor, mostly minority high school of 1,650 students had a freshman class of 1,000 that dwindled to fewer than 300 students by senior year. And yet--and this is the miracle--not one dropout to report!

Nor was zero an unusual dropout rate in this school district that both President Bush and Secretary of Education Rod Paige have held up as the national showcase for accountability and the model for the federal No Child Left Behind law. Westside High here had 2,308 students and no reported dropouts; Wheatley High 731 students, no dropouts. A dozen of the city's poorest schools reported dropout rates under 1 percent.

Now, Dr. Kimball has witnessed many amazing things in his 58 years. Before he was an educator, he spent 24 years in the Army, fighting in Vietnam, rising to the rank of lieutenant colonel and touring the world. But never had he seen an urban high school with no dropouts. "Impossible," he said. "Someone will get pregnant, go to jail, get killed." Elsewhere in the nation, urban high schools report dropout rates of 20 percent to 40 percent.

A miracle? "A fantasy land," said Dr. Kimball. "They want the data to look wonderful and exciting. They don't tell you how to do it; they just say, ‘Do it.'"

As it turns out, there are a number of different ways to "do it," each with its own justification. Finding one that produces the desired answer of zero may take some effort, but it is not beyond the realm of plausible quantitative argument. One simple way is to divide the number of high school graduates by the number of entering freshmen four years earlier, and then to subtract from 100 percent. If a high school is growing, it is not unreasonable that this dropout calculation yields a number close to zero. Another approach--the one used in higher education--tracks a specific entering cohort of students through their four years of high school, ignoring all other students in the school (e.g., transfers). A third common method is to classify the reasons students leave school each year (transfer, work, jail, death, dropout, etc.) and then report only the "dropout" classifications.

Each method has distinct characteristics that may make it more or less useful for a particular purpose. The first and simplest calculation is highly sensitive to irrelevant circumstances such as growth and transfers. The second, being limited to a subset of students, may not represent the quality of education received by all students. The third attempts to account for why students leave a school, thereby limiting the meaning of "dropout" to students for whom no other reason may apply. (Setting aside the possibility of deliberate misrepresentation, this may explain the Texas miracle: "Do it" can be taken by teachers as a challenge to find any reason other than "drop out" to explain why students left school.)
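The divergence among the three methods is easy to demonstrate with invented enrollment numbers (one plausible reading of each method; the figures below are hypothetical, not Houston's):

```python
# Hypothetical figures for one graduating class at a growing school.
entering_freshmen = 1000   # freshman class four years earlier
graduates = 980            # seniors graduating this year (school grew)

# The specific entering cohort, tracked through four years:
cohort_size = 1000
cohort_graduated = 700
cohort_transferred = 180   # left for another school
# remaining 120 cohort members left for other reasons:
exits_by_reason = {"work": 80, "jail": 20, "dropout": 20}

# Method 1: compare graduates to the entering class four years earlier.
rate1 = 1 - graduates / entering_freshmen

# Method 2: track the cohort; members who neither graduated nor
# transferred count as dropouts.
rate2 = (cohort_size - cohort_graduated - cohort_transferred) / cohort_size

# Method 3: count only exits explicitly classified as "dropout".
rate3 = exits_by_reason["dropout"] / cohort_size

print(rate1, rate2, rate3)  # 2%, 12%, and 2% -- same school, same year
```

All three numbers describe the same school in the same year, yet the answer ranges from 2 percent to 12 percent depending on which defensible definition one adopts.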

In the seventh grade, students might be asked, "If 500 students enter Abraham Lincoln High School as freshmen and 400 graduate, how many dropped out?" The next time they may be asked to consider what "drop out" means is when they vote for school board candidates or on a school levy referendum. Unless their quantitative literacy has been significantly enhanced, citizens are likely to enter the voting booth with a seventh-grade concept of dropout rate. That's why courses such as those discounted by the ACTA study are so important: students who took a course on, say, "Economics of Education" would be far better equipped to fulfill their responsibilities as educated citizens than those who met their mathematics requirement by simplifying rational functions in a college algebra course.

Averages

Students in my hypothetical "Economics of Education" course are likely to learn that averages, like percentages, are also a source of mysteries. A recent study shows that the average verbal SAT score did not improve during the two decades between 1981 and 2002 (Bracey 2004). But during that same period, the average scores of each of the six major ethnic categories used in reporting SAT data (white, black, Asian, Puerto Rican, Mexican, and American Indian) increased by amounts ranging from eight to twenty-seven points. Yet the overall average did not budge--enabling skeptics to claim that all the money invested in education during the last two decades has produced no noticeable improvement.

A quantitatively literate college graduate would recognize this mystery as a classic example of Simpson's Paradox: changes in composition can cause the whole to show trends opposite to each of its parts when considered separately. Demagogues rely on the public's simplistic seventh-grade understanding of how numbers work to ply their trade. But in today's data-drenched society, sometimes no one really understands what is going on.
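The paradox is purely a matter of weighted averages, as a small numerical sketch shows (the group sizes and scores here are invented to mimic the SAT pattern, not the actual data):

```python
def overall_mean(groups):
    """Weighted mean over a list of (count, group_mean) pairs."""
    total = sum(n * m for n, m in groups)
    count = sum(n for n, _ in groups)
    return total / count

# Hypothetical test-takers as (count, average score) per group.
year1 = [(800, 520), (200, 420)]
# Two decades later: BOTH group averages rise (by 10 and 20 points),
# but the lower-scoring group is now a larger share of test-takers.
year2 = [(600, 530), (400, 440)]

print(overall_mean(year1))  # 500.0
print(overall_mean(year2))  # 494.0 -- overall average actually fell
```

Every group improved, yet the shift in composition dragged the overall average down: the whole can move against every one of its parts.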

Society's reliance on data as a justification for decisions increased gradually throughout the nineteenth and twentieth centuries (Desrosières 1998; Porter 1995), but it has taken a significant leap during the last two decades--the chief reason being the vast quantity of data that computers disgorge. More recently, the importance of QL (and the consequences of quantitative illiteracy) has been greatly magnified, if not totally transformed, by the behavior of computer networks. Recent financial scandals, for example, were enabled by clever bookkeeping that displayed apparent corporate gains while every part of the business was actually losing money. Yet even professionals well aware of Simpson's Paradox did not detect these machinations.

Something deeper than just clever or illegal accounting seems to be at work. Two years ago, an analysis of this new "culture of finance" was presented at the International Congress of Mathematicians and subsequently published by the American Mathematical Society (Poovey 2003). It suggests that the invisible impact of "mathematical abstractions" on modern society has generated a "new form of value" that is unhinged from work or experience. "In the new culture of finance, the numbers one writes and the computations a computer performs upon them generate the only value that matters." In this purely quantified culture, value is created "without labor," decisions rely on "an unstable mixture of mathematical equations and beliefs," and "responsibility is simply dispersed." In short, the mechanisms of quantification that began with averages and percentages have become just as abstract--and hence as powerful--as mathematics itself. We just haven't realized it yet.

QL on Campus

My main point in these examples is not to argue that QL is important; I've rarely met anyone who doubts that. Rather, my point is that QL is sufficiently sophisticated to warrant inclusion in college study and, more important, that without it students cannot intelligently achieve major goals of college education. Quantitative literacy is not just a set of precollege skills. It is as important, as complex, and as fundamental as the more traditional branches of mathematics. Indeed, QL interacts with the core substance of liberal education every bit as much as the other two R's, reading and writing.

Quantitative literacy differs from mathematics primarily by being anchored in real contexts. While this anchor is generally a source of strength--notably for improved student motivation and learning--it is also a source of structural weakness. Since QL is not a discipline in the traditional sense, it lacks the academic infrastructure of departments, journals, and professional associations. By its nature, QL is dispersed and, thus, almost invisible. Many efforts are now underway to make QL visible and to establish a strong presence in the ecology of liberal education. Some are described in the box on pages 6-7, others later in this issue.

From all these sources one clear priority has emerged: the need to develop benchmarks for quantitative literacy that can guide both curriculum and assessment in grades 10-16. Since QL is relatively new and since it lives in the matrix of other disciplines, neither higher education professionals nor public leaders have a clear understanding of suitable performance expectations. Consensus on expectations is a desirable (but not inevitable) outcome of various approaches to mathematical and quantitative literacy in core curricula and, more broadly, general education. This issue of Peer Review is an important step in the process of building consensus.


References

American Council of Trustees and Alumni. 2004. The hollow core: Failure of the general education curriculum. Washington, DC: American Council of Trustees and Alumni.

Bracey, Gerald W. 2004. Simpson's paradox and other statistical mysteries. American School Board Journal, February.

Committee on the Undergraduate Program in Mathematics. 2004. CUPM Curriculum Guide 2004. Washington, DC: Mathematical Association of America.

Desrosières, Alain. 1998. The politics of large numbers: A history of statistical reasoning. Cambridge, MA: Harvard University Press.

Ganter, Susan L. and William Barker, eds. 2004. Curriculum foundations project: Voices of the partner disciplines. Washington, DC: Mathematical Association of America.

Poovey, Mary. 2003. Can numbers ensure honesty? Unrealistic expectations and the U.S. accounting scandal. Notices of the American Mathematical Society 50(1): 28-35.

Porter, Theodore. 1995. Trust in numbers: The pursuit of objectivity in science and public life. Princeton, NJ: Princeton University Press.

Winerip, Michael. 2003. The "zero dropout" miracle: Alas! Alack! A Texas tall tale. The New York Times. August 13.
