Are You Smart Enough?: How Colleges' Obsession with Smartness Shortchanges Students
The social and economic inequities in America’s K-12 education system are well known, given the rapidly expanding system of expensive private schools and the striking contrasts between urban and suburban public schools. America’s higher education system, on the other hand, is generally regarded as far more equitable, given that each of the fifty states attempts to provide low-cost higher education opportunities for almost any high school graduate. Nevertheless, these “open-access” state systems mask an important truth about American postsecondary education: the opportunities available to students with differing levels of academic preparation are far from equivalent. Our system invests the most in those students with the highest levels of academic preparation, and the least in those with the poorest preparation. As long as postsecondary educational opportunities remain so unequal for students with differing levels of preparation, higher education will continue to be handicapped in its efforts to contribute to a more just and equitable society.
The unequal educational treatment of college students stems largely from the fact that American higher education is designed to favor its “smartest” students, that relatively small number who get the top grades in school and earn the highest scores on standardized admissions tests. You might prefer terms like intelligence, ability, brilliance, or whatever, but for simplicity I’ve settled on smartness. The most prestigious colleges limit their admissions to such students, which means that all the other students must attend colleges with fewer resources that are staffed by faculty members who, according to national surveys, would rather be teaching “smarter” students.
Every fall, faculty members in selective colleges and universities watch closely to see how well their new freshmen have scored on admissions tests. The “smarter” the students, as reflected by their average test scores, the better. And once students enroll, most professors rely on course grades to assess student progress. Grades may be useful in identifying the “smartest” students for purposes of awarding honors, but they don’t tell us much about what each student is learning.
Given that learning is the main business of any college or university, this lack of information on what is being learned is regrettable, since it deprives college students and their teachers of valuable feedback that could strengthen the learning process among students at all levels of preparation.
In short, faculty seem content with assessment methods that are of little use in measuring what students are learning, as long as these methods allow them to rate and rank students in terms of relative smartness.
Assessment in our public schools falls prey to the same problems. Instead of revealing to teachers what each student is learning, standardized tests are used merely to compare schools, to identify the “best” and “worst” ones. To be of any use in improving teaching and learning, tests should be repeated with the same students over time to measure growth and improvement. And the tests would have to be scored differently: rather than merely comparing students with each other—“You scored better than 30 percent of test takers”—test scores would also have to reflect what students actually know, what their specific strengths and weaknesses are. The fact that college and university admissions offices rely so heavily on norm-referenced tests “gives permission,” in effect, to the lower schools to do the same.
This preoccupation with being smart has affected many of the faculty’s most important functions, to the point where it has compromised our most basic missions of educating our students and serving the society that supports us. What are some of the things about higher education that this preoccupation with smartness has affected?
When SATs and ACTs are used to screen and select, they put poor students, first-generation students, and underrepresented students of color at a competitive disadvantage. If colleges were equivalent in terms of the opportunities they provide, this wouldn’t be a problem, but they are far from equivalent. In public higher education, which enrolls 80 percent of all college students, the four-year colleges spend more than twice as much per full-time undergraduate student on instruction as the community colleges do. If we were to compare the community colleges with just the flagship universities, the instructional expenditure ratio would be well over three-to-one. Also, whereas most of the full-time freshmen enrolling at four-year public colleges live on the campus—an experience that research has repeatedly shown to enhance the learning process—most community colleges have no residential facilities.1
We test because we want to identify the smartest students. And we go along with the testing companies’ longstanding practice of normative scoring. Note that normed scores—standard scores and percentiles—don’t reveal much of anything about what a student knows or what that student’s particular strengths and weaknesses are. Instead, they merely order students from the smartest to the least smart. But then that’s mainly what institutions are after: to identify the smartest students.
Schools at the K-12 level have also come to rely heavily on standardized tests, in part because we at the collegiate level are such heavy users. Since the “smartest” students account for only a small minority of those who are tested, the truly insidious aspect of normative testing at the K-12 level is that it sends powerful negative messages to the average and underprepared students: you’re not “college material,” you’re dumb, you’re lazy, you’re a loser. Since most school students receive such messages year after year, it’s no wonder that so many young people lose interest in education before they ever reach college age.
Another serious limitation of these tests is their narrowness of content. If you consult college mission statements to find out what student outcomes are most valued by institutions, you’re most likely to find qualities like leadership skills, social responsibility, creativity, and citizenship—none of which has much relevance to what standardized tests measure.
In other words, to equate student “smartness” with scores on standardized tests like the SAT or ACT greatly oversimplifies the remarkable diversity of human talent.
Another practice that reflects our obsession with smartness is selective admissions. We faculty seek to admit only the smartest students because it reflects well on us: if our students are so smart, surely we must be pretty smart. This heavy reliance on SATs and ACTs has helped spawn an annual “admissions madness,” where affluent students and their parents pull out all the stops to get the student admitted to the most selective institutions.
Institutional competition for smart students is intimately tied into the pecking order of American colleges and universities that U.S. News & World Report attempts to document every year in what has proved to be its most profitable enterprise. We faculty prefer to work in the highest-ranked institutions, in part because it feeds our egos. The more selective and exclusive and elite our institution, the better.
Some of our less elite universities have actually resorted to purchasing smart students by “sponsoring” National Merit Scholarships, just to elevate their prestige and enhance their U.S. News rankings. If Merit Finalists name a sponsoring college as their first choice, their chances of winning a Merit Scholarship can be substantially enhanced. Virtually none of the most elite colleges engage in this practice, however, since they can get their very smart students for free.
In the fall of 2013, the Southeastern Conference (SEC), a consortium of fourteen universities known mainly for their formidable football teams, enrolled even more Merit Scholars (1,046) than the eight Ivy League colleges did (1,014). However, more than 80 percent of the scholars enrolling at SEC universities were sponsored, while none of the scholars enrolling at Ivy League institutions was sponsored.2
Publish or perish
Still another aspect of university life that’s heavily influenced by our obsession with smartness is the faculty reward system.
American higher education has struggled for decades with the problem of “research versus teaching.” This conflict is especially severe in the major research universities, but the “publish or perish” mandate affects faculty in many smaller universities and liberal arts colleges as well. Once again, it’s the faculty’s preoccupation with smartness that fuels this imbalance. Most faculty value research because they believe that writing and publishing require a good deal of smartness. There is little agreement, however, on what it takes to be a good classroom teacher and advisor. Moreover, whereas there is a clearly established performance standard for demonstrating your smartness through research and scholarship—and that’s publication—there is no agreed-upon way to know for sure how good a colleague is at teaching and advising.
The relative importance assigned to research versus teaching is revealed in the terminology that college professors use. Practically all faculty members, at one time or another, have probably made reference to their teaching “load.” And when a university is trying to recruit a new faculty member, one of the perks that is sometimes offered is a “reduced teaching load.” Such language implies that, for at least some faculty members, teaching is regarded as a burden. By contrast, one never hears university faculty members refer to their research “load.”
Here, in a nutshell, is the crux of the “research versus teaching” dilemma: college professors attach great importance to research and scholarship because they have created a culture that venerates smartness, but they happen to be employed in institutions where their main responsibility is to educate students.
Their reverence for smartness also influences how faculty view different disciplines. Why do the hard sciences have so much status and prestige on university campuses, and why are fields like nursing, social work, and education often looked down upon? Because professors in the hard sciences are regarded as very smart, whereas professors of nursing, social work, and education are suspected of being not so smart.
Perhaps the most telling evidence of how academics’ obsession with smartness affects their relations with students comes from national surveys. Fully half of college faculty are not satisfied with the “quality” of their students, and more than half report that working with underprepared students is a “source of stress.”3 Clearly, in a culture that venerates smartness, working with the less-than-smartest students is not a valued activity.
Our obsession with smartness also helps perpetuate the dubious practice of course grading. If faculty members really wanted to assess the effectiveness of their pedagogy and document what students are actually learning, they could hardly pick a worse assessment method than course grades.
A student who already knows most of the material before enrolling in a course can get an A without having to learn much, but among students who begin the course with little knowledge of the material, a B might mean that they’ve learned a lot. To know what individual students are actually learning, then, faculty need to employ metrics that reflect what the students actually know and use these measures longitudinally in order to measure growth or change, practices that very few college teachers employ.
So why do so many institutions persist in giving course grades? Because grades can be useful in differentiating students in terms of their relative smartness. Students with the best grades are awarded honors, and those with the poorest grades are placed on probation or dismissed. Moreover, graduate and professional schools and employers like to use undergraduate grades because they believe that grades help them identify the smartest applicants. But grades tell us virtually nothing about student learning.
Alternatives to grading
A potentially powerful tool for broadening our conception of “smartness” or “talent” is the narrative evaluation. Narrative evaluations can deal with students’ performance in individual courses, as well as with their overall growth and development. In contrast to traditional letter grades, the narrative evaluation enables the professor to make reference to anything in the student’s performance that might be relevant to the learning process: knowledge of the subject matter, strong and weak points, motivation, study habits, writing, logic, originality, and so on. Such evaluations also make it possible to provide feedback concerning almost any other developmental quality—leadership, creativity, self-understanding, citizenship, etc.—that might be relevant to the learning goals of the professor, the student, or the institution.
Narrative evaluations have tremendous potential to enhance the learning process. Students are given specific feedback about subject areas where they need to do more work, or about specific learning skills (e.g., writing) that they may need to strengthen. Moreover, the very act of preparing the narratives helps the professor and the graduate teaching assistant enhance their understanding of possible adjustments that might strengthen their pedagogical approach.
Besides inertia, there are several reasons why the faculties of most colleges and universities have so far shown little interest in replacing traditional letter grading with narrative evaluations. Perhaps the most obvious reason is the extra work involved—not only in writing the evaluation but also in getting to know the student well enough to write a meaningful evaluation. This latter problem is especially relevant for faculty who teach large lecture sections, where it’s possible for students to remain anonymous throughout the academic term. Depending upon the course and the subject matter, it might be possible for graduate teaching assistants to write narrative evaluations, with guidance from the professor. In fact, the quality of the narrative evaluation would likely be improved significantly if the professor and the graduate teaching assistant were first to discuss each individual student’s progress. At the same time, such discussions could well enhance the ability of the professor and the graduate student to provide useful advice and guidance to the student.
The other major form of resistance to narrative evaluations stems from the fact that they fail to yield information that can readily be used to rank and rate students—that is, to identify the “smartest” students. Graduate schools and employers, in particular, are likely to raise this objection, because instead of being provided with a handy quantitative measure—the college GPA—they are forced to read a qualitative account of the candidate’s knowledge, skills, and accomplishments. Notwithstanding this minor inconvenience, it would seem that such narrative information could be of substantial value in evaluating any student’s potential as an employee or graduate student. Moreover, if a graduate or professional school wants a quantitative indicator to rank and rate their applicants, they always have their usual admissions tests—the GRE, LSAT, GMAT, or MCAT—to rely on.
Not having a traditional GPA for each student might also cause some inconvenience to institutions that like to award various sorts of academic “honors” or that want to select the “smartest” students for participation in special “honors” programs. The tradeoff here, of course, is between a simplistic quantitative measure like the GPA and a much richer resource of qualitative information that would almost surely serve to diversify the performance criteria used to award honors and to assign students to special educational programs.
In short, the potential power of narrative evaluations to enhance the teaching-learning process and to diversify the criteria used to evaluate student learning and growth would appear to far outweigh the extra work of preparing the evaluations and the inconvenience of not having a simple numerical indicator for judging a student’s relative “smartness.” Colleges can, of course, have it both ways by introducing narrative evaluations while at the same time retaining course grades and the GPA, although such an approach might dilute the value of the narrative evaluation by offering an “easy way out” for those who are content to have a simple way to rank students in terms of their relative smartness.
An ongoing national effort to find alternatives to traditional course grading is the Liberal Education and America’s Promise (LEAP) initiative of the Association of American Colleges and Universities. One purpose of LEAP, which as of fall 2015 involved more than three hundred institutional participants, is to help all students “acquire the broad knowledge, higher-order capacities, and real-world experience they need to thrive both in the economy and in a globally engaged democracy.”4 In recognition of the limitations of traditional student assessment practices, LEAP strives to “develop authentic assessment frameworks and practices . . . —keyed to student learning outcomes—that elicit and document learning through students’ own work.”5
Alternatives to traditional testing practices
College faculty obviously cannot remedy all the problems associated with the use of standardized tests, but there is at least one important action they can take that would enhance the educational development of all students: cease using norm-referenced tests, and encourage teachers and administrators in the lower schools to do the same. Implementing this recommendation does not necessarily mean that testing would have to be eliminated. What it does mean is that our method of scoring tests and reporting test results would be revised, so that normative percentiles, which tell students only where they stand in relation to other students, would be replaced by “raw” scores, such as the number or percentage of questions answered correctly. With raw scores, the built-in competitiveness of normed scores is alleviated, since each student competes with herself: “I’m going to work hard to see how much better I can do next time.”
Perhaps the main advantage of raw scores over normed scores is that they make it possible to assess growth and learning over time: “You answered fifteen more questions correctly this time,” or “This time you answered 70 percent correctly rather than only 60 percent.”6 In this way, students are being provided with concrete evidence of their learning. Being able to measure change or improvement over time also better enables schools and colleges to assess the effectiveness of their educational programs. Colleges that insist on continuing to use standardized tests to identify their “smartest” applicants can still pick those with the highest raw scores. The point is simply this: if the consumers of standardized tests—the schools and the colleges—insist that normed scores be replaced by raw scores, the testing agencies will (grudgingly) have to do it.
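The contrast between normed and raw scoring can be made concrete with a small sketch (all scores here are invented for illustration, not drawn from any real test). A student’s raw score rises by ten questions between fall and spring—real learning—yet because the student’s classmates improved too, the percentile stays flat, exactly the ambiguity footnote 6 describes:

```python
# Hypothetical scores showing how a normed (percentile) score can hide
# the growth that a raw score makes visible. All numbers are invented.

def percentile(score, cohort):
    """Percent of the cohort scoring strictly below `score` (a normed measure)."""
    below = sum(1 for s in cohort if s < score)
    return round(100 * below / len(cohort))

# Fall test: the student answers 60 of 100 questions correctly.
fall_cohort = [50, 55, 58, 62, 65, 70, 72, 75, 80, 85]
fall_raw = 60

# Spring test: the student improves to 70 correct -- but so does the cohort.
spring_cohort = [60, 65, 68, 72, 75, 80, 82, 85, 90, 95]
spring_raw = 70

print(f"Raw gain: {spring_raw - fall_raw} more questions correct")  # growth is visible
print(f"Fall percentile:   {percentile(fall_raw, fall_cohort)}")    # 30
print(f"Spring percentile: {percentile(spring_raw, spring_cohort)}")  # still 30
```

The raw score reports a gain of ten questions; the percentile reports no change at all, because it measures only the student’s standing relative to others.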
Replacing normed scores with raw scores suggests an entirely new use for testing: instead of using tests merely to make invidious comparisons—to sort out the “smart” from the “not-so-smart” students—why not start using them instead to strengthen the educational process? With raw scores, it becomes possible to assess changes over time, so that each student’s growth and development can be tracked. In this way, college teachers have a way of determining whether, and how much, their students are learning and whether their basic skills are improving. At the same time, students are provided with objective feedback about their own growth and improvement. Such feedback enables both the professor and the student to capitalize on one of the well-established principles of effective learning: knowledge of results.
This discussion brings us back to my central theme, the fact that college faculties need to focus more on cultivating and developing smartness than on merely identifying and celebrating it. Having regular access to information concerning how their students are changing and developing over time should help shift the attention of college faculties more in the direction of the learning process. The same would be true of narrative evaluations, which—in contrast to traditional course grades—necessarily deal with how much progress students are making in their studies.
If college professors could manage to make this shift in attention—from traditional grading, which merely compares different students’ performance at one point in time, to monitoring each student’s growth and improvement over time—some of the stigma associated with teaching average or underprepared students might well dissipate. The job of an educator, after all, is to add to the student’s development, and in many respects it is at least as important for average and underprepared students to show growth as it is for the best-prepared students to do so.
1. While it is true that community colleges enroll many adult and part-time students who may have no need for residential facilities, they also enroll at least one in three of the full-time freshmen who enter college directly out of high school in pursuit of a baccalaureate degree.
2. National Merit Scholarship Corporation, Beyond Academic Excellence: National Merit Scholarship Corporation Annual Report, 2012–2013 (Evanston, IL: National Merit Scholarship Corporation, 2014).
3. K. Eagan, E. B. Stolzenberg, J. B. Lozano, M. C. Aragon, M. R. Suchard, and S. Hurtado, Undergraduate Teaching Faculty: The 2013–2014 HERI Faculty Survey (Los Angeles: Higher Education Research Institute, Graduate School of Education and Information Studies, University of California, Los Angeles, 2014).
4. “About LEAP,” Association of American Colleges and Universities, accessed March 2, 2017, https://www.aacu.org/leap.
5. “What Does It Mean to Be a LEAP Institution?,” Association of American Colleges and Universities, accessed March 2, 2017, https://www.aacu.org/leap/can/what-does-it-mean-to-be-a-leap-institution.
6. If your percentile changes from one time to the next, there’s no way to tell for sure whether, or by how much, your actual level of performance has really changed.
To respond to this article, e-mail email@example.com, with the author’s name on the subject line.
Alexander W. Astin is the Allan M. Cartter Distinguished Professor Emeritus of Higher Education and founding director of the Higher Education Research Institute at the University of California–Los Angeles. This article is adapted from an address delivered at the 2017 annual meeting of the Association of American Colleges and Universities and is based on the author’s latest book, Are You Smart Enough? How Colleges’ Obsession with Smartness Shortchanges Students (Stylus, 2016).