Liberal Education

The Quality Challenge: How Kaplan Is Tackling the LEAP Call to Action

When I attended a Network for Academic Renewal conference sponsored by the Association of American Colleges and Universities (AAC&U) last year, I was surprised by the level of skepticism expressed from the podium about proprietary higher education. This article is intended as a response. From my perspective as a for-profit educator, I explore the educational concerns that AAC&U promotes so strongly. Do they matter? Why? And what can the proprietary sector contribute to the larger conversation that AAC&U is leading?

But first, a caveat about the author. I have had the privilege of serving in both state and federal government, first as a state senator (Vermont) and later as a congressman. I have also served as founding president of both a community college (Community College of Vermont) and a state university (California State University–Monterey Bay), as well as dean of a graduate school of education (George Washington University). Most recently, I was the assistant director general for education at the United Nations Educational, Scientific, and Cultural Organization (UNESCO) in Paris before coming to Kaplan as senior vice president of academic strategies and development in August 2007. I list these experiences as a way of suggesting that, over the years, I have looked at the issues of access, quality, and innovation from several vantage points and, thus, bring an informed and balanced perspective to this topic.

I can say unequivocally that Kaplan’s academic community is committed to the values and aspirations promoted by AAC&U through its Liberal Education and America’s Promise (LEAP) initiative. Both LEAP and the Lumina Foundation’s Degree Qualifications Profile (DQP) are shifting the policy and practice discussion in higher education toward learning outcomes, while joining the academic quality agenda with the completion and productivity agendas. Completion or attainment without academic quality is a betrayal of learners and the larger society. LEAP and the DQP provide institutions like mine the needed opportunity to discuss academic quality in a transparent environment, using third-party frameworks and definitions. Kaplan and most other American colleges and universities, both nonprofit and proprietary, badly need this transparency about academic quality.

Accountability, readiness, and quality

Historically, student learning assessment has been regarded as a faculty prerogative and conducted at the course level, free from any serious oversight. If we were educating only the “sons of Harvard,” we could probably still get away with that today. We know that strong prior educational performance is the single best indicator of future educational performance in college. So, until the fathers and mothers of those students can no longer afford to pay the steadily escalating bills associated with that model, technologically enabled quality, efficiency, and effectiveness will take a back seat to prestige. Furthermore, the inputs at a place like Harvard—students, faculty, and other resources—coupled with reputation are probably quality assurance enough for most people.

There are, however, at least two big problems with the traditional approach. First, the “quality via inputs” model won’t scale to meet the requirements of mass higher education for the American public. Second, and importantly, for the vast majority of existing institutions, including the burgeoning proprietary sector, assuring that all learners, including multiple-risk-factor learners, are getting a quality academic experience and professional preparation at a price they can afford is the accountability imperative of the twenty-first century. These problems, in turn, raise at least two issues: the “qualify” issue and the “quality” issue. Although this article is predominantly about the latter, about how best to stitch academic quality throughout the curriculum in a mass higher education environment, let me say a brief word about how Kaplan “qualifies” students in order to determine that they have a chance of succeeding in an increasingly open-access world.

As anyone in the community college and state university sectors knows from experience, it is one thing to be legally qualified to attend college and quite another to be academically “ready” as a learner to prosper and succeed. There are, and will continue to be, increasingly sophisticated assessments of college readiness. These will ultimately evolve into tools that allow us to predict with reasonable accuracy who has a chance of succeeding in our institutions and who does not, accompanied by steps to remediate the characteristics or knowledge deficiencies that put a learner at risk of failure. In the meantime, facing a high early-dropout rate, we at Kaplan didn’t think we could wait for the “perfect” entry assessment. Instead, we have developed and implemented the Kaplan Commitment.

The Kaplan Commitment is simple: we provide the first five weeks of the first term without charge to all enrolled students—no tuition or fees. At or before the end of the fifth week, learners have the right to withdraw without financial or academic penalty if they believe they are in the wrong program. The university also has the right to dismiss any student who is not making satisfactory academic progress and showing serious curricular and cocurricular engagement. The Kaplan Commitment allows learners and the institution alike to “try each other out” before any money is paid or any aid is received.

Interestingly, for every student who has asked to leave, the university has dismissed several students for poor engagement and performance. The Kaplan Commitment is costly. It resulted in almost $65 million in lost tuition revenue in 2011. But it is also the right thing to do until a better proxy is developed, one that more effectively identifies and responds to the behavioral and academic needs of these marginalized learners.

But for the second issue, that of academic quality, we don’t have to wait. Higher education is in the early stages of a seismic shift away from a sole focus on faculty-driven curriculum and teaching and toward learning outcomes, learning support, and learning assessment as quality differentiators. For-profit education and other private-sector interests will play a significant role in defining and developing the potential of this migration. The sector will take a leading role not because it is more virtuous, per se, but because it is not tied down politically to state funding traditions or organizationally to restrictive traditional academic processes and practices. At Kaplan, for example, the faculty jointly own the curriculum with our subject matter experts (SMEs), and the conversation among them drives the process. My experience in multiple sectors tells me that, as in this example, the proprietary sector will help drive innovation and improvement because it is able and motivated to respond to the new markets, new learners, and new opportunities generated by the “new ecology” of learning that is now burgeoning throughout our society.

Technology

Technology encourages and supports learning and learning operations at a scale and scope that would have been unimaginable a decade ago. As such, it has transformed both the access agenda and our ability to scale up curricular, programmatic, and support initiatives for hundreds of thousands of learners. In addition, by enabling levels of personalization and customization that would have been unthinkable only a few years ago, new media are changing the way learners behave.

With the advent of this transformative resource, which I will call “web-enabled software,” the learning platform (or its future derivatives), not simply the campus, will become the organizing architecture of the college experience. And, correspondingly, learning networks (or their future derivatives) will become the defining process for much college learning. As higher education comes out of the classroom and off the campus, the need for transparency and for quality standards linked to independent benchmarks is increasing. Moreover, the growing number of public consumers, employers, and political and policy leaders who are served by and paying for higher education will require higher and more transparent standards and processes for academic quality, effectiveness, efficiency, and success.

Several recent books, including DIY U (Kamenetz 2010) and Harnessing America’s Wasted Talent (Smith 2010), explore the impact of the web and social networking on the traditional practices and assumptions of higher education. As mentioned above, I believe that technology’s value in the “new ecology of learning” has expanded beyond providing greater access to rewriting the rules governing the traditional value proposition that is higher education itself. This “rewrite” includes assuring academic quality more consistently and reliably, with more personalization and customization, better student diagnostics, and better learning support in mass higher education.

Liberal Education and America’s Promise

The LEAP initiative offers an elegant conception of a liberal education curriculum. Importantly, it provides common definitions and an overall rationale for what a liberally educated person should know and be able to do. In doing this, it calls on institutions of all stripes, and their faculties, to declare their interpretation of the LEAP standards and their approach to implementing them. By bridging several of the false dichotomies that have hamstrung our ability to think across traditional curricular boundaries, LEAP also brings three other values to the conversation. First, LEAP envisions the achievement of liberal education outcomes in professional and preprofessional courses and programs. This has important implications not only for deepening the quality of the course-level educational experience, but also for increasing its effectiveness and efficiency. Second, LEAP bridges the artificial separation of academic life and community life, of service to democracy and academics. No longer are the life of the community and the life of the individual learner in that community divorced by rule from academic life, recognition, and achievement. Third, LEAP expands and deepens the understanding of what it means to be “smart” and successful in school to include being capable in the nonschool world as well. This understanding of academic quality is based on a broader understanding of intellectual and behavioral capacities that include being ready and able to contribute to society civically, socially, and economically.

One of the challenges facing the implementation of the LEAP vision, however, is the need to drive the assessment of learning from the program level down to the course level, and to do so with a high degree of consistency and quality across the curriculum. How can we design curricula and then gather analytics on the resulting teaching and learning so that we know reliably what is being learned, by whom, and why in any given course? To answer this question, Kaplan has introduced several initiatives, based on curriculum design and fueled by technology, that focus directly on the consistency and quality of what is learned, using learning outcomes and rubrics at the course level; on the interplay of learner support, curricular design, and effective teaching, in order to secure tangible answers to academic quality questions for all learners, including historically marginalized populations; and on the ability to stitch liberal education learning outcomes throughout the curriculum of each program and of the university as a whole.

Course-level assessment

At Kaplan, course-level assessment is used to measure student learning and to inform a continuous academic improvement process. It is a method for measuring student mastery of stated course-level learning outcomes in an objective manner. Course-level assessment is criterion-referenced, not norm-referenced. The scores obtained measure the student’s current level of mastery of the skills and knowledge described by the outcomes. The assessment supports program-level outcomes, while providing the framework for assessing specific learning objectives and activities within a course.

Each learning outcome describes one primary area of knowledge or skills, and reflects the specific behavior(s) underlying the area of knowledge or skills for which students should be able to demonstrate mastery by the end of the course. Also, each outcome is written in a style that reflects the appropriate level of complexity of the underlying cognitive tasks required for given levels of mastery. Tracking student learning outcomes at the course level allows us to gauge both the effectiveness and the career relevance of our instruction and our curriculum—and to engage in a continuous improvement process.

The learning outcomes are supported by rubrics at the course level. For each course, faculty members develop and employ standardized rubrics to assess student success in achieving all the course outcomes. They develop a rubric for each outcome based on specific criteria, and use these rubrics to identify student progress toward mastery. Scores on outcomes are then analyzed to determine whether students are gaining the desired mastery. And, as students proceed through their programs of study, their progress on achieving the outcomes is monitored.

This approach to learning assessment allows us to achieve a high degree of consistency and academic quality. Standards and learning expectations are consistent and transparent across every section and every course. Expectations for the student-facing experience are equally consistent and transparent. Within this consistent curricular and outcome structure, faculty teaching can flex and adapt based on learner needs and faculty strengths. Throughout the learner’s degree program, course-level assessment gauges student progress toward mastering both the general education literacies and the discipline-specific course outcomes. We evaluate on a 0–5 scale (0=“no progress,” 3=“practiced,” and 5=“mastery”). It is our objective that each learner will reach mastery of disciplinary course outcomes by the end of the course, and mastery of general education outcomes by the time of degree completion.

The rubrics’ structure allows us to look at the “profile of learning” either within a section or across all sections of a particular course in order to identify anomalies and success rates as well as levels of learning. In one study, completed in 2009 and 2010, the data indicated that 77 percent of our learners demonstrated “practiced” or better in early 2009, 83 percent did the same a year later, and 87 percent scored at “practiced” or better by late 2010 (Eads, Prost, and Van Dam 2010). Technology is the key differentiator in enabling us to gather and analyze these kinds of data. It allows us to scale this research to all our learners and, ultimately, to collect information on thousands of students and courses per year. Technology also allows us to obtain consistent information across every section, which would be impossible in a traditional environment. And finally, technology provides clear control over the means and structure of learning assessment, which leads to a high degree of consistency in that process as well.
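For readers who want to see the mechanics, here is a minimal sketch, in Python, of how rubric scores on the 0–5 scale might be rolled up within and across course sections to produce this kind of “profile of learning.” The course code, field names, and data are hypothetical illustrations, not Kaplan’s actual systems or results.

```python
# Minimal sketch: aggregating course-level rubric scores (0-5 scale) across
# sections to build a "profile of learning" for one course outcome.
# Course code, field names, and data are hypothetical, for illustration only.
from collections import defaultdict
from statistics import mean

# Each record: (section_id, outcome_id, rubric_score), where scores follow the
# article's scale: 0 = no progress, 3 = practiced, 5 = mastery.
scores = [
    ("COURSE101-01", "communication", 4),
    ("COURSE101-01", "communication", 2),
    ("COURSE101-02", "communication", 3),
    ("COURSE101-02", "communication", 5),
    ("COURSE101-02", "communication", 3),
]

PRACTICED = 3  # threshold for "practiced or better"

by_section = defaultdict(list)
for section, outcome, score in scores:
    by_section[(section, outcome)].append(score)

for (section, outcome), vals in sorted(by_section.items()):
    pct = 100 * sum(1 for v in vals if v >= PRACTICED) / len(vals)
    print(f"{section} {outcome}: mean={mean(vals):.2f}, "
          f"practiced-or-better={pct:.0f}% (n={len(vals)})")

# The cross-section view for the whole course, which a single classroom
# gradebook could not easily provide:
all_vals = [s for (_, _, s) in scores]
overall = 100 * sum(1 for v in all_vals if v >= PRACTICED) / len(all_vals)
print(f"Course overall: practiced-or-better={overall:.0f}%")
```

The same roll-up, run over every section of every course, is what makes the cross-section comparisons described above possible at scale.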

A matrix approach to learning outcomes

Our use of technology also gives us the capacity to embed certain learning outcomes across the curriculum through a matrix approach. In a single course experience, we can evaluate mastery of the substance of the course as well as knowledge development in other domains, such as teamwork, writing, and critical thinking. For example, general education at Kaplan University is taught through a core curriculum of six courses, with other outcomes distributed throughout the undergraduate curriculum. The overall program goal is for the student to be literate and knowledgeable in the following eight areas: arts and humanities, communication, critical thinking, ethics, mathematics, research and information, social science, and science. The matrix approach allows us to “double up” the learning in undergraduate courses, getting better value for the learner and increased efficiency and effectiveness for the institution. The vast majority of courses contain a communication course outcome, which is key to our writing-across-the-curriculum approach. All required courses also contain course outcomes in critical thinking, ethics, or research and information. Elective courses contain evenly distributed course outcomes in the areas of arts and humanities, mathematics, science, or social science. Finally, technological literacy is reinforced throughout a student’s program.
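As a rough illustration of how a centrally managed curriculum map can support this kind of distribution, the following sketch represents the matrix as course-to-outcome mappings and checks two of the coverage rules described above. The course names, outcome assignments, and threshold are hypothetical, not Kaplan’s actual curriculum or rules.

```python
# Minimal sketch of a curriculum-outcome matrix check. Course names, outcome
# assignments, and the 75 percent threshold are hypothetical illustrations of
# the distribution described in the text, not Kaplan's actual curriculum.
REQUIRED = {
    "Intro to Psychology":    {"communication", "critical thinking"},
    "Professional Ethics":    {"communication", "ethics"},
    "Research Methods":       {"communication", "research and information"},
}
ELECTIVE = {
    "Art Appreciation":       {"arts and humanities", "communication"},
    "College Algebra":        {"mathematics"},
    "Environmental Science":  {"science", "communication"},
}

def check_matrix(required, elective):
    problems = []
    # Rule 1: every required course carries a critical thinking, ethics,
    # or research and information outcome.
    core_trio = {"critical thinking", "ethics", "research and information"}
    for course, outcomes in required.items():
        if not outcomes & core_trio:
            problems.append(f"{course}: missing critical thinking/ethics/research outcome")
    # Rule 2: the large majority of courses carry a communication outcome
    # (writing across the curriculum).
    all_courses = {**required, **elective}
    with_comm = sum(1 for o in all_courses.values() if "communication" in o)
    if with_comm / len(all_courses) < 0.75:  # illustrative threshold
        problems.append("communication outcome not sufficiently distributed")
    return problems or ["matrix coverage rules satisfied"]

for line in check_matrix(REQUIRED, ELECTIVE):
    print(line)
```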

The matrix approach provides several other advantages. The centrally managed curriculum ensures a consistent distribution of learning objectives and outcomes, as well as consistency of course outcomes across course sections regardless of individual faculty. Further, the use of common rubrics to evaluate student learning ensures the universality of the learning outcomes, and consistent faculty training on rubric use ensures inter-rater reliability.

We have conducted several studies of this approach. In a 2009 cohort study, we used three courses in sequence, with the same students remaining enrolled in each course. We reviewed the percentage of students who achieved the level of “practiced or higher” on the communication outcome—that is, the percentage who “demonstrated college-level communication through the composition of original materials in standard American English.” The percentage of students who achieved “practiced or higher” increased from 76 percent in the first course to 85 percent in the third course, a result that demonstrates steady improvement in core academic skills as the students progressed.

In a 2010 ethics and communication study, the sample included 2,581 undergraduate psychology students who were learning at the one-hundred, two-hundred, and four-hundred levels. In ethics, the average scores on the 0–5 rubric scale were, respectively, 2.72, 3.54, and 3.64. In communication, the average scores improved from 3.20 at the one-hundred level to 3.49 at the two-hundred level and, finally, 3.54 at the four-hundred level. Our initial conclusion is that the general education program is resulting in documented improvement in the core knowledge and ability areas of ethics and communication.

As students advance through their programs of study, their progress in achieving these outcomes is monitored. Course-level assessments provide feedback to students, faculty, and administrators alike about specific knowledge, skills, and abilities acquired by the students during the course of their education. The assessments also allow us to monitor the quality of the curricular design as well as the effectiveness of training in improving faculty performance, both in teaching and in learning assessment.

The ability to employ technology in order to matrix learning outcomes within a single learning experience may also reduce time to (and cost of) the degree without reducing the amount or quality of learning accomplished. If, for example, the outcomes embedded across the curriculum amounted to the equivalent of forty-five quarter credits, we could consider increasing the credit award per course and decreasing the number of courses required for graduation. By deepening the learning we are assessing, and by improving the assessments, we further enrich the value of the overall experience.
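To make the arithmetic concrete, here is a purely hypothetical illustration in Python. The 180-credit program, the five- and roughly six-and-a-half-credit course values, and the number of stand-alone general education courses are assumptions chosen only to show the shape of the calculation; only the forty-five embedded quarter credits comes from the example above.

```python
# Hypothetical arithmetic for the scenario sketched above. The 180-credit
# program and 5-credit courses are illustrative assumptions; the 45 embedded
# quarter credits is the example figure from the text.
program_credits = 180    # quarter credits for a hypothetical bachelor's degree
credits_per_course = 5   # quarter credits awarded per course today
embedded_credits = 45    # general education outcomes matrixed into other courses

courses_today = program_credits // credits_per_course     # 36 courses
gen_ed_courses = embedded_credits // credits_per_course   # 9 stand-alone courses

# If those 45 credits of general education outcomes are assessed inside the
# remaining courses rather than in stand-alone courses, each remaining course
# carries a larger credit award and the stand-alone courses are no longer needed.
remaining_courses = courses_today - gen_ed_courses        # 27 courses
enriched_award = credits_per_course + embedded_credits / remaining_courses

print(f"Courses today: {courses_today} at {credits_per_course} credits each")
print(f"Courses with matrixed outcomes: {remaining_courses} "
      f"at {enriched_award:.2f} credits each")
print(f"Total credits unchanged: {remaining_courses * enriched_award:.0f}")
print(f"Courses saved: {gen_ed_courses}")
```

The point of the sketch is simply that the total credits, and the learning assessed, stay constant while the number of courses, and the time and tuition they require, goes down.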

Conclusion

Our increasingly complex world has a concomitant need for more well-educated people. As American higher education moves forward, the new standard for academic quality will be determined by the use of consistent, reliable, and valid assessments that are linked to life skills as well as workplace and professional abilities. Academic quality that is derived from the reputation of the institution will continue to have validity in a few specific cases. But in a mass higher education environment informed by information abundance, efficiency, and effectiveness, the core quality standard for most colleges and universities will be based upon consistent, reliable, and valid assessments of transparent learning outcomes. A student portfolio that tells employers (and others) what the learner knows and is able to do as a result of his or her learning will be a significant quality differentiator.

References

Eads, J., J. Prost, and K. Van Dam. 2010. “Data-Driven Improvement Using the Assessment of General Education Literacy.” Paper presented at “General Education and Assessment 3.0: Next-Level Practices Now,” Association of American Colleges and Universities Network for Academic Renewal Conference, Chicago, IL, March.

Kamenetz, A. 2010. DIY U: Edupunks, Edupreneurs, and the Coming Transformation of Higher Education. White River Junction, VT: Chelsea Green.

Smith, P. 2010. Harnessing America’s Wasted Talent: A New Ecology of Learning. San Francisco: Jossey-Bass.


Peter Smith is senior vice president for academic strategies and development at Kaplan Higher Education.


To respond to this article, e-mail liberaled@aacu.org, with the author’s name on the subject line.
