What do you think is the most important finding of the book?

Large numbers of four-year college students experience only limited academic demands, invest only modest levels of effort, and demonstrate little or no growth on an objective measure of critical thinking, complex reasoning, and written communication. Fifty percent of sophomores in our sample reported that they had not taken a single course the prior semester that required more than twenty pages of writing over the course of the semester; one-third had not taken a single course the prior semester that required, on average, more than forty pages of reading per week. Students in our sample reported studying on average only twelve hours per week during their sophomore year, one-third of which was spent studying with peers. Even more alarming, 37 percent dedicated five or fewer hours per week to studying alone. These patterns persisted through the senior year and are broadly consistent with findings on academic engagement from other studies. They should also be considered in the context of empirical evidence documenting large declines over recent decades in the number of hours full-time college students spend studying.
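To make the averages concrete, here is the back-of-the-envelope arithmetic implied by the figures above, written as a small illustrative Python snippet (the numbers are the reported averages; the split is our calculation from them, not a separately reported statistic):

```python
# Figures reported above: twelve hours of study per week on average,
# one-third of it spent studying with peers.
hours_per_week = 12
share_with_peers = 1 / 3

hours_with_peers = hours_per_week * share_with_peers
hours_alone = hours_per_week - hours_with_peers

# -> with peers: 4 h/week, alone: 8 h/week
print(f"with peers: {hours_with_peers:.0f} h/week, alone: {hours_alone:.0f} h/week")
```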

What is new here? Are your findings in any way a challenge to the focus of existing research and policy in higher education?

The dominant research and policy paradigm has largely focused on the need to increase access and retention through the promotion of student services and social engagement. From this perspective, when learning is examined at all, the focus is often on inadequate academic preparation during high school, and collegiate academic performance is frequently assessed through persistence, student grades, or self-reports of learning. While improved high school academic preparation and greater student retention are worthwhile goals, our work examines whether students are learning, and what may facilitate their learning, during college. We rely on an objective measure of learning and emphasize the importance of academic rigor for gains in critical thinking, complex reasoning, and writing during college. This fundamentally shifts the emphasis to academic performance, which has often been neglected or marginalized in recent discussions of higher education outcomes.

What is the most important finding missing from the popular discussion of the book?

While the average trends in our data indicate that too many students are embedded in institutions or programs that place very limited academic demands on them, and that all too many students learn little during college, there is notable variation across students as well as across institutions. We found many high-performing students from all socioeconomic backgrounds and racial/ethnic groups, as well as students with different levels of academic preparation, who improved their critical thinking, complex reasoning, and writing skills at impressive rates while enrolled in college. In virtually every college examined, we found students who were devoting themselves to their studies and learning at rates substantially above the average.

How reliable is the Collegiate Learning Assessment (CLA)?

The CLA, like any measure of learning, is not a perfect instrument; nevertheless, findings based on the CLA are highly valued by many within the research community. The field of learning research continually strives to improve its measurement instruments, but no instrument is perfect, and in the meantime the CLA provides good, actionable data for schools and policymakers to use. Many of those who have questioned the reliability of the CLA have not publicly offered an alternative objective measure of learning; some have even relied extensively on students’ self-reports of learning to measure academic outcomes in their own research. Moreover, results from the Wabash Study, using the CAAP measure of critical thinking, show similarly low gains in critical thinking over four years of college (the Wabash Study reports a 0.44 standard deviation gain over four years, compared to our estimate of 0.47 standard deviations). Similarly, an extensive review of the literature by Pascarella and Terenzini in How College Affects Students, using a wide range of measures and empirical strategies, estimated that seniors in the 1990s had a 0.50 standard deviation advantage over freshmen in critical thinking. Different measures thus produce slightly different estimates of average gains, but the overall finding of small gains persists.
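For readers unfamiliar with effect sizes, the sketch below shows how a standardized gain such as 0.44 or 0.47 standard deviations is computed. The scores are invented for illustration; they are not data from our study or the Wabash Study.

```python
import statistics

# Invented freshman and senior CLA-style scores, for demonstration only.
freshman_scores = [1020, 1100, 980, 1210, 1090, 1150]
senior_scores = [1080, 1160, 1010, 1270, 1140, 1230]

# A standardized gain expresses mean improvement in units of the
# freshman-year standard deviation, making results comparable across tests.
gain = statistics.mean(senior_scores) - statistics.mean(freshman_scores)
effect_size = gain / statistics.stdev(freshman_scores)

print(f"standardized gain: {effect_size:.2f} standard deviations")
```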

How valid is the Collegiate Learning Assessment (CLA)?

This question has been addressed by a recent validity study comparing the CLA with two other measures of general collegiate skills, the CAAP and the MAPP. Moreover, whatever the limitations of the CLA, our research demonstrates that it is remarkably sensitive to instruction (e.g., reading and writing requirements as well as time on task) and to college contexts (e.g., academic major and institutional selectivity). It is also related to myriad individual-level factors that sociological research has long identified as associated with student learning in elementary and secondary education.

Is the 45% figure of students who showed little or no improvement on the test exaggerated?

The 45 percent figure is based solely on a description of the observed change in CLA test scores. For the purposes of descriptive reporting, we identified the percentage of students who gained less than 8.5 points on the CLA (0.04 standard deviations). CLA scores range over one thousand points in our data, so on a traditional one-hundred-point test this threshold corresponds to less than one point of growth. Forty-five percent of our sample did not improve by even this meager amount on the CLA assessment of critical thinking, analytical reasoning, and writing. Regardless of the specific cutoff used to describe the lack of observed growth, parents, educators, and policymakers would hope that students were investing time in learning and substantially improving their critical thinking, analytical reasoning, and writing skills during college, as measured by objective indicators such as the CLA. Large numbers of them are not.
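The rescaling is simple arithmetic; the snippet below makes it explicit using only the figures stated above. The implied standard deviation in the last line is our inference from those figures, not a number reported separately in the book.

```python
# Figures stated above: a growth cutoff of 8.5 CLA points, a score range
# of roughly 1,000 points, and a cutoff of 0.04 standard deviations.
cutoff_points = 8.5
score_range = 1000
cutoff_sd = 0.04

# Equivalent cutoff on a familiar 0-100 point test: less than one point.
print(cutoff_points / score_range * 100)  # -> 0.85

# Standard deviation implied by the two cutoff expressions (inferred).
print(cutoff_points / cutoff_sd)          # -> 212.5 points
```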

Can you be sure that 45% of students didn’t really learn in terms of critical thinking, analytical reasoning, and writing?

No technique can prove that the measured absence of something is definitive evidence that it does not exist. One could run a range of Monte Carlo simulations, drawing on the extensive data on the distribution of CLA test scores and growth that we provide in our book for the sample as a whole as well as for 27 subgroups (see Table A2.1), to generate slightly different percentages under different statistical assumptions. Whether such exercises yield estimates slightly higher or lower is largely irrelevant, because our empirical conclusion that large numbers of students did not show improvement on the CLA during their time in college is based on a descriptive finding of student performance on the test. Our finding on CLA performance is also consistent with our extensive evidence that many students are not being exposed to a rigorous academic curriculum and are not investing much time in studying.
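To illustrate what such an exercise might look like, here is a minimal Monte Carlo sketch. Every parameter (the true-gain distribution and the measurement-error scale) is an assumption chosen for illustration, not a figure from the book; the point is only that different assumptions shift the simulated percentage somewhat without changing the qualitative picture.

```python
import random

random.seed(0)

# Assumed parameters, in freshman standard deviation units. These are
# illustrative choices, not estimates reported in the book.
TRUE_GAIN_MEAN = 0.47   # average true gain over college
TRUE_GAIN_SD = 0.60     # between-student spread of true gains
NOISE_SD = 0.30         # net measurement error on the gain score
CUTOFF = 0.04           # growth threshold used in the descriptive finding
N = 100_000             # simulated students

below_cutoff = 0
for _ in range(N):
    true_gain = random.gauss(TRUE_GAIN_MEAN, TRUE_GAIN_SD)
    # Observed gain adds net measurement noise (from both testings).
    observed_gain = true_gain + random.gauss(0, NOISE_SD)
    if observed_gain < CUTOFF:
        below_cutoff += 1

print(f"simulated share showing no measurable growth: {below_cutoff / N:.1%}")
```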

To what extent do your results reflect resource allocation problems within colleges and universities?

Some institutions in higher education have inadequate resources. For many other colleges and universities, however, the question is more about how those resources are allocated. Over recent decades, full-time faculty members have been increasingly moved to the periphery of higher education: part-time instructors now make up nearly half of all faculty and instructional staff. Moreover, staff by far outnumber faculty in our colleges and universities today. Recent estimates by the U.S. Department of Education indicate that public four-year institutions employ more than three times as many staff as faculty. In these institutions, the average full-time-equivalent (FTE) student per FTE staff ratio is 4.3, while the average FTE student per FTE faculty ratio is 14.7. Similar patterns are observed in private institutions, with more staff than faculty and higher student-faculty than student-staff ratios. Organizations can be constrained by resources, but they also make decisions about how to invest the resources available to them.
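The two reported ratios imply the staff-to-faculty multiple directly, as this one-line arithmetic check shows:

```python
# If each FTE staff member corresponds to 4.3 FTE students and each FTE
# faculty member to 14.7, staff outnumber faculty by 14.7 / 4.3.
print(14.7 / 4.3)  # -> ~3.4, consistent with "more than three times as many"
```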

Why would we expect that students would work hard on the CLA?

It is crucial to note that we examine students’ performance over time. Our emphasis is not on some benchmark score but on the extent to which students improve their performance from their freshman to their sophomore year, and later to their senior year. Given the voluntary nature of participation in our project, students who were assessed in the sophomore and especially the senior year were selected on their motivation: if a student really was not motivated to take this assessment seriously, why would they even show up for it, especially for the second and third time? Moreover, any evidence that students become less motivated to perform well on such tasks the longer they are enrolled in school would give even greater cause for concern about the state of higher education, as one would expect these institutions to instill in students attitudes and dispositions toward hard work and applying oneself to such tasks. And for those who may suggest that students will not perform well on any standardized measure of learning and that we should rely instead on course grades as an indicator of learning, we note that seniors in our sample who reported studying five or fewer hours a week had an average GPA of 3.16.

How do you reconcile your finding regarding peer studying with a long tradition of research and practice emphasizing collaborative learning?

There are contexts and environments in which active and collaborative learning can produce improvement in students’ skills, such as learning communities or programs following Treisman’s Emerging Scholars model. However, these specifically structured contexts are very different from the more common practice of encouraging students to work together with minimal guidance. How many faculty are pedagogically trained to effectively structure collaborative learning environments or manage groups? To what extent do many students have difficulty either focusing academically or effectively structuring group work with peers? While structured collaborative learning can indeed be effective, in current collegiate contexts we did not observe gains, as measured by the CLA, associated with informal studying with peers.

Why aren’t the schools in your study named?

When we approached colleges and universities requesting their participation in the project, we assured them that we would respect their confidentiality. This is standard practice in social science research. Respecting the confidentiality of collaborating institutional research sites and participating research subjects is essential to ensure that social scientists have similar access to educational institutions and students in the future.

Are students progressing on other measures of learning and development?

We would hope so, but we are disheartened by recent findings from the Wabash Study showing that students improve their critical thinking skills, however modestly, more than virtually any other measured student outcome. Moreover, while students may be developing subject-specific knowledge, would we not still want them to develop critical thinking, analytical reasoning, and writing skills? Finally, given their self-reported time use, which indicates a limited amount of time spent studying, we worry that too little learning of either general or subject-specific skills takes place on our college campuses today. That said, we encourage future studies to use multiple indicators of learning, including higher-order skills such as critical thinking and complex reasoning as well as subject-specific skills.

Why are you arguing against federal accountability?

Existing measures of student learning in higher education are not adequate as the basis for an accountability system, and the unintended negative consequences of introducing such a system would likely be quite pronounced. Moreover, given that the vast majority of variation in student learning is found within schools, it is sensible to focus our efforts on strengthening mechanisms that require colleges and universities to look first not to exemplary colleges down the street, but to pockets of excellence and areas requiring improvement internally, in terms of measured program quality, academic rigor, and demonstrated student learning.

Why have you been demanding greater federal research support for the assessment of student learning?

While one might oppose the use of standardized assessments of student performance (such as the CLA) for accountability purposes, it is quite another matter for the federal government not to collect and make available data containing such measures to advance scientific knowledge of the factors associated with student learning. The federal government has been collecting and disseminating data of this character for decades on nationally representative random samples of elementary and secondary school students. It is a national disgrace that such information has not been made available for social science and educational researchers to explore individual and institutional factors associated with improved performance of students in colleges and universities. Relative to overall federal expenditures on higher education, it would take a modest outlay (likely on the order of $10-15 million) to provide the resources for the National Center for Education Statistics (NCES) to embed longitudinal measures of student performance in studies that already track individuals as they progress through college. A strategic opportunity presents itself in the High School Longitudinal Study (HSLS), a current NCES study that is already tracking students through high school and into college. Join us in demanding that NCES collect and disseminate longitudinal data from a national random sample of students that would track student performance for basic research purposes!

Why does higher education appear to demand so little academically of many students?

The incentive structures in higher education are misaligned with academic rigor. While faculty spend a sizable proportion of their time teaching and preparing for classes, reward structures generally do not focus on these activities. Research is increasingly the key requirement for promotion and tenure in four-year colleges and universities of every type. Moreover, when teaching enters faculty evaluation protocols, it is generally in the form of student evaluations. We know that student evaluations are correlated with the grades students expect to receive in a course and are not necessarily adequate measures of learning. This gives faculty a perverse incentive to demand little and give out good grades, a dynamic reflected in grade inflation. And misaligned incentives do not pertain only to faculty. Administrators are rewarded for leading “successful” institutions, which in practice tends to mean increasing the selectivity of the student body, since college-ranking systems place disproportionate weight on the characteristics of entering students and pay no attention to whether and to what extent students are learning. Parents, although somewhat disgruntled about increasing costs, want colleges to provide a safe environment where their children can mature, gain independence, and attain a credential that will help them succeed as adults. Students in general seek to enjoy the benefits of a full collegiate experience, focused as much on social life as on academic pursuits, while earning high marks in their courses with relatively little investment of individual effort. Undergraduate learning is currently not a priority for any of the actors in the system.

Is there anything we can do? What are the first steps colleges and universities can take? What can parents, citizens and college trustees do?

College and university leaders can commit to promoting organizational cultures conducive to academic rigor and undergraduate learning. Administrators can encourage faculty, collectively and individually, to review program quality to ensure adequate curricular demands (e.g., reading and writing course requirements), enhanced expectations for student academic engagement, and appropriate grading standards. Core curriculum requirements could be reviewed, and resource allocation decisions evaluated for their alignment with undergraduate learning outcomes. Colleges and universities should systematically assess student academic experiences and outcomes to identify areas needing improvement, design plans to address those areas, and monitor their implementation. Faculty should be institutionally supported in efforts to improve instruction and reviewed on a reasonable balance of research, teaching, and service. When teaching quality is assessed for promotion, tenure, and compensation, multiple indicators should be used (such as review of syllabi, peer observation, and sampling of student work) rather than simple reliance on student course evaluations, which are not aligned with promoting academic rigor. Finally, parents, citizens, and trustees should ask institutions directly: What is being done to assess academic rigor and student learning outcomes across programs at the school? How are weaknesses identified and addressed internally to ensure that all students are exposed to high-quality educational opportunities?