Beware of rankings without context
by Jonathan Brand, Cornell College President
When the U.S. Department of Education released its first College Scorecard in September, the media narrative was that college rankings would become more reliable because they could take advantage of large amounts of federally verified data. A few months later, what we have seen is that data without context aren’t very helpful.
As one example, prior to publishing the website, the Department of Education released a large data set that reported information on college attendees—but only on those who received federal financial aid. This data set—again, covering only those who received federal financial aid—showed financial and performance outcomes for these students at the nation’s colleges and universities six and 10 years after enrollment. However, students who receive federal financial aid come from different family financial backgrounds than those who don’t. Because of those differences, the population not considered in these data may have more connections and more opportunities after graduation than those profiled. If all students were considered, the median income might be significantly higher than what the current scorecard shows.
Another issue with how these data are used in the scorecard is that by the time the website was completed, the visible data reflected only the 10-year outcomes. The six-year data (included in the original release) aren’t reflected in the main dashboards of the site and aren’t easily accessible to the public via the scorecard website. From the very start, this causes confusion for visitors, who find that many of the independent stories and rankings (based on the federal data) use both the 10-year and the six-year data sets, or the six-year data set alone.
Finally, the scorecard has made it easier to produce rankings, but those rankings are not necessarily better information than what was previously available. Media organizations often apply audience-specific lenses to the data, but fail to explain why the data they are using are important or what filters they applied to arrive at their final subset.
Defining “upward mobility”
Days after the scorecard was released, National Public Radio’s “Planet Money” put together a collection of rankings, including a list of schools that emphasize upward mobility. For each school, the list considered the percentage of students receiving Pell Grants, which go to low-income students; the net price of college for families making less than $48,000; the number of first-generation college students at that institution; its four-year graduation rate; and the median income of its students 10 years after enrolling, among other things.
The top three schools out of 50 were Harvard University, the Massachusetts Institute of Technology, and Stanford University. All three had low net prices for low-income families, graduation rates above 90 percent, and median graduate incomes of more than $80,000. All of those statistics are impressive, but what do they really tell us? To begin with, combined undergraduate enrollment at these three institutions is less than 19,000, out of a total undergraduate enrollment in the U.S. of more than 12 million. While the outcomes of graduates from these schools are certainly impressive, the overall impact on the national problem is minimal. After all, on average only 15 percent of the undergraduates at these institutions qualify as low-income students, the group most affected by a college education’s impact on upward mobility; across all three campuses, that is fewer than 3,000 students.
It’s all relative
At the Chronicle of Higher Education, Andy Thomason put together five rankings using the scorecard data. He noted that there are many caveats, including that the income data provided include only students who received federal financial aid. Still, he created a list of the lowest median incomes among colleges where students scored an average of at least 1400 on the SAT—a very specific ranking indeed.
The first question to ask when looking at that ranking is, “Why is it valuable?” Why was the lens placed only on colleges with the highest-performing students as measured by test scores? Why would someone calculate a median salary that excludes at least 3,000 U.S. institutions to arrive at a ranking of lowest median annual earnings? And why would someone examine a median that excludes the salaries of students who did not receive federal financial aid, without considering the size of that excluded population and the post-graduation advantages those students may have relative to students who received aid?
It’s also important to consider whether median salary should be the final metric of college value in the first place. Our students need to be able to earn a living, of course, but they also need to find fulfilling careers. Should rankings penalize a school that graduates more teachers and social workers than lawyers, engineers, and doctors?
Factors such as salary, graduation rate, debt at graduation, and student loan default rate all matter, but so do harder-to-measure factors such as career satisfaction. Any ranking that doesn’t consider the student experience and the kinds of careers students find after graduation is missing a big part of the picture.
From the questionable to the sensationalized
While some rankings were simply unhelpful, others, unfortunately, were sensationalized.
StartClass.com ran a list in November that purported to show colleges whose alumni make less than high school graduates. The site listed the 25 schools with the lowest median earnings for students six years after enrolling; many of those figures were below $30,000, the median salary for high school graduates between the ages of 25 and 34.
The largest problem with the StartClass comparison is that the numbers it used simply are not comparable. The $30,000 figure the site uses for the median salary of high school graduates is calculated for people seven to 16 years past high school graduation (aged approximately 25 to 34). The scorecard data, by contrast, were point-in-time figures for students six years after they enrolled in college, whether or not they graduated (aged approximately 24 to 25).
The ranking, therefore, compares the salaries of people who have been active in the workforce for between seven and 16 years against those of people who have been in the workforce for, at most, two years.
It’s even more egregious when you consider that the StartClass site included the appropriate data—median salary 10 years after enrollment—but failed to use that information when deciding which colleges’ students “earned less than high school graduates.” If it had, the list would have been only one item long, because only one school in its data set had students earning less than $30,000 10 years after enrolling.
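For readers who want to reproduce that check, the sketch below shows the comparison in rough form. It assumes a CSV export of the scorecard data; the file name and column names are hypothetical placeholders chosen for readability, not the scorecard’s official field names.

```python
# A minimal sketch of the apples-to-apples check described above.
# The file name ("scorecard_export.csv") and column names
# ("median_earnings_6yr", "median_earnings_10yr") are hypothetical
# placeholders, not the scorecard's official field names.
import pandas as pd

HS_MEDIAN_SALARY = 30_000  # cited median salary for high school graduates aged 25-34

scorecard = pd.read_csv("scorecard_export.csv")

# The misleading comparison: earnings six years after enrollment
# (a cohort in the workforce for at most about two years) measured
# against a benchmark for workers 7-16 years past high school.
below_at_6yr = scorecard[scorecard["median_earnings_6yr"] < HS_MEDIAN_SALARY]

# The fairer comparison: earnings 10 years after enrollment
# against the same benchmark.
below_at_10yr = scorecard[scorecard["median_earnings_10yr"] < HS_MEDIAN_SALARY]

print(f"Schools below the benchmark at 6 years:  {len(below_at_6yr)}")
print(f"Schools below the benchmark at 10 years: {len(below_at_10yr)}")
```

The point of the second filter is simply that any claim about what a college’s students ultimately earn should rest on the later, 10-year figures rather than on earnings measured while most of the cohort is barely out of school.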
What is the right way to rank?
College rankings will continue to be controversial, and no matter how objective they claim to be, they are the product of imperfect measurements and decisions about the importance of one factor over another. But when rankings offer no context for the data they use, they can be intentionally or unintentionally misleading.
Colleges will continue to welcome accountability for graduate outcomes and recognize the value of a scorecard that offers students the tools to make an informed decision about their college journey. However, scorecards and the rankings built on them must do two things before they can offer an accurate comparison: take into consideration the aspects of a college education that can’t be quantified purely by income figures, such as campus experiences, career fulfillment, and contribution to society, and resist the temptation to sensationalize results.