Treasury Notes

 A Comparison between the College Scorecard and Mobility Report Cards

By: Adam Looney
1/19/2017

Introduction
 
In 2015, the Department of Education launched the College Scorecard, a vast database of student outcomes at specific colleges and universities developed from a variety of administrative data sources. The Scorecard provides the most comprehensive and accurate information available on the post-enrollment outcomes of students, like whether they get a job, the rate at which they repay their loans, and how much they earn.
 
While labor-market success is certainly not the end-all-be-all of higher education, the notion that a college education is a ticket to a good job and a pathway to economic opportunity is intrinsic to the tax benefits and financial support provided by federal and state governments, to the willingness of parents and families to shoulder the burden of college’s high costs, and to the dreams of millions of students. More than 86 percent of freshmen say that “to be able to get a better job” is a “very important” reason for going to college.[1]
 
That is why the College Scorecard is a breakthrough—for the first time, students have access to detailed and reliable information on the economic outcomes of students after leaving college, including the vast majority of colleges that are non-selective or otherwise fall between the cracks of other information providers.
 
The data show that at every type of post-secondary institution, the differences in post-college earnings across institutions are profound. Some students attend institutions where many students don’t finish, or that don’t lead to good jobs.   
 
Moreover, the analysis behind the Scorecard suggested not only that there are large differences across institutions in their economic outcomes, but that these differences are relevant to would-be students. For instance, the evidence in the Scorecard showed that when a low-income student goes to a school with high completion rates and good post-college earnings, she is likely to do as well as anyone else there. While there are large differences between where rich and poor kids are likely to apply and attend, there is little difference in their outcomes after leaving school: the poorest aid recipients earn almost as much as the richest borrowers. This pattern suggests, at least, that low-income students are not mismatched or underqualified for the schools they currently attend. But it is also consistent with powerful evidence from academic studies that show that when marginal students get a shot at a higher-quality institution, their graduation rates and post-college earnings converge toward those of their new peers (Zimmerman 2014, Goodman et al. 2017).
 
Hence, the Scorecard is likely to provide useful information for students, policymakers, and administrators on important measures of post-college success, access to college by disadvantaged students, and economic mobility.  Indeed, the College Scorecard shows that great economic outcomes are not exclusive to Ivy-League students. Many institutions have both good outcomes and diverse origins—institutions whose admissions policies, or lack thereof, take in disproportionate shares of poor kids and lift them up the economic ladder.
 
Nevertheless, the design of the Scorecard required making methodological choices to produce the data on a regular basis, and making it simple and accessible required choosing among specific measures intended to be representative. Some of these choices were determined by data availability or other considerations.  Some choices have been criticized (e.g. Whitehurst and Chingos 2015). Other valuable indicators could not be reliably produced on a regular basis or in a way that evolved over time as college or student outcomes changed.
 
In part to address these issues, we supported the research that led to the creation of Mobility Report Cards, which provide a test of the validity and robustness of the College Scorecard and an expansion of its scope.
 
Mobility Report Cards (MRCs) attempt to answer the question “which colleges in America contribute the most to helping children climb the income ladder?” and characterize rates of intergenerational income mobility at each college in the United States. The project draws on de-identified administrative data covering over 30 million college students from 1999 to 2013, and focuses on students enrolled between the ages of 18 and 22, for whom both their parents’ income information and their own subsequent labor-market outcomes can be observed. MRCs provide new information on access to colleges of children from different family backgrounds, the likelihood that low-income students at different colleges move up in the income distribution, and trends in access over time.
 
Background on College Scorecard
 
The College Scorecard provides detailed information on the labor-market outcomes of financial-aid recipients post enrollment, including average employment status and measures of earnings for employed graduates; outcomes for specific groups of students, like students from lower-income families, dependent students, and for women and men; and measures of those outcomes early and later in their post-college careers. These outcome measures are specific to the students receiving federal aid, and to the institutions those students attend. And the outcome measures are constructed using technical specifications similar to those used to measure other student outcomes, like the student loan Cohort Default Rate, which allows for a consistent framework for measurement while allowing institution outcomes to evolve from cohort to cohort.
 
The technical paper accompanying the College Scorecard spelled out the important properties and limitations of the federal data used in the Scorecard, regarding the share of students covered, the institutions covered, the construction of cohorts, the level of aggregation of statistics, and how the earnings measures were used.
 
These choices were made subject to certain constraints on disclosure, statistical reliability, reproducibility, and operational capacity, and with specific goals of making the data regularly available (updating it on an annual basis), using measurement concepts similar to those used in other education-related areas (like student loan outcomes), and providing measures that could evolve over time as characteristics of schools and student outcomes changed. These constraints imposed tradeoffs and required choices. Moreover, the research team producing the MRCs was not bound by certain of these methodological requirements or design goals, and thus could make alternative choices. Despite making different choices, however, the analysis below shows that on balance the outcome measures common to both projects are extremely similar.
 
In brief, the Scorecard estimates are based on data from the National Student Loan Data System (NSLDS) covering undergraduate students receiving federal aid. NSLDS data provides information on certain characteristics of students, the calendar time and the student’s reported grade level when they first received aid, and detailed information on the institution they attended (such as the 6- and 8-digit Office of Postsecondary Education Identification number, or OPEID). These data and identifiers are regularly used as the basis for reporting institution-specific student outcomes, like the Cohort Default Rate or disbursements of federal aid. For purposes of constructing economic outcomes using these data, all undergraduate aid recipients were assigned an entry cohort—either the year they first received aid if a first-year college student, or an imputation for their entry year based on the year they were first aided and their academic level. (For instance, if a student self-reported entering their second undergraduate year in the first year they received aid, they would be assigned to the previous year’s cohort.[2]) If a student attended more than one institution as an undergraduate, that student was included in the cohorts of each institution (i.e. their outcomes were included in the average outcomes of each institution—just as is done with the Cohort Default Rate). These data were linked to information from administrative tax and education data at specific intervals post-entry (e.g. 6, 8, and 10 years after the cohort entry year). Adjacent cohorts were combined (e.g. entry cohorts in 2000 and 2001 were linked to outcomes in 2010 and 2011, respectively). Individuals who are not currently in the labor market (defined as having zero earnings) are excluded. Institution-by-cohort measures, like mean or median earnings and the fraction of students who earn more than $25,000 (among those working), were then constructed for the cohorts (e.g. mean earnings for non-enrolled, employed aid recipients ten years after entry for the combined 2000 and 2001 cohorts). Each year, the sample was rolled forward one year, with the earlier cohort being dropped and a new cohort being added, allowing the sample to evolve over time.
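To make these steps concrete, the sketch below shows one way the cohort assignment and earnings measures described above could be implemented. It is illustrative only: the column names (first_aid_year, reported_grade_level, opeid, earnings) and the simple pairing of adjacent cohorts are assumptions, not the actual NSLDS production code.

```python
import pandas as pd

# Illustrative sketch of the Scorecard cohort and earnings-measure construction
# described above. Column names are hypothetical; the actual NSLDS fields and
# production process differ. One row per student-institution pair, so students
# who attended multiple institutions appear in each institution's cohort.

def assign_entry_cohort(first_aid_year: int, reported_grade_level: int) -> int:
    """Impute the entry year from the year first aided and the self-reported
    grade level; the imputation is capped at two years (see footnote 2)."""
    offset = min(max(reported_grade_level - 1, 0), 2)
    return first_aid_year - offset

def cohort_earnings_measures(students: pd.DataFrame) -> pd.DataFrame:
    """Institution-by-cohort earnings measures for employed aid recipients,
    pooling two adjacent entry cohorts (e.g. 2000 and 2001)."""
    df = students.copy()
    df["entry_cohort"] = [
        assign_entry_cohort(y, g)
        for y, g in zip(df["first_aid_year"], df["reported_grade_level"])
    ]
    # Exclude individuals with zero earnings (treated as out of the labor market).
    df = df[df["earnings"] > 0]
    # Pool adjacent entry cohorts; a simple pairing is used here for illustration.
    df["cohort_pair"] = (df["entry_cohort"] // 2) * 2
    return (
        df.groupby(["opeid", "cohort_pair"])["earnings"]
        .agg(
            mean_earnings="mean",
            median_earnings="median",
            share_above_25k=lambda e: (e > 25_000).mean(),
        )
        .reset_index()
    )
```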
 
This focus on aid recipients is natural for producing estimates related to aid outcomes, like student debt levels or the ratio of debt to earnings. Moreover, these data are regularly used to produce institution-specific accountability measures, like the Cohort Default Rate, which are familiar to stakeholders and authorized and regularly used to report institution-specific outcomes. Constructing the sample based on entry year and rolling forward one year allowed for comparisons within schools over time, to assess improvement or the effects of other changes on student outcomes.
 
The focus of and choices underlying the Scorecard also had several potential disadvantages, which were noted in the technical paper or by reviewers offering constructive criticism (e.g. Whitehurst and Chingos 2015). These limitations, criticisms, and omissions include the following, which are specific to the methodology and the underlying data.
 
First, the Scorecard’s sample of students includes only federal student aid recipients. While these students are an obvious focus of aid policies, and comprise a majority of students at many institutions, high-income students whose families cover full tuition are excluded from the analysis. Moreover, schools with more generous financial aid often have a smaller share of students on federal financial aid, implying that the share and type of students included in the Scorecard vary across colleges.
 
Unfortunately, the information needed to assign students to a specific entry cohort at a specific educational institution and to report institution-specific data is not available with the same degree of reliability and uniformity for non-federal-aid recipients. For instance, Form 1098-T (used to administer tax credits for tuition paid) may not identify specific institutions or campuses (e.g. within a state university system) and does not report information on the academic level or entry year of the student. In addition, certain disclosure standards prevented the publication of institution-specific data. Estimates based on aggregated statistics (as are used in the Mobility Report Cards) include an element of (deliberate) uncertainty in the outcomes, and an element of subjectivity in the estimation methodology.
 
Second, FAFSA family income may not be a reliable indicator of access or opportunity. FAFSA family income is measured differently depending on whether students are dependent or independent; it is missing for many who do not receive aid; and it can be misleading for those who are independent borrowers. Unfortunately, information on family background is generally only available for FAFSA applicants (aid recipients) who are dependents at the time of application. Mobility Report Cards provide a more comprehensive and uniform measure of family income, but only for the cohorts of students they are able to link back to their parents (e.g. those born after 1979).
 
Mobility Report Cards
 
The above factors raised concerns about the Scorecard’s reliability and usefulness to stakeholders. In an effort to assess the validity and robustness of Scorecard measures using an alternative sample and with more consistent definitions of family income and more outcomes, we supported the analysis behind the study “Mobility Report Cards: The Role of Colleges in Intergenerational Mobility in the U.S.” (Chetty, Friedman, Saez, Turner, and Yagan 2017).
 
Perhaps most importantly, the Mobility Report Card (MRC) uses records from the Treasury Department on tuition-paying students in conjunction with Pell-grant records from the Department of Education to construct nearly universal measures of attendance between ages 18 and 22 at all U.S. colleges. Thus the MRC sample of students is more comprehensive of this population than the Scorecard sample. However, older students are generally not included in the MRC sample, and certain institutions cannot be separately identified in the MRC sample. Furthermore, the MRC methodology relies on producing estimates of institutional outcomes rather than producing actual data on institution outcomes. At certain institutions, particularly those that enroll a disproportionate share of older students (such as for-profit and community colleges) and where a large share of students receive Title IV aid, the Scorecard provides a more comprehensive sample of student outcomes.[3]
 
Another area of difference is that the MRC organizes its analysis around entire birth cohorts who can be linked to parents in their adolescence. It then measures whether and where each member of the birth cohort attends college. By following full birth cohorts, cross-college comparisons of adult earnings in the MRC measure earnings at the same age (32-34), unlike the Scorecard, which measures adult earnings across colleges at different points in the lifecycle, depending on when the students attended the college. The advantage of the MRC approach is that it allows a comprehensive analysis of the outcomes of the entire birth cohort at regular intervals. However, the disadvantage mentioned above is that there is no information on older cohorts born prior to 1980.
 
In addition, the MRC includes zero-earners in its earnings measures, whereas the Scorecard excludes them from their measures of earnings outcomes.[4] Because it is not possible to differentiate individuals who are involuntarily unemployed (e.g. who were laid off from a job) from those who are out of the labor force by choice (in school, raising children, or retired), the Scorecard focused on measuring earnings specifically for those who clearly were participating in the labor market.
 
Finally, family income in the MRC is measured consistently across cohorts using a detailed and relatively comprehensive measure of household income: total pre-tax income at the household level, averaged over the years when the child is between ages 15 and 19, as reflected on the parents’ tax forms.
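Written as a formula (with notation introduced here only for illustration), the parental income measure assigned to a child is the five-year average of household income over those ages:

```latex
% y_{i,a} denotes total pre-tax household income reported on the parents' tax
% forms in the year child i is age a (notation introduced here for illustration).
\[
  \bar{y}^{\,\text{parent}}_{i} = \frac{1}{5} \sum_{a=15}^{19} y_{i,a}
\]
```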
 
The design choices made in developing the MRC come at the cost of published statistics being granular estimates rather than exact values (see Chetty, Friedman, Saez, Turner, and Yagan 2017) and of not being as easily replicable over time. However, the MRC’s design addresses many of the critiques made of the Scorecard. If the critiques of the Scorecard are quantitatively important, one should find that the MRC and Scorecard values differ substantially. In other words, the MRC data provide an estimate of how much the data constraints and methodological choices affect the data quality.
 
Comparison of the College Scorecard and Mobility Report Cards
 
The most basic test of the robustness of the Scorecard to the variations embodied in the MRC is to compare the main Scorecard adult earnings measure—median earnings of students ten years after they attend a college—with the analogous measure from the MRC: median earnings in 2014 (age 32-34) of the 1980-1982 birth cohort by college. For shorthand, we refer to these measures as Scorecard median earnings and MRC median earnings, respectively.
 
Figure 1 plots MRC median earnings versus Scorecard median earnings.[5] Both median earnings measures are plotted in thousands of 2015 dollars. Overlaid on the points is the regression line fit to the underlying college-level data.
 
 


Figure 1. MRC median earnings versus Scorecard median earnings (figure1-median-earnings.png)
 
The graph shows an extremely tight, nearly one-for-one relationship: a slope of 1.12 with an R² of 0.92. Visually one can see that not only does each extra thousand dollars of Scorecard median earnings typically translate into an extra thousand dollars of MRC median earnings, but the levels line up very closely as well. Hence across the vast majority of colleges, Scorecard median earnings are very close to MRC median earnings.
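The college-level comparison behind Figure 1 can be reproduced in outline as follows. This is a minimal sketch: the file and column names are hypothetical, and the official analysis applies the sample restrictions described in footnote 5.

```python
import numpy as np
import pandas as pd

# Illustrative sketch of the Figure 1 comparison. File and column names
# (opeid, scorecard_median_earnings, mrc_median_earnings) are hypothetical.
scorecard = pd.read_csv("scorecard_college_level.csv")  # median earnings 10 years after entry
mrc = pd.read_csv("mrc_college_level.csv")              # median earnings at ages 32-34

# Keep only colleges observed in both data sets.
merged = scorecard.merge(mrc, on="opeid", how="inner")

# Fit the college-level regression of MRC median earnings on Scorecard median earnings.
x = merged["scorecard_median_earnings"]
y = merged["mrc_median_earnings"]
slope, intercept = np.polyfit(x, y, deg=1)
r_squared = np.corrcoef(x, y)[0, 1] ** 2

print(f"slope = {slope:.2f}, R^2 = {r_squared:.2f}")  # reported values: 1.12 and 0.92
```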
 
The close correspondence between MRC median earnings and Scorecard median earnings can also be seen when examining college-level comparison lists. For example, among colleges with at least 500 students, almost exactly the same colleges appear in the top rankings using either measure. (This is natural given the very high R² reported in Figure 1.) Hence, the Scorecard and MRC share a very tight relationship.
 
In unreported analysis, we find that two offsetting effects tend to explain this very tight relationship between Scorecard median earnings and MRC median earnings. On the one hand, the MRC’s inclusion of students who earn nothing as adults somewhat reduces each college’s median adult earnings. On the other hand, the MRC’s inclusion of students from high-income families somewhat increases each college’s median adult earnings, as students from high-income families are somewhat more likely to earn high incomes as adults. The two competing effects tend to offset each other in practice, yielding MRC median earnings that are quite close to Scorecard median earnings.
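The direction of each effect can be illustrated with a stylized simulation. All parameters below are made up purely for illustration; whether the two effects fully offset at a given college depends on its actual student composition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized simulation (made-up parameters) of the two compositional effects
# described above, for one hypothetical college.
n = 10_000
high_income_family = rng.random(n) < 0.3                 # hypothetical share of non-aided, high-income students
earnings = rng.lognormal(mean=10.6, sigma=0.6, size=n)   # adult earnings of employed students
earnings *= np.where(high_income_family, 1.15, 1.0)      # assumed earnings premium for high-income families
earnings[rng.random(n) < 0.12] = 0.0                     # some students have zero earnings

# Scorecard-style median: aid recipients only (proxied here by excluding
# high-income families), conditional on positive earnings.
scorecard_like = np.median(earnings[(~high_income_family) & (earnings > 0)])

# MRC-style median: all students, zero earners included.
mrc_like = np.median(earnings)

print(f"Scorecard-like median: {scorecard_like:,.0f}")
print(f"MRC-like median:       {mrc_like:,.0f}")
```

In the toy example, including zero earners pulls the MRC-style median down while including the higher-earning students from high-income families pushes it up; in the actual data these forces roughly cancel.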
 
While some schools are outliers, in the sense that the measures differ, those examples are often readily explained by differences in methodological choices. For instance, because the Scorecard conditions on having positive earnings, schools where an unusually high share of students voluntarily leave the labor force have different outcomes in the MRC than in the Scorecard. The other important contributor to outliers is the MRC’s restriction to students enrolled between ages 18 and 22, which tends to exclude many older, mid-career workers. These individuals tend to be employed, often have relatively high earnings, and tend to enroll at for-profit schools (or other schools aimed at providing mid-career credentials). The Scorecard includes these students, whereas the MRC tends to exclude them.
 
Conclusion
 
The College Scorecard was created to provide students, families, educators, and policymakers with new information on the outcomes of students attending each college in the United States, and to improve the return on federal tax and expenditure programs. Mobility Report Cards expand the scope of the information on the outcomes and the characteristics of students attending American colleges. Our analysis finds a very high degree of agreement at the college level between Scorecard median adult earnings and Mobility Report Card median adult earnings, suggesting that the Scorecard is a reliable tool for measuring the outcomes of students and institutions that benefit from federal student aid and tax expenditures.

References
 
Chetty, Raj, John N. Friedman, Emmanuel Saez, Nicholas Turner, and Danny Yagan. “Mobility Report Cards: The Role of Colleges in Intergenerational Mobility in the U.S.” (2017).
 
Goodman, Joshua, Michael Hurwitz, and Jonathan Smith. “Access to Four-Year Public Colleges and Degree Completion.” Journal of Labor Economics (2017).
 
Whitehurst, Grover J. and Matthew M. Chingos. “Deconstructing and Reconstructing the College Scorecard.” Brookings Working Paper (2015).
 
Zimmerman, Seth D. “The Returns to College Admission for Academically Marginal Students.” Journal of Labor Economics 32.4 (2014): 711-754.
 
Adam Looney, Deputy Assistant Secretary for Tax Analysis at the U.S. Department of the Treasury.

[1] https://www.washingtonpost.com/news/rampage/wp/2015/02/17/why-do-americans-go-to-college-first-and-foremost-they-want-better-jobs
[2] This assignment was capped at two years, so that students reported entering their third, fourth, or fifth year were assigned a cohort two years prior.
[3] For instance, in the 2002 Scorecard entry cohort, 42 percent of students were over age 22 when they first received aid.    
[4] The Scorecard database does include the fraction of borrowers without earnings, which allows for the computation of unconditional mean earnings.
[5] We also restrict to colleges with at least 100 MRC students on average across the 1980-1982 birth cohorts and to colleges that have observations in both the Scorecard and the MRC. For MRC colleges that are groups of Scorecard colleges, we use the count-weighted mean of Scorecard mean earnings across colleges within a group. See Chetty, Friedman, Saez, Turner, and Yagan (2017) for grouping details.