A little over a week ago, the Obama administration released version 2.0 of its College Scorecard, a dramatic expansion of the tool created in 2013 to help students and parents find key information about colleges and universities. The new Scorecard is the end result of the administration’s ambitious (and misguided) goal of rating each college that participates in federal student aid programs, a goal it finally abandoned after years of criticism from a wide array of observers.

In lieu of actually rating each college and university, the administration’s new Scorecard offers a look at data not previously available to the public, most notably the median earnings of alumni from individual colleges ten years after they began their studies. The new data are available both to consumers via a redesigned website and to researchers in a more detailed dataset. Now that the data are posted, outside entities are free to develop their own tools to help students find the right school for them; in fact, some already have.

While the updated Scorecard has received its fair share of praise for injecting new data into the conversation, it has also been subject to heavy criticism, particularly from higher education itself. For example, arguing that the Scorecard might not provide “meaningful information,” Molly Corbett Broad, the president of the country’s largest higher education trade association (the American Council on Education), said “it appears the system only provides a single [earnings] number for an entire institution regardless of whether a student studied chemical engineering or philosophy.” Christopher Nelson, the president of St. John’s College, lamented that the new Scorecard “continues to place a premium on economic rather than noneconomic valuations.”

These criticisms are legitimate. Institution-level earnings data can be misleading: schools vary widely in the programs they offer, so a school that graduates a lot of engineers will post higher median earnings than even the school that does the best job educating humanities majors. Program-level data on the post-graduation success of alumni would be more informative and could help students make apples-to-apples comparisons. And the Scorecard certainly emphasizes earnings and loan repayment metrics over other indicators of success like student learning, engagement, or satisfaction. Some third-party efforts, like the Voluntary System of Accountability for public colleges and the new Gallup-Purdue Index, have done more on these fronts.

But it’s important to put these real concerns in perspective. The data certainly have limitations, but we now know a lot more than we knew before, especially when it comes to exceptionally awful schools. For example, it’s helpful to know that there are a huge number of colleges where a majority of former students don’t earn more than a high school graduate ten years after attending; that’s a low bar that 53% of the institutions in the data fail to clear. Likewise, it’s useful to identify the nearly 350 colleges where over half of students default on their loans or fail to pay down a dollar of principal within seven years.
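Readers who want to check figures like these against the raw data can do so, since the full Scorecard files are public. The sketch below is a minimal illustration of that kind of calculation, not a reproduction of the official numbers: the file name and the column name (share_above_hs_grad_10yr, for the share of former students out-earning a typical high school graduate ten years after entry) are placeholders, and the real variable names live in the data dictionary that ships with each release.

```python
# Minimal sketch: estimate the share of institutions where a majority of
# former students earn no more than a typical high school graduate ten
# years after entering. File and column names are placeholders; consult
# the data dictionary in the actual Scorecard release.
import pandas as pd

df = pd.read_csv("scorecard.csv", low_memory=False)

# Hypothetical column: fraction of former students earning more than a
# high school graduate ten years after entry. Suppressed or missing
# values are coerced to NaN and dropped before computing the share.
share = pd.to_numeric(df["share_above_hs_grad_10yr"], errors="coerce").dropna()

failing_share = (share <= 0.5).mean()
print(f"{failing_share:.0%} of reporting institutions fail the earnings bar")
```

The same pattern, pointed at a repayment-rate column instead, would surface the roughly 350 colleges where most students default or fail to pay down principal within seven years.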

Such facts may not be helpful in figuring out which college is the absolute best match for your talents, interests, and budget constraints. They will, however, help you learn which ones to avoid—almost like a Surgeon General’s warning for colleges. Clearly, consumers need more than that to make a choice, but narrowing the list is important, too.

To be sure, advocates and policymakers should push the U.S. Department of Education to improve the data that are available, starting with program-level measures. There is bipartisan legislation in Congress that would do so (and make other improvements), but, ironically, it has been held up by opposition from some of the same trade associations that have criticized the data we do have.

The feds should also focus on making those data available to outside organizations that can then use them to create their own ratings and user-friendly websites. Releasing the data underlying the Scorecard was a first step in this direction.

It goes without saying that policymakers should avoid publishing data that might mislead students. On that, we agree with the critics; program-level outcomes are essential. But arguing for improvements is very different from arguing to keep consumers in the dark. As Congress debates the upcoming reauthorization of the Higher Education Act, reformers should both learn from the Scorecard’s shortcomings and build on the foundation it has laid.