Of the educational accountability plans, or ESSA plans, submitted to the federal Education Department by sixteen states and the District of Columbia, the three turned in by Arizona, Colorado, and Illinois stand out as the strongest, according to a new analysis released today. On the other hand, North Dakota’s federal education compliance plan earned the lowest marks.
The center-right education advocacy group, the Thomas B. Fordham Institute, is behind the study, “Rating the Ratings: Analyzing the First 17 ESSA Accountability Plans.” ESSA, short for the Every Student Succeeds Act, refers to a law passed in late 2015 that replaced the Bush-era No Child Left Behind Act. Because states are given more freedom under ESSA to chart their own education policy course, the accountability plans they are submitting to the federal Education Department are being closely watched by a number of educational think tanks.
The Fordham study is significant because it narrowly focuses on the elements of the accountability plans that the group finds most important. Broadly speaking, the authors of the Fordham report, Brandon Wright and Michael Petrilli, are proponents of growth measurements over proficiency metrics, and of assigning schools easy-to-understand summative ratings. Based on these criteria, Wright, who spoke with InsideSources about the report, said that in general the states are moving in the right direction.
“As a whole, these first 17 plans indicate that states are learning from No Child Left Behind and are fixing the flaws of the previous law,” he said. (Notably, the Fordham analysis only looks at how the state plans affect grades K-8, cleaving off the high school portions of the plans.)
The growth versus proficiency debate is wonky enough to have famously tripped up Secretary DeVos during her confirmation hearing. A simple proficiency measure looks at what percentage of students in a school or school system have hit a pre-established achievement benchmark. The problem with this approach, according to Wright, is that it incentivizes educators to focus only on those students who are “on the bubble,” meaning just below or just above proficiency. The argument is that students who are already well above proficiency—or are too far behind to have any hope of reaching proficiency—are more likely to be ignored, because their schools don’t earn any “credit” for focusing on them under these models. Accountability models that use the more sophisticated “performance indexes” or “average scale scores” are an improvement, according to Wright, because they give schools more credit for focusing on the achievement of all their students.
Another group that loses under simpler measurements of achievement, according to Wright, is schools that serve large populations of economically disadvantaged students. These schools, which may be doing yeoman’s work in improving their students’ outcomes, still often end up with lower raw test scores than their wealthier neighbors, simply because their students entered from a sub-optimal starting position. To ensure that an accountability system is fair to all schools, the Fordham study rewards state plans that emphasize a “growth for all” approach to accountability.
Finally, the Fordham report puts an emphasis on whether state plans include “clear labels” when evaluating the strength of a school system. Under ESSA, states are required to inform the public on school performance, but they have latitude in how they do so. Many states either already have, or are introducing, accountability dashboards that allow parents and policymakers to drill down on a school’s performance across a variety of different metrics. While the report applauds these efforts, Wright and Petrilli make it clear that they also want to see a school’s performance distilled further to a single rating (which may be an “A” to “F” letter grade or a one-to-five star scale) that is easy for parents to understand.
While some argue that it is unfair to boil down an institution as complex as a school to a single summative grade, Wright argued that parents should be able to get a quick and easy sense of a school’s quality by looking at one label. Of the seventeen submitted ESSA plans, only North Dakota’s and Oregon’s have indicated that they will definitely not be assigning summative labels to schools. (Michigan still hasn’t made a final determination on its accountability system).
“If the point of a school rating system is to inform the public, inform parents, to inform administrators, teachers, etc. of generally how schools are doing, then the only way states can accomplish that for the most people is to use some sort of summative grades,” said Wright.
One last interesting note from Wright: he expects the second wave of ESSA plans, which are due in September, to look similar to the first on the whole—despite some of the noise coming out of Washington recently.
Citing the many months most states have already spent working on their plans, Wright said, “I think that the environments that the next 34 plans are being created in aren’t much different than the environments the first 17 were created in. So I don’t expect to see big differences between the two batches.”