Banishing Bad Anti-School Discipline Reform Reports

Last week’s Dropout Nation exposés of the shoddy data and analysis used by anti-school discipline reform types such as Max Eden of the Manhattan Institute and Thomas B. Fordham Institute President Michael Petrilli generated a lot of discussion, both on social media and within education policy circles. This is good. Exposing intellectual sophistry, especially the data manipulation and trumpeting of poorly-constructed research practiced by Eden, Petrilli and their ilk (along with their willful disregard of high-quality studies based on longitudinal student data), is critical to honest policy and practice in the overhaul of American public education.

Yet we must continually remember that bad studies based on shoddy data don’t disappear. If anything, they are recycled over and over again, both by advocates who deliberately engage in sophistry to further their cause (and to influence policymakers who want to agree) and by well-meaning pundits who read only the executive summaries, less-than-thorough news reports and little else.

Two analysts at the D.C. Policy Center, Chelsea Coffin and Kathryn Zickuhr, made this mistake earlier this month when they cited several low-quality anti-school discipline reform studies in their otherwise-interesting policy paper advising the District of Columbia’s city council to provide adequate support for implementing a proposed ban on meting out suspensions for minor infractions. As some of you may know, the Nation’s Capital is considering a proposal from Councilmember David Grosso (who chairs the council’s education oversight panel) that addresses concerns raised by families, traditionalists and some reformers over the overuse of harsh discipline by both D.C. Public Schools and charter school operators, as well as revelations that some operators have been understating their out-of-school suspension levels.

One mistake made by Coffin and Zickuhr? Citing the Thomas B. Fordham Institute’s latest study of Philadelphia’s school discipline reform efforts. As Dropout Nation pointed out last week, the report’s assertion that reducing suspensions for nonviolent offenses has little effect on achievement is based on two years of school-level data that doesn’t actually measure how the reforms affect the learning of individual students or subgroups. It also doesn’t consider how well individual schools implemented the reforms in that period, a matter discussed by the University of Pennsylvania’s Consortium for Policy Research in Education in a similar study also released last month. [By the way: D.C. Policy Center doesn’t even link to University of Pennsylvania’s findings.] As a team of researchers led by Karega Rausch, a leading expert on school discipline who now heads research for the National Association of Charter School Authorizers, pointed out last year in a report for the Center for Reinventing Public Education, longitudinal student data, which shows how children are affected by changes in discipline policies, is the best measure, one that Fordham’s researchers could have accessed had they worked with the City of Brotherly Love’s traditional district.

Another problem with Coffin and Zickuhr’s report? It also links to Eden’s ‘study’ of school climate throughout New York City and the school discipline reform efforts undertaken by the New York City Department of Education under former Mayor Michael Bloomberg and his successor, Bill de Blasio. As your editor noted last week, that report is also too flawed to be taken seriously. One reason why? Eden didn’t simply measure the raw results from the Big Apple’s school climate surveys over the five-year period being measured (2011-2012 to 2015-2016), which is the most reliable way of analyzing what is already unreliable subjective data. Instead, Eden cobbled together a “distribution-of-differences” analysis in which any change of 15 percentage points on each of the questions represented “a substantial shift” in attitudes on school safety at each school in the district. How did he arrive at 15 percentage points instead of, say, 20 or 10 or even five? Eden doesn’t explain. That data alchemy, along with the substandard nature of the underlying survey data, makes Eden’s report even less reliable than it already appears.
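To see why that unexplained cutoff matters, here is a minimal sketch (in Python, using entirely made-up per-school numbers, since Eden’s underlying data isn’t reproduced here) of how the share of schools flagged as showing “a substantial shift” swings with whatever threshold the analyst happens to pick:

```python
# A toy "distribution-of-differences" check. Each number is a hypothetical
# school's change (in percentage points) on one climate-survey question
# between 2011-2012 and 2015-2016; none of this is Eden's actual data.
changes = [-22, -17, -14, -12, -9, -4, -1, 2, 6, 11, 13, 16, 19]

def count_substantial_shifts(deltas, threshold):
    """Count schools whose change meets or exceeds the chosen cutoff."""
    return sum(1 for d in deltas if abs(d) >= threshold)

# The headline finding swings with the cutoff, and no cutoff is justified:
for cutoff in (5, 10, 15, 20):
    flagged = count_substantial_shifts(changes, cutoff)
    print(f"cutoff = {cutoff:>2} points: {flagged} of {len(changes)} "
          f"schools show a 'substantial shift'")
```

With these invented figures, a five-point cutoff flags ten of 13 schools, a 15-point cutoff flags four, and a 20-point cutoff flags just one. The “finding” is largely an artifact of the threshold.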

Your editor can’t totally blame Coffin and Zickuhr for relying on shoddy research. As with everything in education policy, high-quality research takes years to emerge. In the case of the school discipline reforms currently being undertaken in places such as Philadelphia, the four-to-eight years of longitudinal student data needed to gain a good handle on what is happening will make life more difficult for pundits and wonks who care a lot about policy wins and making big splashes. Which means it will be tempting to base opinions and recommendations on weaker work product, especially from big-name think tanks willing to shovel out slapdash white papers instead of doing solid work.

That said, Coffin and Zickuhr could have easily looked at University of Pennsylvania’s report, whose interviews provide much stronger insights on the challenges districts can face during the first two years of implementing a discipline reform (as well as how schools implement it at the beginning), or even gained access to a study of Minneapolis Public Schools’ pilot program using restorative justice for children facing expulsion for violent infractions (which gives an idea of possible benefits as well as issues in implementation at scale). Both have limitations, but can add some color to the discussion if those limitations are kept in mind. [Happily, Coffin and Zickuhr do cite one of University of Chicago Consortium on School Research’s two reports on school discipline reform efforts in Chicago, which, unfortunately, don’t provide longitudinal student achievement results.] A call to school and community leaders on the ground working on this issue would have also helped. This includes Oakland Education Fund Executive Director Brian Stanley, who helped implement the Bay Area district’s ban on suspensions for minor infractions.

As for other wonks and polemicists (as well as traditional news reporters) looking to write more thoughtful pieces on school discipline reform? Your editor offers some advice. The first? Always read beyond the executive summaries. This includes reading the list of cited references and sources usually found in the back of a report or study. Put this way: If the study’s citation and reference lists include the likes of Eden and his Manhattan Institute colleague, Heather Mac Donald (the latter of whom focuses on law enforcement and immigration, and tends to dismiss any discussion of racial disparities), ignore it.

Also, if it doesn’t mention work by respected researchers on school discipline such as Russell Skiba of Indiana University, Johns Hopkins University’s Robert Balfanz, John Wallace of the University of Pittsburgh or Rausch (all of whom use longitudinal student data in their research), then it deserves no consideration at all. Therefore, ignore the anti-school discipline reform white paper on Wisconsin’s efforts making the rounds this week, unless you want to give your child paper for cutting and origami. [Which is what happens to a lot of white papers coming to my office.]

Another alarm bell: when a report or study makes assertions that it later admits cannot be supported by the underlying data, or that fail to hold up under additional analyses, including stress tests to verify results. In the case of the discipline studies using school-level data championed by the anti-school discipline reform crowd, the results are often not going to be granular enough (that is, offer enough detail on how individual students or groups of students are affected by a reform or intervention) to support anything more than the most tepid assertions.

Additionally, if a study doesn’t acknowledge that research and data bear out other sensible reasons for embarking on school discipline reform, then it shouldn’t be taken seriously. Why? Because that failure is evidence the study is little better than the kind of white papers you would expect from Forrester Research and other market insight firms whose predictions, as legendary former Forbes Editor William Baldwin would say, won’t come within a country mile of being realized. This is why a study by Boston University grad student Dominic Zarecki, which was used by Eden in an op-ed last week, has little value to anyone seriously discussing school discipline reform.

Finally, school reformers, most notably champions of discipline reform, must challenge, call out and dismiss shoddy data, especially when it is used by allies opposed to overhauling how children are corrected in schools. Researchers such as Daniel Losen of the Civil Rights Project at UCLA, along with advocates on the ground, already do this. There’s no reason to let colleagues engage in patently dishonest data usage, especially when they chant the mantra of using high-quality data on other issues.
