
Banishing Bad Anti-School Discipline Reform Reports


Last week’s Dropout Nation exposés of the use of shoddy data and analysis by anti-school discipline reform types such as Max Eden of the Manhattan Institute and Thomas B. Fordham Institute President Michael Petrilli generated a lot of discussion, both on social media and within education policy circles. This is good. Exposing intellectual sophistry, especially the kinds of data manipulation and trumpeting of poorly-constructed research done by Eden, Petrilli and their ilk (along with their willful ignorance of high-quality studies based on longitudinal student data), is critical to honest policy and practice in the overhaul of American public education.

Yet we must continually remember that bad studies based on shoddy data don’t disappear. If anything, they are recycled over and over again, both by advocates who deliberately engage in sophistry to further their cause (and to influence policymakers who want to agree) and by well-meaning pundits who read only the executive summaries, less-than-thorough news reports and little else.

Two analysts at the D.C. Policy Center, Chelsea Coffin and Kathryn Zickuhr, made this mistake earlier this month when they cited several low-quality anti-school discipline reform studies in their otherwise-interesting policy paper advising the District of Columbia’s city council to provide adequate support for implementing a proposed ban on meting out suspensions for minor infractions. As some of you may know, the Nation’s Capital is considering a proposal from Councilmember David Grosso (who chairs the council’s education oversight panel) that addresses concerns raised by families, traditionalists and some reformers over the overuse of harsh discipline by both D.C. Public Schools and charter school operators, as well as revelations that some operators have been understating their out-of-school suspension levels.

One mistake made by Coffin and Zickuhr? Citing the Thomas B. Fordham Institute’s latest study of Philadelphia’s school discipline reform efforts. As Dropout Nation pointed out last week, the report’s assertion that reducing suspensions for non-violent offenses has little effect on achievement is based on two years of school-level data that doesn’t actually measure how the reforms affect learning for individual students or subgroups. It also doesn’t consider how well individual schools implemented the reforms in that period, a matter discussed by the University of Pennsylvania’s Consortium for Policy Research in Education in a similar study also released last month. [By the way: D.C. Policy Center doesn’t even link to University of Pennsylvania’s findings.] As a team of researchers led by Karega Rausch, a leading expert on school discipline who now heads research for the National Association of Charter School Authorizers, pointed out last year in a report for the Center on Reinventing Public Education, longitudinal student data, which shows how children are affected by changes in discipline policies, is the best measure, one that Fordham’s researchers could have accessed had they worked with the City of Brotherly Love’s traditional district.

Another problem with Coffin’s and Zickuhr’s report? That it also links to Eden’s ‘study’ of school climate throughout New York City and the school discipline reform efforts undertaken by the New York City Department of Education under former Mayor Michael Bloomberg and his successor, Bill de Blasio. As your editor also noted last week, it is too flawed to be taken seriously. One reason why? Eden didn’t simply measure the raw results from the Big Apple’s school climate surveys over the five-year period (2011-2012 to 2015-2016) being measured, which is the most-reliable way of analyzing what is already unreliable subjective data. Instead, Eden cobbled together a “distribution-of-differences” analysis in which any change of 15 percentage points on each of the questions represented “a substantial shift” in attitudes on school safety for each school in the district. How did he arrive at 15 percentage points instead of, say, 20 or 10 or even five? Eden doesn’t explain. The data alchemy, along with the substandard nature of the underlying survey data, makes Eden’s report even less-reliable than it already appears.

Your editor can’t totally blame Coffin and Zickuhr for relying on shoddy research. As with everything in education policy, high-quality research takes years to emerge. In the case of the school discipline reforms currently underway in places such as Philadelphia, the need for four-to-eight years of longitudinal student data to get a good handle on what is happening will make life more-difficult for pundits and wonks who care a lot about policy wins and making big splashes. Which means it will be tempting to base opinions and recommendations on shoddier work product, especially from big-name think tanks willing to shovel out slipshod white papers instead of doing solid work.

That said, Coffin and Zickuhr could have easily looked at University of Pennsylvania’s report, whose interviews provide much-stronger insights into the challenges districts can face during the first two years of implementing a discipline reform (as well as how schools implement them at the outset), or even at a study of Minneapolis Public Schools’ pilot program using restorative justice for children facing expulsion for violent infractions (which gives an idea of possible benefits as well as issues in implementing at scale). Both have limitations, but can add some color to the discussion if properly caveated. [Happily, Coffin and Zickuhr do cite one of the University of Chicago Consortium on School Research’s two reports on school discipline reform efforts in Chicago, which, unfortunately, don’t provide longitudinal student achievement results.] A call to school and community leaders working on this issue on the ground would have also helped. This includes Oakland Education Fund Executive Director Brian Stanley, who helped implement the Bay Area district’s ban on suspensions for minor infractions.

As for other wonks and polemicists (as well as traditional news reporters) looking to write more-thoughtful pieces on school discipline reform? Your editor offers some advice. The first? Always read beyond the executive summaries. This includes reading the list of cited references and sources, usually in the back of a report or study. Put this way: If the study’s citation and reference lists include the likes of Eden and his Manhattan Institute colleague, Heather Mac Donald (the latter of whom focuses on law enforcement and immigration, and tends to dismiss any discussion of racial disparities), ignore it.

Also, if it doesn’t mention work by respected researchers on school discipline such as Russell Skiba of Indiana University, Johns Hopkins University’s Robert Balfanz, John Wallace of the University of Pittsburgh or Rausch (all of whom use longitudinal student data in their research), then it deserves no consideration at all. Therefore, ignore this anti-school discipline white paper on Wisconsin’s efforts making the rounds this week — unless you want to give your child paper for cutting and origami. [Which is what happens to a lot of white papers coming to my office.]

Another alarm bell: when the report or study makes assertions that it later admits cannot be supported by the underlying data or by additional analyses, including stress tests to verify results. In the case of the discipline studies using school-level data championed by the anti-school discipline reform crowd, the results are often not “granular enough” (that is, they don’t offer enough detail on how individual students or groups of students are affected by a reform or intervention) to support anything more than the most-tepid assertions.

Additionally, if the study doesn’t acknowledge that other research and data bear out sensible reasons for embarking on a school discipline reform, then it shouldn’t be taken seriously. Why? Because the failure to admit this is evidence that the study is little better than the kind of white papers you would expect out of Forrester Research and other market-insight firms whose predictions, as legendary former Forbes editor William Baldwin would say, won’t come within a country mile of being realized. This is why a study by Boston University grad student Dominic Zarecki, which was used by Eden in an op-ed last week, has little value to anyone seriously discussing school discipline reform.

Finally, school reformers, most-notably those who are champions of discipline reform, must challenge, call out and dismiss shoddy data, especially when it is used by allies opposed to overhauling how children are corrected in schools. Researchers such as Daniel Losen of the Civil Rights Project at UCLA, along with advocates on the ground, already do this. There’s no reason to let colleagues engage in patently dishonest data usage, especially when they chant the mantra of using high-quality data on other issues.


Max Eden’s Shoddy Anti-School Discipline Reform Punditry


Your editor usually doesn’t write immediate follow-ups on commentaries. But yesterday’s Dropout Nation takedown of the use of faulty data by Manhattan Institute pundit Max Eden and other opponents of reforming school discipline generated plenty of discussion, both on social media and in e-mails. Thanks to those discussions, the flaws in the studies used by Eden and his counterparts, most-notably Michael Petrilli of the Thomas B. Fordham Institute and Jason Riley of the Wall Street Journal, have been exposed.

As you would expect — and as has become his wont — Eden dodged the report and questions raised by other reformers and education policy scholars. Save for arguing that Oakland Unified School District, whose ban on suspensions for disruptive behavior and other minor infractions was mentioned in his piece, supposedly fell behind academically because of that effort, Eden offered little defense of either his US News & World Report op-ed or his overall arguments.

But while Eden said little, what he did say revealed even more sloppiness in his arguments and thinking. Which, given that he and other foes of school discipline reform are helping the Trump Administration and U.S. Secretary of Education Betsy DeVos justify their plans to ditch the federal government’s obligation to protect the civil rights of poor and minority children, is worrisome.

In the case of Oakland, Eden declared that research from Stanford University’s Sean Reardon, which showed that the district’s improvement in student achievement of 4.3 years over a five-year period trailed the overall state average, made the case for his conclusion. The problem? Reardon’s research, which focused solely on how districts improve academic progress for children from third grade through the end of middle school (as well as how poverty affects achievement), never looked at the impact of school discipline policy, or even the overuse of suspensions, on achievement. Put simply, there’s no way Eden can use Reardon’s data to reach or support his conclusions.

It gets worse. As it turns out, Eden probably meant to cite not Reardon’s study, but the one by Boston University grad student Dominic Zarecki on Los Angeles Unified School District’s implementation of a ban on suspensions for minor infractions, the white paper at the heart of Eden’s US News op-ed. That study mentions an analysis of Oakland Unified’s academic achievement after implementation of its school discipline reform effort, done to compare results with those of L.A. Unified. Zarecki notes that the analysis found Oakland Unified trailing the rest of the state in improving student achievement by the 2015-2016 school year, arguing that this proves his study’s declaration that suspension bans damage achievement.

But Zarecki also admits that “we cannot conduct a full difference-in-difference analysis for Oakland because we lack data to measure the change in academic growth”. Zarecki also concedes that Oakland would likely have “had a relatively low growth rate even without the suspension ban”, which, given its decades-long struggles on the education front, goes without saying. As Brian Stanley, executive director of the Oakland Education Fund, noted yesterday, the district “has had fairly low academic growth for a long time.” [Stanley, by the way, offers a rather insightful and data-driven account of Oakland’s school discipline reform efforts that opponents and supporters of school discipline reform should check out.]

This oversight could be excused if Zarecki had provided his analysis of Oakland Unified (which is likely based on two years of school-level data instead of at least four years of student-level data) in an appendix to the main study. He did not, which means there is no real way to understand how Zarecki reached this particular conclusion.

It isn’t shocking that Dominic Zarecki’s shoddy research is being championed by Max Eden and other foes of school discipline reform. That’s just what they do.

Of course, this is just one of the many flaws Dropout Nation and others have identified. Another is that Zarecki’s study focuses not on increases and decreases in actual achievement and out-of-school suspensions for minor infractions, but on differences in differences, essentially looking at growth over the short time frames being measured. The problem with difference-in-differences research designs is that they can inflate what would otherwise be minor increases and decreases in standard deviations during the periods measured, especially when measuring two-year periods instead of four years and beyond (which would tell more about the success or failure of any implementation or program).
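To see why a two-year window is risky, here is a minimal simulation sketch. This is not Zarecki’s actual model; the noise level, window sizes and trial counts are made-up assumptions. It simply shows that difference-in-differences estimates built from noisy school-level averages spread out far more over short windows, so a true null effect can masquerade as a meaningful gain or loss:

```python
import random

random.seed(0)

def did_spread(window_years, n_trials=2000, noise_sd=5.0):
    """Spread (std. dev.) of difference-in-differences estimates across
    simulated trials when the true treatment effect is zero."""
    estimates = []
    for _ in range(n_trials):
        # Each year's school-level average carries independent noise;
        # the estimate is the treated-minus-comparison average change.
        treated = sum(random.gauss(0, noise_sd) for _ in range(window_years)) / window_years
        comparison = sum(random.gauss(0, noise_sd) for _ in range(window_years)) / window_years
        estimates.append(treated - comparison)
    mean = sum(estimates) / len(estimates)
    return (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5

two_year = did_spread(window_years=2)
six_year = did_spread(window_years=6)
print(two_year > six_year)  # True: shorter windows yield noisier estimates
```

Nothing here mirrors the actual Los Angeles or Oakland data; the point is only that with two years of data, pure noise produces larger apparent “effects” than a longer panel would.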

Put simply, Zarecki’s study, already flawed because of its focus on school level data, lack of granularity and other issues, likely yielded inflated results. Zarecki himself admits this when he notes that the two additional analyses he used to check his work didn’t yield similar conclusions.

Given that Zarecki’s study is really more of a class paper that hasn’t been peer reviewed and probably hasn’t been looked over by his doctoral advisor, you can somewhat excuse those flaws. [The fact that his career has been in education research, including time as research director for the California Charter Schools Association, makes this excuse rather weak.] But Eden, a longtime education policy wonk who spent time working for Rick Hess at the American Enterprise Institute before landing at Manhattan Institute (and who still co-writes pieces with Hess on occasion), can’t justify why he ran with this shoddy work. If your editor can sniff out the weaknesses in Zarecki’s study, then Eden can do so, too.

The fact that Eden ran with Zarecki’s study and conclusions despite all of its flaws isn’t shocking. As his erroneous citation of Reardon’s study shows, Eden is sloppy, both in his research and in his thinking. This becomes even clearer when you look at his claim to fame, a report released last year by the Manhattan Institute on school climate throughout the city and the school discipline reform efforts undertaken by the New York City Department of Education under former Mayor Michael Bloomberg and his successor, Bill de Blasio.

In that report, Eden concludes that the school discipline reform efforts by Bloomberg, de Blasio and their respective chancellors have led to traditional district schools in the Big Apple becoming less safe for teachers and children. How? By comparing responses of teachers and children in the traditional district to peers in charters on the city’s annual school climate survey. As any researcher can immediately note, such surveys have little usefulness as objective evidence, because they are based on subjective opinions that can change based on who is working in classrooms, because survey designs can be flawed with leading questions yielding results favorable to the pollster, and because survey designs can change drastically from year to year. Eden himself admits this in the study when he notes that he could only measure results on five questions from the city’s school climate survey because the wording had been consistent over time.

What makes Eden’s results even less-reliable is that he didn’t simply measure the raw results from the surveys over the five-year period (2011-2012 to 2015-2016) being measured, which is the most-reliable way of analyzing what is already unreliable data. Instead, Eden cobbled together a “distribution-of-differences” analysis in which any change of 15 percentage points on each of the questions represented “a substantial shift” in attitudes on school safety for each school in the district. How did he arrive at 15 percentage points instead of, say, 20 or 10 or even five? Eden doesn’t explain. This gamesmanship, along with the lack of explanation, makes Eden’s analysis even less reliable than it already is.
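The sensitivity of that 15-point cutoff is easy to demonstrate. The sketch below uses entirely made-up per-school survey changes (simulated numbers, not New York City’s actual survey data) to show how the share of schools labeled as having “a substantial shift” swings with the analyst’s choice of threshold:

```python
import random

random.seed(1)

# Hypothetical year-over-year changes (in percentage points) on one survey
# question across 500 schools; simulated values, not NYC's data.
changes = [random.gauss(0, 10) for _ in range(500)]

for cutoff in (5, 10, 15, 20):
    share = sum(1 for c in changes if abs(c) >= cutoff) / len(changes)
    print(f"cutoff {cutoff:>2} pts: {share:.0%} of schools show a 'substantial shift'")
```

By construction, a lower cutoff always flags at least as many schools as a higher one, so the size of the headline finding is partly an artifact of where the line is drawn — which is exactly why an unexplained choice of 15 points matters.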

If Eden were being intellectually honest and simply compared the raw numbers themselves, he would have reached different conclusions. Between 2011-2012 and 2015-2016, the percentage of teachers citywide (including in charter schools) agreeing or strongly agreeing that “my school maintains order and discipline” remained unchanged at 80 percent. Exclude charter results from the survey, and the percentage of teachers within just the New York City district agreeing or strongly agreeing that “my school maintains order and discipline” increased from 77 percent to 78 percent over that period, according to a Dropout Nation analysis of the city’s survey data. This happened even as the number of out-of-school suspensions meted out by principals in district schools declined.

Even when using subjective data, Eden’s arguments don’t stand up to scrutiny, a point made by Daniel Losen of the Civil Rights Project at UCLA during testimony at a December hearing held by the U.S. Commission on Civil Rights at which Eden also testified. They don’t even stand up to the brief on the overuse of suspensions in Big Apple schools released today by the Center for American Progress, which uses objective data to look at the number of days children lose when they are kept out of school.

Again, this isn’t a surprise. In a report on school safety released last October, Eden reached the conclusion that New York City’s charter schools were “safer” than their traditional district counterparts not by comparing raw data from the Big Apple’s school climate survey or even by using more-objective data such as incident reports over a period of several years. Instead, he cobbled together an index that gave scores to each of the questions on the survey, then crafted a secondary index in which a charter that scored five or more percentage points higher on the first index than a traditional district school would be rated higher. This approach to analysis is amateur hour at its worst.

The thing is that Eden’s shoddy work product could easily be ignored if not for the fact that he, along with Fordham’s Petrilli, is a leader in the effort to convince the Trump Administration and DeVos to reverse the Obama Administration-era Dear Colleague guidance pushing districts to end the overuse of suspensions and other forms of harsh school discipline against poor and minority children. The four-year-old guidance, a keystone of federal efforts to spur school discipline reform, has long been the bête noire of so-called conservative reformers everywhere.

Because Eden, along with Petrilli and even Riley’s Wall Street Journal, likely has the ear of DeVos’ appointees (including Kenneth Marcus, the former George W. Bush appointee who will likely end up overseeing the agency’s Office for Civil Rights), the shoddiness of his data and that of his allies matters even more now than ever. Bad policy backed by slipshod data equals damage to children, especially those from Black, Latino, and American Indian and Alaska Native households most-likely to be suspended, expelled and sent to juvenile justice systems (the school-to-prison pipeline) as a result of districts and other school operators overusing the most-punitive of school discipline.

Which is why shoddy polemicism by the likes of Eden and other opponents of school discipline reform deserves to be exposed and denigrated. School reformers know better than to use bad studies to champion worse policies.


Max Eden (and other School Discipline Reform Foes) Use Bad Data


There are some amazing things about the internecine battle within the school reform movement over efforts to end the overuse of out-of-school suspensions and other forms of harsh traditional school discipline, and the effort by so-called conservative reformers to overturn the U.S. Department of Education’s Obama-era guidance to districts on school discipline reform. One is the unwillingness of opponents of school discipline reform, especially Michael Petrilli of the Thomas B. Fordham Institute and Max Eden of the Manhattan Institute, to actually engage the three decades of high-quality research showing that far too many children, especially Black and American Indian kids, are suspended far too often. The other? That those very opponents rely on low-quality research that doesn’t actually support their defense of such practices, often while ignoring the volumes of evidence standing against them.

These two matters become especially clear this morning in an op-ed by Eden in US News & World Report declaring that reducing the overuse of suspensions — especially restrictions on using suspensions for minor infractions such as disruptive behavior that can be addressed through other means — is somehow causing “substantial academic damage” to children in classrooms. Primarily citing a study by Boston University graduate student Dominic Zarecki on Los Angeles Unified School District’s move five years ago to stop suspending children for acting out in class, Eden argues that “suspension bans hurt kids”, hinder the efforts of teachers to manage their classrooms and lead to lower student achievement.

Yet contrary to Eden’s assertions, the study itself doesn’t offer much in the way of hard conclusions. One reason? The study doesn’t use student-level academic data. As conceded by Zarecki (who, for some odd reason, goes unnamed by Eden in his op-ed), the study is based on school-level data that doesn’t follow an actual cohort of L.A. Unified students over a period of time. The other problem? It doesn’t track impact over a period longer than two years. This is especially problematic given that the long-term effects of a reform can take years (including adjustments in implementation such as improved teacher training) to manifest. Since the study doesn’t actually look at student performance over time, or even account for matters such as student migration, it “lacks the data granularity” needed to look at how reducing suspensions affects individual students or even particular groups, much less offer any conclusions worth considering. Even Zarecki concedes that, based on additional analysis, L.A. Unified’s ban “may have had no causal effect” on achievement.
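The granularity problem is easy to see with a toy example. In the sketch below (entirely made-up numbers, not L.A. Unified’s data), a school-level average stays flat between two years even though one subgroup gains ground and another loses it — exactly the kind of movement that only student-level data would catch:

```python
# Hypothetical test scores for two subgroups in one school, two years apart.
year1 = {"group_a": [80, 82, 84], "group_b": [60, 62, 64]}
year2 = {"group_a": [88, 90, 92], "group_b": [52, 54, 56]}

def school_average(year):
    """School-level average: all students pooled, subgroups invisible."""
    scores = [s for group in year.values() for s in group]
    return sum(scores) / len(scores)

def subgroup_change(group):
    """Student-level view: average change for one subgroup."""
    return sum(year2[group]) / 3 - sum(year1[group]) / 3

print(school_average(year1), school_average(year2))  # 72.0 72.0 (looks unchanged)
print(subgroup_change("group_a"), subgroup_change("group_b"))  # 8.0 -8.0
```

A study built only on the pooled averages would report “no effect” here, while the subgroup-level view shows two opposite-signed shifts of eight points each.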

Certainly a study using longitudinal student-level data would be hard to do, in part because of the efforts by California Gov. Jerry Brown to kibosh more-robust school data systems. But it wouldn’t be impossible. After all, the Los Angeles Times did exactly that in 2010 with its value-added analysis of teacher performance within the district, gaining access to the data through a Freedom of Information request to the school system. Researchers tend to have an even easier time obtaining data, especially since they are willing to safeguard privacy and, in many cases, even withhold the name of the district itself (though there are often enough details to figure out which school operator was the subject). Zarecki, who also works for California-based charter school operator Fortune Schools, could easily have gotten in touch with L.A. Unified’s data department had he chosen to do so. There is no justifiable reason why the data couldn’t have been obtained for this study.

Put simply, this study is of low quality, especially when compared to the research on school discipline conducted over the past decade alone. This includes the 2012 study conducted by a team led by Johns Hopkins University scholar Robert Balfanz, which used eight years of student-level longitudinal data to determine that overuse of out-of-school suspensions in ninth grade was positively correlated with the likelihood of dropping out of high school, as well as Balfanz’s renowned 2007 study with Lisa Herzog of the Philadelphia Education Fund on developing early warning systems (which also used eight years of student data, this time from the City of Brotherly Love’s traditional district) that reached the same conclusions.

This lack of quality, along with the short time spans being measured, is a problem shared by other studies promoted by opponents of reforming school discipline. Take the study released last month by Petrilli’s Thomas B. Fordham Institute on Philadelphia’s school discipline reform efforts, which Eden also cites in his piece. The study’s main conclusions, including the assertion that reducing suspensions for non-violent offenses has little effect on achievement, are based on two years of school-level data that doesn’t actually measure how the reforms affect student achievement. [It also doesn’t take into consideration how well individual schools implemented the reforms, a matter discussed by the University of Pennsylvania’s Consortium for Policy Research in Education in a similar study also released last month.] That the study uses school-level data instead of student-level data also means its conclusions have little value.

Manhattan Institute’s Max Eden, along with other opponents of school discipline reform, has a tendency to misuse and overstate data.

In fact, the only useful study the Petrilli-Eden crowd has at its disposal is one conducted last year by a University of Arkansas team led by Gary Ritter. The study, which is based on six years of student-level data, concludes that out-of-school suspensions on their own don’t have a negative impact on student achievement and may lead to “slight” improvement in standardized test performance. But even the Ritter study is of little use to them. One reason: Because the study doesn’t look at the impact of any particular school discipline reform (it merely looks at the possible impact of suspensions on achievement), it isn’t useful in any argument against those efforts. Another is the fact that the study doesn’t measure the impact of suspensions based on the number of days kids are kept out of school; in Arkansas, a suspension of more than 10 days is considered an expulsion, which means that thousands of children, and their achievement data, have likely been excluded from the study, a limitation conceded by Ritter and his team. [Others have expressed their own concerns about the study.]

Meanwhile, Ritter and his team honestly concede that decades of research show that overuse of suspensions damages children when you look at graduation rates and other data. In fact, they concede that school leaders and policymakers can have justifiable reasons for reforming school discipline. Ritter himself has publicly stated that his study doesn’t argue for halting school discipline reforms and shouldn’t be used as justification for ending the Obama Administration’s guidance, the bête noire of the anti-school discipline reform crowd.

Despite these caveats, opponents of school discipline reform have insisted on using the study to bolster their case. Eden, in particular, mentioned the Ritter study as a supporting example last month in his testimony to the U.S. Commission on Civil Rights during one of its hearings. But this isn’t shocking. Eden also mentioned a 2014 study by Russell Skiba of Indiana University, the leading scholar on school discipline reform, to support his argument that racial bias wasn’t a factor in why Black, Latino, and American Indian children were suspended at far higher levels than White peers. Eden did this even though Skiba’s study actually focused on student misbehavior and concluded that minority children weren’t worse-behaved than White counterparts, and therefore, didn’t explain why those kids were suspended at higher rates than White children in the first place.

But again, Eden’s seemingly deliberate sloppiness in handling data and evidence, along with that of his allies, is not shocking at all. Eden was called out by Daniel Losen of the Civil Rights Project at UCLA during the Commission on Civil Rights’ hearing for making arguments not borne out by his own data. Meanwhile, Fordham and Petrilli, who work alongside Eden in opposing school discipline reform efforts, have been called out several times by Dropout Nation and other researchers for reaching conclusions unsupported by data. This includes misusing data from NWEA to claim in a 2011 op-ed that focusing on achievement gaps harmed high-achieving students (as well as in a study published months earlier that attempted to do the same).

What does become clear is that Eden, Petrilli and company do all they can to dance around what decades of data have proven beyond dispute: That far too many kids are suspended and expelled from school. That those practices do little to improve student achievement, enhance school cultures, or make kids safer. That children from minority households are more likely to be suspended, expelled, arrested and even sent to juvenile justice systems than White peers, even when they are referred to dean’s offices for the same infractions. That suspensions are also far more-likely to be meted out over minor matters such as disruptive behavior and attendance than for violent behavior and drug activity. That soft and hard bigotries among White teachers toward poor and minority children are underlying reasons why those kids end up being suspended more-often than White counterparts. And that teachers and school leaders often use suspensions and expulsions to let themselves off the hook for failing to address the illiteracy that is usually at the heart of child misbehavior.

Given all the facts, it becomes clear that Eden, Petrilli and their allies have little interest in dealing honestly with data and evidence on the damage of overusing harsh school discipline. Which makes them untrustworthy when it comes to the mission of the school reform movement to help all children succeed in school and in life.

Featured photo courtesy of the New York Times.


The Conversation: Daniel Losen on Reforming School Discipline


On this edition of The Conversation, Daniel Losen of the Civil Rights Project at UCLA discusses his testimony to the U.S. Commission on Civil Rights on school discipline reform, challenges the claims of Max Eden and others opposed to the federal guidance on addressing disparities, surmises why opponents of ending overuse of suspensions and other harsh discipline are unwilling to engage three decades of data proving the need for overhaul, and what districts must do to transform school climates for the better.

Listen to the Podcast at RiShawn Biddle Radio or download directly to your mobile or desktop device. Also, subscribe to The Conversation podcast series and the overall Dropout Nation Podcast series. You can also embed this podcast on your site. It is also available on iTunes, Blubrry, Google Play, Stitcher, and PodBean.

When Accountability Isn’t

There is little evidence that states will do a better job of holding districts and other school operators accountable under the Every Student Succeeds Act than they did under the Adequate Yearly Progress provision of the No Child Left Behind Act. If anything, based on what we are learning so far, states are more likely than ever to let districts perpetuate harm to poor and minority children. And despite what some reformers want to say, there is no way to sugar-coat this reality.

No one can blame you for thinking otherwise if you only pay attention to the Thomas B. Fordham Institute’s analysis of state rating systems proposed in ESSA implementation plans released this week. From where it sits, seven states (Arizona, Arkansas, Colorado, Georgia, Illinois, Oklahoma, and Washington) will implement rating systems that clearly label how well districts and schools are performing, require a “focus on all students” by looking at test score growth data instead of proficiency levels, and, through growth measures, fairly assess how districts and schools are improving achievement regardless of the children they serve.

Two-thirds of the states reviewed clearly label district and school performance to Fordham’s satisfaction, and 37 states focus on student growth instead of just on improvements in student proficiency, ensuring to the think tank’s satisfaction that the “high-achieving students” it cares most about are being served. Declares Fordham: “states, by and large, seized the ESSA opportunity to make their school accountability systems clearer and fairer.”

Your editor isn’t exactly shocked by Fordham’s happy talk. After all, the conservative think tank long opposed Adequate Yearly Progress because it focused states on improving achievement for the 64 percent of children (many of them poor and minority) who are poorly served by American public education. This despite ample evidence that focusing on achievement gaps helps all children — including high performers — succeed academically. So it isn’t a shock that Fordham favors accountability systems that focus less on how well school operators are helping the most-vulnerable. Put simply, Fordham continues to embrace neo-eugenicist thinking long proven fallacious (as well as immoral), thinking that fails to acknowledge that American public education’s legacy practices are not worth preserving.

The flawed thinking is more than enough to render Fordham’s analysis suspect. But there are other problems that make the analysis all but useless.

For poor and minority children, strong accountability tied to consequences and clear, high-quality data matters a lot.

There’s the fact that the rating systems may not actually be as “clear” in identifying school and district performance as Fordham wants to think. This is because the think tank didn’t fully look at how the underlying formulas for measuring achievement will actually play out.

Consider Maryland, the home state of Dropout Nation (as well as of Fordham President Michael Petrilli, his predecessor, Chester Finn, Jr., who now sits on the state board of education there, and former colleague Andy Smarick, who is president of that body). Fordham rates the Old Line State’s proposed rating system “strong” for being simple and clear, with a five-star system whose “model immediately conveys to all observers how well a given school is performing”.

But as Daria Hall of the Education Trust noted at a conference last month, a district or school in the state can still receive a five-star rating under the state’s ESSA plan despite doing poorly in improving achievement for the Black or Latino children under its care. One reason: Neither proficiency nor test score growth counts toward more than 25 percent of a district’s rating, effectively hiding how districts are actually improving student achievement. Another lies in the fact that while the state will measure all subgroups, it doesn’t explain how it will account for each within the ratings.

Then there’s Maryland’s Plessy v. Ferguson-like proficiency and growth targets, which essentially allow districts not to work toward 100 percent proficiency for all children. The state only expects districts to improve Black student achievement from 23.9 percent in 2015-2016 to 61.9 percent by 2029-2030 (versus 52.9 percent to 76.5 percent over that period for White peers). This means that districts are allowed to subject Black and other minority children to the soft bigotry of low expectations. Add in the fact that Maryland’s ratings don’t account for how districts and schools are preparing kids for success in the traditional colleges, technical schools and apprenticeships that make up American higher education, and the rating system is not nearly as clear as Fordham declares.

This lack of clarity isn’t just a Maryland problem. As Bellwether Education Partners notes in its review of state ESSA plans, the addition of multiple measures of district and school performance (including chronic absenteeism indexes that aren’t broken down by subgroup) means that the rating systems will likely be a muddle that ends up hiding how well or poorly school operators are serving children. This muddle is likely the reason why only Tennessee and Louisiana were able to provide data showing how their ratings would identify failure mills, as well as improvements in student achievement for poor and minority children, in real time.

Another problem: Many states are using super-subgroups (now called supergroups under ESSA), a legacy of the Obama Administration’s shoddy No Child waiver gambit, which essentially lump all poor and minority children into one category. Because super-subgroups lump children of different backgrounds into one category, the practice hides a district’s failure to help the worst-served children succeed and thus allows it to not address its failures. Put simply, a state rating system can be simple and clear and yet still not tell the truth about how districts and schools are serving every child in their classrooms.

Accountability is more than just a school rating system. Consequences must be tied together with data and standards for children, families, and taxpayers to be served properly. [Image courtesy of the Education Trust.]

One such state is Florida, whose school rating system relies on super-subgroups instead of thoroughly accounting for Black, Latino and other poor and minority children. Essentially, without accounting for either proficiency or growth for each group, the ratings will not fully inform anyone about how well districts are serving children.

The deliberate decision to ignore how districts and schools serve the most-vulnerable (along with the Sunshine State’s request to not use test data from its exams for English Language Learners in accountability) has led the Leadership Conference on Civil and Human Rights, along with a group that includes EdTrust, the NAACP Legal Defense Fund, and UnidosUS, to ask U.S. Secretary of Education Betsy DeVos to reject the entire proposal. By the way: Fordham ranked Florida’s school rating system as “strong” in two out of three categories it analyzed.

But the biggest problem with Fordham’s analysis is that it continues to embrace a flawed theory of action: That mere transparency suffices as a tool for accountability and for holding school operators (and, ultimately, states) responsible for fulfilling their obligation to help children succeed.

This approach, which Fordham first embraced during the implementation of Common Core reading and math standards, is based on the idea that only high-quality data on district, school, and even teacher performance is needed for policymakers and others within states to hold bad actors accountable. Essentially, there will be no need for the federal government to force states to fulfill their responsibilities to children, as it did through No Child’s AYP provision.

But as seen with the failed effort to implement Common Core-aligned tests produced by the PARCC and Smarter Balanced coalitions, transparency-as-accountability only works if the mechanics are in place. School rating systems aren’t useful if the underlying data doesn’t reflect what is actually happening in schools. This will clearly be a problem in Maryland and Florida, and will be just as problematic in other states. California, for example, was dinged by Bellwether in its recent round of reviews for failing to longitudinally measure student achievement, a better way to account for changes in school populations over time. [This, in turn, is a result of Gov. Jerry Brown’s moves over his tenure to sabotage the state’s school data system.]

School rating systems and other forms of transparency are also insufficient for spurring accountability if there aren’t consequences for continuous failure to make the grade. Accountability, as Sandy Kress, the mastermind behind No Child, points out, is a three-pronged approach that includes consequences as well as high-quality standards on which school ratings (and the measuring of improvements in student achievement) are to be based. Few states have explained in their ESSA plans how they would force districts and other school operators to overhaul their schools or shut them down altogether and let children go to high-quality charter and district options.

The high cost of the rollback of accountability will be felt by the next generation of children — and even harm the beneficiaries of No Child’s now-abolished Adequate Yearly Progress regime who are now in our high schools.

Few states are going beyond the federal requirement to identify the lowest-performing five percent of schools. Louisiana is one exception: it plans to identify (and force the overhaul of) the 17 percent of its schools that are failure mills, while New Mexico requires districts to use an array of approaches to turn around low-performing schools. California, on the other hand, hasn’t even submitted a plan on how it will identify failure mills, much less hold them accountable. [It supposedly plans to do so by January.]

It gets even worse when it comes to how states will ensure that districts provide poor and minority children with high-quality teachers. As the National Council on Teacher Quality details in a series of reports released Tuesday, just seven states offer timelines for how they will improve the quality of teaching for Black, Latino, English Language Learner and other vulnerable children, as well as the rates by which they will improve teacher quality for them. Given that teacher quality isn’t even a measure in any of the proposed school rating systems, states have missed an important opportunity to bring transparency and consequences to their public school systems.

Given that so few states are being concrete about how they will help kids stuck in failure mills succeed, the school ratings will be little more than some stars and letters on computer screens.

Two decades of research have proven that accountability works best when there are real, hard consequences for districts and schools failing to improve student achievement. No Child’s Adequate Yearly Progress provision, which worked alongside accountability systems states either already had in place or developed after the provision was enacted, spurred improvements in student achievement that have led to 172,078 fewer fourth-graders being illiterate in 2015 than in 2002, the year No Child became law.

Yet what ESSA has wrought so far are school rating systems that are likely to do little on behalf of children who deserve better. The benefits of clear data tied to real consequences have now been lost. Accountability that amounts to toothless transparency will not help all of our children succeed in school and in life. There is no good news to be had. None at all.

Featured illustration courtesy of St. Louis Public Radio.

Ravitch is a Reflection of Traditionalists

Once-respectable education historian Diane Ravitch long ago proved that she’ll plumb any depths of intellectual charlatanism and moral demagoguery — even to the point of engaging in blatant race-baiting and politicizing tragedy. So it isn’t shocking to your editor that Ravitch attempted to denigrate the views of former CNN anchor-turned-school reform advocate Campbell Brown in an interview with the Washington Post by claiming that her efforts to end near-lifetime job security for laggard teachers and overhaul teacher dismissal laws aren’t worth considering (and are, in fact, “illogical”) because she is “telegenic” and “pretty”. Ravitch already engaged in racism back in May, when she wrote that 50CAN honcho and new-era civil rights activist Derrell Bradford should go into “sports or finance or broadcasting”; her sexist remarks against Brown are just another example of her despicable shamelessness.

Your editor doesn’t need to defend Brown. For one, she’s proven more than capable of going toe-to-toe with the likes of Ravitch, and Jonathan Chait of New York has already gone to bat for her. There’s also the fact that Ravitch just doesn’t deserve to be taken seriously. Her racial myopia and racialism (along with her dilettantism) have been apparent since she dismissed Black families in Ocean Hill-Brownsville attempting to become lead decision-makers in the Big Apple’s traditional district in 1972’s The Great School Wars: A History of New York City Schools. So I expect nothing less from the likes of her.

What will be interesting is the reaction from hardcore progressive traditionalists — who as much proclaim themselves to be feminists as they call themselves opponents of racialism — to Ravitch’s latest remarks. If the past is any guide, it is more than likely that traditionalists will not only not call Ravitch on the carpet for her remarks, they will even defend them because Brown is one of those so-called corporate education reformers who are threatening their ideology and finances.

After all, they defended Ravitch after reformers such as Michael Petrilli of the Thomas B. Fordham Institute called her out for her racialist remarks against Bradford. A year earlier, they defended another demagogue within their camp, American Federation of Teachers honcho-turned-Albert Shanker Institute boss Leo Casey, after he raised the specter of antisemitism against Brown by accusing her of committing a “blood libel” against teachers for calling out the union and its Big Apple affiliate for defending criminally abusive instructors. And they rallied around both Ravitch and Karen Lewis, the president of the AFT’s Chicago Teachers Union, after both politicized the massacre of 26 teachers and children in Newtown, Conn., as part of their attempt to smear reformers.

So we shouldn’t expect anything less than a broad defense of Ravitch this time around. In fact, you can already see it in the responses to Chait’s critique of her demagoguery. Which proves this reality: Progressive education traditionalists like to claim to be foes of racialism and other social ills — until their own allies commit such nastiness against those whom they oppose. When their allies behave badly, progressive traditionalists do everything they can to defend them, even when they should be shaming them and demanding that they apologize. As far as this band of traditionalists is concerned, bigotry and sexism are okay so long as they are committed against what they think are the right kind of people.

Simply put, Ravitch’s sleaziness is a reflection of the rather demagogic worldviews of progressives within traditionalist ranks and, in some ways, traditionalists in general, especially when it comes to dealing with minorities and women who dare oppose their failed policies and practices that have harmed kids for decades. Particularly when it comes to blacks, progressive traditionalists only oppose bigotry against them when they follow in lockstep with their ideology. But this shouldn’t be shocking. For all the proclamations from the Ravitch crowd that they care about children — especially those from poor and minority backgrounds — they continuously defend a system that harms them by perpetuating the legacies of Jim Crow segregation, nativism, and religious bigotry. Which makes all of them anything and everything but progressive.
