
One of the more interesting aspects of yesterday’s release of the MetLife survey is how traditionalists and reformers alike try to read into the answers given on the teacher section of the survey and apply them to whatever suits their interest, in spite of the limitations of the data the survey gathered.

Thomas B. Fordham Institute education czar Mike Petrilli, in particular, attempted to play what is now his sensible-reformer-who-sympathizes-with-traditionalists schtick by declaring that reformers “shouldn’t try to explain away the precipitous drop in teacher job satisfaction” and should be more cautious in embarking on teacher evaluation reforms that may upset traditionalists. The problem with Petrilli’s argument, besides the fact that the MetLife survey doesn’t provide much evidence of what is causing dissatisfaction in the first place (or who is really dissatisfied, the point of yesterday’s commentary), is that the long-term trend in the percentage of teachers participating in the survey who say they are “very satisfied” with their jobs has been a yo-yo. The 39 percent of participants saying they were “very satisfied” is only a percentage point lower than the level in 1984, the year MetLife’s pollsters began asking that question; between 1984 and 2012, the percentage of very satisfied teachers has swung widely, from a low of 33 percent in 1986 (when teacher quality reforms had yet to go beyond certification efforts) to an all-time high of 62 percent in 2008 (five years after the No Child Left Behind Act, a law much-hated among traditionalists in the teaching ranks, had been in full implementation). Like so many traditionalists reacting to the MetLife survey, Petrilli is attempting to read something into the results that cannot be sustained by the data itself.

Let’s note that the MetLife survey doesn’t purport to be anything more than what it is: a survey. But Petrilli’s attempt to draw conclusions that the results can’t substantiate is a common occurrence in education. One of the biggest problems in American public education is the tendency to offer conclusions that the data can’t support. Sometimes the data is solid and valuable, yet it cannot bear the conclusions researchers and pundits attempt to draw from it. Other times, as with the MetLife survey (which actually did a good job of culling the views of school leaders), the data may be shoddy and inconclusive in one area, and yet traditionalists and reformers alike attempt to come up with some takeaway that cannot be had. Then there are times when researchers fail to offer much-needed historical context that would reshape their conclusions. No matter the approach, the results end up reinforcing the perception of those outside of education that research in the sector is more, umm, art than actual science.

One example of this problem emerged last month with the Bill & Melinda Gates Foundation’s continuing efforts to spin results from its Measures of Effective Teaching project as supporting the so-called multiple measures approach to teacher evaluations — featuring classroom observations, Value-Added analysis of student test score growth data, and student surveys — that it is touting. As both University of Arkansas researcher Jay P. Greene and I have continually pointed out, the Gates Foundation’s own data actually proves without question that classroom observations are so unreliable in evaluating teaching performance that they actually bring down the reliability of an overall evaluation. This fact (along with the move by the U.S. Department of Education’s What Works Clearinghouse, which plays a key role in setting the gold standard for education research, to not bother evaluating the study’s results) does little more than perpetuate a perception that the Gates Foundation engages in politically-driven research. Which, in turn, makes it harder for reformers to overhaul teacher performance management.

Another is the Century Foundation report, Housing Policy is School Policy, which has been used by Century’s resident integrationist, Richard Kahlenberg, to tout public housing policies such as those implemented in Montgomery County, Md., and to argue against systemic reforms he opposes, such as the expansion of charters. Certainly the research, conducted by Rand Corp. researcher Heather Schwartz, is solid, and the methodology stands up to scrutiny. But as I noted back when the study was released, Kahlenberg (and even Schwartz herself) conveniently ignored the actual data from the report, which showed that simply moving poor kids out of schools with other poor kids and into schools where middle-class counterparts are the majority isn’t all that effective. Admits Schwartz: “the academic returns from economic integration diminish as school poverty levels rose.”

Simply put, Schwartz (along with Kahlenberg) had already staked out a position in spite of what the research had shown. If this weren’t a research piece, that would be fine; after all, Kahlenberg and Schwartz have a right to their conclusions. But as in the case of the Gates Foundation, it is rather curious to argue for conclusions that your own data won’t support.

Then there is what I call the Year Zero Error, which happens when researchers choose a period of time to measure the progress or regress of a solution without considering the impact of earlier decades, thus shaping their conclusions in ways that may not square with all the facts. This crops up in a report on the possible impact of charter schools on private and Catholic school enrollment written for the Cato Institute by Rand Corp. researcher Richard Buddin, which focuses on enrollment growth and decline between 2000 and 2008. Even as Buddin’s study insinuated that charter schools were pulling enrollment away from private and Catholic schools — and Cato’s education policy team ran with the data to argue against a form of school choice they do not prefer — Buddin failed to fully note that Catholic and other private schools have long experienced enrollment declines. Between 1960 and 1990 alone (two years before the nation’s first charter school was started), Catholic school enrollment declined by 52 percent and more than 4,000 schools were shuttered, according to the National Catholic Education Association. A myriad of culprits — from the flight of middle-class Catholics from urban areas to suburbia, to the high cost of running schools without low-cost labor in the form of nuns and priests, to the perception among devout Catholics that diocesan schools weren’t sufficiently doctrinaire for them — contributed to that longstanding secular decline. Meanwhile, private school enrollment overall barely budged between 1959-1960 and 1989-1990, even as the nation’s student population increased by 34 percent over that same period.

Certainly Buddin’s data is solid. It is also true that charters have proven to be competition for Catholic and other private schools, which have long seen enrollment declines. [By the Way: The need to build up a strong market of high-quality educational options is why expanding choice, including vouchers and tax credit programs, is important to do.] But because he failed to provide much-needed historical context beyond a throwaway sentence, the study offered conclusions on the impact of charters on private school enrollment that are more than a bit misleading.

One should keep all three examples in mind whenever one reads any research coming from either side of the battle over the reform of American public education. Because bad conclusions from data can be even worse than faulty data itself.