Photo courtesy of Colorado State University

The End of Ed Schools — and Teacher Credentialing, Part II: There are numerous reasons why far too many low-quality teachers end up in classrooms perpetuating educational neglect and malpractice on our children. One is that the nation’s university schools of education do such a shoddy job of recruiting and training aspiring teachers. Another, equally important, reason is that the battery of exams (including the Praxis tests administered by the Educational Testing Service and exams offered by the Teacher Performance Assessment Consortium, which includes the American Association of Colleges of Teacher Education) used in credentialing teachers (and, in some states, even in deciding whether an aspiring teacher can be admitted into an ed school) does little to separate laggard teachers from their high-quality counterparts. This is a considerable problem because Praxis and other exams are usually the only gates available for determining teacher competence; once teachers pass the exams, they land in the classroom, unlikely to leave the profession unless a district is aggressive about weeding out laggards through evaluations based on objective student test score growth data. Just as importantly, the exams end up being a drudgery for teachers, who often must take multiple exams if they instruct in more than one subject. An instructor in Indiana, for example, may have to take four or more exams, depending on the subjects they are teaching and the setting in which they work; a teacher will take even more tests just to gain National Board recognition (and the additional dollars in salary that come with it).

The fact that Praxis and other exams merely assess whether teachers have the minimum level of knowledge needed to instruct in a particular subject is part of the problem; after all, you want to know that a teacher can help kids improve their achievement and will care for every child in the classroom regardless of background, along with knowing that they have some level of subject-matter competency. But as the U.S. Department of Education highlighted last week in its report on the quality of teacher training and certification, another culprit is that state teacher certification agencies blunt the tests’ ability to assess subject-matter competency by setting cut scores lower than necessary. As a result, pass rates for aspiring teachers on the exams are likely higher than they should be.

Alabama, for example, set a cut score of 137 points (or a mere 68.5 percent of the 200 total points) for aspiring teachers to pass the Praxis II exam called Elementary Education Content Knowledge in 2009-2010. That is 29 points lower than the national mean passing score of 166 (or 83 percent of total points) achieved by those who successfully completed the exam. [The average scaled score for those taking the exam was 163 points, or three points below that national mean.] No wonder 98.5 percent of aspiring teachers passed the exam in 2009-2010. In Tennessee, the cut score for the same exam is even lower: aspiring teachers need just 140 points (or 70 percent of total points) to pass, 26 points below the national mean score of successful test-takers. [The average aspiring teacher in the Volunteer State scored 164 points on the test, or two points below the national mean for those passing the exam.] All but three-tenths of one percent of the 1,785 teachers taking the exam passed it. Meanwhile, Rhode Island set a higher passing score for this Praxis II exam, demanding that aspiring teachers score at least 148 points (or 74 percent of total points) on the test. Even so, that is still 18 points lower than the national mean passing score achieved by those who passed the test; on average, teachers in the state taking the exam scored only 158 points on it. Some 99.3 percent of Rhode Island teachers taking the test passed it.

Not one state using the Praxis II exam on elementary education content knowledge set a cut score above 150 points. Only half of the 12 states that used this particular Praxis II exam had average scaled scores at or above the national mean for aspiring teachers who passed the test. Certainly one has to be cautious in reading the results, as Education Week’s Steve Sawchuk noted yesterday in his report. The fact that aspiring teachers can take the test multiple times raises questions as to whether their scores are dragging down the average scaled scores; federal officials noted in one report that the mean score for those who failed the test was 34 points lower than the mean for those who passed. There is also the reality that Praxis doesn’t really do the job of assessing whether an aspiring teacher can hack it in a classroom. In any case, it should be a concern that the cut scores for this Praxis II test and other exams are so low, especially given that many teacher licensing tests include relatively easy questions such as figuring out the percentage of ninth-graders on a school bus.

Of course, one can argue that teacher credentialing is a waste anyway. As studies have shown, including one on Florida teachers released two years ago by the Manhattan Institute, there is no correlation between credentials (including certification and graduate degrees) and student achievement; in fact, credentials and experience have been found to account for only three-to-five percent of student performance, making the credentialing process all but meaningless. This fact is one reason why American Federation of Teachers President Randi Weingarten’s constant (and disingenuous) call for submitting teachers to exams similar to those given to law school graduates seeking admission to the bar has fallen flat; almost no one believes that teachers should be subjected to yet another test that is unlikely to provide information on their levels of competence. A better way to select aspiring teachers is to combine several approaches: choosing from among those with the highest scores on the Scholastic Aptitude Test, using the approach pioneered by legendary teacher training guru Martin Haberman of placing aspiring teachers in classrooms with students from backgrounds different from their own, and embracing Teach For America’s emphasis on selecting candidates with entrepreneurial self-starter instincts and leadership ability. [Efforts on the recruiting and training front, by the way, would also ease the burden on districts and other school operators, who could then focus on weeding out the few laggards who slip in and on supporting the high-quality teachers on their payrolls.]

This work is what the nation’s ed schools should be doing. But they are not. In fact, ed schools are failing when it comes to training teachers, especially in reading instruction. Expecting them to do a better job of selecting teacher candidates is akin to wishing upon stars. Growing evidence suggests that ed schools aren’t tightening standards because doing so would mean sacrificing revenue; and given that grade point averages for ed school students run higher than those in economics and other courses of study, ed schools may be mere way stations for collegians who have figured out that getting a teaching degree is such easy work that they can simply go in, grab the degree, and then head into another field.

That problem could be mitigated if states did a better job of identifying and shutting down the worst of the abysmal lot of ed schools out there. But that’s not happening. Thirty-five states have never identified an ed school as either low-performing or at risk of being shut down; among the states that have, a mere 38 ed schools were identified in 2009-2010, and another 28 a year earlier. Florida, New York, and South Carolina have been the most aggressive in identifying failing ed schools: Florida identified at least one low-performing or at-risk ed school every year from 2003-2004 to 2010-2011, while New York and South Carolina each identified at least one laggard ed school in seven years between 2002-2003 and 2010-2011. This shouldn’t be a surprise. Because ed schools in most states are supervised by teacher certification agencies separate from education departments, they are not well-scrutinized or well-regulated; the fact that the certification agencies themselves are also stuck in an old-school mindset (and, until recently, were barred in nearly all states from even allowing the use of value-added data in certification) compounds the problem. Nor has the federal government done a good job of holding states responsible. Because ed schools are governed by federal law under the Higher Education Act (which deals solely with universities) rather than under the No Child Left Behind Act (which deals with teacher quality and its impact on American public education), they escape much-needed scrutiny and accountability.

It is really hard for ed schools to continue justifying their existence. The same goes for state teacher credentialing agencies. We need to move to a system of teacher recruiting and training that focuses on performance instead of paper.

When Hess and Merrow Behave Badly: As you know, Dropout Nation celebrates sparring, especially among reformers over solutions for the nation’s education crisis. After all, healthy conflict is good for the movement. Yet this publication is none too thrilled when conflict borders on the juvenile. Such antics do nothing to shed light on the substantive issues at the heart of these discussions.

The latest example of this came earlier this week when Rick Hess, the education policy czar for the American Enterprise Institute, proclaimed that Learning Matters’ John Merrow’s latest reporting on allegations of test-cheating in D.C. Public Schools under the watch of Michelle Rhee — along with his earlier, more-positive reporting on Rhee’s work — exemplified a tendency toward “wheeling so hormonally from one extreme to another.” Merrow, in turn, took to Hess’ Education Week column to insinuate that Hess was just snippy because Merrow had mentioned the supposedly “hidden support” Hess and AEI have received from the Bill & Melinda Gates Foundation for the think tank’s education policy shop. Wrote Merrow: “Neither my company nor I have ever hidden our sources of support, nor do I ever expect to do so.”

Both men should be ashamed of themselves. Certainly one can argue honestly and fairly that Merrow ruined what was otherwise a fine example of investigative reporting (and, in the process, became a partisan in the ongoing controversy) with his editorializing against the use of objective student test score data in teacher evaluations and about the success of the reform efforts undertaken during Rhee’s tenure and that of her successor, Kaya Henderson. But in proclaiming that Merrow is merely behaving like a “dumped sophomore,” Hess did little more than engage in the kind of grade-school nastiness unbefitting his stature in the education policy arena.

At the same time, Merrow’s declaration that Hess’ criticisms merely stemmed from anger over information about AEI’s funding sources is rather ridiculous. The Gates Foundation has long maintained a database of all of its grants for public perusal. AEI has also fully disclosed Gates Foundation funding in its work, including a report on college completion released in January 2011. As Merrow himself noted, Gates Foundation support for AEI’s education policy shop has also been reported by outlets such as the New York Times, in a story in which Hess was quoted. For Merrow, a dean of education reporting, to insinuate that Hess and AEI engaged in unethical behavior when the evidence doesn’t support it is unacceptable.

Hess and Merrow owe each other an apology. Enough said.