Tuesday, June 18, 2013

Chapter 52: Open Season on Teachers

Chapter Fifty-Two: UFT: Unopposed to Firing Teachers


I was looking forward to the latest edition of New York Teacher (June 13, 2013), the UFT rag sent by mail to teacher residences.  I was looking forward to the skewering of the state-imposed "evaluation" system.  I looked from front page to back, however, and found only various articles, starting with the one on page 3 headlined "Complex new system unveiled".  I think it's time to take my COPE money back.

The DOE spin in Maishe McAdoo's article starts fast - in the sub-heading: "Designed to support teachers ...."  In fact, of course, the old system was designed to support teachers.  Under the old system, supervisors worked with teachers to implement and analyze teaching strategies in the classroom.  Done well, there is no better way to support teachers.  Of course, many inept or corrupt administrators used the old system for other purposes at the urging of their 3-term mayor.

The new "evaluation" system is a three-dollar bill with Michael Mulgrew's face on it.  Hold it up to the light and Mulgrew's smirk metamorphoses into a snikering image of that other Michael - Bloomberg.  The only thing it's good for is rolling it up into a straw to drink out of your 32 ounce soda.  It's not only water resistant.  It's truth resistant.

"Commissioner John King's 241-page document transformed  the old Satisfactory/Unsatisfactory rating system, which gave teachers little guidance and principals almost sole discretion, into a multi-element review of their practice that can help them improve their teaching."  (paragraph 2)

You only have to read through this second paragraph to understand why Ms. McAdoo didn't attach an email to this article.  (Here is the article in its entirety: "Complex New System Unveiled")

There are plenty of lines to read between in a 241-page document. Fortunately Ms. McAdoo boils it all down to one word: "multi-element".  In other words, there is now a 2nd element in the teacher evaluation process beyond the observation of a qualified supervisor.  Now that's complexity!  Having spent 9 years at the closing Jonathan Levin H.S., I have no illusions about the intentions of administrators.  (See Chapter 35, Observing the Observer and Chapter 42, Observing the Observer 2, for example.)  However, the "multi" in "multi-element" is where the devil creeps in and you don't need to read between the lines in 241 pages to see that.  In fact, it's right there on the pie chart accompanying the article - the one showing that the "multi" in "multi-element" is now 40% of our evaluation - student performance.  This pie is certainly going to end up in teachers' faces.

Teachers will now be rated not on their own performance but on the performance of others.  I repeat.  Teachers will now be rated not on their own performance but on the performance of others.  I wonder how Ms. McAdoo would feel if I rated her article not on her own writing but on the work of whoever wrote Michael Mulgrew's press release on page 11.

The new "multi-element review" is a sham.  It takes the evaluation of teacher performance out of the hands of the experts and puts it 40% into the bloodless hands of the vampire addicted, the undead hands of the zombie addicted, the magical hands of the Potter addicted, the steely hands of the super hero addicted, the gossamer fingers of the fantasy addicted, the gossiping fingers of the Facebook addicted, the fidgety fingers of the video game addicted and, yes, even the nerdy hands of the textbook addicted.  In other words, students.  Add in the 5% from the student survey, and the "evaluation" of teacher perfromance is almost 50% in someone else's hands.

How does this support teachers?

"The commissioner did not adopt Mayor Bloomberg's vision of a system that gave the DOE unchecked powers and focused on getting rid of 'ineffecive' teachers."  (para. 10)

Yes, he did.  If a teacher is rated "ineffective" on that 40-45% of the evaluation that is out of his/her hands, the teacher will be rated ineffective no matter how highly effective the teacher is rated on the other part of the "multi-element review", the observation.  The teacher's performance in the classroom will have nothing to do with his/her "evaluation".  Smirking, unscrupulous administrators will be saying, "It's out of our hands.  I have no choice.  The multi-element RUBRIC dictates that I rate you ineffective."  They'll be washing their hands faster than Lady Macbeth.
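
For anyone who wants that override spelled out, here is a minimal sketch of how the "it's out of my hands" arithmetic plays out.  The weights and cutoffs are hypothetical stand-ins of my own, not anything taken from the state's 241-page document:

# A rough sketch of the "it's out of my hands" logic described above.
# The weights and cutoffs are made up for illustration; they are not
# taken from the commissioner's 241-page document.

def overall_rating(observation_score, student_measures_score):
    """observation_score: the roughly 60% based on classroom observation (0-100).
    student_measures_score: the roughly 40% based on student test results (0-100)."""
    INEFFECTIVE_CUTOFF = 60  # hypothetical cutoff

    # The claim: score "ineffective" on the student-measures piece and the
    # classroom observation no longer matters.
    if student_measures_score < INEFFECTIVE_CUTOFF:
        return "Ineffective"

    combined = 0.6 * observation_score + 0.4 * student_measures_score
    return "Highly Effective" if combined >= 85 else "Effective"

# "Highly effective" in the classroom, 53% pass rate on the Regents:
print(overall_rating(observation_score=95, student_measures_score=53))  # Ineffective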

If you want even more proof that Ms. McAdoo's take on the "complex" new system is pure DOE spin, just read the rest of paragraph ten:

"Announcing his decision, King said, 'Let's be clear.  New York is not going to fire its way to academic success.'"

Yes, let's be clear.  If King calls it white, it's certainly black.  The clearer a politician claims to make something, the murkier it is.  The more a politician says, "believe me," the less you believe.  In the case of polit-speak, it's not so much reading between the lines as turning the lines inside out.

"Part of the complexity of the new system stems form the variety of ways that student learning can be assessed."  (paragraph 14)

Yes, and 2 plus 2 plus 2 plus 2 plus 2 plus 2 is complex because there are six twos in there.  Nevertheless, it takes three-tenths of a second to see that it still adds up to twelve.

There is nothing complex about this new system.  Teachers will be evaluated on student performance on state tests and in-school tests.  It doesn't take a rocket scientist to figure out the "complexity" of this simple system of gathering data to use against teachers.  Scenario #1:

Principal:   I know this is difficult to comprehend, but try to follow me.
Teacher:     Okay.
Principal:   I'm the principal. 
Teacher:    Right.
Principal:   You're the teacher.  Are you with me?
Teacher:     Yes.
Principal:   Your students took the Regents.  Follow me?
Teacher:     So far.
Principal:    Only 53% of your students passed the Regents.  Let me spell that out for you so that you understand - F-I-F-T-Y-T-H-R-E-E-P-E-R-C-E-N-T.  Still with me?
Teacher:      I think so.
Principal:    The rubric says that if your student pass rate is below 60%, you are 40% ineffective.
Teacher:      Mmm-hmm.
Principal:   The rubric further says that if you're ineffective in that 40%, you're 100% ineffective.
Teacher:     Wait ....
Principal:    Okay, let's try it again.  I'm the principal ....

What constitutes "ineffective" on student performance?  There is nothing in the article under discussion about this question.  This is where the complexity comes in, no doubt.  The convoluted formula for determining what consitutes ineffective student performance would probably make an MIT math professor scratch his head.  In fact, I wrote about this some time ago:

The Real Teacher Evaluation System

What constitutes "progress"?

"Twenty points [out of 100] are based on state measures of student learning grownth such as improvement on standardized test ...."  (para. 12)

Progress, of course, is a good thing.  The real question is: who defines progress?  Who's to say what constitutes progress for an individual?  Who's to say that progress for the zombie addicted is the same as progress for the vampire addicted?  As we all know, the conditions and circumstances under which standardized tests are taken never vary.  Scenario #2:

Principal:     Your student Bob only got 62 on the June Regents.
Teacher:      I know.
Principal:    He got 77 in January.
Teacher:      I know.
Principal:    What happened?
Teacher:      He only got 2 hours of sleep.
Principal:    And?
Teacher:      And he didn't get that much sleep.
Principal:    What are you saying?
Teacher:     You know, that he didn't get enough sleep.
Principal:    What does that mean?
Teacher:      Probably that he stayed up too late.
Principal:    You mean you let him stay up past curfew?
Teacher:      What?
Principal:    Didn't you read the fine print of the new evaluation system?
Teacher:     I thought I did.
Principal:    Well, I'm afraid that going from 77 to 62 does not constitute progress as defined by the state.
Teacher:      I see.
Principal:    According to the state rubric, I have no choice but to rate you ineffective.
Teacher:     But he got 77.  He only took it again because you programmed him to take it.
Principal:    It's out of my hands.

In the hands of the right people - or is it wrong people - i.e., administrators, it might even be difficult to distinguish progress from regress.  Scenario #3:

Principal:     Your scholarship went from 58% pass rate to 63% pass rate on the Regents.
Teacher:      (smiling)  I know.
Principal:    (scowling)  That's an increase of 5%.
Teacher:      I know.
Principal:     Last time your scholarship went from 51% to 58%.
Teacher:      (smiling broadly)
Principal:    (scowling broadly)  That's an increase of 7%.
Teacher:      I know.
Principal:    That's a decrease of 2%.
Teacher:      What?
Principal:    Your rate of increase decreased.
Teacher:     Well, but ...
Principal:    I'm afraid that you're regressing and I have no choice but to rate you ineffective.
Teacher:     But you rated me highly effective in the classroom.
Principal:    That's only 60% of your evaluation.
Teacher:     So I'm 60% highly effective and 40% ineffective.
Principal:   Correct, which equals ineffective.
Teacher:     But ....
Principal:    It's out of my hands.

Washing your hands is good.  It keeps them clean.  It cleans the dirty work off them.

Teacher support comes in many different forms.  There's the ineffective for the Regents.  There's the ineffective for the decrease in your increase rate.  Then there's the ineffective for the Acuity.  Then there is the ineffective for the school-based tests.  Fortunately, coming soon (they say) will be another component of this complex yet supportive evaluation system: the ineffective for the PARCC quarterly exams.  There's the ineffective for any of the 22 Danielson rubrics.  Then there is the ineffective for the student survey.  Isn't it great to get all of this support!

Finally, there are the ridiculous Danielson rubrics.

"Principals or other administrators who conduct classroom observations must be trained to use all 22 components of Charlotte Danielson's well-regarded Framework for Teaching rubric."  (para. 18)

McAdoo goes on to say that it's a good thing that administrators will have all 22 options for ineffective ratings since they were threatening to hold it down to "a small fraction".  Thanks for that.  Now they've got 22 fake rubrics to use against us instead of 8 or 10 or 15.

Pretending that you can objectify human interactions is the biggest lie of all in this new, supportive evaluation system.  I made this point some time ago in Chapter 31:

The Charlotte Danielson Rubric for the Highly Effective Husband

You cannot objectify human interaction and teaching is nothing if not human interaction.  But what's the sense in pretending that the 40% based on "data" is objective if you can't claim that the other 60% is also "objective"?  So they've called the Danielson rubrics "low inference" and "nonjudgmental" as if calling stinkweed a rose makes it smell sweet.

There is no such thing as a low inference observation.  And, of course, inferences are essential to observations and evaluations.  Inferences are good.  Are you following me, Charlotte?  I know it's complicated but let me make it clear - inferences are good and essential.  They aren't objective and they can't be described as "data".  They're by definition anecdotal.  And anecdote is good when it comes from an honest, competent observer.

Telling administrators to observe a teacher's performance without making inferences is like telling students to read Of Mice and Men without making inferences.  That, of course, makes both George and Lennie cold-blooded killers.



