Sunday, December 9, 2012

Chapter 34: Teaching to the Data



Good teachers love to get kids to think for themselves, to think analytically, and to support their thinking with evidence from the text.  Teachers often hate to be told to “teach to the test” because it makes teaching formulaic by removing precisely the thing that makes sitting in a classroom worthwhile – the freedom to allow your mind to roam the universe in search of new ideas or ways of looking at things.  Worse than teaching to the test, however, in the current data-driven craze in education, is teaching to the data.  I’m not kidding.  I know this is going to sound like some absurd satire lifted from the pages of The Onion, but this is a true story.  I’ve been told not to teach students but to teach for the purpose of gathering data.
I’m an English teacher who has been instructed by my “superiors” to teach strictly from the Prentice Hall / Pearson anthologies, both physical and online (where Pearson must be making a killing).  The teacher editions of these books provide guidelines for instruction, but you can never take the teacher out of teaching.  The goal in education today is to make all classroom instruction generic enough that an automaton could perform the job as well as a human being, but there is no teaching without a teacher.  If we could just put enough monkeys with teacher editions into the classroom!
For the most part I have used these anthologies as directed but one thing I still do even though Pearson doesn’t order us to do it is to ask kids to support their answers to multiple choice (MC) questions with evidence from the text.  I don’t want them guessing.  I want them to understand why “a” is correct and why “b”, “c” and “d” are incorrect.  I want them to be able to prove that they’re right when they circle answer “a”.
Pearson, you see, offers up multiple choice “selection” tests for all of the selections in their anthologies.  Since I’ve been ordered by my administration to use these books and materials, that’s what I do.  Now I’ve been told to use these tests to gather data rather than as a teaching tool.
The teacher evaluation system is supposed to work like this.  A teacher meets with a supervisor to discuss a way of presenting a lesson.  The supervisor then observes the teacher present this lesson after which a follow-up meeting is held to review the success or failure of the presentation.  These three activities should take place within a two-week time frame.
My most recent pre-observation meeting took place on Sept. 13, 2012.  I was instructed to bring to that meeting primarily attendance records and phone call logs.  Nothing was said about any lesson to be observed; however, a “pop-in” surprise observation took place over a month later, on Oct. 16, 2012.  As luck would have it, I was modeling (using the newly in vogue “gradual release” model) how to find evidence in the text for answers to Pearson MC questions.  The text was “The Washwoman” by I.B. Singer.  The test was Pearson Selection Test A.
My A.P. (assistant principal) of instruction – a former math teacher who has not taught in the classroom for over a decade – observed from the back while furiously typing on her computer as students sifted through the story looking for evidence to prove their answers correct.  In fact, we’d been told to do this at a professional development (PD) session earlier in the year.  The term in vogue for it at the moment is “text-dependent writing”.  Of course, it’s nothing new and nothing that teachers haven’t been doing since the dawn of education.
On Friday, Dec. 7, 2012 – almost 2 months after this observation – I was called into this A.P.’s office for the “post-observation” meeting.  I assumed that I would be applauded for preparing students for the sort of work that college is going to demand of them where merely circling a letter isn’t going to cut it.  In fact, we are expected to be thinking of our “college and career readiness” data, part of a NYC school report card.  Applause, however, was not on my A.P.’s mind this day.
The interview began something like this: [1]

A.P.:            Read to me your aim, please.
Me:              “How do I use text to support MC answers?”
A.P.:            Were you giving a survey of some kind?
Me:            No, I was giving the test you have there in front of you.
A.P.:            But you said to support “MC answers”.
Me:            Right, and …?
A.P.:            That sounds as though you’re asking for answers to a survey where they can give multiple answers.
Me:            What?
A.P.:            On surveys, they give multiple answers.  You used the word “answer” in your aim.
Me:            Well, since it says “test” on the paper and since they knew they were getting a test, the students understood that it wasn’t a survey.
A.P.:            You should have said to support “MC questions”.
Me:            Huh?
A.P.:            They were answering questions, weren’t they?
Me:            You were there.
A.P.:            Then you were actually asking them to support MC questions.
Me:            No, they were supporting their answers with evidence ….

This conversation took up the first 10 minutes of the meeting.  One monkey with one typewriter could have made more sense.  I could tell from the tone that this meeting wasn’t going to go well for me – not to mention the absurdity of the criticism.
Next we looked at the 3 samples of the test done that day.  As directed, I’d photocopied the test from 3 students’ folders.  For each MC question, the kids had written a quotation from the story itself or an explanation from the textbook to show why the answer (not the question) was correct.  They had written quotations and explanations to support their answers right on the test paper itself.
The conversation picked up with these papers in front of us:

A.P.:            What sort of assessment was this?
Me:            I was modeling for them how I want them to take tests.
A.P.:            But what sort of assessment was it?
Me:            It was both a pre-assessment and a post-assessment since I’d given it out when we started reading the story.  I wanted them to know what they would be looking for by seeing the questions beforehand.  This is called “reading with purpose”.
A.P.:            They wrote the date Oct. 11 on it.
Me:            Right.
A.P.:            And on the 16th ….
Me:            They had read the story by then and were going back to find evidence for their answers.  That’s what you observed.
A.P.:            But it doesn’t say in the teacher edition to ask the students to support their answers with text.
Me:            Really?
A.P.:            So why were you doing that?
Me:            Because I want them to be able to justify their responses.

It continued in this vein for a bit and then the A.P. turned to a document concerning the class observed.  She had emailed me an 11-page document, much of which consisted of her transcription of the observed class session.  I’ve attached this document at the end of this chapter and called it an appendix although it’s actually more like appendicitis. [2]  Of course, after 2 months, it was impossible for me to know how accurate this transcription was but I didn’t quibble over that point.  It’s worth as much as any other hearsay.
As you can see from glancing at the appendix, a great deal of transcribing took place in one way or another.  I might ask this question: how much could have been “observed” by someone doing so much typing?  But I’ll leave that for another chapter.
The current term for this sort of “observation”, wherein the observer attempts to record everything said and done within sight and hearing, is “low-inference”.  A low-inference observation is said to do nothing but record factual information à la Joe Friday.  A low-inference observation is said to be objective and non-judgmental.  But as we know, the only one who can truly observe objectively and non-judgmentally is the monkey who composed Hamlet.
            The meeting continued with reference to this document.

A.P.:            Would you look at line 70 in the document I emailed you.
Me:            Okay.

I scrolled down to line 70 of this document.
For some reason the A.P. was focusing on lines 70 – 85 or so where a student suggests text-to-text as a possible solution to something.  The term “text to text” refers to relating one story to another piece of literature.  This was a very good point, of course, since the week before they had read “A Giant’s House”.  Both stories in the first unit of the Pearson anthology are “narrative essays” and are meant to be compared.  The student was showing signs of actual learning.  What point the A.P. was making, however, by going over and over that part of the transcript eluded me.  The ludicrous conclusion she drew somehow from it a moment later did not.

A.P.:            This was an “unsatisfactory” lesson, Mr. Haverstock.
Me:            Let me get this right.  I’m not allowed to ask students to use evidence from the text to answer MC questions?
A.P.:            That’s right.  They should circle an answer and nothing more.
Me:            So I’m not allowed to teach “text-dependent” writing?
A.P.:            No, you are to use this chart ….

Here she pulled out a Pearson chart showing the question type for each MC test item: recall, inferential, analytical, “reading” – yes, one of them was simply called a “reading” question – etc.

A.P.:            The students circle answers and you look at this chart and find out what type of question they have trouble with.
Me:            No text support.
A.P.:            No.
Me:            Even though asking for textual support doesn’t interfere with the data you want?
A.P.:            That’s right.

I’ve now been directed to “teach to the data”.  The purpose of the Pearson test is not to teach students how to work, study, think and use text to support their conclusions.  The purpose of the Pearson test is to gather data.
I won’t name names other than to say that I teach at the Jonathan Levin High School for Media and Communications – easily discovered anyway since I’m using my real name for this memoir / blog.  The school is located in the Taft building on 172nd St. in the Bronx where Stanley Kubrick once cut class, preferring movies to lectures and homework.  Taft no longer exists.  There are now 7 small schools in the building, mostly the academies of this or that.  JLHS has not fared so well over the past 3 years, as far as the DOE is concerned.  You can see for yourself by typing in the name of the school at the NYC DOE website: http://schools.nyc.gov/Accountability/tools/report/FindAProgressReport/default.htm 
Since these grades are based on “data”, of course, they are meaningless.  Since the AYP numbers for “adequate yearly progress” are set by the very people who want to close schools, the system is entirely corrupt from top to bottom.  Teachers in the system know that none of the critical numbers used by these corrupt bureaucrats, particularly graduation rates and Regents scores, say anything at all about how well the school is performing.  It’s easy to convince outsiders, however, that numbers don’t lie.  Maybe numbers don’t, but the people manipulating them sure do.


Appendix [3]


For the complete transcript of this observation document go to:

Complete Transcript of Observation Report



            NOTE: This blog contains an excerpt of the first draft of this book.


[1] All dialogue is paraphrased.  Theoretically if I had enough monkeys, I could recreate the exact words used.
[2] I have no idea whose intellectual property this document would be.  She typed it up but most of it consists of what I and students might have said in the class.  I don’t know to whom it belongs, as I said, but calling it “intellectual” property ought to be against the law.  I’ve edited out student names in those cases that refer to real people.
[3] I’ve replaced names with the phrase “student name” for those who were real people.
[4] There was no “Daisy” in the room.
[5] There was no “Diane” in the room.
[6] There was no “John” in the room.

Saturday, March 3, 2012

Chapter 31: The Charlotte Danielson Rubric for the Highly Effective Husband


My Life as an NYC Teacher

Danielson Exposed!!


        Where will the ongoing pretense that human interactions can be objectified lead?  I’ve been thinking about this while looking over the Danielson rubric for classroom management, a hilarious attempt to pretend that you can categorize and rate teacher – student interactions.  The Danielson Group and whatever academic eggheads and deep pockets are behind them apparently (low-inference observation) believe that human behavior can be observed and described objectively and without making judgments or inferences.  They apparently believe that codifying it makes it meaningful and less ridiculous.  Is there a better argument against allowing people like this to design a new teacher evaluation / rating system than this piece of paper?  I should have called this “Exhibit A”.
        More likely, of course, it's a lie that someone wants to hear so they're more than happy to take the money and fabricate.  So here it is, the Danielson “rubric” for classroom management.  (I’m not kidding!  This is a real document – I have a copy – and it is being promulgated as the answer to something!)



Danielson 2011 rubric – Adapted to New York State Levels of Performance


COMPETENCY 2d: Managing Student Behavior

INEFFECTIVE: There appears to be no established standards of conduct, and little or no teacher monitoring of student behavior.  Students challenge the standards of conduct.  Response to student behavior is repressive or disrespectful of student dignity.

DEVELOPING: Standards of conduct appear to have been established, but their implementation is inconsistent.  Teacher tries, with uneven results, to monitor student behavior and respond to student misbehavior.  There is inconsistent implementation of the standards of conduct.

EFFECTIVE: Student behavior is generally appropriate.  The teacher monitors student behavior against established standards of conduct.  Teacher’s response to student misbehavior is consistent, proportionate and respectful to students and is effective. [1]

HIGHLY EFFECTIVE: Student behavior is entirely appropriate.  Students take an active role in monitoring their own behavior.  Teacher’s monitoring of student behavior is subtle and preventive.  Teacher’s response to student misbehavior is sensitive to individual student needs and respects students.

Critical Attributes

INEFFECTIVE:
·   Classroom environment is chaotic, with no apparent standards of conduct.
·   The teacher does not monitor student behavior.
·   Some students violate classroom rules without apparent teacher awareness.
·   When the teacher notices student misbehavior, s/he appears helpless to do anything about it.

DEVELOPING:
·   Teacher attempts to maintain order in the classroom but with uneven success; standards of conduct, if they exist, are not evident.
·   Teacher attempts to keep track of student behavior, but with no apparent [2] system.
·   The teacher’s response to student misbehavior is inconsistent: sometimes very harsh, other times lenient.

EFFECTIVE:
·   Standards of conduct appear to have been established.
·   Student behavior is generally appropriate.
·   The teacher frequently monitors student behavior.
·   Teacher’s response to student misbehavior is effective. [3]
·   Teacher acknowledges good behavior.

HIGHLY EFFECTIVE: In addition to the characteristics of “Effective”:
·   Student behavior is entirely appropriate; no evidence of student misbehavior.
·   The teacher monitors student behavior without speaking – just moving about.
·   Students respectfully intervene as appropriate with classmates to ensure compliance with standards of conduct.

Possible Examples

INEFFECTIVE:
·   Students are talking among themselves with no attempt by the teacher to silence them.
·   An object flies through the air without apparent teacher notice.
·   Students are running around the room, resulting in a chaotic environment.
·   Students are distracted by their phones and other electronics, and the teacher does nothing about it.

DEVELOPING:
·   Classroom rules are posted, but neither teacher nor students refer to them.
·   The teacher repeatedly asks students to take their seats; they ignore him / her.
·   To one student: “Where’s your late pass?  Go to the office.”  To another: “You don’t have a late pass?  Come in and take your seat: you’ve missed enough already.”

EFFECTIVE:
·   Upon a non-verbal signal from the teacher, students correct their behavior.
·   The teacher moves in every section of the classroom, keeping a close eye on student behavior.
·   The teacher gives a student a “hard look” and the student stops talking to his/her neighbor.

HIGHLY EFFECTIVE:
·   A student suggests a revision in one of the classroom rules.
·   The teacher notices that some students are talking among themselves, and without a word, moves nearer to them; the talking stops.
·   The teacher asks to speak to a student privately about misbehavior.
·   A student reminds his/her classmates of the class rules about chewing gum.




   [If Charlotte Danielson or the Danielson Group or the Milken Group or whoever is behind these crazy rubrics feels that I am infringing on their copyright by posting this rubric, just let me know and I’ll remove it.  I’d be embarrassed to have it shown to the public, too.]

        Before pointing out just one or two of the more glaringly Kafkaesque aspects of this teacher evaluation tool, I’m a little curious about just what that object flying “through the air” of the ineffective teacher’s classroom is.  Wait – I recognize it.  It’s the teacher’s sanity.
        Notice that in the “highly effective” teacher’s classroom there is, quote, “no evidence of student misbehavior” and yet when it happens, either the teacher wordlessly takes care of it – relatively easy to do since it isn’t really happening, according to this rubric – or the students remind themselves that it isn’t really happening, since there is no evidence of it.
        Notice that the effective “hard look” technique is inferior to the more highly effective non-verbal technique, though the “hard look” is, of course, by definition non-verbal.  Any teacher who has to actually speak to his / her students, by this rubric, has a long way to go.
        But enough of pointing out the obvious.  If Charlotte Danielson actually exists – see chapter “The Danielson Performance Puppet” – and actually believes that this rubric can and should be used as a tool to evaluate teacher performance – a puppet can be made to say and act as if it believes anything, of course – then where will it end?  I mean, why stop with teacher – student interactions?  Isn’t the husband – wife intercourse just as significant, perhaps even more so?  Shouldn’t we be able to know when our intimate partner is performing in a “highly effective” manner?  Evidently, most of us can’t tell such things subjectively.
  
[Legal disclaimer: Although all of the stories about schools in this book are true, the scene described here is another purely imaginative, i.e., fictional account.  I’ve never met Charlotte Danielson and had never heard of her before she was foisted on us and became my de jure educational guru last September - 2011.]

SCENE: The Danielson Research Laboratory, i.e., her bedroom.
SUBJECT(S):  self; Mr. Danielson [4]
AIM: Copulation
OBJECTIVE: Satisfaction (as opposed to impregnation – see Domain 3c)
STANDARDS: FP 2.3: partner is aroused through physical intimacy prior to penetration
         PEN 1.2: penetration is measurable and pleasurable for both parties
         EJ 3.3: ejaculation elicits moans of satisfaction
Do Now:      Disrobe; put on nightgown; leap into bed; await husband’s entrance.

Charlotte is sitting in bed and smoking a cigarette to simulate actual conditions as closely as possible – you know, the way they pretend that the scenes in all the classroom videos are “realistic”.  Her husband lies at her side, snoring quietly and with the hint of a smile on his drowsy lips.
She is going over the low-inference, non-judgmental notes she made during the activity just consummated:

1.              Falls while hastily stepping out of trousers – 6:17:44
2.              Jumps on bed, tears off my nightgown / underclothes – 6:18:04
3.              Kisses my neck repeatedly – 6:18:23
4.              Breath smells like …. [crossed out – inferential]
5.              Attempts penetration – 6:18:38 – 6:23:53
6.              Penetrates – 6:23:54
7.              Begins rapid, repetitious in and out motion – 6:23:55
8.              In and out occurs – lost count at 78 repetitions: 6:25:12
9.              Low-pitched and high-pitched vocalizations, i.e., accountable talk, heard throughout activity
10.           Ejaculation occurs accompanied by vocalized “Owwwwww!” – 6:26:03
11.           Falls onto his side of the, I mean, left side of the bed – 6:26:05
12.           Begins snoring as usual – 6:26:49 – NOTE: scratch “as usual”

“Hmm,” she thinks to herself, “a few of these terms are slightly judgmental.”  NOTE TO SELF, she writes: change “hastily” to “with rapid hand and foot movements”.
Since these notes are meant strictly as a tool for discussion and reflection rather than for evaluation and she is uncertain about the level of satisfaction she is feeling, Charlotte pulls out the actual rubric in order to determine if the objectives were accomplished and the standards met.


Danielson 20—Rubric – Adapted to NYS Levels of Performance [5]

COMPETENCY 2d: Managing Human Intercourse

INEFFECTIVE: There appears to be no established standards of conduct, and little or no female monitoring of male behavior.  Male challenges the standards of conduct.  Response to male behavior is repressive or disrespectful of male dignity.

DEVELOPING: Standards of conduct appear to have been established, but their implementation is inconsistent.  Female tries, with uneven results, to monitor male behavior and respond to male misbehavior.  There is inconsistent implementation of the standards of conduct.

EFFECTIVE: Male behavior is generally appropriate.  The female monitors male behavior against established standards of conduct.  Female’s response to male misbehavior is consistent, proportionate and respectful to male and is effective.

HIGHLY EFFECTIVE: Male behavior is entirely appropriate.  Male takes an active role in monitoring his own behavior.  Female’s monitoring of male behavior is subtle and provocative.  Female’s response to male misbehavior is sensitive to individual male’s ego.

Critical Attributes

INEFFECTIVE:
·   Bedroom environment is chaotic, with no apparent standards of conduct.
·   The female does not monitor male behavior.
·   Male violates bedroom rules without apparent female awareness.
·   When the female notices male misbehavior, she appears helpless to do anything about it.

DEVELOPING:
·   Female attempts to maintain order in the bedroom but with uneven success; standards of conduct, if they exist, are not evident.
·   Female attempts to keep track of male behavior, but with no apparent system.
·   The female’s response to male misbehavior is inconsistent: sometimes very harsh, other times lenient.

EFFECTIVE:
·   Standards of conduct appear to have been established.
·   Male behavior is generally appropriate.
·   The female frequently monitors male behavior.
·   Female’s response to male misbehavior is effective.
·   Female acknowledges good behavior.

HIGHLY EFFECTIVE: In addition to the characteristics of “Effective”:
·   Male behavior is entirely appropriate; no evidence of male misbehavior.
·   The female monitors male behavior without speaking – just twisting and squirming while cooing, “Oooo, ahhhh.”
·   Male respectfully intervenes as appropriate with female to ensure compliance with standards of conduct and position variations.

Possible Examples

INEFFECTIVE:
·   Male objects to disrespectful criticism of his performance and “pulls out”.
·   Male gives up and watches football.
·   Male prematurely ejaculates and then goes to neighborhood bar to brag about hours-long sex session.

DEVELOPING:
·   Female at one point purrs, “Oh, honey,” but a moment later screams, “You insensitive bastard!”
·   Female uses her feminine charms to urge male on but gets headache just before ejaculation, leaving male frustrated and horny.  He resorts to porn.
·   Female reads magazine during activity with little or no apparent monitoring of male performance.

EFFECTIVE:
·   Upon a non-verbal signal from the female, male corrects his behavior by changing position appropriately.
·   The female moves in every section of the sheets, keeping a close eye on male behavior to ensure lengthy (in both senses of the term) erection.
·   Female compliments male on performance, saying, “That was great, baby!”

HIGHLY EFFECTIVE:
·   Female hires film crew to record performance for internet posting.
·   Female tweets “ooo’s” and “ahhh’s” at 20-second intervals.
·   Female is interviewed on SPIKE t.v.; she gives non-verbal advice by physically and graphically modeling effective positions, using the “tableau” activity.
·   Female wins Milken award for “Most Positions Achieved Before Initial Ejaculation”.




        Dishearteningly, based on the objectives and standards, Charlotte is forced to rate this husband as “developing” in foreplay (“bad breath”), “ineffective” in penetration (“took too long”), but “highly effective” in ejaculation (“great scream”).
        Back to reality: The people who came up with this “classroom management rubric” and can send it out to schools with a straight face are the people in charge of training and evaluating teachers.  Charlotte Danielson is now the lead consultant for the national push for common core standards.  Given that, I’m curious about her credentials for holding this position.  In other words, I want to know if she ever taught.  You can’t be a teacher guru with no teaching experience and without the kind of experience that real teachers get day in and day out.  At least, logically, you can’t.
        So I’ve googled “Charlotte Danielson” and “Charlotte Danielson biography” and I’ve gotten the same line every time.  Here it is, taken from


            “She has taught at all levels, from kindergarten through college ….”

        Wow!  All levels – sounds pretty impressive.  I don’t know how old she is – it’s hard to guess the age of a puppet – but given all of the other things she has done according to these biographies:

… has worked as an administrator, a curriculum director, and a staff developer. In her consulting work, Ms. Danielson has specialized in aspects of teacher quality and evaluation, curriculum planning, performance assessment, and professional development.
Ms. Danielson has worked as a teacher and administrator in school districts in several regions of the United States. In addition, she has served as a consultant to hundreds of districts, universities, intermediate agencies, and state departments of education in virtually every state and in many other countries …  (same web site)

        Given all of this, it’s hard to imagine that she has had time to actually teach at all levels from “kindergarten through college”.  Let’s see, if taken literally, that would be a minimum of 13 years (K – 12) plus at least another 4 years to cover “college level”.  That’s a minimum of 17 years of teaching if she only lasted one year at each level.  I guess she started in kindergarten and worked her way up.
        Clearly it’s a ruse.  Ms. Danielson hasn’t taught “at all levels” and may not have taught at all.  If she has, why don’t they say where, when, for how long and who her students were?  I don’t mean to sound cynical but I remember Joel Klein’s and Cathie Black’s lengthy educational resumes upon taking over the leadership of the NYC public school system.  Charlotte Danielson - she / it / they are aware that anyone claiming to be a teacher guru will be asked the question, “How, where and for how long did you teach?”  So she / it / they have supplied an all-encompassing answer meant to side-step any such question before it’s asked and anyway, she’s said to be from Princeton and she’s written some books.  Isn’t that good enough?
        I’m reminded of an Aussie coach I once had.  Although it’s more absurd than the Danielson sex rubric, this is a true story.  Anyone with an Australian accent was once considered a candidate for American teacher guru.  I guess since the Aussies were considered the best crocodile fighters, it just seemed natural for them to coach teachers.
        I and three others were designated for Ramp Up, a remedial English program for over-aged / under-credited students.  Our Aussie coach met us with the thick Ramp Up binder.  The Aussies were said to know everything about Ramp Up and were also said to be making big money doing nothing more than coaching teachers.
        The first thing we asked, naturally, was, “Tell us about your experiences teaching Ramp Up.”
        “Actually,” our coach admitted sheepishly, “I’ve never taught it.”
        “Well, then,” we continued, “tell us about the teachers you’ve observed teaching Ramp Up.”
        “In truth,” he said even more sheepishly, his accent growing stronger with each reply,  “I’ve never observed it in a classroom.”
       “Okay, then,” we said, “tell us what is in this big binder.”
       “I haven’t,” he replied, “had a chance to read it yet.”
       We looked at each other wondering what to say next.  One of us was well-known in the school for his hair trigger and bursts of rage.  He could hold back no longer.
       “Then why in the world do we need someone from Australia to tell us how to teach?”  I wish I could convey the raging tone of voice in this question.
       “Actually,” our faux Aussie said, “I’m from Detroit.”
       He’d married an Australian woman and assimilated her accent.
       A few years ago it was the Aussies; now it’s Charlotte Danielson, who may, in fact, be a puppet – whether hand-held or dangling from strings I haven’t been able to discern yet.  Who’s next?  That guy I read about in the paper who was in the bar fight last week?  Or would he be overqualified since I’ve heard of him?





[1] Note: the definition of the “effective” teacher is that his/her response is “effective”.
[2] Note: the word “apparent” is apparently meant to indicate that a meaningful inference can be drawn without making an actual inference.
[3] Note: the definition of the “effective” teacher is that his/her response is “effective”.
[4] Another disclaimer: I know nothing about Charlotte Danielson – never heard her talk other than on a couple of videos that have been shoved down our throats at various “professional development” meetings where she tends to back up and correct herself frequently, don’t know if she’s married, has kids, smokes cigarettes – don’t fully believe she actually exists.  This scene is fiction meant to spoof a public figure or quasi-meta-public figure.
[5] Thankfully we won’t be held to Parisian levels of performance.

Saturday, February 25, 2012

Chap. 30: The Real Teacher Evaluation System

    Chapter Thirty: Reform School, Part 5: Teacher Evaluations

[Although all of the stories about schools in this book are true, this chapter is another purely imaginative, i.e., fictional account of a conversation that might have taken place between NYC Mayor Michael Bloomberg and various employees, former employees, wannabe employees and lackeys on Feb. 24, 2012, the day teacher ratings for 12,000 4th – 8th grade teachers were made public.]

Former chancellor Joel Klein enters NYC mayor Michael Bloomberg’s office.  An aide is present.

JK:            Mike, I just got the news.  Congratulations!
MB:            (Admiring a bottle of wine.)  Thanks, Joel.  1972 - and you know how much I paid for it?
JK:            I’m talking about the teacher ratings, Mike.
MB:            Oh, right.  Thanks.  (Puts down the bottle.)
JK:            I just wish we could have gotten it done during my tenure ….
MB:            Please, Joel, you know how I hate that word.
JK:            Sorry – during my term as chancellor.  But you know, Mike, they’re all talking about how unreliable these ratings numbers are.
MB:            Let ‘em talk.  At least I’ve got some objective data now.  I can start firing people.
JK:            How many teachers have you fired, Mike?
MB:            Well, none yet, but now we can really get going.  I’m dying to fire some of those lazy bastards.  It’s my third term, for God’s sake, Joel.
JK:            Tell me about it.  I know how rough the union is.  But didn’t you get rid of that real estate guy finally?
MB:            You mean that guy who spent 10 years in the rubber room managing a million-dollar real estate portfolio?
JK:            Yeah.  I heard you got him.
MB:            Well, not exactly.  He retired before I could fire him.
JK:            Retired?
MB:            Well, actually he’s working for me now.  With his ability to manipulate the system ….
JK:            Good move, Mike.
Aide:            Here it is, Mr. Bloomberg.
MB:            Let’s have it.
Aide:            Well, I’ve added another constant.
MB:            That’s a word I like.
JK:            You mean sort of like how you want to rule the ci ….
MB:            [Gives Klein a level 4 intimidation gesture.]
JK:            Sorry.
MB:            That’s okay.  Just listen to this, Joel.  This is Horace.
Aide:            Nice to meet you, Mr. Klein.
JK:            Likewise.
MB:            Go ahead, Horace.
Aide:            Okay.  Now, you take the total scores on the state tests, add them all together.  That yields the base number.
JK:            What is this?
MB:            The formula for evaluating teachers.  Go on, Horace.
Aide:            You take their total test score number, subtract the number of in-school suspensions and multiply by the number of report card grades above 75.  Then you subtract the total number of detentions, counting reprimands from teachers as .5019 of an actual suspension, reprimands from school aides as .6321209, from school security as .723643, from A.P.s as .832398, from deans as .86343, and from principals as .9312194213.
MB:            That makes sense.
JK:            What about a superintendent reprimand?
Aide:            Well, you know how rarely superintendents ever come near a real school.  We’ve only had 23 of those in the entire school system, so we’ve left it out of the equation – statistically negligible.
JK:            But isn’t that important?
Aide:            Only for the teachers of those 23 kids.
MB:            No one will notice.  Go on.
Aide:            Well, you take that figure and add in the number of years of education of each teacher, then divide by their overall undergraduate G.P.A., then triple that to weigh it a bit more and add in the number of years of graduate work, doubling the weight of post-graduate work, tripling it if it was done at Harvard, quadrupling for Yale and factoring in the negative prorated state college constant.  You divide ….
JK:            What’s that for?
MB:            So we can claim that we’re taking into account the education level of the teacher.  We don’t want them complaining about that.
Aide:            Then we subtract the number of years the teacher has been a paying member of the UFT.
MB:            Excellent!
JK:            What does that have to do with it?
MB:            Who cares?  Go on.
Aide:            We weigh that using the amount each teacher contributes to COPE and throw in a constant there.  Call it the COPE constant.  That’s to make sure that that number weighs them down.  If they’re paying more than ten dollars a paycheck into COPE, for example, they can score no higher than the 44th percentile no matter how their students score on the tests.
MB:            Can Mulgrew figure that out?
Aide:            Did he go to M.I.T.?  I’ve disguised it under “miscellaneous criteria”.
MB:            Good, go on.
JK:            They’ve got some sharp people working with the union, Mike.
MB:            [Smirking.]  Tell him, Horace.
Aide:            Well, Mr. Klein, at age 3 and a half my I.Q. was estimated at 275.44.  I’m sure I don’t have to inform you that that is exactly 2.6343 times the I.Q. of the average public school teacher.
JK:            I knew that.           
Aide:            I graduated magna cum laude from Case Western Reserve at the age of 9, did my graduate work at Yale.  At age 13 I was a Rhodes Scholar and when the next administration comes in, I’m planning to realign the universe according to my new theory of quasi-relativity.  Do you know that there never was an actual “big bang”?
JK:            You don't say.
Aide:            And I’ve got the formulas and constants to prove it.  I invented one that I call the “God constant”. You can insert it into any equation and that equation will always yield pi minus 14.
JK:            How’s that?
MB:            Don’t ask.  Go on, Horace.
Aide:            In fact, Mr. Mayor, I was thinking of inserting my God constant into the teacher ratings formula for any teacher that exceeds 1 sick day a month.  I could make it reduce their actual rating, whatever that is, by pi minus 14.
MB:            Let’s see those union guys figure that out.  Go on, Horace.
JK:            Are you a math teacher?
MB:            Come on, Joel.  He can DO!  I wouldn’t let him anywhere near a classroom.  He knows too much about education.
JK:            That’s what you said when you hired ….
MB:            Go on, Horace, man.
Aide:            Well, Mr. Mayor, you take that number and divide by the number of days absent from school for each student and then prorate that number by the years listed as ELL.
JK:            That’s good.  They’re sure to complain about that.
Aide:            Then you multiply by the income of the family minus food stamps, housing allowance and any other state subsidy.  You take that number and average it against the average income tax return for all tax payers in the state with children in the public school system ….
MB:            They won’t be able to claim that we don’t take into account socio-economic status.
JK:            What about charter kids?
MB:            We’re leaving them out for now.
JK:            Why, Mike?  They’re certainly going to come out well above average.
MB:            Exactly.  We don’t want to let it out yet that we’re targeting the over-achieving kids for charter schools in order to make the public schools look bad by comparison.  Come on, Joel.  How many times did we talk about that?
JK:            Oh, right.
MB:            Want to taste that wine?
JK:            Not yet.
Aide:            Okay, so then you double the number of miles traveled by each student to and from the school, multiplying by 5.898723 if by subway, by 4.123423498 if by MTA bus, by 3.213476 if by school bus and by -2.31123 if they’re driven by their parents ….
JK:            What?
MB:            Well, come on, Joel.  We have to make them think that we're making a fair comparison between our kids and ….
JK:            Right.
Aide:            Then we multiply by the income figure and then set a ratio between that number and the original sum of all test scores, divide by the number of services prescribed by the I.E.P. or 2.45672 if there is no I.E.P….
JK:            Wait ….
MB:            No, Joel, that number comes straight from the Danielson group.
JK:            Oh.
Aide:            You take that number and divide it into the percentages of classroom work done for each teacher ….
JK:            You mean, if the teacher only taught them 30% of the time ….
Aide:            Exactly.  The difference is negligible but at least it’s in there.  Now, you take that number, add in the parent teacher conferences, phone calls and meetings with parent coordinators ….
JK:            I like that.
Aide:            Multiply by the average weight of the book bag and sneaker size – and this is where the constant comes in.
MB:            Go on.
Aide:            Well, if we use a constant of 13.8917326732619120309098123, we come out with a figure that has a margin of error of 52% for English teachers and 41% for math over six years.

            [Enter Chancellor Dennis Walcott.]

MB:            So that improves the margin of error for English teachers but not for math.
Aide:            Right, but I’m still tweaking.  I think we can get English down to 45% over 4 years without losing any ground in math.
DW:            The formula?
MB:            Right.  Go on.
Aide:            I think if we insert a new constant between the backpack and sneaker figures, we can really start to get somewhere.
MB:            Well, what are you waiting for?  Hi, Dennis.
DW:            Hi, Mike.  Hi, Joel.
JK:            [Sneers.]
DW:            Can I help it, Joel, if I’m an educator and you’re not?
JK:            Didn’t I provide you with the software for the ATRs, the Accelerated Teacher Removal?
DW:            Yes, Joel, and I meant to thank you for that.  We’ve got those teachers running for cover now.  All the pundits are calling the ATRs “ineffective”.
MB:            Wait!  Is that true?  I mean, according to the objective formula?
DW:            What?
MB:            That all the Accelerated Teacher Removals are “ineffective”?
DW:            Of course not.  They’re ATRs because we closed their schools.  You know that, Mike.  Most of their schools – you opened them yourself specifically so that we could close them down and start excessing as many veteran teachers as possible.  Most of them are excellent, experienced teachers – you know, the ones making $80,000 or more.
MB:            The ones we’ve targeted.
DW:            Exactly.
MB:            But what if we publish ratings for them?  What if they come out at the top?
Aide:            Don’t worry, Mr. Mayor.  I forgot to mention the ATR constant.  We threw in a number for ATRs to make sure that if you get excessed, you can’t score higher than the 17th percentile no matter how many successful years in the system you’ve had.
MB:            So if you’re making more than $80,000, your rating is going to be less than ….
Aide:            That’s right, sir.  Didn’t you hear me mention the negative Tier 1 and Tier 2 figures?
MB:            I must have been thinking about the wine.  Hey, anyone want some wine?  I’ve got this vintage bottle ….

            [Enter Cathie Black.]

CB:            Did I hear someone say wine?
MB:            Yes, Cathie.  I must have heard you coming.
DW:            Hello, Ms. Black.
JK:            Hi, Cathie.  Did you get the memo I sent?
CB:            I haven’t been to the office in a couple of weeks, Joel.  I’ll look for it.
MB:            Any other figures I should know about?
Aide:            Well, just the rubber room constant.  I’m still trying to work that one in.  The problem is that we’ve already weighted the numbers so heavily against the teachers ….  Sorry.
MB:            Oh, don’t worry about Cathie. She knows what we’re doing.  If she had brought you in ….
CB:            Come on, Mike.  I got it on my resume.  You know that’s all I ever wanted.
MB:            Glad to be of service, Cathie.
Aide:            Well, any new constant tends to throw all of the teachers into a negative number.  I’m trying to come up with just one more constant for time spent in the rubber room that will shift everything just enough but not too much.
MB:            Keep working on it.  That’s important.  We can’t have someone who has been sitting in the rubber room for years coming up with a high rating.
Aide:            That’s the problem, if they were actually good teachers before someone accused them of something ….
MB:            You’ll figure it out, Horace.  It can’t be as complicated as proving that God is a constant.
Aide:            I didn’t say ….
MB:            Now, wine anyone?
