Chapter 35: Unsatisfactory Response to “U” Observation
Jonathan Levin H.S. for Media and Communications (09x414)
Pre-observation date: Sept. 13, 2012
Observation date: Oct. 16, 2012
Post-observation date: Dec. 7, 2012
Principal: Nasib Hoxha
Assistant Principal: Erica Clarke
Teacher: David Haverstock
A negative times a negative equals a positive.
I was observed by A.P. Clarke while teaching a 9th grade English class in room B50A on Oct. 16, 2012. I remember that she entered the room with her computer, sat in the back and spent most of the period typing furiously. What I don’t remember is saying what she typed almost immediately in line 3 of her “transcript” (appended below with the names of “real” students removed):
T All right, Daisy, look up.
Since there was no one named Daisy in this class – or in the school, for that matter – I surely would remember if I’d called on this imaginary student. Perhaps Ms. Clarke simply typed the wrong name or misheard it. We all make mistakes. However, in line 9 she claims that I said:
T Excuse me, Diane [FS4] please take your hat off? {sic}
Again, there was no “Diane” in that class and I know of none in the school – although, according to line 10 of the transcript, Diane took off her hat. Again, let’s give Ms. Clarke the benefit of the doubt. Perhaps she just misunderstood the name. But then how do you explain line 54 of this transcript?
T Everybody page 29. John page 29.
No Daisy, no Diane, no John – three’s not always a charm. [1] As I said, I remember that Ms. Clarke was in B50A that day. I remember that she was typing away on her computer. What she was typing, however, I never saw. Maybe she was writing to her pen pal. Maybe she was composing a sonnet. Maybe she was tweeting something about Diane or Daisy. Maybe she was trying to compose the complete works of Shakespeare on only one typewriter. I don’t know what she was typing, but maybe Ms. Clarke forgot what room she was in and was thinking of some other class, one where there might have been a John or a Diane. Or maybe she attached the wrong transcript to my observation report.
[Click here if you can't believe this and figure that I must have ripped it from the pages of "The Onion": Observation Report - it's on page 2.]
A negative times a negative equals a positive.
If this were a court of law, I’d need go no further. The witness would be excused, her hearsay testimony ruled inadmissible. The jury would be instructed to ignore Ms. Clarke’s comments or, more likely, the judge would simply dismiss the case. The defendant (me) would have his cuffs removed and walk out of the building a free man. Unfortunately, the DOE doesn’t adhere to rules of order or even common sense. Therefore Ms. Clarke was able to conclude, based on this faulty, specious document:

“The teaching I observed on Oct. 16, 2012 was unsatisfactory.” (p. 6 below)
Ms. Clarke, the observation I observed on Oct. 16, 2012 was UNSATISFACTORY.
If identifying no fewer than 3 imaginary students isn’t enough to disqualify this “observation” as inaccurate, inadmissible and unfit to be used as any sort of evaluation tool, there are many other grounds for the impeachment of this “observation”. I’ll begin with the time sequence. The “pre-observation” took place more than a month before the “observation”. The post-observation meeting took place almost 2 months after the “observation”. Allowing such a length of time to elapse between the initial meeting and the observation defeats the purpose of the pre-observation meeting, which is meant to be a coordinated attempt to refine a teaching method or strategy. Allowing such a length of time to elapse between the observation and the follow-up meeting renders serious discussion of the lesson impossible, particularly given the faulty nature of the alleged transcript.
This “pre-observation” was no such thing. As admitted on page 1 of Ms. Clarke’s report below, that meeting concerned administrative and logistical details such as setting up a grade book, professional goals, accessing Prentice Hall online, setting forth classroom rules and so forth. Nothing was said about the lesson to be observed.
Clearly this process had nothing at all to do with teaching a lesson, which is the purpose of the entire observation process. Teaching a lesson was never discussed and has never been discussed at any of these bogus “pre-observation” meetings in the 8 ½ years that I’ve been a teacher at Jonathan Levin H.S.
What, then, was the purpose of this “pre-observation” meeting? I can only speculate. Maybe it was just to remind us that she, or she together with the principal (as has mostly been done in the past), would be popping in one of these days. In fact, Ms. Clarke popped into room 115 where I was teaching on Sept. 24, 2012 and stayed from 10:50 a.m. until 11:15 a.m. This was not an “observation”, I guess. Later the same day she popped into room 117 where I was teaching with Desiree Anderson and stayed from 2:20 p.m. until 2:27 p.m. As she left, she asked for my lesson plan for the earlier class in 115, which I emailed to her the next morning. On Oct. 10, 2012 Ms. Clarke entered room B50B where I was teaching and stayed from 2:40 p.m. until 2:50 p.m. Again she asked for my lesson plan, which I emailed the next day, but neither was this an “official” observation. I wonder if she saw Diane or Daisy in any of these rooms.
A negative times a negative equals a positive.
Pages 8 and 9 of the “observation” report concern the newly in vogue “gradual release” model, formerly called “scaffolding” and other pedantic terms. In the 2nd recommendation on page 6 of the report she mentions this “gradual release” model and says, “I recommend you go to this link, again review this method of instruction and use it to plan, write and implement your lessons.” (Last sentence)
As I tried to point out to Ms. Clarke during that Dec. 7 meeting, if she had looked up from her typewriter, or had discussed with me my method of using Prentice Hall Multiple Choice (MC) tests as a teaching tool to teach “text dependent writing” (another newly in vogue term), she might have realized that she was watching (if not observing) the gradual release model in action. The 9th grade students I’ve taught at JLHS have not been in the habit of proving the answers they choose on MC tests. I teach them to use the text for quotations and the textbook for explanations so that they can explain why a correct answer is correct and an incorrect answer is wrong. I insist that they write these explanations and quotations directly on the test paper – see examples here: Student work.
[The student work at this link consists of the tests that the students were working on when A.P. Clarke observed their class in B50A. These were done not by imaginary students Daisy, Diane and John but by real students Stephanie, Laura and Christopher.]
At that Dec. 7th meeting Ms. Clarke said that students were to circle answers on tests and NOTHING MORE. I am not to ask them to explain or prove their answers. As I wrote previously, this amounts to using the test to gather data rather than to teach students how to study and write using references and sources. For a more detailed discussion of this aspect of this unsatisfactory observation see chapter 34 of my memoir at “Teaching to the Data”.
Though she didn’t know it, Ms. Clarke was observing the very gradual release model she is recommending that I use. Maybe Diane or John could have explained that to her.
This was the gradual release model in action, but in long-term action. My goal is for students to take these tests individually by the end of the semester. But since they are used to just circling answers on MC tests, I first have to “show” them how I want these tests done. Therefore I begin by “modeling” for the entire class. Once they see what I am asking them to do, I put them into groups and they do the next 2 or 3 tests in that way. I was at that point of the “gradual release” by the time this Dec. 7th meeting rolled around. Finally, when I have observed them working well in groups, I ask them to take the tests individually. By that time I want them all to be able to go into the text and use detail to support their answers for MC or any other kind of question.
Of course, one would have to observe the class taking one of these tests for the first time and then again for the 4th time and then again for about the 8th time in order to see this particular form of “gradual” release over the course of the semester. More importantly, one would have to value the concept of using tests as teaching tools rather than as data-gathering devices.
Therefore, as for the 2nd paragraph of the recommendation in this report (p. 6), I submit not only that I frequently utilize the “gradual release” model in planning both long-term and short-term activities, but also that Ms. Clarke actually observed it in that classroom on Oct. 16 and did not recognize it. Had there been a proper pre-observation meeting, I could have outlined my use of the gradual release model for the purpose of teaching text-dependent writing and which stage of that process she would be observing.
A negative times a negative equals a positive.
The last 2 lines of Ms. Clarke’s ridiculous transcript have me attending to attendance. In line 198 I (or perhaps MS2) allegedly asked a student to add “attendance data” to the chart that is kept on the wall of every classroom. JLHS teachers have been mandated to spend a few minutes of every class putting up the number of students present, the percent present and a goal for the next day’s attendance, which surprisingly isn’t necessarily 100% because 100% would not be a “SMART” goal. (Since the SMART “r” stands for realistic, it is acceptable to set a goal below the ideal.) We are supposed to talk about this each day with the students who are present, i.e., preach to the choir about attendance. If five minutes of each of 8 classes is devoted to this repetitious inanity, 40 minutes of the instructional day is wasted. That’s nearly a full class period.
The last 2 lines of Ms. Clarke’s “transcript”, in which she “observed” attendance-related behaviors, illustrate the fact that there is no such thing as objective, non-inferential observation. All observation is subjective. We’ve been told that these “objective”, “low-inference” transcripts are not to be used for the purposes of evaluation. We’ve had professional development classes in which we are told to do this very thing – write down what we see and hear – but then not to make any evaluative inferences from it. As I said in chapter 34, however (which see), “… the only one who can truly observe objectively and non-judgmentally is the monkey who composed Hamlet.”
However, since Ms. Clarke has gone against protocol and used this as part of her unsatisfactory “observation” of my “unsatisfactory” lesson, I would like to draw my own inference from her last 2 lines, which read (p. 5):

MS2 198 Walked over to a student and asked him to add the attendance data to the board
199 [moved the screen and wrote the attendance data then covered it back up with the projector screen.] {sic}
Note: Ms. Clarke has “MS2” walking to a student and asking him to add attendance data to the board. Again, after so many months I don’t remember how this happened, but it is highly unlikely that a student performed this act.
This “objective description” greatly distorts what actually happened. Although it is not clear who actually performs this act, the reader might logically draw the inference that the student “MS2” in the previous line is the perp. The distortion creeps in through the words “covered it back up”. I don’t remember this specifically months after the alleged event, but if it happened, the student did no such thing. In room B50A where this “observation” took place, the screen, when pulled down, conceals the attendance chart. I and many other teachers routinely project our lessons onto this screen. That is what it is there for. The screen was down for the entire class. If the student did write something on the attendance chart, he would have had to pull back the screen to do so. Allowing the screen to fall back into place in no way covers up anything at all. The “observer’s” bias is clearly evident in her choice of words. She wanted to fabricate evidence that I was not following school policy on discussion of attendance during class, suggesting that the data was “covered … up”. While it is true that I consider this attention to attendance a waste of precious classroom time, I certainly didn’t and don’t attempt to cover it up. In fact, I publish this in order to expose this ludicrous policy.
A negative times a negative equals a positive.
I could point out many such distortions in this alleged transcription of my classroom. But why waste time on such an inaccurate, incoherent document that is nothing more than hearsay anyway? I would like to make one or two final points, however, about the first paragraph of the “recommendations” (p. 6). I refer to the 2nd and 3rd lines of this paragraph:

“Your aim, ‘How do I use text to support MC answers?’ suggested you would address questions with multiple responses, however {sic} none of the questions you addressed had more than one correct answer.”
Ms. Clarke spent the first 10 minutes or so of that Dec. 7th meeting trying to explain to me what she meant by this statement. I never got it. While holding the Prentice Hall test in her hand, she asked me if I were giving a test or a survey. Surveys, she explained, can have multiple answers. It was then that I realized that I ought to have been using the “gradual release” model during this meeting. Evidently, after spending 45 minutes observing students working on a standard multiple choice test, a test that had the heading “Selection Test” stamped clearly at the top, Ms. Clarke was unable to determine if the students were working on a test or a survey. No student, however, asked if they were working on a survey. They all knew that the test was a test.
Finally, as if to confirm my charge that I have been directed to use tests not for teaching but for data gathering, Ms. Clarke includes in this same paragraph the Pearson chart that purports to describe what each question is supposed to reveal about the test taker:

Questions 1, 3, 4 – Literary Analysis
Questions 2, 9, 11 – Interpretation
Questions 5, 7, 8 – Comprehension
Questions 6, 10 – “Reading” [2]
Questions 12, 13 – Vocabulary
Questions 14, 15 – Grammar
I was directed never to ask students to use the text to support their answers to MC questions. I was directed to use this chart to gather data on which questions were answered correctly or incorrectly by each student. How well a student reads, therefore, is not as important as the statistics derived from tests.
A negative times a negative equals a positive.
In the current Bloomberg-esque, data-driven corporate mentality, in which education is an industry and students merely “product”, there is a move to evaluate teachers based on “data”, a euphemism for meaningless statistics. Michael Mulgrew and the UFT in New York City are rightly contesting this mindless pretension that teaching is akin to manufacturing. Education is social interaction at a very intimate level. It cannot be depicted by graduation rates, test scores or any other data.
Therefore, in spite of the fact that I have been rated “unsatisfactory” by an administrator, I nevertheless believe that the only way to evaluate teachers is for administrators to make admittedly very subjective evaluations based primarily on the social interaction in the classroom, the relevance of the lesson presented and the attempt to reach the students at a level on which they can receive it, rather than on statistics. An honest, competent administrator should be in a position to know his/her students, know what they need and evaluate the performance of a teacher based on the needs of that population. The needs of any group of students vary drastically from neighborhood to neighborhood, school to school, even classroom to classroom. Just as you cannot take the human aspect out of teaching, neither can you take it out of administrating. Administrators who merely parrot the latest fashions in education will overlook or even ignore the reality before them. This is what teachers are now up against.
A negative times a negative equals a positive.
An unsatisfactory observation that rates a lesson “unsatisfactory” = an excellent lesson.
I’m tempted to sign my name “Franz Kafka” but must resist that urge.
NOTE: This blog contains an excerpt of the entire book.
[1] I have attached the class roster to the copy of this response going to my official DOE file. However, I withhold the names of what might have been “real” students for the purposes of this blog – though I feel free to name the imaginary ones.
[2] Stop laughing and look at the document below. It really says “reading”! Obviously every question is a reading question.
Comment: Did you actually send this to your administrator? If you did, you're my hero! It's total insanity.

Reply: I handed a physical copy of it to her personally today with the link to the blog right there at the top. I also gave a copy to the principal to put into my file. When I told her that everyone I've shown this to just laughs and asks if an assistant principal really wrote it, she responded with a straight face, "Thank you for sharing that."

Comment: That is hilarious! I love it. Thanks for relieving some administrator-induced stress. You rock!