Chapter Thirty-Four: Teaching to the Data
Good teachers love
to get kids to think for themselves, to think analytically and to support their
thinking with evidence from the text.
Teachers often hate to be told to “teach to the test” because it makes teaching
formulaic by removing precisely the thing that makes sitting in a classroom
worthwhile – the freedom to allow your mind to roam the universe in search of
new ideas or ways of looking at things.
Worse than teaching to the test, however, in the current data-driven
rage in education, is teaching to the data. I’m not kidding.
I know this is going to sound like some absurd satire lifted from the
pages of The Onion, but this is a true
story. I’ve been told not to teach
students but to teach for the purpose of gathering data.
I’m an English
teacher who has been instructed by my “superiors” to teach strictly from the
Prentice Hall / Pearson anthologies, both physical and online (where Pearson
must be making a killing). The
teacher editions of these books provide guidelines for instruction, but you can
never take the teacher out of teaching.
The goal in education today is to make all classroom instruction generic
enough so that an automaton can perform the job as well as a human being, but
there is no teaching without a teacher.
If we could just put enough monkeys with teacher editions into the
classroom!
For the most part
I have used these anthologies as directed, but one thing I still do, even though
Pearson doesn’t order us to do it, is ask kids to support their answers to
multiple-choice (MC) questions with evidence from the text. I don’t want them guessing. I want them to understand why “a” is
correct and why “b”, “c” and “d” are incorrect. I want them to be able to prove that they’re right when they
circle answer “a”.
Pearson, you see,
offers up multiple-choice “selection” tests for all of the selections in their
anthologies. Since I’ve been
ordered by my administration to use these books and materials, that’s what I
do. Now I’ve been told to use these
tests to gather data rather than as a teaching tool.
The teacher
evaluation system is supposed to work like this. A teacher meets with a supervisor to discuss a way of
presenting a lesson. The
supervisor then observes the teacher present this lesson, after which a
follow-up meeting is held to review the success or failure of the
presentation. These three
activities should take place within a two-week time frame.
My most recent
pre-observation meeting took place on Sept. 13, 2012. I was instructed to bring to that meeting primarily
attendance records and phone call logs.
Nothing was said about any lesson to be observed; however, a “pop-in”
surprise observation took place over a month later, on Oct. 16, 2012. As luck would have it, I was modeling
(using the newly in vogue “gradual release” model) how to find evidence in the
text for answers to Pearson MC questions.
The text was “The Washwoman” by I.B. Singer. The test was Pearson Selection Test A.
My A.P. (assistant
principal) of instruction – a former math teacher who has not taught in the
classroom for over a decade – observed from the back while furiously typing on
her computer as students sifted through the story looking for evidence to prove
their answers correct. In fact,
we’d been told to do this at a professional development (PD) session earlier in
the year. The term in vogue for it
at the moment is “text dependent writing”. Of course, it’s nothing new and nothing that teachers
haven’t been doing since the dawn of education.
On Friday, Dec. 7,
2012 – almost 2 months after this observation – I was called into this A.P.’s
office for the “post-observation” meeting. I assumed that I would be applauded for preparing students
for the sort of work that college is going to demand of them, where merely
circling a letter isn’t going to cut it.
In fact, we are expected to be thinking of our “college and career
readiness” data, part of the NYC school report card. Applause, however, was not on my A.P.’s mind that day.
The interview
began something like this: [1]
A.P.: Read
to me your aim, please.
Me: “How
do I use text to support MC answers?”
A.P.: Were
you giving a survey of some kind?
Me: No,
I was giving the test you have there in front of you.
A.P.: But
you said to support “MC answers”.
Me: Right,
and …?
A.P.: That
sounds as though you’re asking for answers to a survey where they can give
multiple answers.
Me: What?
A.P.: On
surveys, they give multiple answers.
You used the word “answer” in your aim.
Me: Well,
since it says “test” on the paper and since they knew they were getting a test,
the students understood that it wasn’t a survey.
A.P.: You
should have said to support “MC questions”.
Me: Huh?
A.P.: They
were answering questions, weren’t they?
Me: You
were there.
A.P.: Then
you were actually asking them to support MC questions.
Me: No,
they were supporting their answers with evidence ….
This conversation
took up the first 10 minutes of the meeting. One monkey with one typewriter could have made more
sense. I could tell from the tone
that this meeting wasn’t going to go well for me – not to mention the absurdity
of the criticism.
Next we looked at
the 3 samples of the test done that day.
As directed, I’d photocopied the test from 3 students’ folders. For each MC question, the kids had
written a quotation from the story itself or an explanation from the textbook
to show why the answer (not the question) was correct. They had written quotations and
explanations to support their answers right on the test paper itself.
The conversation
picked up with these papers in front of us:
A.P.: What
sort of assessment was this?
Me: I
was modeling for them how I want them to take tests.
A.P.: But
what sort of assessment was it?
Me: It
was both a pre-assessment and a post-assessment since I’d given it out when we
started reading the story. I
wanted them to know what they would be looking for by seeing the questions
beforehand. This is called
“reading with purpose”.
A.P.: They
wrote the date Oct. 11 on it.
Me: Right.
A.P.: And
on the 16th ….
Me: They
had read the story by then and were going back to find evidence for their
answers. That’s what you observed.
A.P.: But
it doesn’t say in the teacher edition to ask the students to support their
answers with text.
Me: Really?
A.P.: So
why were you doing that?
Me: Because
I want them to be able to justify their responses.
It continued in
this vein for a bit and then the A.P. turned to a document concerning the class
observed. She had emailed me an
11-page document, much of which consisted of her transcription of the observed
class session. I’ve attached this
document at the end of this chapter and called it an appendix, although it’s
actually more like appendicitis. [2] Of course, after 2 months it was
impossible for me to know how accurate this transcription was, but I didn’t
quibble over that point. It’s
worth as much as any other hearsay.
As you can see
from glancing at the appendix, a great deal of transcribing took place in one
way or another. I might ask this
question: how much could have been “observed” by someone doing so much
typing? But I’ll leave that for
another chapter.
The current term
for this sort of “observation”, wherein the observer attempts to record
everything said and done within sight and hearing, is “low-inference”. A low-inference observation is said to
do nothing but record factual information, à la Joe Friday. A
low-inference observation is said to be objective and non-judgmental. But as we know, the only one who can
truly observe objectively and non-judgmentally is the monkey who composed Hamlet.
The
meeting continued with reference to this document.
A.P.: Would
you look at line 70 in the document I emailed you?
Me: Okay.
I scrolled down to
line 70 of this document.
For some reason
the A.P. was focusing on lines 70–85 or so, where a student suggests
text-to-text as a possible solution to something. The term “text to text” refers to relating one story to
another piece of literature. This
was a very good point, of course, since the week before they had read “A
Giant’s House”. Both stories in
the first unit of the Pearson anthology are “narrative essays” and are meant to
be compared. The student was
showing signs of actual learning.
What point the A.P. was making, however, by going over and over that
part of the transcript eluded me.
The ludicrous conclusion she somehow drew from it a moment later did
not.
A.P.: This
was an “unsatisfactory” lesson, Mr. Haverstock.
Me: Let
me get this right. I’m not allowed
to ask students to use evidence from the text to answer MC questions?
A.P.: That’s
right. They should circle an
answer and nothing more.
Me: So
I’m not allowed to teach “text dependent” writing?
A.P.: No,
you are to use this chart ….
Here she pulled
out a Pearson chart showing the type of each MC test question: recall,
inferential, analytical, “reading” – yes, one of them was simply called a
“reading” question – and so on.
A.P.: The
students circle answers and you look at this chart and find out what type of
question they have trouble with.
Me: No
text support.
A.P.: No.
Me: Even
though asking for textual support doesn’t interfere with the data you want?
A.P.: That’s
right.
I’ve now been
directed to “teach to the data”.
The purpose of the Pearson test is not to teach students how to work,
study, think and use text to support their conclusions. The purpose of the Pearson test is to
gather data.
I won’t name names
other than to say that I teach at the Jonathan Levin High School for Media and
Communications – easily discovered anyway, since I’m using my real name for this
memoir / blog. The school is
located in the Taft building on 172nd St. in the Bronx, where Stanley
Kubrick once cut class, preferring movies to lectures and homework. Taft no longer exists. There are now 7 small schools in the
building, mostly academies of this or that. JLHS has not fared so well over the past 3 years, as far as
the DOE is concerned. You can see
for yourself by typing in the name of the school at the NYC DOE website: http://schools.nyc.gov/Accountability/tools/report/FindAProgressReport/default.htm
Since these grades
are based on “data”, of course, they are meaningless. Since the AYP (“adequate yearly progress”) numbers are set
by the very people who want to close schools, the system is entirely corrupt
from top to bottom. Teachers in
the system know that none of the critical numbers used by these corrupt
bureaucrats, particularly graduation rates and Regents scores, say anything at
all about how well the school is performing. It’s easy to convince outsiders, however, that numbers don’t
lie. Maybe numbers don’t, but the
people manipulating them sure do.
Appendix [3]
For the complete transcript of this observation document, go to:
Complete Transcript of Observation Report
[1] All dialogue is paraphrased. Theoretically, if I had enough monkeys,
I could recreate the exact words used.
[2] I have no idea whose intellectual property this
document would be. She typed it up,
but most of it consists of what I and my students might have said in
class. I don’t know to whom it
belongs, as I said, but calling it “intellectual” property ought to be against
the law. I’ve edited out student
names in those cases that refer to real people.
[3] I’ve replaced names with the phrase “student name”
for those who were real people.
[4] There was no “Daisy” in the room.
[5] There was no “Diane” in the room.
[6] There was no “John” in the room.