15 April 2010

The Liars and the Fools

"[T]he point of contention was eliminating tenure for Florida public school teachers and tying their pay and job security to how well their students were learning."
Thus said The New York Times, that "newspaper of record," on Thursday, April 15, 2010, in a story about Governor Crist's decision to veto Florida Senate Bill 6.

Was that bill really about how well "students were learning"? I don't think learning was being discussed at all; testing was. Teachers' jobs, pay, etc. were not going to depend on anything the students might know or understand. These things would depend on how "well" students performed on standardized tests, which have never been proven to measure learning.

86% graduation rate, 96% college acceptance rate, but Bill Gates and Barack Obama tell us it's a bad school 

This distinction matters. In the same newspaper it was reported that the charter school created and run by Stanford University's College of Education will likely be closed because students tested poorly. The board of one of Silicon Valley's most impoverished school districts "simply looked at the scores" and decided that a K-12 program with an 86% high school graduation rate (better than any other school in the district), and which sends 96% of graduating high school seniors to college, is a total failure.

The real problem, of course, lies elsewhere: with the Obama Administration, which has decided that - even more than under George W. Bush - the standardized test is the only measure, and with the persistently destructive Gates Foundation, which despite zero expertise and zero proof of success has become the big dog wagging Arne Duncan's sad tail.
"As Ravenswood board members pointed out, another charter school in the same district, Aspire, has consistently had better results on state tests. In fact, Stanford’s first charter school in 2001 was a joint venture with Aspire.

"The two cultures clashed. Aspire focused “primarily and almost exclusively on academics,” while Stanford focused on academics and students’ emotional and social lives, said Don Shalvey, who started Aspire and is now with the Bill and Melinda Gates Foundation."
Because, as "everyone" knows, focusing on students' emotional and social lives is ridiculous, when those kids could be filling out a worksheet or copying and pasting together a report on "Africa." Why that kind of frivolous focus just leads to...

Oh yeah, an 86% graduation rate and a 96% college acceptance rate among kids with no college in their families' experiences, and English as a second language.

Stanford, how could you be so dumb? 

Testing, as The New York Times declares, is learning. Microsoft's definition of academics is all that matters. Teachers know nothing, educational researchers know nothing; all knowledge is actually in the hands of a bunch of businesspeople - from Arne Duncan to Bill Gates to Meg Whitman to Mike Bloomberg - who have declared themselves our saviors.

God help our children.

- Ira Socol


Nathan said...


I think it's interesting that this came out of Stanford's Hoover Institution the other day.


I'm wondering what your opinion on this might be. I'm guessing you won't like it, but I've been wrong before. : )


Mr. Cote said...


Usually a big fan of your work. Though I don't always agree with you, you tend to consistently offer persuasive arguments from a highly original perspective. Keep it up!

But this? C'mon. You are better than this. Surely you have a more nuanced appreciation of what's going on here than you let on.

You say: "These things would depend on how "well" students performed on standardized tests which have never been proven to measure learning."

That seems overstated, to say the least. Do you really mean to say that standardized tests tell us NOTHING? I don't deny that they set out to measure basic knowledge and low-level skills (à la Bloom's taxonomy), and even so can only tell us a small bit about what students know with respect to these aims ... but I find your categorical rejection of standardized tests to be unconvincing (and an unnecessary premise, really, to advance the sort of argument you make in this piece).

Let's put this in more concrete terms. The following question comes directly from last year's AP World History exam:

PROMPT: For the period from 1500 to 1830, compare North American racial ideologies and their effects on society with Latin American/Caribbean racial ideologies and their effects on society.

Write an essay that:
- Has a relevant thesis and supports that thesis with appropriate historical evidence.
- Addresses all parts of the question.
- Makes direct, relevant comparisons.
- Analyzes relevant reasons for similarities and differences.

The degree to which a student can think about forming such an argument - not just recalling information but applying it - is something that a) a standardized test can measure and b) I would consider a measure of student learning.

Am I off here?

A Faire Alchemist said...

@Mr. Cote

Isn't that the same AP World History exam that does this to kids:

"Section I: Multiple-Choice
The 70 multiple-choice questions cover world history from the Foundations period up to the present."

The idea that an essay that actually involves the student writing and backing up a thesis statement makes up for a third of the test consisting of a Scantron sheet is patently absurd.

The History of the World! In 70 random multiple choice questions!

What a joke.

If only it were a joke.

I had a great advisor in college who stated bluntly: "Yeah, well the reason profs give multiple choice sections is because they are lazy."

You want a relevant AP Exam? Then open it up to project-based assessment, ongoing formative assessment (assessment that demonstrates breadth/process of learning), and alternatives to written essays -- interviews, oral defenses, art exhibitions.

Right now, the AP exams are at best a smack in the face of kids who have spent a year busting their butts in class to get real learning done, only to have that learning judged by a Scantron test and some anonymous reader who doesn't understand a damn thing about the kid.


Mr. Cote said...


Patently absurd? Hogwash.

My point, which I'm not sure you really responded to directly, was that standardized assessments can tell us something valuable about what students know and can do - what they've learned.

A few quick thoughts in response:

1. Not sure exactly what you mean when you say "students bust their butts ... to get real learning done," but as I hear it you're suggesting that the acquisition of background knowledge - in this case background knowledge specific to world history - is somehow not integral to real learning. It's an argument I hear all the time, and I think it's bogus, a dangerous myth perpetuated by progressive education. For the most part, a strong base of background knowledge isn't important in itself (I don't think someone who spouts off trivial fact after trivial fact would be considered smart, well educated, etc.) but is instrumental in developing the higher-order thinking / problem-solving skills that I'd like to see my students leaving with (per the AP World History question cited above).

2. Your last point - about how the assessments are flawed because they "don't understand anything about the kids" - I don't quite get. Could you elaborate?

3. Those alternative forms of assessment you mentioned -- I think they are in many ways "better" forms of assessing student learning, but are less easily standardized. Now we could talk about whether standardized tests (as Koretz says: "People incorrectly use the term standardized test - often with opprobrium - to mean all sorts of things: MC tests, tests designed by commercial firms, and so on. In fact, it means only that the test is uniform. Specifically, it means that all the examinees face the same tasks, administered in the same manner and scored in the same way.") are of any use in and of themselves, but that would be another conversation.

A Faire Alchemist said...

@Mr. Cote

Haven't heard the word 'hogwash' in a while; thanks for a smile.

"...as I hear it you're suggesting that the acquisition of background knowledge - in this case background knowledge specific to world history - is somehow not integral to real learning."

That's not at all what I'm saying. I teach Latin, Art History, and West Civ. My kids get their content. What I'm criticizing is the form/format of the AP Exam as being exemplary of anything resembling an authentic assessment of learning.

If I gave my kids a test that was 1/3 multiple choice and covered material from an entire year's program, I'd likely be called a sadist. Not to mention the fact that the multiple choice tells me absolutely nothing about whether the kids 'get' the content of the course. The AP Art History exam is particularly dubious in this respect, in its habit of testing the ability to remember material over the ability to demonstrate understanding of the most crucial trends in the discipline (i.e., remembering the name of a particular Vermeer becomes as 'important' with regard to a final assessment as understanding the significance of the Italian Renaissance). If you don't believe me, ask to see the last half-dozen years' worth of Slide IDs from the test and tell me how many of them are actually crucial to demonstrating real understanding, rather than serving as a standardized way to keep the grade range running in a bell curve.

Secondly, I would fire myself if I were to outsource my grading to an anonymous outside source -- yet that's precisely what we do with the AP exams (no wonder so many competitive colleges and universities are not accepting scores for credit); an absolutely vital part of assessment is understanding who it is that you are assessing. The inability to recognize this is the most demonstrable evidence of the AP exam makers' lack of understanding of how students learn.

And to address your last point: I 100% do not believe that tests should be uniform. Because people are not uniform. Louis Armstrong can tell me as much about New Orleans as Tennessee Williams and vice versa -- but I'd never know that if I gave Louis an essay test or Williams a trumpet.


Dan McGuire said...

Standardized tests are a function of politics, not teaching and learning. Standardized test results are frequently not even reported to the teacher, and they are almost never reported in a manner that makes sense to the students taking the tests. They don't even make sense to most adults.

I might find out the scores that are generated as a result of the 'standardized' tests that I proctored this last week according to the rules sent out by the Minnesota Department of Education, but not until long after I've said good-bye to most of the students. The scores will most likely be released by the governor next August at the State Fair. I might get a set of numbers sometime later in the fall. That set of numbers is not going to reflect the wedding, also this week, of one of my student's parents - do you really think that 8-year-old had her mind on the questions on the test? That set of numbers isn't going to reflect the various sites and agencies that the students who've immigrated from East Africa have experienced in the last six months. A student who's made two years' gain in scores since September is still going to be counted as a failure because she's still a year behind her 'grade level.' A school full of students like that will be restarted or restructured or closed, and all of the teachers who helped those students make two years' gains in six months will be fired. There's nothing standard about standardized tests except that they will be used for political purposes, not teaching and learning.

narrator said...

Well, this is fascinating:

First, Nathan, thanks for the link. I've not read it, but will. I've had my clashes with Willingham, largely because I believe that his research tends to be little more than "meta-averaging" of already "averaged human experiences," and of course the "Hoover" tag sets off alarms - but I'll try to stay open-minded.

Mr. Cote:

Shelley and Dan have already brought better arguments than I can, but let's begin with your proposed question:

PROMPT: For the period from 1500 to 1830, compare North American racial ideologies and their effects on society with Latin American/Caribbean racial ideologies and their effects on society.

Write an essay that:
-Has a relevant thesis and supports that thesis with appropriate historical evidence.
-Addresses all parts of the question.
-Makes direct, relevant comparisons.
-Analyzes relevant reasons for similarities and differences.

Now, you think this is a reasonable measure of learning while I think it will measure the following:
1. Parental Income
2. How "white" the student is
3. How conventionally the student learns and communicates
4. How compliant the student is - how good they are at pleasing adults with authority over them
5. How much they communicate like the teacher
6. The student's stress handling abilities

So, if your goals are social reproduction and social conversion, yes, this type of high stakes testing will measure your missionary effectiveness. But it tells me nothing about student learning and understanding.

Yes, you disagree:

But: This prompt requires a specific form of reading skill largely disconnected from any student's actual current or future life. It is written in stilted academic prose designed to intimidate and confuse. Honestly I had to read it three times.

Your first bullet point, besides the continuing language problem, is "appropriate historical evidence" - ummm, "appropriate" for whom? And, I guarantee, if you bring up "less favored" historical evidence, your "appropriate and proof standards" get raised really high.

Now you've already turned this into a test of my points 1-2-5-6. You've favored rich white kids, you've insisted on students sounding like you, and you've raised stress levels sky high.

Your other points bring in 3, 4. You are defining the form and structure of the argument in order to test not knowledge, but compliance. This is a test of following rules.

It also insists on a single form of expression, hardly humanity's most common: the written essay. Nobody outside of school reads essays, for good reason - they are a terrible communication form, so bound up in structure that the soul of the argument almost always vanishes.

Why can't students make a video demonstrating their knowledge of racial ideologies in America? And if they do so, why should they be required to build their comparison the way you want them to? Wouldn't you get better stuff if you let kids find their own way?

So yes, you're right, standardized testing does indeed measure something, but there is still no evidence that it tells us anything about learning. In fact the only thing it seems to sometimes predict (though clearly not in the Stanford case) is how students will perform in the same type of schools under the same conditions.

And you know... I don't care how students "do" in school. That's irrelevant. I care about what they learn and how they learn how to learn. That matters.

- Ira Socol

Knaus said...

This might be one of the best post-and-comment discussions that I have ever read. Well done to everyone involved. Here's my two cents:

In my classroom, I know just about everything about every one of my students. I know why one student is pacing around my room (a recent car accident that has left him with emotional and physical injuries). I know why another is sitting by the window. I assess them as soon as they walk in the room.

I know their reading levels, math levels, writing levels, speaking ability and so on. I adjust their learning constantly in the course of one hour.

I assess them multiple times in multiple ways. Do they get the learning? YEP. But each in their own way, at their own level, at their own rate. That CANNOT ever be judged by one test on one day in April.

Here's the other part of the problem, and my school is currently awaiting our fate, so I may be too passionate about the subject.

One standardized test is generating the restructuring for our next year. One test? The standardized test that doesn't accurately assess my students?

Here's what needs to happen: look at all the evaluative pieces that have been done on a school. We've had three outside agencies (including the State of MN) come in to evaluate us. All three wrote reports that have been nothing short of glowing. We have had specific programs evaluated (AVID, AVMR, Read 180, Jr. Great Books) with glowing results.

Also, if you look at relevant data, the MAP test, which is done quarterly and gives instant feedback, shows that we are making huge gains for our students. Beating national norms. Is it grade level? No. But we are making 2 and 3 years' growth in one year. That is progress. While this test still isn't accurately measuring the students, it is more relevant and accurate, and it helps us adjust our instruction on the fly.

The last problem that I want to mention is relevant data. The standardized test is not relevant to our current students. With a population of transient students, administrative transfers, and other issues, the students we currently have are not reflected in three-year data points on a standardized test. It might represent half of our current students.

In this regard, we shouldn't be held responsible for student learning that we have not been a part of and students that have only been with us for a few months.

Thank you. I'm hoping this discussion continues. Well done.

Mr. Cote said...

Oh boy. Way behind on my work but can't resist the temptation to respond, so here goes:

Allow me to briefly clarify my position: I think that standardized tests can tell us something meaningful about student learning.

@ alchemist - Your critique seems thoughtful but a bit off point. Your beef with the AP exams seems to be with the rigor of the assessment (an inordinate emphasis on lower-level skills, knowledge, recall, etc.) and not so much the fact that it's standardized per se. If we got rid of the multiple-choice bit (an expensive move, but one that I'd support) and, instead, demanded more rigorous and meaningful open-ended response questions, would that get your stamp of approval?

The last bit of your post -- about how assessments need to be graded by someone with intimate knowledge of the student -- I'm not sure I buy in its entirety. It's a nice idea and all, but what does your knowledge of a particular student and his or her makeup (for lack of a better word) have to do with determining what he or she is able to demonstrate mastery of (in this admittedly constrained / limiting format of conveying that knowledge through essay form)? I worry that this may lead to a decrease in what we expect our students to be able to do << cue the backlash >> ...

@ Dan - "Standardized tests are a function of politics, not teaching and learning." The taxpayers who fund our public education system want (and I'd argue, deserve) to get some sense of how effectively their money has been spent. So yes, there is certainly a political element involved, but I don't think that's necessarily unwarranted, nor do I think it discounts the degree to which standardized tests can measure some sliver of student learning.

I wholeheartedly agree that they are too often used as an expensive (and exclusive) yardstick and rarely to inform future instruction, which I think is more a failure of implementation than anything else.

@ Ira - You have a way of putting things that few others do. Let me address the meat and potatoes of your argument -

"Now, you think this is a reasonable measure of learning while I think it will measure the following:
1. Parental Income
2. How "white" the student is
3. How conventionally the student learns and communicates
4. How compliant the student is - how good they are at pleasing adults with authority over them
5. How much they communicate like the teacher
6. The student's stress handling abilities"

1-3: Being able to organize and persuasively advance an original argument in written form ought not be thought of as a "rich white kid" thing.

4-5: There is a "general format" in which essays are written. A quick look at the scoring rubric for that particular AP World History question may prove instructive:

EXPANDED CORE (excellence)

• Has a clear, analytical, and comprehensive thesis.
• Addresses all parts of the question thoroughly (as relevant): comparisons, chronology, causation, connections, themes, interactions, content.
• Provides ample historical evidence to substantiate thesis.
• Relates comparisons to larger global context.
• Makes several direct comparisons consistently between or among societies.
• Consistently analyzes the causes and effects of relevant similarities and differences.
• Applies relevant knowledge of other regions or world historical processes.
• Discusses change over time (e.g., the hardening of racial ideologies).
• Recognizes nuances within regions.

A student who writes an essay that can be accurately characterized by these descriptors above has produced, so far as I can tell, a legitimate (dare I say ... a-au-authentic?) artifact of his or her learning. But you see it as a measure of subservience? Compliance? Desire to please the teacher?

Ira, do you really see no value in this?

6: You have a point.

@ Knaus - Too tired to respond but enjoyed reading what you wrote.

Dan McGuire said...

Mr. Cote,
Let's start with our points of agreement:

1. Standardized tests are political.

2. They rarely inform instruction.

3. They measure a sliver of student learning.

4. They are expensive and exclusive.

5. They are poorly implemented.

Since I'm tired, too, let's just leave it at that. I like that we agree on so many points.

narrator said...

Mr. Cote:

Do I see [potential] value in this? Indeed I do. I understand what you are getting at. Yet I also know - and this goes to Shelley's point about what I might call "student-informed grading" - how these kinds of things are actually evaluated when the grading is "standardized."

For example, I think the addition of the writing part of the SAT was potentially a great idea, yet research indicates that it is scored primarily via word-count and syllable count. For years the NYS Regents Exams have proven that the more original the writing, the worse the score. Even in "blind-graded" writing assessments in my PhD program I've found that the scores are far more about the evaluator than the work.

The issue is the contextlessness of the standardized assessment. It is a huge issue in IQ tests, and all the way down. Every form of every question, every testing environmental factor, every structure of response impacts the result more than the basics of the person being assessed. And that's my problem with it all.

- Ira Socol