If you do a "close" reading of the P&W methodology section, you will see what shifting sand the ranking is based on (background on rankings is here). Most of what is in the methodology section would be more accurately placed in a separate section called "findings." This distinction is important because, by keeping the findings separate from the discussion of methodology, the reader knows that the findings are just one person's interpretation of the data.
Most of the statistics derive from a single number, 640, the number of would-be applicants who visited a blog and responded to a poll by completing forms. Most of what the author calls "demographics" are poll responses. The author does not provide actual demographics (such as the age, gender, location, or income of the respondents).
The methodology says the responses are "votes" and that those programs receiving the most "votes" are the ones the respondents hold in highest esteem. Yet if you read elsewhere you learn that the "votes" are the programs to which respondents are applying; there may be no connection between the programs one holds in high esteem and those to which one is applying (e.g., I may think Houston is the best, but because I live and work and have family in NYC, it isn't practical for me, so it's not on my list of programs I'll be applying to. Or I may believe that Cornell has the best program, yet I know the likelihood of getting in is slim, so I'm not applying. These scenarios aren't captured in the ranking). Respondents were not asked to reveal which programs they hold in highest esteem; they were asked to disclose to which programs they would apply.
Every time the author says, "it is reasonable to assume," know that it is also reasonable to assume something other than what follows. Such claims are not backed up with any evidence, anecdotal or otherwise.
Most of the programs ranked received fewer than half as many "votes" as the top-ranked program. There are many, many ways to account for this gap, yet the only explanation given is that it reflects the "esteem" in which potential applicants hold the programs, when it could reflect many other possibilities.
A good deal of the "methodology" section is devoted to the history of MFA programs and to discrediting other polls, both of which are irrelevant in a discussion of methodology.
I could go on. The infelicities of the writing style are too numerous to mention. Strip away all of this filler and you see how little substance there is to the ranking.
And there is more: The author of the ranking methodology makes many unsupported claims. None of these claims have anything to do with the method used to gather data for the so-called ranking. Here are but three examples from among many (my comments in bold):
"When programs are assessed by individuals already within the system, the natural result is that older programs—whatever their selectivity, financial resources, faculty resources, curriculum, pedagogy, or student outcomes—move to the top of the pack due to their profile advantage. Yet applicants report only limited interest in programs’ historical pedigrees, as pedigree itself is often considered a suspect quantity in the national literary arts community." (What is the basis for a sweeping statement about the national literary arts community's views of "pedigree"?)
"Whereas scientific rankings (which require demographic data that is, in this case, unavailable both to independent researchers and national trade organizations) traditionally poll, at the state level, well less than a hundredth of one percent of their target population, and national polls typically sample well less than a thousandth of one percent, the sample size for the 2012 Poets & Writers Magazine rankings, in a nod to the necessarily unscientific nature of the polling, is between 2,000 and 20,000 times larger as a percentage of population." (Scientific polls use precise measures to identify and calculate sample size, which is why they don't have to rely on large numbers to get valid, replicable results. The sample here is unreliable because the pollster has no control over who has responded to the poll and because the pollster has known biases.)
"In most instances, student scores are only lightly scrutinized (or simply ignored altogether) by the programs themselves, and instead reviewed—where they are reviewed—by individual universities’ Graduate Colleges, which often have minimum GRE-score requirements (typically very generous ones). Creative writing MFA applicants should not avoid the GRE General Test for fear of the Mathematics portion of the exam; even those programs that do give minor weight to standardized test scores in their admissions processes generally look only at applicants’ Verbal and Analytical Writing scores." (How does P&W know that applicants fear the mathematics portion of the GRE, or how admissions offices actually weigh those scores?)
These inaccuracies, sweeping statements, and blatant falsehoods are buried in the excess verbiage that is characteristic of the author's writing. Strip the garbage away and you're left with . . . garbage.
-- sdh
What a bunch of crap.
Posted by: Caroline | September 12, 2011 at 06:55 PM
These days, much of what passes for analysis is just sloppy emotion: anger, envy, and so on. Logic skills should be taught in middle and high school. And people should take more pride in distinguishing a valid argument from a specious one.
I don't pay attention to 1) Poets and Writers Magazine (though they continue to pelt me with subscription offers); 2) MFA rankings. If I were to enroll in an MFA program, I would choose the program based only on the writers who would lead the workshops. If I admire the writer, then I'll have an interest in her or his workshop.
Posted by: Eric Bourland | September 12, 2011 at 07:02 PM
Stacey,
A fantastic post! I couldn't agree with your analysis more, particularly the part about how Seth manipulates language to imply things that aren't really true.
Your point about his use of the word "vote" is dead on. These are "not" votes at all. In fact, if you polled this same group of applicants about the programs they actually held in the highest "esteem," you'd undoubtedly get very different results. I know this because I advise undergraduates on MFA programs yearly, and I know their reasons for applying where they apply. Half the time their reasons are geographic. They routinely say things to me like, "I'd love to apply to Oregon, but I can't because my girlfriend needs to stay in the southeast for her job." Other times, their list is less a reflection of where they'd most like to go, or which programs they think are "best," than where they think they'd have the best chance to get in. For example, they'll often say things like: "School X is my dream school, but since I don't think I have any chance of getting in there, I'm going to apply instead to school Y." Thus school X doesn't receive a "vote," even though the applicant in fact holds school X in higher "esteem."
The same point could be made about Seth's use of the word "rankings." These poll results are not rankings at all. They're simply the application lists of 640 people who applied to certain schools for 640 different reasons.
I could go on, but you see my point.
Of course, that Seth is doing this type of thing is not surprising to me. He's been manipulating statistics and language ever since he joined the blogosphere. What is surprising is that no one at Poets & Writers has caught on to it, or, if they have, that no one there cares. I have to imagine at this point that someone there must be aware of the damage these rankings have done to the magazine's reputation and journalistic credibility, but maybe no one there really cares.
Posted by: Emily | September 12, 2011 at 08:57 PM
Wow! I just saw that the Chronicle of Higher Education picked up the open letter to Poets & Writers on their website! This is looking worse and worse for them. I think someone higher up at the magazine (not Seth) needs to step forward and defend these rankings publicly, or else apologize and admit their mistake. The longer they stay silent, the worse it looks.
Posted by: Brad | September 12, 2011 at 09:11 PM
When my students write things like "It is reasonable to assume" in their papers, I put a big red mark through it. Your readers shouldn't have to assume anything; your evidence should be strong enough to convince them of your argument.
Posted by: Laura Orem | September 13, 2011 at 04:27 AM
"These days, much of what passes for analysis is just sloppy emotion: anger, envy, and so on. Logic skills should be taught in middle and high school. And people should take more pride in distinguishing a valid argument from a specious one."
I dream of this.
Posted by: Caroline | September 13, 2011 at 06:18 AM
Thank you for this.
Posted by: Jordan | September 13, 2011 at 07:31 AM
"Some programs may worry that neither these rankings nor any ranking system devisable could adequately capture each graduate creative writing program’s most unique elements; these fears are understandable, but misplaced. Those applicants polled for this ranking had access to individual program websites at all times before, during, and after their formation of a list of where they intended to apply...If an unquantifiable program feature of which a program is singularly proud does not lead to that program excelling in these rankings, it is neither because these rankings did not consider that feature nor because the applicants polled for these rankings did not. Instead, it is because that feature is less popular among applicants than it is among those program faculty and administrators who originally brainstormed and designed it. By polling current applicants rather than individual program faculties and administrators, the rankings ensure that the class of persons with the most say in these rankings is one that is likely to have recently accessed the very program websites that (presumably) prominently advertise those features of which programs are most proud."
-- Seth Abramson, who apparently subscribes to the idea that a WEBSITE tells you pretty much all you need to know about "unquantifiable" things--that an unquantifiable thing can still be quantified, by golly, by the number of people who seek to "buy" that thing before they even know what the hell they're buying. Welcome to the 21st century in America, where a bunch of 23-year-olds clicking on some links and snippets on a GLORIFIED ADVERTISEMENT determines the worth/rank of an entire educational experience in the arts. Abramson might as well substitute "[unique] feature less popular among applicants" with "Crystal Pepsi."
Posted by: BlueCrush | September 13, 2011 at 05:24 PM