If you do a "close" reading of the P&W methodology section, you will see what shifting sand the ranking is built on (background on rankings is here). Most of what appears in the methodology section would be more accurately placed in a separate section called "findings." The distinction matters: keeping the findings separate from the discussion of methodology would make clear to the reader that the findings are just one person's interpretation of the data.
Most of the statistics derive from a single number: 640, the number of would-be applicants who visited a blog and responded to a poll by completing forms. Most of what the author calls "demographics" are simply poll responses. The author provides no actual demographics (age, gender, location, income, etc.) of the respondents.
The methodology says the responses are "votes" and that the programs receiving the most "votes" are the ones the respondents hold in highest esteem. Yet if you read elsewhere you learn that the "votes" are simply the programs to which respondents are applying, and there may be no connection between the programs one holds in high esteem and those to which one applies. I may think Houston is the best, but because I live, work, and have family in NYC, it isn't practical for me, so it's not on my list of programs I'll be applying to. I may believe Cornell has the best program yet know the likelihood of getting in is slim, so I'm not applying. Neither scenario is captured in the ranking. Respondents were never asked to reveal which programs they hold in highest esteem; they were asked to disclose the programs to which they would apply.
Every time the author says, "it is reasonable to assume," know that it is equally reasonable to assume something other than what follows. Such claims are not backed up with any evidence, anecdotal or otherwise.
Most of the ranked programs received fewer than half as many "votes" as the top-ranked program. There are many, many ways to account for this gap, yet the only explanation given is that it reflects the "esteem" in which potential applicants hold the program, when it could reflect many other possibilities.
A good deal of the "methodology" section is devoted to the history of MFA programs and to discrediting other polls - both irrelevant in a discussion of methodology.
I could go on. The infelicities of the writing style are too numerous to mention. Strip away all of this filler and you see how little substance there is to the ranking.
And there is more: The author of the ranking methodology makes many unsupported claims. None of these claims have anything to do with the method used to gather data for the so-called ranking. Here are but three examples from among many (my comments in bold):
"When programs are assessed by individuals already within the system, the natural result is that older programs—whatever their selectivity, financial resources, faculty resources, curriculum, pedagogy, or student outcomes—move to the top of the pack due to their profile advantage. Yet applicants report only limited interest in programs’ historical pedigrees, as pedigree itself is often considered a suspect quantity in the national literary arts community." (What is the basis for a sweeping statement about the national literary arts community's views of "pedigree"?)
"Whereas scientific rankings (which require demographic data that is, in this case, unavailable both to independent researchers and national trade organizations) traditionally poll, at the state level, well less than a hundredth of one percent of their target population, and national polls typically sample well less than a thousandth of one percent, the sample size for the 2012 Poets & Writers Magazine rankings, in a nod to the necessarily unscientific nature of the polling, is between 2,000 and 20,000 times larger as a percentage of population." (Scientific polls use random sampling and precise measures to calculate sample size, which is why they don't need large numbers to get valid, replicable results. The sample here is unreliable not because of its size but because the pollster had no control over who responded to the poll and because the pollster has known biases.)
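The statistical point here is worth spelling out. For a properly drawn random sample, precision depends on the sample size itself, not on what fraction of the population it represents - so touting the poll as "2,000 to 20,000 times larger as a percentage of population" is beside the point, and a self-selected sample is unreliable at any size. A quick sketch using the standard 95% margin-of-error formula for a simple random sample (my illustration; the numbers below are not from P&W):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n,
    assuming worst-case proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# A genuinely random sample of 640 would carry roughly a +/-3.9% margin
# of error -- whether the target population is 60,000 or 60 million.
# The formula never references population size at all.
print(round(margin_of_error(640) * 100, 1))  # ~3.9
```

None of this helps a self-selected sample: the formula presumes every member of the population had an equal chance of being polled, which is exactly the condition a blog poll fails to meet.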
"In most instances, student scores are only lightly scrutinized (or simply ignored altogether) by the programs themselves, and instead reviewed—where they are reviewed—by individual universities’ Graduate Colleges, which often have minimum GRE-score requirements (typically very generous ones). Creative writing MFA applicants should not avoid the GRE General Test for fear of the Mathematics portion of the exam; even those programs that do give minor weight to standardized test scores in their admissions processes generally look only at applicants’ Verbal and Analytical Writing scores." (How does P&W know that applicants fear the mathematics portion of the GRE, or how admissions offices weigh these scores?)
These inaccuracies, sweeping statements, and blatant falsehoods are buried in the excess verbiage that is characteristic of the author's writing. Strip the garbage away and you're left with . . . garbage.