However, this method of choosing among publication offers is deeply flawed. First, US News ranks law schools, not law journals. And many argue that this “surviving rump of an otherwise defunct news magazine” doesn’t even do that properly. Others have even argued that the US News rankings have played a large part in today’s legal education crisis. But aside from all of that, after my fairly large sample size of ten published articles—by way of comparison, law professors have to publish about four articles to become tenured—I can state one thing with confidence: there is no correlation between a school’s US News rank and the quality of that school’s journal editors and editing process.
For example, I have published in the journals of law schools ranked in the 20s, 50s, 60s, and 70s, as well as a law school ranked #129 (Idaho). Chief Articles Editor Allison Parker and the other editors and staff of the Idaho Law Review did an outstanding job in the editing and publishing process. In fact, my experience with the Idaho Law Review may have been the best—or easily among the two or three best—of my ten journal publications. (My experiences with other journals have ranged from excellent to poor; my biggest complaint is when editors actually insert their own errors—including punctuation errors, grammatical errors, and even more serious substantive errors—into my work. When it comes to the act of editing, I wish law journal editors would follow this simple rule: less is more.)
To be sure, there are other means by which to choose among publication offers. Washington & Lee University's law journal ranking, for example, ranks journal impact, i.e., how often the journal is cited. But while that's interesting, it really doesn't help me much. First, it says nothing about the quality of the editing and publication process at that journal. And second, just because a journal is cited more often, that doesn't mean that if I publish in that journal, my article will be cited (or even read) more often. Today, due to electronic research methods, few people sit down to read an issue of a law journal from cover to cover. Instead, a researcher will search electronically for an article on a specific topic, regardless of the journal in which it is printed. Therefore, to use a hypothetical example, an article in the Virginia Law Review might get cited a lot. But that citation count is more a function of the article than of the journal, and would probably be the same had the author published in the Washington Law Review or any other journal. Further, I suspect that an article's citation count may be even more a function of the author than of the article. That is, some law professors will agree to cite each other's work in order to get their citation counts up. It's a case of "you scratch my back and I'll scratch yours" in the hyper-competitive, ranking- and status-obsessed world of legal academia.
Given the major flaws in the two primary journal ranking systems, I would like to see a law professor develop a ranking methodology based on authors’ experiences with the publishing journals. Law professors are already ranking nearly every imaginable thing under the sun—see, for example, here, here, here, here, and here. And a “law review author ranking” would actually be meaningful. I would love for a semi-mathematically inclined professor to run with this idea, and conduct an annual survey of authors (nearly all of whom will be his/her fellow law professors) in order to rank their law journal editing and publishing experiences.
I’ll get the ball rolling. The categories to be ranked could include: timeliness of the publication (on time = 10 points); time allowed for the author to review edits (two weeks = 10 points); deference to the author’s style (high deference = 10 points); creation of errors during editing process (no editor-created errors = 10 points); responsiveness to the author’s edits (short response time = 10 points); and quality of the journal’s website (an up-to-date website posting the article = 10 points). Of course, there are probably a dozen other categories that could be included, but the total number of categories ranked should be few, and the respondents should be guaranteed anonymity, in order to induce participation by authors.
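To make the arithmetic of the proposed survey concrete, here is a minimal sketch in Python of how the tally might work. It assumes the six equally weighted 10-point categories listed above; the function name, data layout, and sample responses are my own hypothetical illustrations, not part of any existing ranking system.

```python
# Hypothetical tally for the proposed "law review author ranking."
# Each author rates a journal 0-10 in each category; a journal's score
# is the sum of its per-category averages, for a maximum of 60 points.

CATEGORIES = [
    "timeliness of publication",
    "time allowed to review edits",
    "deference to author's style",
    "no editor-created errors",
    "responsiveness to author's edits",
    "quality of journal website",
]

def journal_score(responses):
    """Average each category across all author responses, then sum.

    `responses` is a list of dicts mapping category name -> score (0-10).
    Returns a total out of 60.
    """
    n = len(responses)
    return sum(
        sum(r[c] for r in responses) / n  # per-category average
        for c in CATEGORIES
    )

# Two hypothetical authors rating the same journal:
sample = [
    {c: 10 for c in CATEGORIES},  # a perfect experience (my Idaho score)
    {c: 8 for c in CATEGORIES},   # a merely good one
]
print(journal_score(sample))  # 54.0
```

Averaging within each category before summing means that one journal surveyed by three authors and another surveyed by thirty end up on the same 60-point scale, which matters if participation varies from journal to journal.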
It is true that law review editors turn over every year, and a new batch takes their place. This means that a great experience with "Journal A" could easily have been a bad one had the article been published a year earlier or later. It is further true that some law professors—especially those seeking tenure—will, by necessity, continue to be slaves to the US News rankings when selecting among publication offers. However, ranking journals on the quality of their editing process would still do two important things.
First, by ranking certain categories, such as whether the editors were deferential to the author’s writing style, authors would be clearly communicating to journal editors what they value in the publication process. And most of the editors will likely respond by improving performance in these areas. This, of course, would improve the publication experience for all authors, regardless of whether they chose their journal based on this “law review author ranking,” or some other system like the US News or the citation impact ranking.
And second, if a particular journal ranks high, that will likely be a source of pride, which will transfer to the next year's editorial board. Similarly, if a particular journal ranks low, that too will be passed on, giving the next year's board an incentive to do better than its predecessor. Remember, rankings are powerful. Law review editors are students, and some students do drastic, life-ruining things based on rankings, e.g., going $150,000 or more into debt to attend a law school ranked in the 20s instead of taking a full scholarship at a school ranked in the 50s. Further, in today's hyper-competitive legal job market, law review editors would love another line on their resume, e.g., "named editor-in-chief of school's law review and improved 'law review author ranking' from #125 to #10."
I'll be the first to contribute to the survey for the new ranking system: I score the Idaho Law Review a "10" in each of the above-named categories. But until an ambitious professor runs with this idea and then publishes the "law review author ranking" on a fancy, well-funded website, I'm still left wondering how to choose among the offers (that I will hopefully receive) for my newest article, An Alternative to the Wrong-Person Defense. True, the submission season is very young and I haven't received my first offer yet, so there's plenty of time to devise a ranking system. However, I've been mulling something over: I think I'll rank the journals by their school's football team's performance last year. For my purposes, at least, this seems as good a method as any. And while I absolutely love the Oregon Ducks, I have to remain dispassionate about this important decision. Much like the final AP poll from last year,
the Oregon Law Review comes in second. The Alabama Law Review sits atop my ranking. Roll Tide.