When I write law review articles my arguments
are generally rooted in things like “logic,” “consistency,” “analogies,” the admittedly vague
concept of “fairness,” and sometimes even “economic reasoning.” (Other times, the things I criticize are so
absurd that it would be more accurate to say my arguments are rooted in
“anti-stupidity.”) However, sometimes
hard data combined with basic statistical techniques—see, for example, here and
here—can do wonders for effectively demonstrating a point. But as a rule, I’m not a fan of empirical
studies, and the recent “value of a law degree” debate shows why.
First, with unthinkably
high unemployment for new law school grads, shaky and poorly-defined career
paths for those who are “lucky” enough to land legal jobs, and the staggering
debt load that typically follows a law grad decades beyond the cap-and-gown
ceremony, many people, including some law profs—see, for example, here and here—have
gone to great lengths to warn would-be law students of the potential life-ruining
impact of going to law school. (These
rogue profs are largely concerned with the negative financial impact, as
opposed to the stress, anxiety, and sleeplessness that come with many law jobs.)
Second, in response to the
current anti-law school sentiment permeating the air, two profs decided to do
an empirical study on the topic. Their
conclusion: law school has a net present value of about a million dollars.
Third, the criticisms of
the empirical study flooded in—perhaps chief among them: the study didn’t take
into account the cost of attending law school when determining the net
present value of the law degree. (Actually,
other less obvious criticisms—see, for example, here and here—do greater damage
to the study’s central claim.)
Fourth, the authors of the
empirical study responded by arguing that their data was better than the data used by one of their critics in one of his books.
Fifth, this response will no doubt provoke further responses from the other
camp. (Actually, it already has.)
The bigger picture,
however, is this: the words “empirical study” give the impression (at least to
me) that the authors will “prove” something.
Therefore, empirical studies should be more reliable than anecdotes, and
should be more useful than other tools such as logic, analogies, and even
economic-based reasoning. And I think that
in some cases, they are. (For example, in
this empirical-study-turned-article, we demonstrated several things about how interrogators deliver Miranda warnings, and how that delivery impacts suspects’
understanding of, and decision to waive, their rights.)
But in other instances, such
as the “value of a law degree” debate, empirical studies prove little, and
merely result in near-endless back-and-forth where each side of the debate preaches
to its own choir. No one will
be persuaded. No one will switch
camps. Everyone will cite the study, or
the criticisms of the study, in support of the conclusion they’ve already
reached. (For example, anyone who had even a small toe dipped in the legal education waters could have predicted the post-study gushing of Steve Diamond.)
Not that any of this is bad. It's just that, even though we tend to perceive them as something more, empirical studies are often just arguments that cloud the underlying issue with statistical complexity and the illusion of certainty, thereby creating dozens of sub-debates and detracting from the real issue. (Actually, that sounds a lot like the soon-to-be abandoned BCS in college football.) Further, for purposes of argument, these empirical studies can actually be far less persuasive than other tools, including the simple anecdote. But, admittedly, anecdotes, analogies, economic reasoning, and even logic aren’t always ideal for every type of debate. And I just haven’t figured out when empirical studies are helpful and when they’re worse than useless. Does it depend on the topic? On the empirical study’s level of complexity? On the number of assumptions built into the model? I just don’t know.
So what’s a legal academic
to do for his next presentation? What’s
the litigator to do for his next closing argument? Perhaps former attorney Jeff Winger has the answer: “I could cite quotes or dig up statistics, but those are just words and numbers. I think we could have a little more fun, if I expressed myself in song.”
Update: Maybe the title of this post should have been "the silliness of empirical studies when placed in the hands of law professors," as the childish bickering among grown men continues. They're like kids in the schoolyard -- some worse than others -- but instead of slinging only verbal insults, they sometimes sling numbers. Maybe we should get a neutral, detached economist to sift through all of this complexity for us.
Good post. The basic problem, in my view, is that observational studies can only, as it were, make suggestions, rather than prove anything in even a halfway rigorous sense of that word. For example, when trying to determine the extent to which correlations between educational credentials and increased future income demonstrate that the former caused the latter, the number of confounding factors is almost limitless.
This doesn't mean such studies don't have value, but it does mean their findings need to be taken with several grains of salt -- something which their authors are understandably reluctant to do.
Michael,
Interesting post and debate going on within the profession. Perhaps I will use this in my Labor Market course in the spring. The omission of tuition costs is significant not only because those costs are large, but also because they accrue immediately and therefore are not discounted to the extent of future earnings.
I have not read the original study, so I may be speaking out of turn, but in addition to the tuition cost omission, if the authors used the average earnings of the average worker with an undergrad degree—even ignoring the school quality issues—then they have overestimated the value of the degree. On average, students who enter law school are “above average” when compared to all undergrads. Using that group as a comparison is therefore inappropriate. Rather, the more appropriate group would be undergrads with similar undergraduate performance but without any post-undergrad degrees, but I doubt they had access to such data.
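The discounting point above can be sketched numerically. This is a hypothetical illustration only -- the dollar amounts, discount rate, and time horizons are made-up assumptions, not figures from the study -- but it shows why up-front tuition cannot simply be ignored: tuition is paid in years 0-2 and is barely discounted, while the earnings premium arrives over decades and is discounted heavily.

```python
def npv(cash_flows, rate):
    """Net present value of a list of (year, amount) cash flows."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

rate = 0.06  # assumed discount rate (hypothetical)

# Three years of tuition, paid up front (hypothetical $50k/year)
tuition = [(year, -50_000) for year in range(3)]

# Forty years of post-degree earnings premium (hypothetical $40k/year),
# starting after graduation in year 3
premium = [(year, 40_000) for year in range(3, 43)]

with_tuition = npv(tuition + premium, rate)
without_tuition = npv(premium, rate)

print(f"NPV including tuition: ${with_tuition:,.0f}")
print(f"NPV omitting tuition:  ${without_tuition:,.0f}")
```

Under these made-up numbers, omitting tuition overstates the degree's net present value by roughly the full (nearly undiscounted) cost of attendance -- which is the heart of the criticism.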
________________________
Norman R. Cloutier
Professor of Economics
Director, Foreign Film Series
Faculty Athletics Representative
University of Wisconsin-Parkside
900 Wood Road
Kenosha, WI 53141