Wednesday, January 9, 2013

Statistics and theory in macroeconomics

A blog post at Slate discusses the use of bad statistics in economics (see here for the full post, which is well worth reading).

The point of the post is that the data series economists rely on aren't very good, in large part because the information is collected from real-world events rather than under pristine laboratory conditions, so it contains a lot of error. As a result, economic theory and policy rest on information that ranges from misleading to outright bad. The authors point out that

"The Obama administration, for example, took office in January of 2009 and began planning a recovery strategy based on the theory that the economy had shrunk at an alarming 3.8 percent annualized rate. After several rounds of revisions, the correct figure seems to have been an 8.9 percent annual rate. That’s on a par with confusing a mild recession with a gigantic one.  Even the smartest theoretical or policy work is basically valueless if it’s based on flawed or misleading data. And yet in the hierarchy of the profession, actually wrangling data is a relatively low-status undertaking compared with building elaborate edifices on even the shakiest of foundations. Harry Truman famously wished out loud for a “one-handed economist” who would offer concrete advice with less hedging and trimming, but, actually, overconfidence about bad data is the bigger problem. Absent laboratory conditions, it’s very difficult to obtain precise measurements of key figures, leaving us all too often with big theories based on poor numbers. And when theories move out of the ivory tower and into the policy realm, the problem gets worse as pressure for timeliness and relevance escalates."

3 comments:

  1. I like Slate, especially Matthew Yglesias and his Moneybox blog (the old posts from when Steven Landsburg ran the blog are worth reading). I think that data blip explains many of the Obama administration's headaches so far. If we had known we were dealing with an 8.9% contraction, I think it would have been feasible to push for a much larger stimulus than the one he and Romer got. Whether this would have made a difference is certainly open to debate.

    It makes one wonder what future revisions will make our current economic conditions look like. Apparently it will be revealed that in March of 2012 there were 386,000 jobs we didn't know about. The reason lies in the arcane methods the BLS uses in its survey, which Karl Smith explains here: http://www.forbes.com/sites/modeledbehavior/2013/01/04/big-jobs-revision-on-deck/

  2. Phillip and I have raised empirical issues. But even more fundamentally, there is a question of whether we are measuring the right things. More jobs were created than originally thought, which is good. But the wages were lower on average (see http://www.nytimes.com/2012/08/31/business/majority-of-new-jobs-pay-low-wages-study-finds.html?_r=0 ), which is bad.

  3. With all of the variables and loopholes, I wonder if it is even possible to get the statistics right. It seems like a problem that should be looked at, but also a very expensive, if not impossible, thing to truly figure out. Still, if there is a way to make things more certain and to get the data right the first time, I think it would be a valuable investment that would pay for itself in the long run.
