In my previous post, I summarized Robert S. Pindyck’s scathing paper on the computer models used by the Obama Administration for its estimates of the “social cost of carbon.” Pindyck’s critique is all the more compelling because he is a professor of economics and finance at MIT, with several decades’ experience publishing articles and books dealing with energy, and he is actually a proponent of a carbon tax. In the present article, I will explore a particular aspect of Pindyck’s critique that I skipped in the original post. Believe it or not, Pindyck explains that the allegedly state-of-the-art computer models that are now determining federal policy have damage functions that are literally made up. As I have been telling my economist colleagues for years, if they actually understood how these computer models were designed, they would have far less confidence in the “optimal carbon tax” numbers shooting out of the other end.
Economists “Make Up” the Damage Function
After explaining the uncertainties in the physical science of climate change—in particular, how much global warming will occur in response to a given increase in atmospheric carbon dioxide concentrations—Pindyck turns in his paper to the much harder problem of estimating the economic impacts of a given increase in temperature. Here is his shocking summary of what the leaders in this field have done:
When assessing climate sensitivity, we at least have scientific results to rely on, and can argue coherently about the probability distribution that is most consistent with those results. When it comes to the damage function, however, we know almost nothing, so developers of IAMs [Integrated Assessment Models] can do little more than make up functional forms and corresponding parameter values. And that is pretty much what they have done. [Pindyck p. 11, bold added.]
To be sure the reader understands the significance of the above declaration that the modelers “can do little more than make up” the economic impacts of climate change, let me be clear: Pindyck is not here giving an after-dinner talk holding a glass of wine. The above quotation comes from his forthcoming paper (September 2013) in the respected, peer-reviewed Journal of Economic Literature.
When Pindyck says the economists creating these models “make up functional forms and corresponding parameter values,” he means that they each literally make up an equation—with a percentage loss of GDP on the left side and global temperature increases on the right—that shows what fraction of the baseline GDP the global economy will actually produce, as temperature increases get larger and larger. (Note: For a more complete explanation of this procedure, see the note at the end of this blog post.)
To prove that he is quite familiar with the published models, Pindyck walks through the damage functions of some of the leaders in the field—including the very models chosen by the Obama Administration Working Group to estimate the social cost of carbon (SCC). Here’s Pindyck:
Most IAMs (including the three that were used by the Interagency Working Group to estimate the SCC) relate the temperature increase T to GDP through a “loss function” L(T), with L(0) = 1 and L′(T) < 0. For example, the Nordhaus (2008) DICE model uses [an] inverse-quadratic loss function…
Weitzman (2009) suggested the exponential-quadratic loss function…which allows for greater losses when T is large. But remember that neither of these loss functions is based on any economic (or other) theory. Nor are the loss functions that appear in other IAMs. They are just arbitrary functions, made up to describe how GDP goes down when T goes up.
The loss functions in PAGE and FUND, the other two models used by the Interagency Working Group, are more complex but equally arbitrary…[T]here is no pretense that the equations are based on any theory. [Pindyck p. 11, bold added.]
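The two functional forms Pindyck names can be sketched numerically. The shapes below follow his descriptions (an inverse-quadratic form for DICE, an exponential-quadratic form for Weitzman); the parameter values are illustrative placeholders of my own, not the calibrated values used in the actual models:

```python
import math

# Two loss-function shapes of the kind Pindyck describes.
# L(T) = fraction of baseline GDP produced at warming of T degrees C.
# Parameter values are illustrative only, NOT the calibrated values
# from DICE or from Weitzman (2009).

def loss_inverse_quadratic(T, pi1=0.0, pi2=0.0028):
    """Nordhaus-style inverse-quadratic form: L(T) = 1/(1 + pi1*T + pi2*T^2)."""
    return 1.0 / (1.0 + pi1 * T + pi2 * T ** 2)

def loss_exponential_quadratic(T, beta=0.0028):
    """Weitzman-style exponential-quadratic form: L(T) = exp(-beta*T^2)."""
    return math.exp(-beta * T ** 2)

# Both satisfy L(0) = 1 and decline as T rises, but the exponential
# form falls off faster at high T -- "greater losses when T is large."
for T in (0, 2, 4, 6, 10):
    print(T, round(loss_inverse_quadratic(T), 4),
          round(loss_exponential_quadratic(T), 4))
```

Note that the two forms are nearly indistinguishable for small T and diverge only at large temperature increases—precisely the region where, as Pindyck stresses, there is no theory or data to choose between them.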
Just to repeat the takeaway message here: The economists who constructed the computer models that are currently guiding federal policy have to “make up” an equation relating GDP loss to a hypothetical temperature increase. True, these modelers had a little more guidance than just literally picking numbers out of the clear blue sky, but Pindyck emphasizes that there is no real economic theory guiding their choices. Furthermore, they can’t even calibrate their damage functions to empirical observations, because modern economies have not experienced the amount of warming simulated in these computer models.
What About the IPCC “Consensus”?
Those following the climate change policy debate might be skeptical. Surely the leading computer models that quantify climate change impacts on human welfare have undergone a rigorous review process and rely on the cutting-edge scientific findings? Didn’t respected economists such as William Nordhaus (the DICE model), Richard Tol (the FUND model), and Christopher Hope (the PAGE model)—whose models after all were selected by the Obama Administration’s Working Group—base all of their work on the “scientific consensus” as codified in the Intergovernmental Panel on Climate Change (IPCC) reports?
This is my favorite part of Pindyck’s paper. He explains the actual situation:
The question is how to determine the values of the parameters [used in the computer models’ damage functions]. Theory can’t help us, nor is data available that could be used to estimate or even roughly calibrate the parameters.
As a result, the choice of values for these parameters is essentially guesswork. The usual approach is to select values such that L(T) for T in the range of 2°C to 4°C is consistent with common wisdom regarding the damages that are likely to occur for small to moderate increases in temperature…Sometimes these numbers are justified by referring to the IPCC or related summary studies….But where did the IPCC get those numbers? From its own survey of several [Integrated Assessment Models]. Yes, it’s a bit circular. [Pindyck pp. 12-13, bold added.]
Pindyck’s point is so important—and so hilarious—that I want to make sure the reader understands it. There is no underlying economic theory, and we have no empirical data, by which to estimate the impacts on humans from even moderate (let alone large) increases in global temperatures. Thus when economists design computer simulations of the global climate and economy, going centuries into the future, they literally just make up relationships between hypothetical temperature increases and the corresponding percentage decrease in global GDP. Then, in an excellent illustration of “groupthink,” the creators of these made-up damage functions justify them by pointing to third-party summaries of their own (made-up) damage functions.
If we naïvely look at history, we would see that since the Industrial Revolution began, the globe has warmed—without taking a stand on the cause of the warming—about 0.8–0.9 degrees Celsius, while GDP has increased substantially. Thus even looking backwards it is not obvious how to calibrate a “damage function,” because we would need to know the trajectory of global GDP in the absence of warming.
Furthermore, even if we did know how things proceeded in this alternate universe—so that we could isolate the effects of warming on GDP growth over the last 150 years—that still wouldn’t mean much: Richard Tol, for example, in his FUND model projects net benefits to the world for moderate warming, which only turn to net damages after a sufficient amount of temperature increase. So even if we magically had the ability to separate out the contribution of warming to GDP over the last 150 years, we might look back in time and see global warming boosting global GDP, even though this pattern would reverse itself going forward. These considerations show just how speculative the enterprise of modeling impacts from global climate change really is, even apart from the difficulties of accurately modeling just the climate system itself (not the economy too).
As I explained in my Senate testimony, the dubious methods used to generate estimates of the so-called social cost of carbon render this concept entirely inappropriate in federal policymaking. Pindyck’s critique of the “cutting edge” computer models should underscore the false sense of precision that these simulations give us. The public and policymakers alike would be shocked if they understood the true state of the peer-reviewed literature on the economics of climate change, regardless of the situation in the physical sciences.
APPENDIX: A Note on the Damage Functions Contained in Integrated Assessment Models (IAMs)
As noted above, Pindyck says the economists creating these models “make up functional forms and corresponding parameter values.” For example, a very simple damage function (which I am just making up myself to illustrate the basic idea) could be L(T) = 1 – αT, where T is the global temperature increase (measured in degrees Celsius) and α is a parameter between 0 and 1. This would be classified as a linear damage function, because the loss to GDP is always proportional to the temperature increase; going from 1 to 2 degrees of warming causes just as much marginal damage as going from, say, 8 to 9 degrees. (Most economists would consider this a bad functional form, because they want the damage function’s marginal impacts to get worse with additional warming.)
Now, once we have chosen our functional form (linear in this simplistic example), we would have to calibrate the damage function by picking α so that the equation spits out the type of GDP loss we want, for particular temperature increases. For example, if we want our model to show that a 10 degree Celsius warming will cause a 25 percent drop in GDP, then with the above equation we need to set α equal to 0.025. With that choice of parameter, the above equation yields L(10) = 1 – (0.025)×(10) = 1 – 0.25 = 0.75, which is what we want: Global GDP is only 75 percent of its baseline value, for 10 degrees Celsius of warming.
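The toy calibration above can be written out in a few lines. This is just my own illustration of the appendix’s arithmetic, not code from any actual IAM:

```python
# Toy linear damage function from the appendix (illustrative only):
#   L(T) = 1 - alpha * T
# L(T) is the fraction of baseline GDP produced at warming of T degrees C.

def loss_linear(T, alpha):
    return 1.0 - alpha * T

# "Calibration" here just means solving for alpha so the function
# reproduces one chosen point. To make 10 degrees C of warming
# correspond to a 25 percent GDP loss:
alpha = 0.25 / 10  # = 0.025

print(loss_linear(10, alpha))  # prints 0.75: GDP at 75% of baseline
```

The exercise makes Pindyck’s complaint concrete: both the functional form and the “10 degrees means 25 percent” target are simply chosen, and the parameter follows mechanically from those choices.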