In my first article of this 3-part series on the reliability of the climate models used to guide policy debates, I developed a coin-flipping analogy to make sure the reader understood the concept of a “95% confidence spread.” Then I showed that even using the charts presented by the climate scientists trying to defend the models, it seemed pretty obvious that since the true forecast period of 2005 onward, the suite of climate models being evaluated had predicted too much warming.
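To recap that analogy in concrete terms, here is a quick simulation (a hypothetical sketch of the coin-flipping example, not code from the earlier article): flip a fair coin 100 times per trial, repeat many trials, and the middle 95% of outcomes is the "95% confidence spread." An observation falling outside that band is evidence against the model that generated it.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

TRIALS = 10_000
FLIPS = 100

# Simulate many runs of 100 fair-coin flips and record the heads count per run.
counts = sorted(sum(random.random() < 0.5 for _ in range(FLIPS))
                for _ in range(TRIALS))

# The middle 95% of outcomes defines the "95% confidence spread."
lo = counts[int(0.025 * TRIALS)]
hi = counts[int(0.975 * TRIALS)]
print(f"95% of trials landed between {lo} and {hi} heads")
```

For a fair coin the spread comes out to roughly 40 to 60 heads: seeing, say, 65 heads would make us doubt the coin is fair, just as observed temperatures hugging (or crossing) the bottom of the models' spread is evidence the models run hot.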
Finally, in this third and final installment, I will summarize some of the other challenges to the climate change orthodoxy coming from professional climate scientists themselves (like Curry). I will also address a recent paper claiming that older climate models have done a good job forecasting future warming.
More From Judith Curry
I ran out of room in the previous piece, so let me here reproduce an important line of argument that Curry raised in her 2017 primer. She first defines the “equilibrium climate sensitivity” (ECS) as “the change in global mean surface temperature” several centuries after a doubling of atmospheric carbon dioxide concentrations. In other words, the ECS measures the long-run sensitivity of the Earth’s climate system to a sudden increase in CO2. After defining the concept of ECS, Curry explains:
The IPCC Fourth Assessment Report (2007) conclusion on climate sensitivity is stated as: “The equilibrium climate sensitivity. . . is likely to be in the range 2°C to 4.5°C with a best estimate of about 3°C and is very unlikely to be less than 1.5°C. Values higher than 4.5°C cannot be excluded.”
The IPCC Fifth Assessment Report (2013) conclusion on climate sensitivity is stated as: “Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence).”
This likely range of ECS values varies by a factor of three. Whether or not human-caused global warming is dangerous…depends critically on whether the ECS value is closer to 1.5°C or 4.5°C. Research over the past three decades has not narrowed this range—the 1979 National Academy of Sciences study (the so-called ‘Charney Report’) cited a likely range for ECS of between 1.5 and 4.5°C.
In fact, it seems that uncertainty about the value of ECS has increased since the 2007 Fourth Assessment. The bottom of the ‘likely’ range has been lowered from 2 to 1.5°C in the 2013 Fifth Assessment Report, whereas the Fourth Assessment Report stated that ECS is very unlikely to be less than 1.5°C. It is also significant that the Fifth Assessment does not cite a best estimate, whereas the Fourth Assessment cites a best estimate of 3°C. [Curry 2017, bold added.]
It is worth amplifying the very important points Curry makes in the block quotation above. First of all, we can see the disingenuousness of the “denier” label, and (on the flip side) of using “the 97% consensus” to ram through support for aggressive government policies in the name of fighting climate change. Elsewhere I have shown the tricks involved with that factoid, but for our purposes here, consider what Curry is saying: the entire debate over whether human emissions are a mere annoyance or a serious threat hinges on where the equilibrium climate sensitivity (ECS) falls within the range the IPCC itself has announced. So a “lukewarmer” who agrees human activities contribute to global warming, but doesn’t think it will be a big deal, is merely leaning toward the lower end of the range coming straight from the UN’s own report on climate science.
Another important point Curry raises is that the announced range of the Earth’s sensitivity to carbon dioxide has not narrowed since 1979. (!) This should be very surprising to readers who assumed that climate science started out in its infancy in the 1970s but has matured greatly in the interim. If scientific estimates of the “likely” range of values for the charge on an electron (or the age of the universe) didn’t narrow over the course of three decades, we would conclude that scientific understanding of that phenomenon hadn’t much improved.
Third, Curry explains that the “equilibrium climate sensitivity” (ECS) is not a raw fact of nature, like the speed of light or the temperature of the sun. Instead, it is an “emergent property,” based on a hypothetical question. Namely: If we doubled the atmospheric concentration of carbon dioxide, and then waited several centuries without additional human emissions, how much would average global temperature increase?
This is a very complicated question to answer, which is why climate scientists can’t agree on it. As I explained in my paper on William Nordhaus’ DICE model, economists should realize that the all-important ECS is not a simple empirical estimate, like measuring the price elasticity of potatoes. Rather, the ECS is an outcome of a hypothetical experiment inside an idealized model simulation, and so is more akin to asking, “If we doubled the quantity of money, what would happen to real GDP in the new equilibrium?” Economists from different schools of thought give very different answers to that type of question.
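To make concrete why the factor-of-three range matters so much, here is a minimal sketch using the standard back-of-the-envelope relationship (this assumes the widely used logarithmic approximation for CO2 forcing; the numbers are illustrative and are not drawn from Curry's primer): equilibrium warming is simply the ECS multiplied by the number of doublings of CO2 concentration.

```python
import math

def equilibrium_warming(ecs, co2_ratio):
    """Equilibrium warming (degrees C) for a given CO2 concentration ratio,
    under the standard logarithmic approximation: warming per doubling of
    CO2 is the ECS, so total warming scales with log2 of the ratio."""
    return ecs * math.log2(co2_ratio)

# Pre-industrial CO2 was roughly 280 ppm; a rise to 560 ppm is one doubling.
ratio = 560 / 280

for ecs in (1.5, 3.0, 4.5):
    warming = equilibrium_warming(ecs, ratio)
    print(f"ECS = {ecs} C  ->  equilibrium warming = {warming:.1f} C")
```

Because the relationship is linear in ECS, the IPCC's factor-of-three spread in the sensitivity parameter translates one-for-one into a factor-of-three spread in projected equilibrium warming for any given emissions path, which is precisely why Curry stresses that the policy stakes hinge on this one number.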
Grab-Bag of Other Researchers Who Are Skeptical of the Alarmist Position
Those wishing for a deeper dive into these issues should read the work of (NASA-award-winning) climate scientists John Christy and Roy Spencer; here is a recent Spencer post discussing the poor performance of the standard climate models (specifically those used in “CMIP5,” which I discussed in my first article). They should also check out the work of economist (and textbook author on climate change economics) Ross McKitrick.
More generally, interested readers should check out the work of Nicholas Lewis (who made the news when he pointed out calculation errors that forced the authors of a study on ocean heat uptake to revise their claims). Those who prefer a lecture can listen to this Independent Institute event featuring two physicists explaining their concerns with the way climate science is communicated to the public. Speaking of physicists, here’s an interview with the famous Freeman Dyson, who expresses his contrarian take on climate hysteria. And of course the Heartland Institute famously publishes its NIPCC (Nongovernmental International Panel on Climate Change) reports, the rival to the UN publication. Also see this recent article (co-authored by a physicist and an atmospheric scientist) making a modest plea for more humility from the climate science community when communicating the reliability of comprehensive climate models to the public.
Rather than try to summarize the perspectives of all of the above “skeptics,” I will here focus on the work of Mototaka Nakamura, who recently published a scathing critique of mainstream climate science.
The Scathing Confession of Mototaka Nakamura
First we should establish Dr. Nakamura’s credentials: In 1995 he earned an Sc.D. (Doctor of Science, comparable to a PhD) in Meteorology from the Massachusetts Institute of Technology (MIT). He has at least 11 peer-reviewed publications in journals pertaining to climate science, and is currently a Visiting Associate Researcher at the School of Ocean and Earth Science and Technology at the University of Hawaii. So this guy isn’t (say) just some accountant who loves Sean Hannity and has “read lots of stuff” about global warming.
In June 2019, Dr. Nakamura published a book (in Japanese) with the provocative title: Confessions of a climate scientist: the global warming hypothesis is an unproven hypothesis.
This review gives several quotations from the book that show its flavor. I’ll reproduce some below.
On the crudity of current climate modeling:
These models completely lack some critically important climate processes and feedbacks, and represent some other critically important climate processes and feedbacks in grossly distorted manners to the extent that makes these models totally useless for any meaningful climate prediction.
I myself used to use climate simulation models for scientific studies, not for predictions, and learned about their problems and limitations in the process.
On the issue of “tuning” the models’ sensitivities to greenhouse gases and industrial aerosols, in order to make the model outputs match historical temperature observations:
The models are ‘tuned’ by tinkering around with values of various parameters until the best compromise is obtained. I used to do it myself. It is a necessary and unavoidable procedure and not a problem so long as the user is aware of its ramifications and is honest about it. But it is a serious and fatal flaw if it is used for climate forecasting/prediction purposes.
And finally, the vexing problem of clouds:
Accurate simulation of cloud is simply impossible in climate models since it requires calculations of processes at scales smaller than 1mm. Instead, the modelers put in their own cloud parameters. Anyone studying real cloud formation and then the treatment in climate models would be flabbergasted by the perfunctory treatment of clouds in the models.
The comments above mirror those made by Curry and others. To repeat, these are not the rants of an outsider who doesn’t understand science; these are the confessions from someone who himself used these very techniques to publish in the literature.
Incidentally, those wishing to cross-check these statements about clouds and other limitations should consult (IER founder) Rob Bradley’s recent post. Rob shows how climate scientist Gerald North made very prescient statements about the limitations of climate models twenty years ago, and how even today’s orthodox defenders implicitly admit much of the critique.
Didn’t They Just Prove the Climate Models Are Working Great?
Knowledgeable readers may wonder how to reconcile Part 1 of my series—in which I argued that the climate models have been overpredicting global warming—with a recent paper by Zeke Hausfather et al., which apparently shows that the climate models are doing just fine, thank you very much.
This is how Scientific American relayed the news to the layperson, but for our purposes here I want to quote Vox’s resident climate wonk, David Roberts, reporting on the Hausfather et al. paper:
As interesting as the details of climate science may be, what society most needs from it is an answer to a simple question: What the hell is going to happen?…
It turns out that attempting to understand, model, and predict the entire global biophysical/atmospheric system is complicated. It’s especially tricky because there’s no way to run tests. There’s no second Earth to use as an experimental control group…
This reliance on models has always been a bête noire for climate change deniers, who have questioned their accuracy as a way of casting doubt on their dire projections. For years, it has been a running battle between scientists and their critics, with the former rallying to defend one dataset and model after another…
Now, for the first time, a group of scientists — Zeke Hausfather of UC Berkeley, Henri Drake and Tristan Abbott of MIT, and Gavin Schmidt of the NASA Goddard Institute for Space Studies — has done a systematic review of climate models, dating back to the late 1970s. Published in Geophysical Research Letters, it tests model performance against a simple metric: how well they predicted global mean surface temperature (GMST) through 2017, when the latest observational data is available. [Roberts at Vox, bold added.]
Notice the rhetorical moves and the interesting admission: Even though David Roberts knows that the critics of the climate models are “deniers”—whom he frames as critics of scientists, rather than as scientists disagreeing with their peers—he also admits that this fall 2019 study is the first attempt to comprehensively assess the forecasting performance of past climate models. As Mike Myers might ask, “Ishn’t zhat veird?”
In any event, here is the takeaway from the Hausfather et al. article, according to David Roberts at Vox:
Long story short: “We find that climate models published over the past five decades were generally quite accurate in predicting global warming in the years after publication.”
This is contrary to deniers, who claim that models overestimate warming…As it happens, models have roughly hit the mark all along. It’s just, nobody listened.
The good news, as the authors say, is that this result “increases our confidence that models are accurately projecting global warming.”
The present article is already long, so let me provide a quick response:
- The models studied in the Hausfather et al. paper had to be published no later than 2007. In contrast, the CMIP5 suite of models that the orthodox (not “denier”) climate scientists at RealClimate used for their ongoing assessment of forecasting performance is much more relevant for determining what underlies more recent assessments of future climate change. (The CMIP5 process didn’t even start until 2008, with published modeling results beginning in 2011.) To repeat, the chart in Part 1 of this series—which shows that “the climate models” are on the verge of falling below the 95% confidence spread, or already have if you use satellite observations—didn’t come from the Heartland Institute; it came from the people trying to vindicate the IPCC.
- According to critics of the paper, the reason for the apparently good track record of the 17 models analyzed by Hausfather et al. is that the recent, strong El Niño provided a spike in global temperatures that rescued (most of) the models. Had Hausfather et al. run their assessment only through, say, 2013, then even these earlier-generation climate models would have been consistently running hot.
- Regarding the Hausfather et al. paper’s attempt to rehabilitate James Hansen’s predictions made in the late 1980s by arguing that he shouldn’t be blamed for overestimating future methane and CFC emissions (because of the Montreal Protocol), I point the reader to the rebuttal from climate scientist Pat Michaels. Specifically, if we are going to use our superior knowledge to tinker with a decades-old model, then we should be consistent. Yes, as the Hausfather team points out, Hansen overestimated the trajectory of emissions of some greenhouse gases, and that’s partly why his predictions were too hot. However, as Michaels rebuts, it’s also true that Hansen’s 1988 model overstated the cooling effect of sulfate aerosols by a factor of three (according to current estimates). When Michaels includes that update as well, Hansen’s model overpredicts warming even more badly than before the tinkering began.
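The endpoint-sensitivity argument above can be illustrated with a toy calculation (the numbers here are entirely synthetic, invented for illustration; they are not real temperature data): a series that warms more slowly than a model predicts, but gets a warm spike in its final few years, can show an end-to-end trend close to the model's.

```python
# Synthetic illustration (invented numbers, not real temperature data) of how
# a warm spike near the end of a record can rescue an over-warm forecast.
model_trend = 0.25   # hypothetical modeled warming, degrees C per decade
obs_trend = 0.15     # hypothetical observed warming before the spike

years = list(range(2000, 2018))
obs = [obs_trend / 10 * (y - 2000) for y in years]

# Add a strong warm spike (think El Nino) to the last few years of the record.
for i, y in enumerate(years):
    if y >= 2015:
        obs[i] += 0.2

def decadal_trend(xs, ys):
    """Ordinary least-squares slope, scaled to degrees per decade."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * 10

print(f"trend through 2013: {decadal_trend(years[:14], obs[:14]):.2f} C/decade")
print(f"trend through 2017: {decadal_trend(years, obs):.2f} C/decade")
```

Cut off in 2013, the synthetic observations show their underlying 0.15 C/decade trend, well below the hypothetical model; extended through the spike years, the fitted trend climbs toward the model's 0.25 C/decade. The choice of endpoint, not any improvement in the model, does the work.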
The Earth’s climate system is a tremendously complex entity that is difficult to model. Advances in modeling techniques and computer processing speed, as well as the accumulation of more high-quality instrument data, have no doubt increased scientists’ understanding over the past few decades. Even so, climate science is still in its infancy, as even a cursory inspection of the UN’s own published ranges for the “equilibrium climate sensitivity” (ECS) indicates.
Unfortunately, because fears of catastrophic climate change have been tied so closely to political battles, the entire subject has been weaponized. The defenders of the orthodox line on climate science have often been pushed to exaggerate the performance of the computer models, in order to reassure the public not to listen to those skeptical of the models.
Yet as I’ve demonstrated in this 3-part series, prominent scientists, both within and outside the field of climate science, have pointed out serious limitations in our current understanding. Furthermore, as I made the central focus of Part 1, even the temperature chart produced by the orthodox defenders shows how badly the recent climate models have done. Policymakers and members of the public who want a more balanced view on climate change should consider the points I’ve raised in this series, and better yet should follow the links to the actual experts.