Guest post by Ira Glickstein
Thanks to WUWT readers who posted estimates of how much of the supposed 0.8ºC Global Warming since 1880 was due to Data Bias, Natural Cycles, and AGW (human-caused warming). I am happy with the results even though the average for AGW came out higher than my original estimate.

This is the fifth in my Tale of the Global Warming Tiger series, in which I allocated the supposed 0.8ºC warming since 1880 to: (1) Data Bias (0.3ºC), (2) Natural Cycles (0.4ºC), and (3) Human-caused global warming – AGW (0.1ºC). Click Tiger’s Tale and Tail :^) to read the original story.
WUWT COMMENTERS SAY
As the above graphic indicates, WUWT Commenters who provided their own estimates generally agreed with my allocation, with the interesting exception of AGW, where the average is 0.18ºC, nearly double my original allocation of 0.1ºC. Natural Cycles averaged out at 0.33ºC, a bit lower than my original 0.4ºC. Data Bias averaged out at 0.28ºC, a bit lower than my original 0.3ºC. While this is not a scientific poll, it certainly shows that a wide variety of Climate Science opinion is alive and well here at WUWT.
Far from being a Global Warming Tiger, mostly due to atmospheric CO2 from human burning of fossil fuels and land use, and on its way to 2ºC to 5ºC or more according to the IPCC, it appears we are actually dealing with a Global Warming Pussy Cat: warming since 1880 of around 0.5ºC to 0.6ºC, and stabilizing despite the continued rise in CO2, much of it human-caused.
Some who responded put AGW as low as ZERO (while others put it as high as 0.7ºC), some put Natural Cycles as low as ZERO (while others put it as high as 0.55ºC), and some put Data Bias as low as ZERO (while others put it as high as 0.65ºC). At the end of this posting, I’ve tabulated your estimates, along with the names of those kind enough to provide them. THANKS!
When everything settles out over the coming decades, which I believe will be marked by stabilization of Global temperatures, and perhaps a bit of Global Cooling, I think your estimates will turn out to be more prescient than those of the official climate Team! One Commenter humorously posted “Jim Hansen’s” estimates as: AGW = +3.3ºC, Natural Cycles = -2.5ºC, and, of course, Data Bias = 0.0ºC.
IS ALL THE TEMPERATURE DATA USELESS – OR IS IT THE ANALYSIS?
When I discussed the controversy about the temperature data collected since 1880 with my PhD advisor (with whom I am still in regular contact), he reminded me that, given a large number of measurements by different observers, using a variety of thermometers, taken at a variety of locations and times, the random errors would largely cancel each other out. Even systematic errors in particular thermometers, which might be calibrated a bit high or low, and in particular observers, who might tend to round the numbers up or down, would largely cancel out. Indeed, he said, even long-term systematic bias would hardly show up in the temperature trends. Thus, he assured me, while any individual reading may or may not be accurate, the overall temperature trend would be quite robust, to a high level of precision.
Of course, he is correct from an academic point of view. As a brilliant analyst once humorously explained to me, once we ASSUME a perfectly smooth elephant with negligible mass, all sorts of wonderful circus tricks become possible!
Yes, errors may be categorized as:
- Perfectly Random (due to “noise” in the measurement process, and equally likely to be higher or lower than the truth) or,
- Perfectly Systematic (due to miscalibration of the measuring instrument, off by a constant amount, equally likely to be higher or lower than the truth), and assumed to be
- Perfectly Independent (not affected by any other measurement).
In the real world, however, these conditions seldom obtain, yet they are necessary assumptions for statistical analysis to operate correctly. When a scientific study concludes that its results are correct, plus or minus a given amount (say +/-0.05ºC), to a given statistical certainty (say 95%), it is implicitly assuming that the three conditions above are satisfied.
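To make the distinction concrete, here is a minimal Python sketch (all numbers invented for illustration) of an assumed 0.5ºC/century true trend observed by many noisy stations. With perfectly random, independent errors, the recovered trend is nearly exact, just as my advisor said; a small drift shared by the stations biases the trend even though each individual reading is barely affected:

```python
import numpy as np

rng = np.random.default_rng(42)

years = np.arange(1880, 2011)            # 131 annual means
true_trend = 0.005                       # assume 0.5 C/century for illustration
truth = true_trend * (years - years[0])  # the underlying "real" anomaly

n_stations = 500

# Case 1: perfectly random, independent errors (sigma = 0.5 C per reading)
random_errs = rng.normal(0.0, 0.5, size=(n_stations, years.size))
mean_series = (truth + random_errs).mean(axis=0)
slope_random = np.polyfit(years, mean_series, 1)[0]

# Case 2: the same noise plus a slow drift shared by every station
# (e.g. 0.2 C of spurious warming over the record)
drift = np.linspace(0.0, 0.2, years.size)
mean_series_drift = (truth + random_errs + drift).mean(axis=0)
slope_drift = np.polyfit(years, mean_series_drift, 1)[0]

print(f"true trend:            {true_trend * 100:.3f} C/century")
print(f"random errors only:    {slope_random * 100:.3f} C/century")  # ~0.500
print(f"with shared drift:     {slope_drift * 100:.3f} C/century")   # ~0.654
```

Averaging over 500 stations crushes the random noise, but anything systematic that the stations share passes straight through to the trend, which is why the adjustment questions below matter.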
In many cases, even if those assumptions are not perfectly true, they are close enough for the statistical results to be valid. How can we tell if Global Warming is one of those cases? Well, for a start, we can ask how ROBUST the results are. In other words, when the data are analyzed by different people at different times, do they all come up with close to the same results? In the case of Global Warming data, as I have shown, even when the exact same data is analyzed by the exact same members of the official climate Team, the results vary by +/-0.2ºC or more, indicating that something is wrong with their basic assumptions.
Case #1
According to my posting, a graph of the US Annual Mean Temperature record from 1880 to 1998, published by NASA GISS in 1999, differs substantially from the record for the same years published by them in 2011; see the blink graphic below:
A commenter suggested that the 1999 chart did not look like what had been published by GISS in that year. Well, the 1999 chart I used came from a posting by Anthony, who credited Zapruder.nl. An almost identical chart appeared at Climate Audit in 2007, linking to a Hansen 1999 news release, but that link now brings up a damaged image. However, I found an almost identical chart at GISS in a Hansen 1999 paper. The 2011 graphic I used was downloaded from GISS last month. The GISS re-analysis makes the data after about 1960 warmer by up to 0.3ºC, while the data prior to 1950 gets cooler by about 0.1ºC.
Case #2
According to a GISS email, released under the Freedom of Information Act, the records for US Annual Mean Temperature for 1934 and 1998 were re-analyzed seven times, reducing 1934’s 0.5ºC lead over 1998 to a virtual tie. [The email is embedded in the graphic below.] In the latest GISS accounting, done after the date of the email, 1998 pulled ahead by a bit. (Our tax dollars at work.)
There is a need to analyze and adjust the raw temperature data when stations move or are encroached upon by development, or when other changes are made to the equipment and enclosures or to the times of observation, etc. It seems that most of those changes would tend to exaggerate the amount of warming, yet those charged with analyzing the data seem to think otherwise. The reported temperatures always seem to increase with each re-analysis. That suggests an agenda on the part of those entrusted with the analysis.
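To illustrate the kind of adjustment involved, here is a deliberately simplified sketch of homogenizing a single documented station move. This is my own toy illustration, not the method NOAA or GISS actually uses, which must also handle undocumented breaks and comparisons against neighboring stations:

```python
import numpy as np

def homogenize_station_move(years, temps, move_year, window=10):
    """Remove the artificial step a documented station move introduces,
    by comparing the mean temperature in the `window` years on either
    side of the move and shifting the pre-move segment to match.
    Simplified sketch: real homogenization also compares against
    neighboring stations so a genuine climate shift is not removed."""
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    before = temps[(years >= move_year - window) & (years < move_year)]
    after = temps[(years >= move_year) & (years < move_year + window)]
    step = after.mean() - before.mean()
    adjusted = temps.copy()
    adjusted[years < move_year] += step  # splice the old site onto the new
    return adjusted

# Toy data: a flat 12.0 C record, station moved to a 0.4 C cooler site in 1950
years = np.arange(1930, 1970)
temps = np.where(years < 1950, 12.0, 11.6)
print(homogenize_station_move(years, temps, 1950))  # -> all 11.6
```

The sign of the computed step determines whether the earlier record is shifted up or down, so the net effect of thousands of such adjustments is exactly what is at issue here.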
DOES SATELLITE TEMPERATURE DATA SOLVE THE PROBLEM?
Satellite temperature measurements have been available since the late 1960s, with good surface and tropospheric data available since late 1978. So it would appear that, at least from 1979 on, given a uniform global source of data, global temperature trends have been accurately reported. However, according to Wikipedia:
Satellites do not measure temperature. They measure radiances in various wavelength bands, which must then be mathematically inverted to obtain indirect inferences of temperature. The resulting temperature profiles depend on details of the methods that are used to obtain temperatures from radiances. As a result, different groups that have analyzed the satellite data have obtained different temperature trends. Among these groups are Remote Sensing Systems (RSS) and the University of Alabama in Huntsville (UAH). Furthermore the satellite series is not fully homogeneous – it is constructed from a series of satellites with similar but not identical instrumentation. The sensors deteriorate over time, and corrections are necessary for satellite drift in orbit. Particularly large differences between reconstructed temperature series occur at the few times when there is little temporal overlap between successive satellites, making intercalibration difficult. …
They go on to say “Satellites may also be used to retrieve surface temperatures in cloud-free conditions, generally via measurement of thermal infrared …”[Emphasis added] so it would appear that this type of instrumentation cannot reliably measure surface temperatures below clouds. That is problematic, since anyone who has been to a beach knows how cold it gets when a cloud happens to pass overhead and block the Sun!
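To give a feel for what “mathematically inverted” means in the simplest possible case, here is a sketch that inverts the Planck radiation law to turn a single thermal-infrared radiance into a brightness temperature. This is only the textbook step; the actual RSS and UAH retrievals layer weighting functions, drift corrections, and intercalibration on top of it:

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34  # Planck constant, J s
C = 2.99792458e8    # speed of light, m/s
K = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(temp_k, freq_hz):
    """Forward model: blackbody spectral radiance, W m^-2 sr^-1 Hz^-1."""
    return (2.0 * H * freq_hz**3 / C**2) / math.expm1(H * freq_hz / (K * temp_k))

def brightness_temperature(radiance, freq_hz):
    """The inversion: the temperature of a blackbody that would emit
    the measured radiance at this frequency."""
    return (H * freq_hz / K) / math.log(1.0 + 2.0 * H * freq_hz**3 / (C**2 * radiance))

# Round trip at a thermal-infrared wavelength (~11 micrometers)
freq = C / 11e-6
rad = planck_radiance(288.0, freq)        # an Earth-like 288 K scene
print(brightness_temperature(rad, freq))  # -> 288.0
```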
Roy Spencer, PhD, updates the UAH Global temperature datasets based on satellite data. He writes:
Since 1979, NOAA satellites have been carrying instruments which measure the natural microwave thermal emissions from oxygen in the atmosphere. The signals that these microwave radiometers measure at different microwave frequencies are directly proportional to the temperature of different, deep layers of the atmosphere. Every month, John Christy and I update global temperature datasets … that represent the piecing together of the temperature data from a total of eleven instruments flying on eleven different satellites over the years. As of early 2011, our most stable instrument for this monitoring is the Advanced Microwave Sounding Unit (AMSU-A) flying on NASA’s Aqua satellite and providing data since late 2002.
… Contrary to some reports, the satellite measurements are not calibrated in any way with the global surface-based thermometer record of temperature. They instead use their own on-board precision redundant platinum resistance thermometers calibrated to a laboratory reference standard before launch.[Emphasis added]
The last sentence is somewhat reassuring, but it does not resolve my questions about how they compensate for cloud cover. It appears highly likely that Global temperatures have increased since 1880 by around 0.5ºC, which would most likely increase the water vapor content of the atmosphere and, over time, result in more clouds, on average. Thus, depending upon how the satellite temperature data analysis corrects for cloudiness, that data might report more warming than actually occurs. In any case, it appears that the satellite data will help improve the general reliability of global temperature data, assuming that the analysis is done properly, by experts who do not have any political agenda to “prove” or “disprove” Catastrophic AGW. Spencer appears to be a solid citizen in that respect.
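As a toy illustration of the “piecing together” Spencer describes, the sketch below splices two hypothetical satellite records by estimating their offset over the months both instruments flew. The real UAH and RSS merges are far more involved (orbital drift and diurnal corrections, among other things), but even this stripped-down version shows why a short overlap makes intercalibration fragile:

```python
import numpy as np

def splice(t_a, y_a, t_b, y_b):
    """Intercalibrate satellite B against satellite A using the months
    both were flying, then extend A's record with the adjusted B data.
    If the overlap is short and noisy, the offset estimate is poor, and
    that error is inherited by every later point -- the intercalibration
    difficulty the Wikipedia passage describes."""
    both = np.intersect1d(t_a, t_b)
    if both.size == 0:
        raise ValueError("no temporal overlap: cannot intercalibrate")
    offset = y_b[np.isin(t_b, both)].mean() - y_a[np.isin(t_a, both)].mean()
    tail = ~np.isin(t_b, t_a)                 # months only B observed
    t = np.concatenate([t_a, t_b[tail]])
    y = np.concatenate([y_a, y_b[tail] - offset])
    order = np.argsort(t)
    return t[order], y[order]

# Toy example: B reads 0.30 C warmer than A over a 12-month overlap
t_a = np.arange(0, 60);   y_a = 0.002 * t_a          # hypothetical anomalies
t_b = np.arange(48, 120); y_b = 0.002 * t_b + 0.30
t, y = splice(t_a, y_a, t_b, y_b)
print(round(y[-1] - 0.002 * t[-1], 3))               # -> 0.0, offset removed
```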
CONCLUSIONS
In my postings (A-, B-, C-, D-) in this Tale of the Global Warming Tiger series, I asked for comments on my allocations: (1) Data Bias 0.3ºC, (2) Natural Cycles 0.4ºC, and (3) AGW 0.1ºC. Quite a few readers were kind enough to comment, either expressing general agreement or offering their own estimates. Here is a tabulation of their interesting inputs. THANKS!
| Anomaly due to — | Human (AGW), ºC | Natural Cycles, ºC | Data Bias, ºC |
|---|---|---|---|
| A- | | | |
| Bill Illis | 0.225 | 0.275 | 0.300 |
| Brian H | | 0.450 | |
| Edmh | 0.100 | | |
| Ágúst Bjarnason | 0.250 | 0.250 | 0.100 |
| B- | | | |
| Ed Caryl | 0.000 | 0.300 | 0.500 |
| James Barker | 0.000 | 0.480 | 0.320 |
| JimF | 0.100 | 0.500 | 0.200 |
| richard verney | 0.000 | 0.550 | 0.250 |
| Scarface | 0.000 | 0.150 | 0.650 |
| Dave Springer | 0.500 | 0.000 | 0.300 |
| Mike Haseler | 0.100 | 0.300 | 0.200 |
| C- | | | |
| Leonard Weinstein | 0.300 | 0.400 | 0.100 |
| TimC | 0.100 | 0.400 | 0.300 |
| Steve Reynolds | 0.400 | 0.250 | 0.150 |
| Eric Barnes | 0.150 | 0.450 | 0.200 |
| Lucy Skywalker | 0.000 | 0.300 | 0.500 |
| D- | | | |
| Wayne | 0.100 | 0.300 | 0.400 |
| Eadler | 0.700 | 0.200 | 0.000 |
| Nylo | 0.200 | 0.400 | 0.200 |
| Minimum | 0.000 | 0.000 | 0.000 |
| Maximum | 0.700 | 0.550 | 0.650 |
| AVERAGE | 0.179 | 0.331 | 0.275 |
| Ira’s Estimates | 0.100 | 0.400 | 0.300 |