Australia is a large continent in the Southern Hemisphere. The temperatures measured and recorded by the Australian Bureau of Meteorology contribute to the calculation of global averages. These values, of course, suggest catastrophic human-caused global warming.
Two decades ago the Bureau replaced most of the manually read mercury thermometers in its weather stations with electronic devices that could be read automatically – so since at least 1997 most of the temperature data has been collected by automatic weather stations (AWS).
Before this happened there was extensive testing of the devices – parallel studies at multiple sites to ensure that measurements from the new weather stations tallied with measurements from the old liquid-in-glass thermometers.
There was even a report issued by the World Meteorological Organisation (WMO), entitled ‘Instruments and Observing Methods’ (Report No. 65), that explained that because the modern electronic probes being installed across Australia reacted more quickly to second-by-second temperature changes, measurements from these devices needed to be averaged over a one- to ten-minute period to provide some measure of comparability with the original thermometers. The same report also stated that the general-purpose operating range of the new Australian electronic weather stations was minus 60 to plus 60 degrees Celsius.
This all seems very sensible, well-documented, and presumably is now Bureau policy.
Except that this winter I have discovered that none of this policy is actually being implemented.
Rather than averaging temperatures over one or ten minutes, the Bureau is entering one-second extrema. This would bias the minima downwards and the maxima upwards – except that the Bureau is also placing limits on how cold a temperature an individual weather station can record, so most of the bias is going to be upwards.
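To see the size of this effect, here is a minimal simulation sketch in Python – every number in it is hypothetical, and it assumes only that the probe output is a smooth daily cycle plus random second-to-second noise:

```python
import numpy as np

# Sketch: compare the single hottest 1-second sample in a day with the hottest
# 1-minute average, as the WMO guidance would have it. All numbers hypothetical.
rng = np.random.default_rng(42)

seconds_per_day = 24 * 60 * 60
t = np.arange(seconds_per_day)

# Smooth daily cycle peaking mid-afternoon, plus assumed 0.3 C sensor noise.
true_temp = 20.0 + 8.0 * np.sin(2 * np.pi * (t - 9 * 3600) / seconds_per_day)
noisy = true_temp + rng.normal(0.0, 0.3, seconds_per_day)

one_second_max = noisy.max()

# 1-minute block means, then the daily maximum of those.
one_minute_max = noisy.reshape(-1, 60).mean(axis=1).max()

print(f"max of 1-second samples:  {one_second_max:.2f} C")
print(f"max of 1-minute averages: {one_minute_max:.2f} C")
```

In runs of this sketch the one-second maximum typically exceeds the one-minute maximum by around a degree – a spurious warming of the recorded extremes that comes purely from noise, before any real change in the weather.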
I have known for some time that the Bureau remodels these recordings in the creation of the homogenised Australian Climate Observations Reference Network – Surface Air Temperatures (ACORN-SAT), which is subsequently incorporated into HadCRUT, which is used to inform the United Nations’ Intergovernmental Panel on Climate Change (IPCC). Nevertheless, I naively thought that the ‘raw data’ was mostly good data. But now I am sceptical even of this.
As someone who values data above almost all else, this is a stomach-churning revelation.
Indeed, it could be that the last 20 years of temperature recordings by the Bureau will be found not fit for purpose, and will eventually need to be discarded. This would make for a rather large hole in the calculation of global warming – given the size of Australia.
*******
Just yesterday I wrote a rather long letter to Craig Kelly MP detailing these and other concerns. That letter can be downloaded here: TMinIssues-JenMarohasy-20170831
One of Australia’s true national treasures, Ken Stewart, was documenting this disaster some months ago – when I was still in denial. One of his blog posts can be accessed here: https://kenskingdom.wordpress.com/2017/03/21/how-temperature-is-measured-in-australia-part-2/
And I am reminded of a quote from the late Christopher Hitchens: ‘The tribe that confuses its totems and symbols with reality has succumbed to fetishism and may be more in trouble than it realises.’
Shadeburst says
Most science is about gathering data and organising them in a format suitable for analysis by the usual numerical methods. These methods can compensate for noisy and dirty data, but suspect data render the whole exercise pointless.
Silvestre Acedillo says
Is this for real? I am at a loss for words, so I need time to digest and read all of it first. But… I wanna scream…
So sorry to hear that….
Peter C says
Jen,
Excellent letter to Craig Kelly MP. I might try writing to my own MP.
tom mallard says
There perhaps isn’t a reason yet to discard, consider using probability against the thermal-expansion devices for maxima & minima correlations to use in integrals that limit what it had to be thus having a mean to use in long-term data a reaction.
Steve Short says
Astonishing! If this is true, as it appears to be, it is going to be a really big scandal.
Graeme No.3 says
Steve Short:
Scandal? Not at all; it will all be covered up.
Personally I was interested in the revelation that the BoM used standard mercury (and/or alcohol) thermometers out of the pack without calibration. Third-rate practice.
Steve Richards says
It’s good enough for government work. What’s the problem?
Politicians in Oz and in the UK typically do not come from hard-science or engineering backgrounds.
Therefore they have extreme difficulty in seeing and understanding any problem like this.
It needs to be pointed out to them in steps of A, B, C style learning and understanding.
Even then, some will not get it if it interferes with preconceived ideas.
Embarrassment and shame used to work, but politicians currently seem to be immune to this (unless said shame and embarrassment are overwhelming).
Jennifer Marohasy says
Steve Richards
Agreed. So, what are the steps of A, B and C… how would you explain this?
Siliggy says
“how would you explain this?”
How about a comparison to measuring the peaks of waves instead of the tide?
hunter says
The climate obsession has corrupted more and more of society. But it has primarily corrupted itself.
But so many parts of society have been co-opted into the climate obsession that one wonders what can be done to reform this institutional breakdown….
bart says
I’m 2 km from a weather measuring site and I am finding up to a 4 deg C difference between the official recordings and my temperature loggers. I have found my loggers to be comparable to loggers 4 km, 10 km and 15 km away. This only occurs at temperatures below 0 deg C. The official recording is always warmer.
Steven Fraser says
Teach them the reality of temp measurement, then and now.
IMO the best way would be to take 5 of each device into a conference room, put them in a circle, and have folks record the values themselves. Each person gets 1 analog or digital device, and does not compare notes with the person next to them.
Let the AC cycle as it will for the first half hour, and then turn it off.
Take manual readings every 5 mins, and digital ones as often as the devices allow. Record a value every 5 minutes, and place the timestamped measurements from each reading in a box next to the device. These will be examined when the hour is up. The staff rotates to different devices of the same type every 10 minutes, doing 2 recordings at each device.
At the end of the hour, in teams of 2 persons or small groups, each team takes the recordings from 1 analog and 1 digital device, calculates the high, low, and average readings for the device, and enters the raw data into a chart manually. The charts are collated and summarized down to 3 numbers for all measurements.
When all are collected, post them and discuss the results and any variations found.
As a more ‘field trip’ experiment, place the device pairs in different parts of a city, with different surroundings… parks, seashore, airport, neighborhoods, hills, or simply distribute them around the outside of a building in different situations… near walls, out in grass, by parking lots, shaded. This one would be easier to administer, and could still be done with the sense of a ‘game’.
For an additional wrinkle, leave a large gap somewhere in the distribution of the stations, and have the final temp collation ‘assign’ a value to it, ‘homogenizing’ with nearby station values. Or, midway during the analysis, simply start disregarding the values in a location, and assigning it the values of the digital stations on either side.
This last idea will provoke discussions of station infilling, site change continuity, and ‘adjustments’, and facilitate discussion about the propriety of those procedures.
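A toy sketch of that infilling step, in Python – the site names and values are entirely made up:

```python
# Toy infilling: the gap site gets the mean of its two digital neighbours.
readings = {"site_west": 18.2, "site_gap": None, "site_east": 19.0}

if readings["site_gap"] is None:
    readings["site_gap"] = (readings["site_west"] + readings["site_east"]) / 2

print(readings)  # site_gap is 'recorded' as 18.6 -- a value no thermometer measured
```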
I hope this is helpful.
John Atkins says
Thanks. This post and your letter cleared a lot of things up for me on this recent development. Appreciate the effort.
Jennifer Marohasy says
Thanks, Steven Fraser.
I also need this explanation for a radio interview – specifically, how a one-second reading, as opposed to a one-minute average, will produce readings inconsistent with measurements from mercury thermometers.
In short, how do I explain all of this in a ‘quick’ radio grab to the ‘average’ Australian?
Dave says
Jennifer,
“How do I explain all of this in a quick radio grab?”
Maybe a chat about the Heathrow Airport RECORD high temperature? http://clivebest.com/blog/?p=6721
What if a single one-second reading had been included in this one-minute average?
An A380 alone would have made a world record! Aircraft need reliable temperature data for take-off loads etc. Imagine one jet blast increasing the reading to, say, 42 degrees C – the fuel loads would be altered drastically!
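To put rough numbers on that, a back-of-envelope sketch – the 1.7 degree spike size is taken from the Heathrow graph discussed below, and the one-second duration is an assumption:

```python
# How much does a brief spike move a 1-minute average versus a 1-second reading?
spike_c = 1.7         # size of the transient spike (deg C), per the Heathrow graph
spike_duration_s = 1  # assumed duration of the spike
window_s = 60         # 1-minute averaging window

diluted = spike_c * spike_duration_s / window_s
print(f"spike adds {diluted:.3f} C to the 1-minute average")  # ~0.028 C
print(f"spike adds {spike_c} C to a 1-second extremum")       # the full 1.7 C
```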
Jennifer Marohasy says
Just filing this here:
Was the recent ‘record’ merely caused by a blast of hot air from a passing airliner at Heathrow?
By Christopher Booker 6:10PM BST 11 Jul 2015
Since my story last week headed “Mystery grows over Met Office’s ‘hottest day’”, there have been further developments. How could the Met Office justify its widely publicised claim that July 1 was the hottest July day recorded in Britain, based solely on a reading of 36.7 degrees Celsius (98 degrees Fahrenheit) made at Heathrow airport?
When the blogger Paul Homewood (on Notalotofpeopleknowthat) tracked down four weather stations around Heathrow, none showed readings on July 1 above 35.1. He wondered how far the Met Office figure might have been influenced by the siting of its Heathrow temperature gauge, shown by aerial photographs to be surrounded by heat-radiating Tarmac and near a runway.
He therefore asked the Met Office for further details about how its figure was arrived at. Its reply was that this information could only be supplied for £75 plus VAT. But it then, in light of all the interest this was arousing, issued a long press release. Despite claiming that its Heathrow weather station met all the requirements of the World Meteorological Organisation, it failed to answer any of the relevant questions. What it did include, however, was a graph revealing that the wholly untypical 36.7 figure had only been fleetingly reached in a marked 1.7 degree temperature spike at 2.15pm. Was this merely caused by a blast of hot air from a passing airliner? No answer on this from the Met Office.
Jennifer Marohasy says
And a good blog post by Jo Nova: http://joannenova.com.au/2017/09/bom-scandal-one-second-records-in-australia-how-noise-creates-history-and-a-warming-trend/ .
Steven Fraser says
Jennifer,
I concluded from your earlier description that the measurement procedure established for use with the digital thermometers during the calibration tests is not being followed. Since the thermometers are not being used consistently with their calibration, the measurements are invalid.
Analogous to this, and perhaps more along the lines of an audience’s interests, are some more familiar measurements: the time interval used when a pulse is measured, or waiting until a scale has stopped swinging before taking a value, or waiting for the ‘beep’ before reading a digital oral thermometer.
Not all air near the thermometer is at the same temperature, and sequential measurements of the same temperature by the same device can vary. Averaging over a minute lessens the effect of this noise – both in the air temperature and in the measuring device.
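As a quick sketch of how much the averaging helps (the 0.3 C per-sample noise level is an assumption): averaging n independent samples shrinks the random scatter by roughly √n, so a one-minute mean of 60 one-second samples is about 7.7 times less noisy than any single sample.

```python
import numpy as np

# Scatter of single 1-second samples versus 1-minute means of 60 samples.
rng = np.random.default_rng(1)
samples = 20.0 + rng.normal(0.0, 0.3, size=(100_000, 60))  # assumed 0.3 C noise

print(f"sd of single samples: {samples.std():.3f} C")               # ~0.300
print(f"sd of 1-minute means: {samples.mean(axis=1).std():.3f} C")  # ~0.039
```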
I hope this helps.
Ian George says
I wonder how the BoM arrive at the average/mean temps for each month/season/year.
I have often noticed that the NSW monthly max temp anomaly average is not consistent with the actual anomalies for each site.
For instance, August’s NSW max temp average was stated as 1.17C. But if one checks all the sites in NSW’s max anomalies, only 20% show around or above that average.
Do BoM only use ACORN sites, then smooth those temps over surrounding areas and then compare it with 1961-1990 anomalies?
The figure the BoM reports bears no resemblance to the actual raw temps.
http://www.bom.gov.au/climate/current/month/nsw/summary.shtml
BTW, all the weather maps for temps appear to be down at the moment.
Jennifer Marohasy says
Hi Steven,
Those words/that analogy will be most useful.
Hi Ian,
There is a lot more on this very topic in Chapters 9 and 10 of ‘Climate Change: The Facts 2017’, which is now available again as paperback and also on Kindle. More information here:
https://ipa.org.au/publications-ipa/books/climate-change-facts-2017-ebook-now-sale
But basically, in answer to your specific question, and from Chapter 10:
It is impossible to replicate the method used by the Bureau to calculate the current ACORN-SAT values [i.e. the state-wide and nation-wide averages]. This is not only because of the complexity of the homogenisation system used to remodel individual series, but also because the maximum temperature anomalies for each state are based on the application of a complex weighting system to each of the individual homogenised series. In particular, the Bureau has acknowledged:
“Temperature anomalies are based on daily and monthly gridded data with more than one station contributing towards values at each grid point. Unlike simpler methods such as Thiessen polygons, there is no specific set of weights attached to these. The effective contributions change on a daily or monthly basis.”
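For contrast, here is a minimal sketch of the ‘simpler method’ the Bureau mentions – a fixed-weight, Thiessen-polygon-style average in which every station’s contribution is explicit. The station names, anomalies, and weights below are entirely hypothetical:

```python
# Fixed-weight spatial average of station anomalies (Thiessen-polygon style).
stations = {
    # name: (anomaly in C, fraction of the state's area its polygon covers)
    "Station A": (1.4, 0.50),
    "Station B": (0.6, 0.30),
    "Station C": (0.9, 0.20),
}

state_anomaly = sum(anom * w for anom, w in stations.values())
print(f"state-wide anomaly: {state_anomaly:+.2f} C")  # +1.06 C here
```

With fixed weights like these, anyone can reproduce the state figure from the station values. The Bureau’s gridded approach, by its own account, has no fixed weights – the contributions change daily – which is why outside replication fails.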
****