Questions for the Australian Bureau of Meteorology

    Open Letter – 4th March 2014

The Hon Greg Hunt MP
Minister for the Environment
PO Box 6022
House of Representatives

Dear Minister Hunt

Re: Australian Bureau of Meteorology and reliability of temperature statistics and seasonal rainfall forecasts

I am writing to you as the Minister for the Environment, ultimately responsible for the activities of the Australian Bureau of Meteorology, to ask that you consider the following seven issues pertaining to the activities of the Bureau.

Australia has long been considered “a land of drought and flooding rains”. Nowadays, there is also anxiety within the community about the potential impacts of climate change, and how this might be managed. For example, recent focus has been on revised planning schemes for anticipated sea-level rise, water reform in the Murray Darling to mitigate the next drought, and estimating peak electricity demand for summer heat waves. Indeed, the Bureau is currently tasked with one of the most important jobs in Australia: the provision of information about past, present and future weather and climate. It is absolutely imperative that it fulfils this mandate through the application of the best available science and the most trusted statistical techniques, and absolutely without agenda or bias.

Q1. Could the Bureau explain why it uses 1910 as the start date for the official temperature record rather than a year such as 1860, given there was a large amount of reliable temperature data available from the mid 1800s?

On 3rd January 2014, David Jones, Manager of Climate Monitoring and Prediction at the Bureau, was reported extensively in the Australian media claiming that 2013 was Australia’s hottest year on record. It remains unclear how this national average value was calculated. Benchmarking against satellite records since they began in 1979 suggests 1980, 1998, 2005 and 2009 may have been hotter years. Earlier records from individual weather stations indicate that during the Federation drought (1896-1902) there were many hotter years in particular areas. Of practical concern, the official temperature record only extends back to 1910 and thus the years of the Federation drought are not considered in determining the hottest average year.

I wrote to the Bureau on 9th January asking how the national average temperature was calculated. I also asked why the official temperature record only goes back to 1910, when it is well known there are official recordings extending back to the mid to late nineteenth century.

In his reply, Neil Plummer, Assistant Director, Climate Information Services, explained that national standardisation of instruments did not occur until 1910. He also explained that data prior to this time can be fragmented, of uncertain or low quality and, in many cases, there is no information about the nature of the instruments and their enclosures.

Such an explanation is, in fact, historically incorrect.

Starting from at least the mid-1800s, a network of meteorological observation stations grew in association with the establishment of post offices and telegraphic offices. The Colonial Meteorological Conferences, held in 1879, 1881 and 1888, were attended by scientists from across the land. They included discussions about the communication of meteorological data and also the need for standardisation in the methods of recording, including through the installation of Stevenson screens.

A Stevenson screen now forms part of a standard weather station and according to Mr Plummer they were not installed across Australia until 1910, which is why the Bureau excludes pre-1910 records in the calculation of a national average temperature. But this is inconsistent with the minutes from the Meteorological Conferences and also early newspaper reports. For example, it was reported in Rockhampton’s Morning Bulletin on Saturday 28th September 1889 that the Queensland government meteorologist Clement Wragge had just been in town busy fitting a Stevenson screen at the Telegraph Office, transforming Rockhampton from a second to a “first-class meteorological station”. The same article explained that he would be travelling on to Boulia and Cloncurry, where he would also install Stevenson screens.

Australia’s telegraph and meteorological networks owe a huge debt to electrical engineer, meteorologist and astronomer, Charles (later Sir Charles) Todd, who was employed by the South Australian government initially as superintendent of telegraphs, arriving in Adelaide in November 1855. He soon had a telegraphic line operating from Adelaide to Melbourne, and by 1872 he oversaw completion of the overland telegraph line from Adelaide to Darwin connecting Australia to Europe via Indonesia. By 1877 each state had tapped into this network.

In 1870 the post office and telegraph departments were amalgamated and Charles Todd was appointed Postmaster General and Superintendent of Telegraphs. This institution, established in 1870, became a Commonwealth department at federation on 1st January 1901, and was administered from Adelaide until 1975.

For Charles Todd the telegraph and post offices were a means to an end. His first passion was meteorology, and everywhere he established a telegraphic office he established a weather station and trained the staff in the operation of the equipment. The telegraphic officers in South Australia and the Northern Territory were required to report temperatures and rainfall on a daily basis to his observatory in Adelaide.

By 1860, Charles Todd was receiving temperature data from 14 stations in South Australia and the Northern Territory. By 1879, he was publishing weather maps, which resemble current synoptic charts.

Without incorporating these earlier records from Sir Charles Todd, Mr Wragge and others, who were particularly competent record keepers, we can have no confidence in claims by Bureau spokespeople such as Neil Plummer or David Jones that 2013 (or any other year) was the hottest on record, when they consider only post-1910 figures, and then selectively.

Q2. Further to Q1, if the pre-1910 data is not suitable for official domestic use, can the Bureau explain why it finds it suitable enough to provide this data for generation of a global annual mean temperature anomaly back to 1850?

Despite the claimed unreliability of pre-1910 temperature data, the Bureau contributes to an international program, coordinated by the Climatic Research Unit at the University of East Anglia, which calculates annual average temperatures for Australia back to 1850.[1] This information is then used by the Intergovernmental Panel on Climate Change (IPCC) to calculate warming over the last 160 years.

This seems incongruous and inconsistent and should be explained. It undermines the contention that pre-1910 data is unreliable.

Q3. Could the Bureau please provide a list of the actual stations used to calculate the 2013 average mean temperature anomaly, the specific databases and time intervals applied to each of these stations, as well as the adjustments that have been made to the raw data?

The Bureau does not have a network of thermometers uniformly positioned on a grid across the landmass of Australia from which it might derive an average annual temperature. The majority of its approximately 750 temperature recording stations (the exact number is always in a state of flux) are in south-eastern Australia.

The Bureau’s solution is to select a subset of about 112 stations from the 750 stations, some with long data sets, others with very short data sets, and to discard all data before 1910 (more than 50 years of measurements for some stations).

Even though post-1910 data is considered most reliable, the Bureau still adjusts some of this residual data.[2] Then the adjusted and truncated values from the subset of stations are fitted to a grid to generate an area-weighted average national temperature. The resulting single data series is the official national average temperature since 1910, which shows an increase but with significant variability between years.
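For illustration, the area-weighting step described above might be sketched in a few lines of code. The station names, latitudes and anomaly values here are entirely invented, and cosine-of-latitude weighting is used only as a common stand-in for the Bureau's actual gridding procedure, which it has not published in full:

```python
import math

# Hypothetical station anomalies (deg C); these are NOT actual ACORN-SAT
# stations or values, merely placeholders to show the arithmetic.
stations = [
    {"name": "Station A", "lat": -34.9, "anomaly": 1.2},
    {"name": "Station B", "lat": -23.7, "anomaly": 0.8},
    {"name": "Station C", "lat": -42.9, "anomaly": 0.5},
]

def area_weighted_mean(stations):
    """Weight each station by the cosine of its latitude, a common proxy
    for the area a station represents on a latitude-longitude grid."""
    weights = [math.cos(math.radians(s["lat"])) for s in stations]
    total = sum(w * s["anomaly"] for w, s in zip(weights, stations))
    return total / sum(weights)

print(round(area_weighted_mean(stations), 3))
```

The point of the sketch is simply that the final national figure depends on which stations enter the calculation and how each is weighted, which is why disclosure of the station list and weights matters.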

When announcing the record-breaking national average temperature for Australia for 2013, Dr. Jones suggested that it was indicative of a general warming trend, and that this trend was Australia-wide.

Some issues have been raised with this claim.

There has been considerable variability over recent years in both the annual average temperature anomaly and the annual maximum and minimum temperature anomalies, with 2011 and 2012 relatively cold years, which suggests this may not be a warming trend at all.[3] Furthermore, when state boundaries are considered, only South Australia was significantly warmer in 2013.

While there has been much reference to January 2013 being exceptionally hot, when the maximum January temperature anomaly is plotted for South Australia back to 1910, it is apparent that there were hotter Januaries in the 1930s.

Indeed, according to the Bureau’s time series based on the truncated and adjusted subset used to generate the official statistics, the January maximum temperature anomaly has not been exceeded since 1934 for the Northern Territory, and since 1947 for Queensland. New South Wales and the Australian Capital Territory had hotter Januaries as far back as 1938. Considering the January mean temperature anomaly for South Australia over the period 1910-2014, there have been seven hotter Januaries back to 1933. Tasmania had its hottest Januaries in the early 1960s, and Victoria in the early 1980s.

It would increase confidence in the Bureau’s methodology for calculating a national average temperature if the actual raw data were made available, along with the code, so that independent scientists could verify its procedures and methodology, as per standard scientific practice.

Q4. Given potential and actual conflicts of interest, could the Australian Bureau of Statistics (ABS), rather than the Bureau of Meteorology, be tasked with leading the high-quality and objective interpretation of the historical temperature record for Australia?

Confirmation bias is the tendency to treat data selectively and favour information that confirms one’s beliefs. Such bias can quickly spread through an organisation unless there are procedures in place to guard against groupthink. The seminal text in this area, Groupthink: Psychological Studies of Policy Decisions and Fiascoes (Houghton Mifflin, Boston, 1983) by Irving L. Janis, outlines how, irrespective of the personality characteristics and other predispositions of the members of a policy-making group, the groupthink syndrome is likely to emerge under particular conditions: the decision-makers constitute a cohesive group, lack norms requiring methodical procedures, and are under stress from external threats. This can lead to illusions of invulnerability and a belief in the inherent morality of the group, producing self-censorship, illusions of unanimity and an incomplete consideration of alternative solutions to the issue at hand. All of these characteristics can be applied to the Bureau, which is particularly convinced of the inherent moral good of both its cause and its approach to the issue of global warming.

The extent of the problem of groupthink within the Bureau, and the international climate science community more generally, became particularly evident in 2009 when the Climategate emails were released. These emails raised many disturbing questions about the way climate science is conducted; about researchers’ preparedness to block access to climate data and downplay flaws in their research; and about the siege mentality and scientific tribalism within the community. These emails show that managers at the Bureau including David Jones and Neil Plummer, rely on other climate scientists, particularly those at the heart of Climategate, for statistical advice and share the general contempt of the mainstream climate science community for rigorous scientific analysis.

For example, in an email dated 7th September 2007, Dr Jones wrote to Phil Jones from the Climatic Research Unit that, “Truth be know,[sic] climate change here is now running so rampant that we don’t need meteorological data to see it.” In an email dated 5th January 2005, David Parker from the UK Met Office wrote to Mr Plummer resisting a suggestion that the period used to calculate temperature anomalies be corrected, on the basis that “the impression of global warming will be muted.”

In 2006 Edward Wegman, professor at the Center for Computational Statistics at George Mason University, chair of the US National Academy of Sciences’ Committee on Applied and Theoretical Statistics, and board member of the American Statistical Association, was asked by the US House of Representatives to assess the statistical validity of the work of Michael Mann which contributed to many of the claims by the IPCC that the 1990s was the warmest decade and 1998 the warmest year of the millennium. In his final report, Professor Wegman made damning assessments pertaining to the statistical competence of leading climate scientists.[4]

In particular, and drawing an analogy with pharmaceutical research, Professor Wegman recommended:

Recommendation 3. With clinical trials for drugs and devices to be approved for human use by the FDA, review and consultation with statisticians is expected. Indeed, it is standard practice to include statisticians in the application-for-approval process. We judge this to be a good policy when public health and also when substantial amounts of monies are involved, for example, when there are major policy decisions to be made based on statistical assessments. In such cases, evaluation by statisticians should be standard practice. This evaluation phase should be a mandatory part of all grant applications and funded accordingly.

Q5. What is the explanation for the discrepancy between allocated funding for salary and actual salary of climate change modelers employed under the Climate Change Science Program?

On 8 November 2010, John Abbot of Central Queensland University made a freedom of information (FOI) request to the now defunct Department of Climate Change and Energy Efficiency (DCCEE). Part of the request related specifically to budget details for the Australian Climate Change Science Program (ACCSP) and included a request for information as to how monies were allocated between specific projects and the outcomes of these projects.

Documentation received by Professor Abbot from the DCCEE with a letter dated 1st March 2011 showed that in both the 2008-2009 and 2009-2010 financial years almost $8 million was allocated annually for “fundamental climate change science research” projects, with over 90 per cent of the allocation being granted to the Bureau and the Commonwealth Scientific and Industrial Research Organisation (CSIRO). The details of the projects, including Document 323 (which listed and described projects proposed by the CSIRO and Bureau) were, however, withheld by the DCCEE on the basis of conditional exemptions under section 45 (Confidential information) and section 47G (Business) of the FOIA.

Professor Abbot disputed the validity of these exemptions. The DCCEE released all documents after protracted legal argument, although it is unclear what precipitated this release: the DCCEE never conceded any points of law, nor has the Australian Information Commissioner provided formal written advice. After the documents relating to the ACCSP had been released it was evident that there were accounting discrepancies in the distribution of the funding. For example, Professor Abbot calculated that staff costs for the nominated 160 individual researchers (part-time and full-time) from the Bureau and CSIRO for the 31 separate modelling projects listed under the Australian Climate Change Science Program came to $18.1 million. Spread across 82 full-time equivalent employees, this equates to approximately $220,000 annually for each full-time equivalent employee. Yet a government computer modeller with a postgraduate qualification in a management position, with at least 10 years’ experience, is unlikely to earn more than $80,000 and a mid-tier technical level scientist at CSIRO would earn less than $116,000.
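The arithmetic behind the per-employee figure quoted above can be checked directly, using the staff-cost total and full-time-equivalent count as stated in the FOI documentation:

```python
# Reproducing the letter's arithmetic from the figures stated in the
# released ACCSP documentation.
total_staff_costs = 18.1e6   # dollars, staff costs across 31 modelling projects
fte = 82                     # full-time-equivalent employees

cost_per_fte = total_staff_costs / fte
print(f"${cost_per_fte:,.0f} per full-time equivalent per year")
```

The result is approximately $220,700 per full-time equivalent, consistent with the roughly $220,000 figure quoted, and well above the cited salary benchmarks of $80,000 and $116,000.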

When clarification was initially sought the DCCEE claimed the discrepancy related to travel expenses for these modelers including travelling by ship to the Antarctic. When it was pointed out that there were separate allocations within the stated budget for operational costs, the DCCEE declined to comment further. Under Freedom of Information Legislation a government department is not required to provide information or answer questions, only to consider disclosing documents already in existence.

Salary scales for CSIRO and Bureau employees for this period are publicly available and suggest that the majority of these employees would have had to be employed at a senior managerial level, which is incongruous with the actual job titles within the budgets.

Q6. Is the reliance by the Bureau on a General Circulation Model (GCM) to provide monthly and seasonal forecasts justified when methods that use historical patterns have proven to be more accurate?

Without issuing a media release, or otherwise informing the public, in June 2013 the Bureau discarded the statistical models that had been used for over 30 years to generate seasonal rainfall forecasts in favour of a GCM, the Predictive Ocean Atmosphere Model for Australia, POAMA.

While the old statistical models were relatively primitive and had many deficiencies, there is no evidence to suggest that POAMA can provide a better medium-term rainfall forecast. In fact, a recent peer-reviewed paper by Andrew Schepen from the Bureau with colleagues from CSIRO (Journal of Geophysical Research, Volume 117) clearly states that the simple statistical models are more skilful at rainfall forecasting than POAMA during the period from September to February. This is the period most often associated with extreme rainfall events in Australia, particularly cyclones and flooding.

Evaluations of POAMA rainfall forecasts by Professor Abbot from Central Queensland University and me (Atmospheric Research, Volume 138) indicate that forecasts from POAMA are less accurate than simple long-term averages.
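The benchmark involved here is straightforward to state: a forecasting model only adds skill if its errors are smaller than those of the long-term average (climatology). The sketch below uses invented rainfall values, not POAMA output or our published evaluation code, simply to make the comparison concrete:

```python
def mae(forecasts, observed):
    """Mean absolute error between paired forecast and observed values (mm)."""
    return sum(abs(f - o) for f, o in zip(forecasts, observed)) / len(observed)

# Illustrative monthly rainfall values (mm); entirely made up for this example.
observed    = [110, 40, 95, 20, 150, 60]
model       = [60, 90, 40, 80, 70, 100]   # a hypothetical model's forecasts
climatology = [sum(observed) / len(observed)] * len(observed)  # long-term mean

# A model demonstrates skill only if it beats the climatological baseline.
print("model MAE:", round(mae(model, observed), 1))
print("climatology MAE:", round(mae(climatology, observed), 1))
```

In this invented example the climatology baseline has the smaller error, which is precisely the situation our published evaluations found for POAMA.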

Professor Abbot and I show that models based on artificial neural networks (a form of artificial intelligence) run on standard computers can produce significantly more skilful monthly rainfall forecasts for localities in Queensland than the forecasts from POAMA (Advances in Atmospheric Sciences, Volume 29).

In discussing these issues with researchers in Australia and the United Kingdom, I have encountered a high level of defensiveness, because these computer models involve a very substantial investment in supercomputers and also underpin attempts to model anthropogenic global warming, to which many leaders within the field have already linked their reputations, continued funding and career advancement.

While the Australian taxpayer invested upward of $30 million in just one supercomputer in March 2009 on the basis that this would make weather predictions more accurate, some individual forecasters, operating outside the mainstream climate-science community and without any government support, are producing more reliable and accurate medium-term rainfall forecasts than the Bureau.

For example, the Bureau issued its national rainfall outlook for spring 2013 on 28th August last year, indicating that it would be wetter than average for most of the Murray Darling Basin and especially central Victoria. Then on 25th September 2013, less than one month later, it issued another forecast, this time indicating that a drier than normal season was likely for most of the Murray Darling Basin. These forecasts influence federal water resource planning, with the capacity of the relevant authorities to make sensible decisions about the management of water storages and the distribution of allocations impacted accordingly.

An engineer based in Bendigo, Kevin Long, made his own spring forecast, published the same day, 28th August 2013, at his website.[5] He forecast below-average rainfall for central Victoria, above-average temperatures, and that river stream flows would drop away quickly in the Murray Darling. Mr Long’s forecast was much closer to what was observed.

Mr Long is skeptical of anthropogenic global warming theory and the associated claim, which is often made by Bureau staff, that the Australian climate is on a new trajectory and that historical data is therefore of limited value in making forecasts for the future. Rather Mr Long relies on patterns in historical data and relates these to solar and lunar cycles. On 28th November 2013, a month before Chris Turney from the University of New South Wales became trapped in sea ice in the Antarctic, Mr Long issued his summer forecast, which began with reference to sea ice:

In recent months, new Antarctic sea ice records were set, exceeding all previous satellite records… Already since mid-September, record growth rates of the Arctic sea ice and the slow melt rates of the Antarctic have driven the global sea ice extent up to levels not seen for more than a decade. In the future these higher sea ice averages will become one of the dominant drivers of eastern Australia’s developing mega-drought cycle… higher sea ice periods go hand-in-hand with below-average rainfall and heavier late season frosts.

My forecasting work with artificial neural networks is also underpinned by the assumption that past patterns continue into future periods. The fact that these rainfall forecasts are superior to forecasts from POAMA suggests that the historical patterns have not been obliterated by global warming, a claim sometimes made by Bureau staff and a justification for using GCMs.

The accuracy of forecasts from all advanced statistical methods, including artificial neural networks, will be further enhanced by the availability of the longest historical data records possible. This is another reason to provide digitized versions of weather records extending back as far as possible for individual weather stations.

Q7. Could the Bureau explain why it doesn’t publish the actual quantity of rain forecast by the GCM when issuing its monthly and seasonal forecasts, and why it shouldn’t establish a publicly available archive showing quantities of rainfall forecast in the past?

Australian rainfall is extremely variable with episodes of drought that often end with extreme flooding. The summer of 2010-11 in Queensland is remembered as the ‘Summer of Sorrow’ with 38 people killed, Brisbane inundated, and three-quarters of the state declared a disaster zone. Eighty-five per cent of Queensland coalmines either closed entirely or had to restrict production.[6] A report prepared for Australia’s National Climate Change Adaptation Research Facility following that extreme event concluded that available seasonal rainfall forecasts from the Bureau were not useful, and lacked localised information, and other micro details, needed for focused pro-active planning and risk management.[7]

Official seasonal rainfall forecasts are now generated by the GCM, POAMA, as a series of monthly rainfall forecasts for grid points across the Australian landmass. The model generates this output as an actual quantity of rainfall. But rather than publish the actual quantity forecast staff at the Bureau convert this into a coloured map by calculating the probability of exceeding the long-term average value for broadly defined regions. While the coloured maps provide a useful quick reference, they are inadequate for planning and risk management and they prevent benchmarking against observed rainfall. In short, it is currently impossible to objectively measure the skill of POAMA in providing rainfall forecasts, unless you are directly employed in its use.
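The information loss in this conversion can be illustrated in a few lines. The ensemble values below are invented, not POAMA output; the point is only the shape of the transformation from quantities to a single probability:

```python
# A minimal sketch, with invented numbers, of the conversion described above:
# an ensemble of rainfall-quantity forecasts (mm) is reduced to a single
# probability of exceeding the long-term average before publication.
ensemble_mm = [42, 55, 61, 38, 70, 49, 58, 44, 66, 52]  # hypothetical members
long_term_average_mm = 50

prob_above = sum(m > long_term_average_mm for m in ensemble_mm) / len(ensemble_mm)
print(f"Chance of exceeding average: {prob_above:.0%}")

# Only prob_above reaches the published map; the underlying quantities
# (ensemble mean of 53.5 mm here) are discarded, so observed rainfall in mm
# cannot be benchmarked against the forecast.
```

Publishing the forecast quantities alongside the probability maps, and archiving them, would allow anyone to score the model against observations after the fact.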

In conclusion

The topic of climate change seems to be perpetually in the news, with current examples of very cold weather in the eastern USA, floods in Britain, and more recently drought in Queensland and New South Wales. From the perspective of climate, one hundred years of recorded measurements is a mere blink of the eye in the history of our planet. If the record for Australia can be extended back to 1860, providing an additional 50 years of data, then this should be a priority. This information is more important than the calculation of a national average temperature. If data is to be adjusted and homogenized then the methodology applied needs to be clearly stated. Indeed having access to all the available records as far back as possible is important because it helps unravel the true features of the natural climate cycle, a goal that Sir Charles Todd and his colleagues were working towards before the establishment of the Bureau in 1908.

In arriving at theories that explain the natural world, the best scientists always use all the available data, not just the data that happens to fit a particular viewpoint. Furthermore, long historical data series are critical for statistical methods of rainfall forecasts, including the application of artificial neural networks that can currently provide more skillful forecasts than POAMA. That the Bureau persists with POAMA, while failing to disclose to the Australian public the absence of any measurable skill in its monthly and seasonal forecasts, should be of grave concern to the Australian parliament.

Yours sincerely

Dr Jennifer Marohasy
Adjunct Research Fellow
Central Queensland University


1. Global climate variability & change – Time series graphs, Bureau of Meteorology website; Temperature station data, Climatic Research Unit website.

2. ACORN-SAT: A Preliminary Assessment, Ken Stewart, May 2012.

3. Australian climate variability & change – Time series graphs, Bureau of Meteorology website.

4. Wegman E J, Said Y H, Scott D W. 2006. Ad Hoc Committee Report On The ‘Hockey Stick’ Global Climate Reconstruction, Congressional Report, United States House Committee on Energy and Commerce.

5. The Longview website.

6. Queensland Floods Commission of Inquiry, Chapter 13: Mining.

7. Sharma V, van de Graaff S, Loechel B, Franks D. 2013. Extractive resource development in a changing climate: Learning the lessons from extreme weather events in Queensland, Australia, National Climate Change Adaptation Research Facility, Gold Coast, pp. 110.