Dear Jennifer,
I hope you are going to comment on the BOINC disaster. See my post at:
http://www.warwickhughes.com/blog/?p=39#more-39
It has links to where they have had to announce a major error.
Coolwire 11, from Feb 2005, includes a perhaps prescient little note on the BOINC mess.
http://www.warwickhughes.com/cool/cool11.htm
All the best,
Warwick
——————————-
I have previously posted on Boincing, click here.
Graham Young says
I think you’re being unfair to BOINC in a technical sense. A small mistake like this doesn’t invalidate their models. But what is interesting is that it took them so long to pick it up, and so many simulations. They must have had a strong desire to believe!
What this also demonstrates is that, far from being based on pure physics, climate modelling appears to be an elaborate piece of curve fitting.
They appear to be just running and running the thing with small tweaks to get a best match which they then hope will obtain into the future. But as this shows, who really knows if they have tweaked the right factors?
Ender says
Graham – “What this also demonstrates is that, far from being based on pure physics, climate modelling appears to be an elaborate piece of curve fitting.”
So if the Holden Commodore is found to have a fault and that model needs recalling, then obviously all cars on the road, of all makes and models, need to be recalled as well.
detribe says
You are using the wrong analogy Ender:
What you should say is nearer the mark: the so-called consensus that car design engineering is completely settled, and that we don’t have to worry about imperfections in human assumptions about car behaviour, is an error, so skepticism about new car performance is still in order. That said, the best cars today are pretty good, and we can use them with some confidence if we are careful about where we drive them.
The point I would make is: if model behaviour is influenced by just one parameter, what about the cumulative interaction of numerous uncertainties in hundreds of parameters, and indeed uncertainties about appropriate physical models and feedback loops in a complex non-linear system, à la “The End of Certainty” by Ilya Prigogine?
David says
>They appear to be just running and running the thing with small tweaks to get a best match which they then hope will obtain into the future. But as this shows, who really knows if they have tweaked the right factors?
Graham, the problem is clearly with the boundary forcings, not the model, and sounds like a simple oversight. The affair is a nice demonstration of the willingness of climate scientists to admit error, find a solution and move on. Would be nice if some of the “sceptics” were to do the same, wouldn’t it? A lot of them still have difficulty with the facts that global warming is global (not urban), that the climate scientists and their models were right and the MSU data was wrong, that the Hockey Stick has been verified by dozens of papers (including one out yesterday in Nature http://info.nature.com/cgi-bin24/DM/y/eXqF0LyRAm0Ch0y3l0E5), that MIT, Manabe and Hansen were right with the predictions of global warming made ~40, ~30 and ~10 years ago, and that the globe continues to warm despite the absence of a natural driver.
David
Ender says
detribe – “You are using the wrong analogy Ender:”
Absolute garbage. It is exactly the correct analogy. One fault in one car gives no indication of the general state of car faults or of the current state of car design. It is simply one problem with one group of engineers that made a mistake.
Similarly, one mistake in one climate model gives no indication of the state of climate modelling generally, or of computer modelling in general. As computer modelling is used in everything from aircraft design to metal smelting, are you now saying that all aircraft designed with numerical models, and all metal structures designed with computers, need to be re-examined?
It is another very strong indication of the total lack of scientific merit of the AGW skeptic case: starting with Ian Castles mentioning MBH98 in a thread that had nothing to do with paleoclimate, to now all the pseudo-scientists, with Warwick in the lead, descending on a simple error like a pack of ravening wolves after a dog biscuit.
The history of science is littered with accidental discoveries and mistakes leading to profound insights. Look at this from another way and try to be the scientist that you deserve to be.
These computer runs were done without aerosols due to a mistake in a header file. So this is an indication of the warming that could happen in the absence of particles in the upper atmosphere. Humans, without ANY foresight or design, decided that we needed lots of energy, so we burnt coal, releasing CO2 AND billions of small particles from unburnt matter and pollutants. In an amazing stroke of luck, these particles created a heat shield that counteracted the greenhouse effects of the CO2 that was also released.
So what happens when we start to burn coal cleanly? These aerosols, which contribute to thousands if not millions of deaths from respiratory complaints, should be eliminated, along with the urgent requirement to reduce CO2. If we clean up coal to save lives but do not reduce CO2, then we may just be exposed to the full warming of the enhanced greenhouse effect, and temperatures might rise much more than we think.
Perhaps this mistake contains some science after all.
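Ender’s aerosol-masking argument can be put in rough numbers. A minimal back-of-envelope sketch (all values here are illustrative assumptions, not output from climateprediction.net): in a zero-dimensional energy-balance picture, equilibrium warming is roughly forcing divided by a feedback parameter, so removing the negative aerosol forcing raises the net warming.

```python
# Toy illustration of aerosol masking of greenhouse warming.
# All numbers are illustrative ballpark values, not model output.

F_CO2 = 3.7       # W/m^2, approximate forcing for doubled CO2
F_AEROSOL = -1.0  # W/m^2, assumed net aerosol (cooling) forcing
LAMBDA = 1.25     # W/m^2 per K, assumed climate feedback parameter

def equilibrium_warming(forcing_wm2, feedback=LAMBDA):
    """Equilibrium temperature response of a zero-dimensional
    energy-balance model: dT = F / lambda."""
    return forcing_wm2 / feedback

masked = equilibrium_warming(F_CO2 + F_AEROSOL)  # aerosols included
unmasked = equilibrium_warming(F_CO2)            # aerosols "switched off"

print(f"warming with aerosols:    {masked:.2f} K")
print(f"warming without aerosols: {unmasked:.2f} K")
print(f"warming masked by aerosols: {unmasked - masked:.2f} K")
```

With these assumed numbers, dropping the aerosol term adds the better part of a degree of equilibrium warming, which is the shape of the effect the faulty runs would have exaggerated.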
ABW says
David is spot on…
The “error” was simply the failure to switch on the aerosols that were supposed to be included in the model runs, and was the result of a pure human mistake – it was not model error. The models ran just as they were programmed to do.
In fact the “error” will probably be a boost to the experiment – we will now be able to compare non-aerosol and aerosol-included runs, and hence have a better idea as to the impact of, say, global dimming etc., and how that may have been/could be masking the full impact of the increased greenhouse gases and their resultant global warming.
Worth remembering that some of the best scientific findings are the result of a simple human mistake. (e.g., Penicillin)
Ian Castles says
David – ‘The affair is a nice demonstration of the willingness of climate scientists to admit error, find a solution and move on.’ But how could these climate scientists avoid admitting error, when the error was about to be exposed 100,000 times all around the world?
I’ve no difficulty with accepting that the particular problem that has now arisen is not with the model and is a simple oversight. But it’s a very embarrassing oversight precisely because the climateprediction.net principals took such a high profile even before the experiment proper had begun.
From the outset, they were very strong on the PR aspects of the exercise. The Oxford University media statement of 27 January 2005 seemed to be a clear example: we discussed that earlier on the BOINC thread. But an even more revealing indication of the motivations of the group is this extract from BBC News on the evening before the press statement and the publication of the first results of the experiment in ‘Nature’:
“It’s very difficult to get politicians to collaborate, not only across the globe but also over sustained lengths of time,” Bob Spicer from the Earth Sciences Department at the Open University, told BBC News. “The people who can hold politicians to account are the public; and with this project we are bringing cutting-edge science to the stakeholders, the public.”
For the full report, see http://news.bbc.co.uk/2/hi/science/nature/4210629.stm
Bob Spicer was one of the co-authors of the Stainforth et al paper in ‘Nature’, which included statements such as the following:
‘Our results demonstrate the wide range of behaviour possible within a GCM, and show that high sensitivities cannot yet be neglected as they were in the headline uncertainty ranges of the IPCC Third Assessment Report (for example, the 1.4-5.8K range for 1990 to 2100 warming).’
‘Thanks to the participation of tens of thousands of individuals world wide we have been able to discover GCM versions with comparatively realistic control climate and with sensitivities covering a much wider range than has ever been seen before.’
The climateprediction.net scientists are now complaining about the media prominence that was given to the 11+ deg.C climate sensitivity figure, but it was THEY who gave it prominence – in the Oxford University media release, in their paper in ‘Nature’ and in their presentation at Exeter. The Editor’s summary in ‘Nature’ included the following:
‘The first batch of results has now been analysed, and surface temperature changes in simulations that capture the climate realistically are ranging below 2 deg. C to more than 11 deg. C. These represent the possible long-term change, averaged over the whole planet, as a result of doubling the levels of atmospheric carbon dioxide in the model.’
But was it a ‘possible long-term change’, and was it ‘cutting-edge science’ that was brought to ‘the stakeholders, the public’ in this ambitious project? Here’s an extract from Gavin Schmidt’s posting on RealClimate on 29 January 2005, two days after the ‘Nature’ article and over 14 months before the British scientists were obliged to eat some humble pie:
‘With this background, what should one make of the climateprediction.net results? They show that the sensitivity to 2xCO2 of a large multi-model ensemble with different parameters ranges from 2 to 11. This shows that it is possible to construct models with rather extreme behavior – whether these are realistic is another matter. To test for this, the models must be compared with data. Stainforth et al. subject their resulting models only to very weak data constraints, namely only to data for the annual-mean present-day climate. Since this does not include any climatic variations (not even the seasonal cycle), let alone a test period with a different CO2 level, this data test is unable to constrain the upper limit of the climate sensitivity range. The fact that even model versions with very high climate sensitivities pass their test does not show that the real world could have such high climate sensitivity; it merely shows that the test they use is not very selective.’
The Allen Consulting Group report on ‘Deep Cuts in Greenhouse Gas Emissions’, released earlier this month as part of the Australian Business Roundtable on Climate Change package, cites the 11+ deg. climate sensitivity figure from the Stainforth et al Exeter presentation and states that ‘There appears to be a small, but not insignificant, risk of very severe climate events resulting from emissions growth that is within the foreseeable range.’ Well, does this risk in fact exist in the real world, or is it an artefact of the very weak data constraints that were applied to the models?
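Schmidt’s point about weak data constraints can be made concrete with a toy selection experiment (entirely hypothetical numbers: a “model version” here is just a control-climate bias paired with a sensitivity, with the bias deliberately drawn independently of the sensitivity). A test based only on the present-day control climate then passes high-sensitivity versions just as readily as low-sensitivity ones:

```python
import random

random.seed(42)

# Toy ensemble: each "model version" has a present-day control-climate
# bias (K) and a climate sensitivity (K per CO2 doubling). Crucially,
# the bias is drawn independently of the sensitivity, mimicking a test
# that does not discriminate between low and high sensitivity.
ensemble = [
    {"bias": random.gauss(0.0, 1.0), "sensitivity": random.uniform(2.0, 11.0)}
    for _ in range(2000)
]

# "Weak" data constraint: keep only versions whose annual-mean
# control climate is within 1 K of observations.
passed = [m for m in ensemble if abs(m["bias"]) < 1.0]

sens = [m["sensitivity"] for m in passed]
print(f"{len(passed)} of {len(ensemble)} versions pass the control test")
print(f"surviving sensitivity range: {min(sens):.1f} to {max(sens):.1f} K")
# Because bias and sensitivity are independent here, the surviving
# range still spans nearly the full 2-11 K: the weak test cannot
# constrain the upper limit, which is Schmidt's point.
```

The sketch is not a claim about how the real ensemble behaved; it only shows why passing a test that is blind to sensitivity says nothing about whether an 11 deg. C world is plausible.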
Graham Young says
Ender,
The motor vehicle is an absurd metaphor for a forecasting model. One is a concrete object, the other is an abstraction. A design error in a particular car doesn’t invalidate the concept of all cars, but a conceptual error can invalidate a whole family of abstractions.
David,
If the model was correct there would be no reason to run it many times to get it right. You can’t say that assumptions are somehow separate from the model – they’re integral to it.
I don’t have a problem with the proposition that AGW is real, but I do have a problem with a lot of the assertions made by all sides, and the certainty that they have in predictions which are inherently uncertain. I also have a problem with over-reliance on climate models. I’ve enough experience of financial modelling to know that externalities ruin the best laid plans, as well as internal factors that weren’t properly considered.
After the 1987 crash Wall Street hired a lot of maths and physics PhDs to design models to predict share prices. Anyone with a PhD in those areas who said this was possible should have been disqualified from the job because it is fairly obviously mathematically impossible, but they took the jobs and the money anyway.
I think we’re seeing shades of that in computer modelling. Better to have the models than not, but let’s not put too much weight on them, or kid ourselves that they are somehow pure physics.
Ender says
Graham Young – “Ender, – The motor vehicle is an absurd metaphor for a forecasting model.”
Not really, it illustrates neatly that one fault in one model of car does not mean that all cars have problems.
In exactly the same way, one fault in one header file of one model does not cast doubt on all the thousands of other climate models running on different hardware, programmed in different languages by different teams and investigating different aspects of the climate.
“One is a concrete object the other is an abstract”
Your argument is totally bogus. The error was a physical error in one file, not a conceptual problem with the model. Therefore your pathetic attempts to apply one error in a file to the whole climate modelling community are completely without merit.
rog says
The problem with Ender’s argument is that car modellers can test their projections in the real world with real cars and subsequently rectify errors. Hence the term ‘prototype’ (from Wikipedia: in many fields, there is great uncertainty as to whether a new design will actually do what is desired. New designs often have unexpected problems. A prototype is built to test the function of the new design before starting production of a product.)
David says
>David – ‘The affair is a nice demonstration of the willingness of climate scientists to admit error, find a solution and move on.’ But how could these climate scientists avoid admitting error, when the error was about to be exposed 100,000 times all around the world?
Damned if you do, damned if you don’t. Are you suggesting that Myles only came clean because he had no choice?
>If the model was correct there would be no reason to run it many times to get it right. You can’t say that assumptions are somehow separate from the model – they’re integral to it.
The purpose of these studies is generally to perturb model parameters in simplified, cut-down versions of climate models. It is standard practice to vary the parameters across a realistic range to see if your end result is sensitive to limitations in your understanding. We should be very clear on this, however: the underlying physics of climate is not in question. Climate models are based on the non-negotiable principles of conservation of mass, momentum, energy and water. It is this basis which makes these models very different from the economic and similar models many commentators are familiar with.
David
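The perturbed-parameter approach David describes can be sketched in a few lines (a toy energy-balance stand-in, not the actual model used by climateprediction.net; the parameter range is an illustrative assumption): sweep one uncertain parameter, here the feedback strength lambda, across a plausible range and record the resulting climate sensitivity.

```python
# Toy perturbed-parameter experiment: vary one uncertain parameter
# across a plausible range and see how the headline result responds.
# The "model" is a zero-dimensional energy balance: dT2x = F2x / lambda.
# All parameter ranges are illustrative assumptions.

F2X = 3.7  # W/m^2, approximate forcing for doubled CO2

def sensitivity(feedback_lambda):
    """Equilibrium warming for doubled CO2 given a feedback parameter."""
    return F2X / feedback_lambda

# Sweep the feedback parameter over an assumed plausible range
# (W/m^2 per K); smaller lambda means a more sensitive climate.
lambdas = [0.4 + 0.1 * i for i in range(16)]  # 0.4 .. 1.9

for lam in lambdas:
    print(f"lambda = {lam:.1f} W/m^2/K -> sensitivity = {sensitivity(lam):.1f} K")
# A modest range in one parameter maps to a wide sensitivity spread,
# which is why ensembles of perturbed model versions are run at all.
```

The design point is that the sweep probes uncertainty in the parameters, not in the conservation laws themselves, which is the distinction David is drawing.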
Ian Castles says
No I’m not suggesting that Myles Allen only came clean because he had no choice, David. As I don’t know anything about him, I have no basis for making such an imputation.
My point was that Allen and his colleagues did not in fact have any choice, so your claim that this is a ‘nice demonstration of the willingness of climate scientists to admit error’ doesn’t stand up. The climateprediction.net scientists might have been willing to admit error anyway, and as scientists one would like to think that they would have done so. But this example doesn’t demonstrate it, that’s all.
You said that it ‘would be nice if some of the “sceptics” were to do the same wouldn’t it’, which seems to mean that you think (a) that sceptics can’t be climate change scientists; and (b) that sceptics know that they’re in error but won’t come clean about it. I think that these are unwarranted generalisations.
Ender says
rog – “The problem with Ender’s argument is that car modellers can test their projections in the real world with real cars and subsequently rectify errors”
The problem with this is that I was using cars as an ANALOGY. I was highlighting the fact that one error in one file in one climate model says nothing about all the other models. I used a real-world example of a fault in one car not reflecting on the entire car fleet.
I wish I had never mentioned bloody cars now.
detribe says
Spiked Online, 20 April 2006
http://www.spiked-online.com/Printable/0000000CB027.htm
A climate model program downloaded by thousands of PC users had an internal error that meant it overstated how hot the world might get. Oops.
by Rob Lyons
Researchers behind a much-hyped climate model downloaded by hundreds of thousands of home PC users have had to admit that many of their results are wrong because of errors in the program. And it’s not just their software that’s flawed.
The software, produced by Oxford University in conjunction with a consortium of research institutions, was launched with great fanfare in February by the BBC. Around 200,000 people downloaded the programme, which runs in the background on PCs, each one working on one of thousands of very slightly different scenarios about how the world’s climate might change in the future.
However, after two months it’s been discovered that there were errors in a data file which was supposed to take account of particles in the atmosphere that suppress rising temperatures. Consequently, the world was – virtually, at least – getting too hot, too quickly.
continues at link
david says
The last link really says it all. The error was a simple problem with the aerosol forcing file. No problem with the model, no problem with the physics, no problem with the theory. Shame the report couldn’t tell the difference between model and boundary data.
David
detribe says
David,
You’re overstating things, as absence of evidence is not evidence of absence.
Better: no problem was detected or searched for in the model, no problem was detected with the physics, no problem with the theory was considered, but errors in computer simulation methodology definitely were found.
The point about this error is not about science, where error detection is routine, but about public perceptions. It doesn’t really matter which organisation made the error, it signals to the general public that computer methods are fallible, which is irrefutable.
Should we be debating about public perceptions? Well yes, once lobby groups started exploiting public hysteria and using PR rather than focussing on solid science, that became the main game. We entered that arena long ago.
Is the point scientifically substantial? Who knows? We should be weighing the evidence of lots of other studies, which you are obviously well equipped to tell us about, and put aside just this one.
jennifer says
Just filing this URL here: http://www.realclimate.org/index.php?p=296 . Some background about the “+11C” and an early “boincing” press release.
Ian Castles says
Public perceptions are important: the Climateprediction.net scientists should be the first to agree, because the avowed purpose of their experiment was to influence public perceptions.
The report of 26 January 2005 on the BBC website to which I gave a link above is headed ‘Alarm at new climate warming’. It says that ‘The study used a programme that ran on PCs round the world’ (note the use of the past tense, and that this was 15 months ago) and then, in big bold type, ‘Temperatures around the world could rise by as much as 11C, according to one of the largest climate prediction projects ever run.’
And, finally, the giveaway sentence: ‘Scientists behind climateprediction.net believe their project, because it is distributed to individual PCs, can help inform people about climate change – and that, in turn, could bring political change.’
The scientists in question come from prestigious institutions, including Oxford University, the Hadley Centre in the British Meteorological Office, the Open University and the London School of Economics. When they assert that temperatures COULD rise by 11C with a doubling in CO2, perhaps they can salve their consciences by reminding themselves that a dexterous monkey COULD type Macbeth in the morning and Hamlet in the afternoon – and then do Othello for an after-dinner performance.
But as Gavin Schmidt said in response to a comment on RealClimate by me, the conclusion by Annan and Hargreaves ‘that there is no positive evidence for extremely high sensitivities is completely correct.’
But why bother about evidence? The climateprediction.net team will now correct the computer error and ‘move on’ – i.e., they will now, metaphorically speaking, send 100,000 monkeys back to their keyboards.
The announcement of the results of climateprediction.net will be somewhat delayed, but the main purpose will have been achieved. Meanwhile, the Stern Review discussion paper, apparently on the advice of the Hadley Centre, has used a 15 billion end-century global population estimate to project emissions in a ‘business-as-usual’ world. Now there’s an error for you…
detribe says
Ian, I guess they must have slept when this came out:
The end of world population growth.
Lutz W, Sanderson W, Scherbov S.
International Institute for Applied Systems Analysis, Schlossplatz 1, A-2361
Laxenburg, Austria. lutz@iiasa.ac.at
There has been enormous concern about the consequences of human population growth for the environment and for social and economic development. But this growth is likely to come to an end in the foreseeable future. Improving on earlier methods of probabilistic forecasting, here we show that there is around an 85 per cent chance that the world’s population will stop growing before the end of the century. There is a 60 per cent probability that the world’s population will not exceed 10 billion people before 2100, and around a 15 per cent probability that the world’s population at the end of the century will be lower than it is today. For different regions, the date and size of the peak population will vary considerably.
Nature. 2001 Aug 2;412(6846):543-5.
Ian Castles says
Thanks, detribe. The irony is that these population forecasts are produced by the same research institution that led the work on the IPCC’s Special Report on Emissions Scenarios.
In his written submission to the Lords Committee, Richard Tol said:
‘The above pattern suggests that the SRES modellers know a lot about the energy supply side of the energy system, but less about the demand for energy. Their knowledge of economic development is lacking. Their demographic expertise is sound, BUT STRANGELY SEPARATED. My personal knowledge of the SRES modellers confirms this impression’ (EMPHASIS added).
david says
>You’re overstating things, as absence of evidence is not evidence of absence.
My point is that this was a beat-up from the start. A trivial (though no doubt embarrassing) error in an aerosol data file is simply that. It has no relevance to climate change science, climate physics, global warming, climate modelling, the IPCC, economics, etc etc.
It is a simple human error, noted, and then corrected.
David