Everyone with a computer can help scientists better understand the universe, and climate change in particular, according to Phil Done. In the following guest post Phil suggests we all join The Berkeley Open Infrastructure for Network Computing (BOINC) community:
“Bored with that aquarium screen saver – worried about the computer being idle and generating greenhouse emissions between blog comments – or do you just want to do something that Ian Mott cannot do on the back of an envelope – BOINC and save the world!
What could make the climate change enthusiast’s heart beat faster – helping solve the problem and making a personal contribution? That little hotspot on Greenland might be yours!
With Climateprediction.net and the BBC Climate Change Experiment you can do just that. You can contribute a small piece of the modelling puzzle.
How does it work?
The Berkeley Open Infrastructure for Network Computing (BOINC) is a distributed computing infrastructure intended to be useful to fields beyond the Search for Extraterrestrial Intelligence (SETI). This software platform is open in that it is free and open source software released under the GNU Lesser General Public License. Currently BOINC is being developed by a team based at the University of California, Berkeley led by David Anderson, the project director of SETI@home – a project which uses this software.
It’s supercomputing on the grassroots level – millions of PCs on desktops at home helping to solve some of the world’s most computer-intensive scientific problems. And it’s an all-volunteer force of PC users, who, with very little effort, can contribute much-needed PC muscle to the scientific and academic communities. There are hundreds of millions of Internet-connected PCs, and they’re getting more powerful all the time, so volunteer computing can provide computing power and storage capacity way beyond what can be achieved with supercomputers, clusters, or grids. The volunteer computing approach works best for applications that don’t need to move a lot of data between processors, but this limitation will diminish as the Internet gets faster.
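The volunteer-computing pattern described here – cut an embarrassingly parallel job into independent work units, farm them out, and reassemble the results – can be sketched in a few lines of Python. This is a toy illustration only: local processes stand in for volunteers’ PCs (real BOINC ships work units to remote clients over the Internet), and all names here are invented for the example.

```python
# Toy sketch of the volunteer-computing pattern: split a parameter
# sweep into independent work units, run them in parallel, reassemble.
from multiprocessing import Pool

def run_work_unit(params):
    """Stand-in for one model run; a real unit would run a full model."""
    sensitivity, forcing = params
    return sensitivity * forcing

def main():
    # Each (sensitivity, forcing) pair is one independent work unit.
    work_units = [(s, f) for s in (1.5, 3.0, 4.5) for f in (1.0, 2.0)]
    with Pool(2) as pool:  # local processes stand in for volunteers
        results = pool.map(run_work_unit, work_units)
    # Reassemble: pair each work unit with its result.
    return dict(zip(work_units, results))

if __name__ == "__main__":
    print(main())
```

Because the work units share no data, the job scales to as many machines as volunteer – exactly the “doesn’t need to move a lot of data between processors” case described above.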
Of course if you’d prefer molecular biology and speed up GMO research, fighting human diseases, finding aliens in a celestial haystack, or sorting out gravitational waves there’s something for you too.
Of course contrarians have warned not to run BOINC software if:
* You are in an urban heat island,
* You’d rather find out what happens to the climate personally, or
* You are not familiar with the MER/PPP parameter settings.
Denialists have also suggested we don’t need a SETS project – Search for Earthbound Terrestrial Stupidity – it’s already been solved. And perhaps you’re blogging so hard that your screen saver never appears anyway.
Is BOINCing a western ideology? Are we losing out in the global BOINC? Do only greenies and left wingers BOINC or is BOINCing bipartisan? Are right wingers born to BOINC?
Also, you need broadband, and laptops are not recommended due to heat buildup. See the relevant FAQs.”
————————-
Thanks Phil.
Ian Mott says
Some people are born to BOINC,
Some people achieve BOINC
While others have BOINC thrust upon them.
Thinksy says
Is it inter-operational with my air con unit?
Ian Castles says
Phil, I’d be happy to harness my computer’s unused capacity to advance the sum of human knowledge, but (a) if the output is the prediction of future climate (b) the underlying assumption is that humans influence the future climate and (c) they do so by means of emissions of greenhouse gases and aerosols, then (d) the input assumptions (future emissions of GHGs and aerosols) must be based on empirical observations of past emissions.
Australians Warwick McKibbin and Alison Stegman have examined the available evidence of per capita carbon emissions from fossil fuel use and have found ‘strong evidence that the wide variety of assumptions about ‘convergence’ commonly used in emissions projections are not based on empirically observed phenomena’ (‘Convergence and per capita carbon emissions’, Brookings Discussion Paper in International Economics no. 167, May 2005). So far as I know, the M&S findings have not been contested.
Bearing in mind the GIGO (‘garbage in, garbage out’) principle, I’m not disposed to waste my time processing data that is not based on real world phenomena. Does BOINC claim to provide input assumptions that are derived from observations of such phenomena and, if so, what is their source? Before agreeing to BOINC, I seek your guidance.
Louis Hissink says
It’s called Internet II for those of us in physical reality.
:-)
And I remain coy :-)
Phil Done says
Ian – I’m not in a position to warrant the exact quality of all BOINC applications – BOINC is merely the middleware that splits up a large task, distributes the sub-tasks, and reassembles them from across the Internet – very clever – not all BOINC apps are climate related – we have gravity waves, protein sequencing etc etc.
The climate experiments are genuine in that I seriously believe the researchers are very well intentioned. It’s not a stunt. But for an expert such as yourself this would not be enough – I seriously suggest you read the sites and have some discussion with the operators to convince yourself.
My purpose in alerting Jen to BOINC was simply to draw her attention (and now yours) to interesting new ways of achieving massive computation on serious scientific and environmental problems. Interesting and newsworthy.
I think it also fosters all-important public participation in, and enthusiasm for, science.
The following url has details of the climateprediction.net experiment and contact and support information.
http://www.climateprediction.net/science/index.php
The BBC site is a Transient Coupled Model version of Climateprediction.net (CPDN). This model will soon be released on the Climateprediction.net (CPDN) site as well as from the BBC Site.
http://boinc-doc.net/boinc-wiki/index.php?title=BBC_Climate_Change_Experiment_Project
There are other experiments in physics and protein science which may also be of interest: http://boinc-doc.net/boinc-wiki/index.php?title=Catalog_of_BOINC_Powered_Projects
Phil Done says
Louis – how was the hole drilling? You missed some great blog action.
Louis Hissink says
Phil,
I doubt it; you still have not worked out the difference between science and religion – your last post reinforces that distinction.
Ian Castles says
Thanks Phil. I don’t doubt that the operators are well-intentioned. But when I went to the site at ‘forcing scenarios’ and followed the link at http://www.climateprediction.net/science/forcing_scenarios.php , I found that they were using two of the IPCC marker emissions scenarios (A1B and B1) to force the model. Both of these marker scenarios, and the 24 other scenarios within the A1 and B1 families, assume the ‘convergence’ in emissions levels over time that McKibbin & Stegman found to be ‘not based on empirically observed phenomena’. So in this case it’s not possible to ‘BOINC and save the planet’.
Two further points. First, The IPCC reports themselves state repeatedly that their scenarios ‘are neither predictions nor forecasts’, so the operators have, perhaps unintentionally, given the site a misleading title (‘climateprediction.net’). Secondly, the Australian Greenhouse Office funded the McKibbin & Stegman study, but to the best of my knowledge have made no public reference to their findings. You can bet we would have heard about the results with the speed of light if M&S HAD found empirical evidence to support the IPCC convergence scenarios.
Phil Done says
Ian – alas I suspected you would find things lacking. Would they have a different range of scenarios if they had sought your advice?
It seems that you guys have singularly failed to convince the IPCC people of your issues, and given that your criticism is outside my field, you should take that up with them. It is unlikely that climateprediction.net is going to take that sort of issue on unilaterally.
So perhaps the gravity wave experiment then?
Ian Castles says
Phil, the Australian Government advised the IPCC to ‘consider whether there are plausible emissions scenarios outside the range of the SRES and if so, manage integration of such scenarios outside into the AR4 (for example, consider developing a further scenario with lower developing country growth than the B1 scenarios, but without the high population and slower rate of technology growth associated with the A2 and B2 scenarios.’ (Australian submission on the scope and structure of the Fourth Assessment Report, March 2003).
The fact that our Government singularly failed to convince the IPCC people that they should consider such a scenario says something about the politicised nature of the Panel. It tells us nothing about climate change science.
If the climateprediction.net guys think that they’re predicting the actual future climate, they are the ones who are arguing with the IPCC on that point, not me.
Phil Done says
Well I think the Australian Government’s position on climate change lacks a fair bit of international credibility (IMHO), so perhaps we should not be surprised. On what basis or compelling logic would they accept the Australian advice?
Imagine if climateprediction.net suddenly decides to take a position contrary to the established consensus – they’re not in a position to do that and would be roasted for it (even if you are correct). I would imagine they will only get support to run an experiment within the current framework. And they will get considerable support to do that. It goes back to the IPCC.
Who else is doing any better? As I said – would it make any difference to the range of scenarios being computed anyway?
rog says
Collective P/C? – forget it.
Ian Castles says
Phil, Warwick McKibbin is an Australian and has been funded by the Australian Government. This does not in any way diminish the credibility of his research in my eyes, especially as his book ‘Climate Change Policy After Kyoto’ (co-authored with Peter Wilcoxen) was published in 2002 by the Brookings Institution, Washington, DC, one of the world’s leading policy research institutions.
In his Foreword to the book, the President of Brookings said:
“In the pages that follow, Warwick and Peter argue that the climate policy stalemate results directly from flaws in the design of the Kyoto Protocol. In their view, the root of the problem is the protocol’s focus on establishing targets and timetables for reductions in greenhouse gas emissions. Warwick and Peter carry their critique further to argue that the Kyoto Protocol’s emphasis on international emissions trading is politically unrealistic; that the agreement has no credible mechanism for monitoring participants and enforcing compliance; and that it will be particularly vulnerable to collapse if a major country was to withdraw. They devote the remainder of the book to presenting an alternative policy that would address the protocol’s shortcomings. Brookings is proud to be part of this effort.”
I’m sure you’re right that climateprediction.net would be roasted for taking a position contrary to the established consensus. They are stuck with using projections which McKibbin & Stegman have found are not based on empirically observed phenomena.
If the IPCC had accepted the Australian Government’s advice, climateprediction.net would have been able to model a scenario that IS based on empirically observed phenomena – and they would probably have been able to get support to run an experiment with such a scenario. Thus the IPCC effectively operates to stifle the exploration of emission scenarios that are based on evidence rather than on modellers’ ‘storylines’.
Phil says
Ian – I now have the Brookings paper and am reading it – but I’m not an economist, so some pain.
Ian Castles says
Thanks Phil. I agree that the McKibbin & Stegman paper is not easy reading. I wouldn’t claim to fully understand it myself.
It may help in relation to the broader issue if I paste in here the concluding paragraph of a message that Warwick sent to some of us a couple of months ago:
‘The reason most IPCC related modellers don’t think there is much effect of changing economic assumptions is because they usually take aggregate GDP growth as exogenous and everything is driven from their underlying energy model scenarios adjusted by AEEI. Raise GDP growth in a scenario– AND reduce emissions intensitives over time because that is what technical change means! — then emissions follow roughly the original path from the emissions scenario. The IPCC modellers actually do this. It is a view of the world not based on any empirical economic analysis that I know. We have been arguing against this approach of aggregate assumptions for more than 10 years and been totally ignored – perhaps we need to become “commentators” rather than researchers in economics.’
Warwick Hughes says
Gidday Jennifer and readers,
A year ago I put up a critique of Climateprediction.net’s wild claims at:
http://www.warwickhughes.com/cool/cool11.htm
Enjoy
Ian Castles says
Thanks Warwick. In the light of the press statement by climateprediction.net of 26 January 2005, I now do have reason to doubt that the operators of this experiment are well-intentioned.
In his book ‘Climate Change: Turning up the Heat’ , Barrie Pittock refers to ‘the IPCC 2001 range of climate sensitivity of 1.5 to 4.5 deg. C.’, and immediately says:
‘However, best current estimates of this range of uncertainty have changed recently, due to new probabilistic estimates of the climate sensitivity (by David Stainforth of Oxford University and others in ‘Nature’ , 27 January 2005) ranging from 1.9 to 11.5 deg. C. This would extend the range of possible warmings upwards, but the new results have yet to be fully considered.’ (p. 87).
Further on, Pittock says ‘A follow-up study by David Stainforth of Oxford University and others found a range from 1.9 to 11.5 deg C, with a 4.2% chance of being greater than 8 deg. C. Thus there now appears to be only a small chance of a climate sensitivity as low as 1.5 deg. C, and a considerable chance that it may be well above 4.5 C. This makes it FAR MORE LIKELY that temperatures will reach dangerous levels’ (p. 153, EMPHASIS added).
Then in his Supplementary Reference and Notes, available online at CSIRO Publishing’s website since October 2005, Pittock says ‘The Stainforth and others result is even more extreme, but is not YET widely accepted’ (EMPHASIS added).
This means that
(a) the IPCC refused to model the emissions scenario suggested by the Australian Government (that would have been easy to dismiss, given that Australia’s position on climate change ‘lacks a fair bit of international credibility’);
(b) the Panel reaffirmed the scenarios prepared in the late 1990s, which have been roundly criticised by many economists, as ‘ suitable for use in AR4’;
(c) the Oxford University group misinterpreted the SRES-derived projections and, notwithstanding the explicit statements by their authors to the contrary, used the temperature increases derived from the storylines as predictions, to which they attached probabilities.
(d) Barrie Pittock interpreted the climate sensitivity numbers generated by the Oxford group as showing that alarming temperature increases are ‘far more likely’, but one can’t draw any inferences about the likelihood of given temperature increases without investigating the likelihood of a doubling in CO2 concentrations. The Oxford group haven’t even looked at this.
(e) Dr Pittock also says that the high climate sensitivity numbers are not ‘yet’ accepted, implying that it is only a matter of time before these are accepted.
To me, the whole thing looks more like a PR exercise than a scientific investigation.
Phil Done says
Ian – the issue that keeps coming up when I present your criticisms of SRES to various climate scientists is that “it doesn’t change the range of the scenarios generated in terms of CO2” – and they shrug their shoulders at that point. But it may affect which scenario is more likely, of course. Any comment? Are the ranges of CO2 scenarios fully covered?
(I am asking politely)
Ian Mott says
So it seems that BOINC is another form of palliative tokenism whereby people who use 3-tonne SUVs to pick up 15 kg of kid from school can leave their PC on all night to extinguish any residual guilt that may linger from their actions.
BTW, the scenarios used in ocean acidity modelling are the very same ones that are protected by IPCC.
Louis Hissink says
Pittock’s estimate of climate sensitivity is not from experiment but a guess at what they think happens when the quantity of CO2 in air is doubled.
It isn’t science by any stretch of the imagination.
Phil Done says
Ian – somehow I doubt it. It’s only for those who are interested. Protected by the IPCC my foot – there has still been no substantive case made (IMHO) that the range of CO2 levels is not covered.
And who has misrepresented what? Is this the Oxford team or the press talking? Are all the results in and analysed? What a feeding frenzy of usual suspects.
And Warwick has been going on about all manner of things for some time with little overall impact. In terms of the UHI story – we’re still waiting for a coherent paper telling us what the real position is from all this ongoing nitpicking. But maybe we should just not worry about the ground and just go with the satellite evidence.
Phil Done says
Should we rename “Coolwire” as “Contrarian wire” or Loose_wire?
As usual some selected cherry picking, sleight of hand, and mixing of press grabs and confusion with what’s really being researched.
You will notice that Coolwire makes very little detailed analysis of what is being done.
Looking at the actual Nature paper:
The paper describes work in progress seeing how GCM parameters affect climate sensitivity. The authors make a case that domain experts may constrain their estimates to certain parameter values where the intent of the question is pre-suggested. And as many have been critical of the IPCC TAR (Lord knows the IPCC reviewers did their best, the poor exploited bastards!) the authors are also somewhat critical of how high sensitivities were ignored and parameters constrained. Nevertheless the scientific aspects of the experiment are most interesting.
Fig 1 shows changes over time rather than the equilibrium change. So, yes, it only shows changes of up to about 8K, but these runs are still warming up. A fit to the timeseries shows that an equilibrium change of 11K is to be expected.
As described in the paper, the majority of model versions show a sensitivity of ~3.5K but “the shape of the distribution is determined by the parameters selected for perturbation and the perturbed values chosen, which were relatively arbitrary”.
They go on to say: “Can either high-end or low-end sensitivities be rejected on the basis of the model-version control climates? Fig. 2b suggests not; it illustrates the relative ability of model versions to simulate observations using a global root-mean-squared error (r.m.s.e.) normalized by the errors in the unperturbed model (see Methods). For all model versions this relative r.m.s.e. is within (or below) the range of values for other state-of-the-art models, such as those used in the second Coupled Model Intercomparison (CMIP II) project (triangles). The five variables used for this comparison are each standard variables in model evaluation and inter-comparison exercises (see Methods). This lack of an observational constraint, combined with the sensitivity of the results to the way in which parameters are perturbed, means that we cannot provide an objective probability density function for simulated climate sensitivity. Nevertheless, our results demonstrate the wide range of behaviour possible within a GCM and show that high sensitivities cannot yet be neglected as they were in the headline uncertainty ranges of the IPCC Third Assessment Report (for example, the 1.4–5.8 K range for 1990 to 2100 warming). Further, they tell us about the sensitivities of our models, allowing better-informed decisions on resource allocation both for observational studies and for model development.”
Coolwire makes a big deal of the fact that few model versions show high sensitivities but as discussed in the paper, this means little if anything about what is most likely.
Doubling CO2 instantaneously is a common technique in simulations designed to study sensitivity. The authors believe it has been shown to make little difference in the final temperature change, e.g. HadSM3 is shown to have a similar sensitivity to HadCM3.
If Coolwire wants to see the results from experiments 2 and 3, these experiments began only a few weeks ago. The way this is written up in Coolwire is nothing short of mischievous.
As usual with “scorching” reports that are “hot off the press” Realclimate had a “real cool” take on the press reports and the paper. (sorry couldn’t resist !)
http://www.realclimate.org/index.php?p=115
They did not find notions of 11 degrees warming as reported by the press as credible. BUT –
“With this background, what should one make of the climateprediction.net results? They show that the sensitivity to 2xCO2 of a large multi-model ensemble with different parameters ranges from 2 to 11°C. This shows that it is possible to construct models with rather extreme behavior – whether these are realistic is another matter. To test for this, the models must be compared with data. Stainforth et al. subject their resulting models only to very weak data constraints, namely only to data for the annual-mean present-day climate. Since this does not include any climatic variations (not even the seasonal cycle), let alone a test period with a different CO2 level, this data test is unable to constrain the upper limit of the climate sensitivity range. The fact that even model versions with very high climate sensitivities pass their test does not show that the real world could have such high climate sensitivity; it merely shows that the test they use is not very selective.
Our feeling is that once the validation becomes more comprehensive, most of the extremely high sensitivity examples will fail (particularly on the seasonal cycle, which tests for variations rather than just a mean). ”
and
“Hence, we feel that the most important result of the study of Stainforth et al. is that by far most of the models had climate sensitivities between 2°C and 4°C, giving additional support to the widely accepted range. The fact that some of the models had much higher sensitivities should not be over-interpreted.”
The ‘Meeting the Climate Challenge’ report (around at the same time) tried to quantify what is meant by ‘dangerous’ interference in climate. All countries including the US and Australia have signed the Framework Convention on Climate Change which obligates them to prevent ‘dangerous’ interference with the climate system. Actually quantifying what this means is rather tricky. For various reasons (although some are subjective) they suggest that any global warming above 2°C (above the pre-industrial) is likely to be increasingly dangerous. The issue is how one prevents such an outcome given the uncertainty in the climate sensitivity.
So we can add this issue to Ian Castles’ SRES scenario difficulties.
Who’d want to be in this game !!
SO – given the importance of sensitivity to the debate, all the more power to the exploration of sensitivity and the analysis of GCM parameters.
RC summed up by saying “Uncertainty in climate sensitivity is not going to disappear any time soon, and should therefore be built into assessments of future climate. However, it is not a completely free variable, and the extremely high end values that have been discussed in media reports over the last couple of weeks are not scientifically credible. ”
SO if we all BOINC’ed hard enough – (including YOU Ian C and Ian M and Warwick Hughes!) and they got the results in – and we had some proper analysis and critique we may actually learn something. And we are learning in parameter space that is not usually explored. What a great public contribution. We may even be surprised – but not 11 degrees surprised.
However getting the public in on such research will mean press attention. And the press will grab the sensationalist bits and before we all know it we’ll be back “doing a Schneider” wondering how effective or cautious we’ll be in our 90 second grab to camera. We know what Warwick and Louis would say regardless of any real information.
And who knows, Barrie Pittock may even know something we don’t. Having known Barrie for over 10 years I can only say he was ahead of his time. :-)
P.S. I notice even Jen is now going with the warming but keeps IPA-safe on the issue of attribution. (OK, unfair, but you have to try to be cheeky – :-) ) The contrarians could do better than doggedly ragging – and come up with a bloody decent explanation for a “natural” variation across the globe. I could give you a tip but that would spoil the fun.
P.P.S.
And thanks to the climateprediction.net guys for a most enlightening discussion. And always for Realclimate being so bloody good.
Time for a Bex and lie down after that rant !
Phil Done says
Ian C,
Further on scenarios it’s interesting to see that in http://www.abare.gov.au/rr06.1/pdf/RR06_1_ClimateAsia.pdf for the inaugural ministerial meeting of the Asia Pacific Partnership on Clean Development and Climate, ABARE provides a projection for an unmitigated CO2 emissions “reference case” – in Fig 4 on page 15. Is this not on the upper end of the IPCC projections?
From the rest I cannot see how PPP versus MER matters. Also, Australia’s leading agency for such predictions seems more pessimistic than the IPCC.
So in terms of economists being shut out of the IPCC – at least ABARE seems to be in there? I thought ABARE was our leading economic organisation on resource issues?
Ian Castles says
Phil, My objection is not to the paper in Nature or to the efforts of the Oxford team to estimate climate sensitivity. If they
Ian Castles says
In the first paragraph of my long posting, I should have referred to the figure of 11 degrees C as a prediction by climateprediction.net of the increase in temperature that COULD occur with a doubling in CO2 concentrations, not to the increase that WOULD occur. This does not affect the succeeding argument.
Phil says
Ian – thanks for your reply.
Press statement? or what was reported in the press. Do we have a copy? Nature editorialised it incorrectly. I prefer to read the Nature paper itself. It’s quite interesting on the unfinished issue of climate model sensitivities and RC suggested they continue.
Do you think the public would have a clue what climatesensitivity.net was about – skin rashes? We could also take affront at the alarmist colour of their web site too.
On hydrogen cars – of course the stuff leaks like crazy and seems to have an effect on stratospheric ozone – so maybe hydrogen is not the panacea we have hoped for. But maybe technology will prevail and we will find a way to generate the substance without CO2 production and distribute it leak-free around the world. Another potential problem is peak-platinum which seems to be needed for fuel cells. Lots of resource issues here once you go mass production and out of prototype vehicles.
Phil Done says
So I still think exploration of the GCM parameter space and the potential for higher sensitivities is well worth doing.
Stainforth et al seem to indicate that domain experts are likely to bias GCM parameter estimates to keep them “plausible”.
Climate sensitivity is an artificial measure of the sensitivity of global climate models, and of the real climate, to increases in GHG concentrations. It is the stabilised global average warming resulting in models or the real world from a doubling of CO2 equivalent concentrations from the pre-industrial level of about 270 ppm to twice that, namely about 540 ppm. IPCC from its first report in 1990 until now, has assumed a climate sensitivity range of 1.5 to 4.5C, based on expert opinion in about 1989.
No probabilities were attached to these estimates, and they are not connected in any way with the SRES scenarios.
This means that any revision of the range of climate sensitivities alters the probability of global warmings for any given emissions path or scenario. Higher estimates of climate sensitivity increases the perceived risk of higher global warmings, for any given emissions scenario.
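The arithmetic behind this definition can be sketched with the usual logarithmic rule of thumb: equilibrium warming scales with the logarithm of the CO2 concentration ratio, so climate sensitivity is the warming per doubling. This is the common approximation, not any particular model’s physics, and the function name below is invented for the sketch.

```python
# Rule-of-thumb warming: "sensitivity" degrees per doubling of CO2.
from math import log2

def warming(concentration_ppm, sensitivity_per_doubling, baseline_ppm=270):
    """Equilibrium warming under the logarithmic approximation."""
    return sensitivity_per_doubling * log2(concentration_ppm / baseline_ppm)

# A full doubling (270 -> 540 ppm) gives exactly the sensitivity:
print(warming(540, 3.0))  # 3.0
# The IPCC 1.5-4.5C sensitivity range, evaluated at 520 ppm:
print(round(warming(520, 1.5), 2), round(warming(520, 4.5), 2))
```

Under this rule a higher assumed sensitivity scales the whole warming curve up, which is why revising the sensitivity range matters for any given emissions path.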
Doing a back of the envelope calculation, consider the increases in concentrations of CO2 to date.
Preindustrial was about 270 ppm. Present is about 380 ppm. Present annual increase is about 1 to 2 ppm, roughly 1.5 ppm on average recently. Add another 94 years at 1.5 ppm per annum and you get another 140 ppm by 2100, or a total of about 520 ppm. And this takes no account of increasing global population or per capita emissions (clearly not true for India and China with their rapid growth in GDP).
Consider present global emissions of some 6 billion people, totalling about 6 or 7 billion tonnes of CO2 (not equivalent, i.e., neglecting methane, N2O etc.). Present global average emissions in CO2 equivalent is about 1.5 tonnes per person. Now assume some 9 billion people by 2100, emitting about 5 tonnes CO2 equivalent each. This gives some 45 billion tonnes total. The highest SRES scenarios give about 30 billion tonnes total by 2100. Thus the SRES scenarios have been conservative, assuming that per capita emissions will not reach the present average in Australia, or the USA (which are about 6 to 7 tonnes p.a. each).
This is back of the envelope stuff, but it shows that the SRES scenarios are not way off beam, whatever the method they used to get them. Anyway, the point is not the SRES scenarios, but what is likely to happen emissions-wise, and what would be the result climate-wise.
But higher estimates of climate sensitivity means the risk is greater that there will be larger warmings.
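The back-of-envelope figures above can be checked in a few lines (all inputs are the round numbers quoted in the comment, not measured data):

```python
# Concentration path: present level plus ~94 more years at ~1.5 ppm/yr.
present_ppm = 380
added_ppm = 94 * 1.5                       # ~141 ppm more by 2100
total_2100_ppm = present_ppm + added_ppm   # ~521, i.e. "about 520 ppm"

# Emissions path: 9 billion people at 5 t CO2-equivalent each by 2100.
total_2100_emissions_gt = 9e9 * 5 / 1e9    # 45 billion tonnes
highest_sres_gt = 30                       # quoted high-end SRES total

print(total_2100_ppm)              # 521.0
print(total_2100_emissions_gt)     # 45.0
print(total_2100_emissions_gt > highest_sres_gt)  # True: SRES looks conservative
```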
Jennifer Marohasy says
I will be copying and pasting some of Ian Castles’ comment from 2.25pm yesterday as a new blog post – but for the moment I have been ‘locked out’ of my own blog. Except for comments! Jen.
Jennifer on behalf of Phil Done says
The following comment is from Phil Done, unable to post for himself because this blog sometimes blocks seemingly honest comments on the basis of ‘questionable’ content:
“As they say in parliament – “is the member aware of any alternative policies.. ..” Well yes – the enigmatic James Annan has just done a Bayesian sensitivity paper.”
He says “Of course, as you will hopefully realise having read my previous posts about Bayesian vs frequentist notions of probability, there isn’t such a thing as a truly objective estimate, since in a situation of epistemic uncertainty, observations can only ever update a subjective prior, and never fully replace it”. Yup I got that.
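The point Annan is making – observations update a subjective prior but never fully replace it – can be illustrated with the textbook Gaussian conjugate update. The numbers below are invented for illustration, not Annan’s actual priors or likelihoods.

```python
# Bayes' rule for a Gaussian prior and one Gaussian-noise observation:
# the posterior blends prior and data, weighted by their precisions.

def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Posterior mean and variance after one observation."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# A sceptic's prior (1.5C) and an alarmist's prior (6C), both vague
# (variance 4), updated by an observation suggesting ~3C (variance 1):
for prior_mean in (1.5, 6.0):
    mean, var = gaussian_update(prior_mean, prior_var=4.0, obs=3.0, obs_var=1.0)
    print(round(mean, 2), round(var, 2))  # 2.7 0.8, then 3.6 0.8
```

Both posteriors move toward the data, but the priors still leave their fingerprints – which is exactly why no estimate of this kind can be “truly objective”.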
So you need to read his paper and the whole post at http://julesandjames.blogspot.com/2006/03/climate-sensitivity-is-3c.html and http://www.jamstec.go.jp/frcgc/research/d5/jdannan/GRL_sensitivity.pdf
He goes on to say in the comments:
We said that a more optimistic (but not completely unrealistic) view might give an upper limit of about 4C at 95%. If I was giving round numbers, I would simply say 3±0.5 (at 1 sd, Gaussian) is a pretty good estimate – that makes:
2.5-3.5C likely (68%)
2-4C very likely (95%)
Of course low values are equally unlikely too!
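Those round numbers are just the one- and two-sigma masses of a Gaussian, which is easy to verify: if climate sensitivity were distributed as Normal(mean 3.0, sd 0.5), the quoted intervals should carry roughly 68% and 95% of the probability. This checks the quoted arithmetic only, not the estimate itself.

```python
# Probability mass a Normal(3.0, 0.5) assigns to Annan's intervals.
from statistics import NormalDist

sensitivity = NormalDist(mu=3.0, sigma=0.5)

def interval_prob(dist, lo, hi):
    """Probability the distribution assigns to [lo, hi]."""
    return dist.cdf(hi) - dist.cdf(lo)

print(round(interval_prob(sensitivity, 2.5, 3.5), 3))  # 0.683 -> "likely"
print(round(interval_prob(sensitivity, 2.0, 4.0), 3))  # 0.954 -> "very likely"
```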
from Phil Done.
Louis Hissink says
P Done quoted Annan:
He says “Of course, as you will hopefully realise having read my previous posts about Bayesian vs frequentist notions of probability, there isn’t such a thing as a truly objective estimate, since in a situation of epistemic uncertainty, observations can only ever update a subjective prior, and never fully replace it”.
This is essentially post-modernism, the belief that objective truth is impossible.
It can be refuted simply – no observations of white swans can allow the inference that all swans are white but the observation of a single black swan is sufficient to refute the conclusion that all swans are white.
Annan’s statement has nothing to do with science but the belief that aprioristic belief determines all subsequent observations.
Religion in other words.
And P Done ? Yup, got that.
Phil Done says
So Louis is now taking on Bayesian statistics, after failing to understand averages and anomalies. HAAHAAHAAHAAHAA Woo hoo !
The Reverend would turn in his grave.
1761 is post-modernism?
(Did you read the paper? Why do I even ask.)
Thinksy says
I too noticed that Louis is wielding intellectual concepts without appreciating their significance, i.e. the subject and its roots predate post-modernism. Louis, aren’t you being typically post-modernist yourself by continually deconstructing consensus theories without offering a viable alternative? Can we assume that your grasp of the other intellectual concepts you bandy around to justify your crackpot views is equally shallow?
Reminds me of a favourite quote, from Bulgakov’s ‘The Master and Margarita’: even among a group of intellectuals there may occasionally be found an intelligent man.
Louis Hissink says
Phil,
I am not tackling Bayesian statistics at all – just taking issue with Annan’s assertion that there are no objective facts – the essence of post-modernism.
So you have it gloriously wrong again.
And Thinksy, postmodernism is better described as an intellectual inanity.
Louis Hissink says
Furthermore Phil, I have read Annan’s paper on climate sensitivity and the use of Bayesian statistics.
Annan et al start with a simple statement of climate sensitivity, which is a guessed range of temperatures that they assume to be true. They admit it is a guess (the temperature range you quote above).
So I take issue with the fundamental assumption that a guess is assumed to be scientifically true.
You seem to think I am taking issue with the ensuing statistics Annan et al develop.
Totally wrong again Phil.
Phil Done says
Yea yea – have some more Amsterdam ganja!