Last Wednesday was the launch of a new initiative between the University of Queensland and the Institute of Public Affairs for environmental research.
There is some reporting of the program in today’s The Australian newspaper under the title ‘Climate Sceptic’s $350,000 grant to uni has no strings attached’. [I have received comment that the article includes some snide remarks about me – hopefully not related to my critique of the national newspaper’s ‘Save the Murray Campaign’.]
The Australian also includes a column by the Perth-based philanthropist, Bryant Macfie, whose generosity has made the partnership possible.
The column is entitled ‘Blessed are the sceptics’ and in it he explains the importance of shining the hard light of reason and critical thinking on our environmental problems, aided by multiple skills and points of view.
After the launch at the University, Aynsley Kellow, Professor and Head of the School of Government at the University of Tasmania, gave an address to members and friends of the IPA at the Brisbane Club. His talk was entitled ‘All in a Good Cause: Framing Science for Public Policy’. He said:
“The history of science is replete with error and fraud.
Environmental science is no exception. Indeed, this area of science provides a hyperabundance of examples, thanks to the presence of two factors: a good cause and extensive reliance upon modelling, especially that involving sophisticated computer models.
The good cause — one that most of us support — can all too readily corrupt the conduct of science, especially science informing public policy, because we prefer answers that support our political preferences, and find science that challenges them less comfortable.
We would all wish to preserve the spiral-horned ox, Pseudonovibos spiralis, because it is on the Red List of endangered species. The problem is, it doesn’t seem to have existed in the first place.
And we might not have minded the apparent planting by US Federal Fish and Wildlife Department officers of fur from endangered Canadian lynx in Wenatchee and Gifford Pinchot National Forests in the Pacific Northwest in 2002.
When found out, the officials claimed that they were merely trying to test the reliability of testing methods, by covertly seeing whether the testing laboratories could identify real lynx fur if not told in advance. Critics suspected the samples had been planted in an effort to protect the national forests from logging, mining and recreation. The Executive Director of the Forest Service Employees for Environmental Ethics termed this response ‘a witch hunt in search of a false conspiracy’.
This Executive Director, Andy Stahl, had what is known in policing circles as ‘form’. In the 1980s, during the controversy over the logging in the Pacific Northwest, Stahl was involved in sponsoring the production of peer-reviewed science to support the Spotted Owl campaign to reduce old-growth logging.
Stahl put the mathematical ecologist Russell Lande in touch with scholars who supplied the data, and then helped find reviewers to produce a peer-reviewed publication. This was necessary because the only ‘science’ then available on the spotted owl was an incomplete doctoral dissertation.
The Lande paper was created to suit the political campaign and was used together with the notion of precaution to win the day. Whereas it assumed an owl population of 2,500 and further assumed that logging old-growth forest would cause its extinction, subsequent research showed the species was far more numerous and, if anything, preferred regrowth forest. Regrowth forest provided more prey and more conducive hunting conditions than old-growth forest.
Remarkably, the leading journal Nature editorialised in support of those who had faked the Canadian Lynx evidence — which tells us something about scientific journals.
The combination of the precautionary principle with endangered species legislation is a particularly seductive one, but it is the use of models into which value-laden assumptions can be smuggled that is particularly pernicious — as a recent Australian example shows.
A case involving the Orange-Bellied Parrot in 2006 saw the merest hint of a parrot, together with some mathematical modelling (and the precautionary principle) used by the then Australian Commonwealth Environment Minister to disallow the construction of a wind farm that was environmentalists’ preferred response to climate change, but was opposed by residents in a marginal Coalition government constituency.
Modelling for the Bald Hills wind farm on the Orange-bellied Parrot assumed the birds spent time at most of the sites of wind farms in Victoria, despite the fact that the birds had not been recorded at 20 of the 23 sites along the coast of Victoria, and despite active searches having been conducted. Only one or two sightings had been made at the other three sites.
The authors then assumed that the birds would remain present within a single wind farm location for six months—the longest possible period the migratory species could remain at a winter site, and longer than any bird had been recorded at any site. They also assumed the parrot would make two passes through the Bald Hills site. They did all this to err on the side of caution.
So, while no parrot had been sighted within 50 kilometres of the proposed site, the minister then acted in accordance with the precautionary principle (and an election promise) to block Bald Hills on the basis of cumulative impact—compounding the precaution already embedded in the assumptions underlying the modelling.
I have proposed in what I call Kellow’s Law that sightings of endangered species are clustered around the sites of proposed developments. This reflects not just the cynical uses of endangered species for political purposes, but partly also the fact that research for environmental assessments frequently finds species because the site has never previously been surveyed.
This ‘noble cause’ corruption of science — named for the ‘framing’ by police of suspects ‘known’ to be guilty — is helped not just by the virtuous cause, but by the virtual nature of both the science and the context within which it occurs. Both conservation biology and climate science rely on virtual science. The former has seen people in check shirts counting deer scat give way to physicists and mathematicians, while the latter (unlike more traditional meteorology) has always involved more computing than fieldwork.
James Hansen, of NASA’s Goddard Institute, for example, wrote his doctoral thesis on the climate of Venus, and — contrary to what some of his critics might think — it’s clear he has never visited another planet.
Computer models fed by scenarios based on economic models are the norm in climate science, and when we are dealing with climate impacts on biodiversity, we are often dealing with species-area modelling fed by the modelled results of the impact of climate models on vegetation.
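The arithmetic at the end of that chain is typically a species-area relationship of the form S = cA^z. A toy calculation (the species count, habitat-loss fraction and exponents below are invented purely for illustration, and are not taken from any published study) shows how completely the answer is driven by the assumed exponent:

```python
# Illustrative species-area calculation: S = c * A**z.
# The exponent z and the habitat-loss fraction are assumed values,
# chosen only to show how strongly they drive the predicted losses.

def species_remaining(s_original, area_fraction, z):
    """Predicted surviving species after habitat shrinks to area_fraction."""
    return s_original * area_fraction ** z

s0 = 1000     # hypothetical species count in the original habitat
loss = 0.5    # assume half the habitat area is lost

for z in (0.15, 0.25, 0.35):   # a plausible range for the exponent
    s = species_remaining(s0, 1 - loss, z)
    print(f"z = {z}: {s0 - s:.0f} of {s0} species predicted lost")
```

In this toy example, halving the assumed habitat predicts anywhere from roughly a hundred to over two hundred extinctions, depending on nothing but the choice of z.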
It is important to understand the way in which the revolution in information technology has transformed the conduct of science. Its impact has come not just in the ability to model complex phenomena of which scientists a decade or so ago could only dream — though that is part of the problem. Computer models are always subject to the Garbage In – Garbage Out problem and they can never be a substitute for hypotheses tested against the cold, hard light of observational data.
Many of the scientists working with models appear to have forgotten that science is about testing predictions against data. They seem to have fallen victim to the trap long-recognised at IBM, where it used to be said that simulation was like self-stimulation: if one practised it too often, one began to confuse it for the real thing.
One problem with observational data in areas like climate science is that they themselves are subject to substantial massaging by computers before they are of any use. Even data collection, therefore, provides opportunities for subjective assumptions to intrude into the adjustments made to data to make them useful.
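To illustrate the point (with numbers that are entirely invented, not drawn from any real station record), consider a hypothetical ten-year series with a documented instrument change part-way through. Two equally defensible guesses at the size of the break yield different trends from identical raw observations:

```python
# Hypothetical station series with an instrument change in year 5.
# Analysts correcting the same raw data with different assumed break
# sizes obtain different linear trends from identical observations.

raw = [14.0, 14.1, 13.9, 14.2, 14.0, 13.6, 13.7, 13.5, 13.8, 13.6]
break_year = 5  # the (hypothetical) instrument relocation

def adjust(series, break_year, step):
    """Add an assumed step correction to all values after the break."""
    return [v + (step if i >= break_year else 0.0) for i, v in enumerate(series)]

def trend(series):
    """Ordinary least-squares slope per time step."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = sum(series) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

for step in (0.0, 0.3, 0.5):   # three defensible guesses at the break size
    print(f"assumed step {step:+.1f}: trend {trend(adjust(raw, break_year, step)):+.4f} per year")
```

The raw observations never change; only the analyst’s assumption about the break does, and the trend changes with it.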
This highlights the importance of quality assurance processes, and there are no greater guarantors of quality assurance in science than contestation and transparency — full disclosure of data properly archived and of methods, including computer code.
Society deems this fundamentally important when we are dealing with science such as drug trials, which are conducted under fully transparent conditions, ideally with separate teams making up doses, administering them, diagnosing effects and analysing data. We insist on regulatory guidelines, and we audit laboratories. We know that even when researchers are fastidious in pursuing impartiality, subjective assumptions can find their way into what become ‘data’.
There are similar requirements imposed by stock exchanges for data such as core samples relating to mineral resources. Standards govern the collection, archiving and analysis of data. In Australia, these are laid down by JORC – the Joint Ore Reserves Committee. Even then, mistakes occur and there are consequences: shareholder value is destroyed or created.
In areas such as climate science we have made no similar demands. Data are routinely gathered, manipulated and modelled by the same research teams and the discipline has not insisted on anything like full transparency. Many of the people engaging in this science are then acting as advocates for particular policy responses. James Hansen is perhaps the most notable example in this regard, but there are numerous others, such as Stephen Schneider at Stanford.
The Intergovernmental Panel on Climate Change then allows the same people to act as lead authors, sitting in judgment on their own work and that of those who might differ with them. This corrupts the scientific process.
The work of former mining industry analyst Steve McIntyre in exposing the debacle of the Hockey Stick controversy in climate science and in finding that Hansen’s computer generation of mean temperatures for the US had a Y2K problem (that meant that the hottest year shifted conveniently from the 1930s to the 1990s) are good examples of what is needed. But it is significant that these necessary correctives came from outside the climate science community.
The shift of the ‘warmest year’ in the US was in itself a small change in the totality of climate science. But most of the mistakes tend to be in one direction, and that is in a politically convenient one. This underscores my point about the need for openness, transparency and sceptical challenging of science, especially where data collection, data preparation, data adjustment, modelling and interpretation all take place in the one institution.
Again, it is noteworthy that an amateur scientist, Anthony Watts, is responsible for a web-based audit of the sites that generate data for that record, and he and his ‘citizen auditors’ have found many sites that are likely to have produced a recent warming trend through poor siting or site maintenance.
It is worth reporting that Watts visited NOAA recently, and not only was he given a warm reception, but he found that the walls of the offices of those responsible for maintaining temperature records were covered with photographs of the stations he and his supporters have photographed. NOAA is grateful for the work they have done (at no cost to it), and the result is likely to be better data in the future. But its surface records continue to be based on flawed instrumentation that is subject to adjustment and compilation.
I would suggest that the need for sceptical auditing is even greater when the senior spokesman for the institution concerned is also a vociferous advocate for a particular policy position. James Hansen claims to have been muzzled by the Bush administration — though Republicans were unkind enough to point to the 1400 or so media interviews he seems to have managed, and he managed to throw off the muzzle for long enough to endorse John Kerry in 2004.
The point about all this is that, while Michael Crichton once famously observed that ‘data is not Democrat or Republican, it’s just data’, we need to ensure we have institutions that prevent data from acquiring partisan characteristics.
Steve McIntyre was aware of the case of Bre-X, where gold assays were fabricated, and he now applies his considerable skills to auditing climate science—to our enormous collective benefit. The proposition with climate change policy is that we are being asked to make substantial social investments in an enterprise that lacks the standards of transparency and accountability stock exchanges insist upon to prevent Bre-X situations, or situations where subjective beliefs have intruded into analyses.
But to return to the impact of IT on all of this, we must recognise how the IT revolution has also revolutionised both the conduct of science and the way in which it is interpreted — the way in which it enters politics and the policy process.
One of the impacts has been on peer review, the cornerstone of quality assurance in science. Publication after anonymous peer review in quality journals does not guarantee that the science is accurate, but it helps guard against inaccuracy.
Some journals in which key pieces of climate science are published do not maintain the standards of strict double-blind refereeing that we take for granted in the social sciences. Geoscientists with whom I raised this thought that blind refereeing would inhibit the debate between authors and reviewers that might lead to fresh insights. Perhaps — but if society is to take such science seriously, such conversations have to be secondary to quality assurance. We are well past Victorian gentlemen discussing interesting fossils they have found.
That problem aside, the internet has made it much more likely that the identity of an author can be tracked down, breaking down the anonymity that focuses reviewers on the quality of the reason and evidence presented in the paper.
Indeed, the internet has made possible increased international collaboration among scientists, while the increasing specialisation of knowledge has narrowed the circle of likely referees. Not only does the internet (and cheap air travel) increase the likelihood that authors are known to potential referees, it increases the likelihood that they have worked together. The IPCC has assisted this process, by engaging many of them on a common task and producing that enemy of all good science, a consensus.
Edward Wegman performed a social network analysis of those working on multiproxy reconstructions of climate when examining the Hockey Stick controversy and found that there was a clear network of co-authorship between the Hockey Stick authors and almost all others working in the field, including those most likely to have been selected as a referee by an editor.
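The kind of analysis Wegman performed can be illustrated with a toy example (the ‘authors’ and ‘papers’ below are fictitious — Wegman worked from the actual publication record): link any two authors who share a paper, then find the connected components of the resulting graph.

```python
# Toy co-authorship network: authors are linked if they share a paper.
# Connected components show whether supposedly independent reviewers
# are really independent of the authors they review. Names fictitious.

papers = [
    {"A", "B", "C"},
    {"B", "D"},
    {"C", "D", "E"},
    {"F", "G"},        # a genuinely separate research group
]

def build_graph(papers):
    """Map each author to the set of authors they have published with."""
    graph = {}
    for authors in papers:
        for a in authors:
            graph.setdefault(a, set()).update(authors - {a})
    return graph

def components(graph):
    """Return the connected components of the co-authorship graph."""
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

for comp in components(build_graph(papers)):
    print(sorted(comp))
```

Everyone inside the large component is connected to everyone else by a chain of co-authorship, which is precisely why an editor drawing referees from that pool gets no arm’s-length review.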
There was neither true independent verification of results nor true peer review, and the possibilities for (at the very least) what we call ‘groupthink’ were great. When Professor David Deming reported receiving an e-mail some years earlier from a senior climate scientist stating that there was a need to do something about the inconvenient truth presented by a Medieval Warm Period warmer than the present, the need for scepticism became obvious.
Scepticism can guard against such results, but unfortunately leading scientific journals seem to have lost their sceptical zeal and become, at least on occasions, boosters for good causes. Let me give you two examples from what many regard as the best journals of all: Nature and Science.
A 2004 paper in Nature using the species-area model to predict species distribution in response to modelled climate change (in turn based upon emissions scenarios) concluded its abstract with a call to action: ‘These estimates show the importance of rapid implementation of technologies to decrease greenhouse gas emissions and strategies for carbon sequestration.’ The paper itself presented neither reason nor evidence for such conclusions.
The problem is confined to neither climate science nor modelling. Science, for example, not only published the fraudulent cloning research of Dr Woo Suk Hwang, but rushed it into print after a short review so that it appeared in an electronic version, accompanied by a press release that ensured media coverage, on the eve of a key vote in the US Congress to overturn an administrative order of the Bush Administration prohibiting the use of federal funds for cloning research. This made such research seem more promising than it actually was at the time, while South Korea seemed to be passing the US by.
Not only have leading science journals yielded to the temptation of the need for ‘relevance’, but the ramparts of the prevailing paradigms are now defended using information technology to marshal the troops. ‘Swarming’ is not confined to partying adolescents in yellow sunglasses, enjoying their 15 megabytes of fame, but is to be seen whenever ideas emerge to challenge the consensus. The white cells of the immune system of the dominant paradigm are despatched electronically, dealing with the infectious ideas with all means at their disposal, including (but by no means limited to) typically anonymous posters to internet discussions.
One of the means commonly employed is the term ‘denier’, a rhetorically powerful signifier first used (as far as I can tell) by a couple of defenders of the faith reviewing Bjorn Lomborg’s The Sceptical Environmentalist for Nature. It was deployed quite deliberately by Jeff Harvey and Stuart Pimm to liken Lomborg to a holocaust denier for daring to question the highly questionable estimates of the number of species extinctions that supposedly occur every year.
The computer-based estimates of species extinction range all the way from a few tens of thousands to 50-100,000 (if you can believe Greenpeace). The actual documented number accepted by the International Union for the Conservation of Nature is around 800 over the 500 years for which we have records.
While I’m prepared to accept we have missed more than a few, and I’m a passionate advocate for the conservation of charismatic megafauna (such as tigers and orangutans), I think the use of the term ‘denier’ tells us more about the person using it than about the target. Its use amounts to an example of Godwin’s Law of Internet Discussions, which holds that eventually someone will liken someone else to Hitler, at which point rational debate is over. (Implicitly, the person using it loses.)
Unfortunately, the use of the term is rife in debates over climate change, where those on one side seem yet to have cottoned on to the point that scepticism in science is actually a good thing, and it was even used last year by the minister now responsible.
If this use of illiberal name-calling has served any purpose, it is to remind us of what is needed to ensure that noble cause corruption does not afflict the science informing public policy.
Those of us who see value in both social democracy and liberal democracy — who are committed to humanist ideals but are open to evidence-based reasoning rather than ideology in determining how we are to advance them — must acknowledge that it is from liberal views of the celebration of different points of view, and the battle of contending ideas, that good science derives.
The philosopher of science, Paul Feyerabend, warned that scientists might engage in all manner of devices — from the rhetorical to the reprehensible — to have their points of view prevail. It seems to me that the only protection against any kind of corruption in science is to celebrate the liberalism inherent in Karl Popper’s philosophy of science, regardless of whether we share his political liberalism — though separating the two might be difficult in practice. Feyerabend’s prescription was a kind of anarchism and a rejection of any kind of marriage between science and the state.
As I said at the beginning of this lecture, the history of science is replete with error and fraud. In science, the best kind of quality assurance is to celebrate sceptical dissent and to reject on principle any attempt to tell us that we should bow to a consensus, that ‘the science is settled’ — not just when such claims support our preferences, but especially then. Because as Carl Sagan once put it, ‘Where we have strong emotions, we’re liable to fool ourselves.’”
Russ says
Nice review, Jennifer. I think the true problem is how science is funded. It entices those like Hansen who need to justify their existence. It also encourages groupthink, which has been identified as one of the contributing causes of the first US space shuttle disaster (Challenger).
Luke says
And thinktanks have a vested interest in perpetuating this philosophy with selective examples. Which is why independent minds question motives
(1) “in finding that Hansen’s computer generation of mean temperatures for the US had a Y2K problem (that meant that the hottest year shifted conveniently from the 1930s to the 1990s) are good examples of what is needed” – this is a POOR account – did he say what the result was, was the process challenged, was it a genuine “error”, was the difference significant or piddly, and WHY use the word “conveniently”? Had he mentioned that they had never made much of this point hitherto, as the numbers were so close anyway? Has he made clear the result doesn’t extrapolate globally?
(2) “But its surface records continue to be based on flawed instrumentation that is subject to adjustment and compilation.” – because that’s what data systems have to work with. Did it occur to him that most scientists would LOVE better maintained climate stations, diligently measured, and the budget to do so? Jen – if you ever had to work with any climate data in your entomology days you’d know what a pain in the neck it is. The implication left is that somehow NOAA “enjoys” these problems.
(3) “But most of the mistakes tend to be in one direction, and that is in a politically convenient one.” WHAT ? like the IPCC sea level rise estimates reducing?
(4) An example not mentioned above: the recent airing of the Spencer MJO work. Just slammed up there with no context of the current research background – an implication that there was none (still not remedied) – and no explanation as to whether that research can be extrapolated.
(5) the level of ongoing denial that there have been any changes in rainfall in Australia, when the regional changes are obvious.
So why should we see any of this UQ initiative as more than a sophisticated effort to create highly skilled mercenaries, available to manufacture the necessary uncertainties for vested interests?
e.g. http://www.theaustralian.news.com.au/story/0,25197,23657735-11949,00.html
http://www.theaustralian.news.com.au/story/0,25197,23655529-11949,00.html
It sounds all noble and aspirational – but the reality is that environmental debates are now highly polarised – all motives on all sides are suspect. Science isn’t an answer. It’s just more sophisticated weaponry to outbluff the opposition. Why isn’t this initiative simply a training camp for techniques in managing selected anti-enviro causes requiring some expedient obfuscation? Of course you could also easily assert the same in reverse at the liberal establishment in environmental science matters.
So who judges what’s corrupt in science and what’s not? You? Me? Aynsley most likely.
Reality is like the arts, science needs patrons – and that means funding – and so that’s the beginning of the problem if that link is too direct and the questions framed tightly.
When does scepticism simply become sophistry?
Paul Biggs says
(1)Climate Audit has recently done a nice job on Hansen’s never ending GISS temperature adjustments:
Rewriting History, Time and Time Again:
http://www.climateaudit.org/?p=2964
(2)We agree that near surface temperature is a flawed metric that is dogged by non-climatic influences, and there really is no excuse for poorly maintained temperature stations – do the job properly or not at all.
(3)’Most’ is self explanatory.
(4) Look out for Spencer’s Journal of Climate paper – next issue, I guess.
(5)The longer the period looked at for rainfall, or any other natural event, the less unusual recent trends become. As I asked before, how do you control these things with CO2, and over what time scale?
Corrupt science is judged by science itself, and science is eventually self-correcting.
Claiming consensus is sophistry. Scepticism is part of the scientific method, where a hypothesis is tested rather than protected by a political agenda that can’t afford the hypothesis to be falsified.
cc says
Thanks Jennifer for taking the time to do this blog.
Is it unreasonable for anyone to ask for balance? As one accused (and occasionally guilty) of not telling the whole truth from the devotee’s perspective, as well as of presenting a biased viewpoint, for me to stop hand-waving for the sceptics I would ask only that over-funded, under-delivering AGW climate science and its camp followers and comperes accept the same impositions that AGW proponents too frequently insist sceptical observers and scientists should follow. Accuracy, honesty and openness would do for starters.
I expect a backlash may develop from the ostracised and estranged scientists (and others connected to the science who have been damaged), and it will probably use as much ‘science’, with an overburden of politics and media in the mix, as we’ve witnessed to date from the ‘consensus’.
What is sadder is that the perhaps one good thing we do for nature, emit co2, continues to be the whipping boy and condemned to be a pollutant whilst the real badness has been passed over, especially by the inept persecutors of the public consciousness.
The winter has shown that climate can do unanticipated U turns. Until in a position to forecast such, outspoken AGW climate scientists should be silent. imo.
The warming we experienced before ’98 and since was unalarming by any standard. The over-credited and under-delivering CO2, by the most optimistic truthful measure, has very little left to contribute to climate by radiative means. Why do the clingons still hold it to be an issue?
Luke says
Yes Paul – all this hard hard work by CA and look my understanding of the overall trends have been shattered by McIntyre – http://www.woodfortrees.org/plot/hadcrut3vgl/from:1979/offset:-0.146/mean:12/plot/uah/from:1979/mean:12/plot/rss/from:1979/mean:12/plot/gistemp/from:1979/offset:-0.238/mean:12
After all this it has revolutionised my understanding. Oh look maybe it hasn’t.
Face it – there’s an organised campaign to tear the temperature record down – to fog and obfuscate. To make it uncertain. Then in the next breath – sceptics will sit around and analyse every wiggle imputing that “this could be the start of next trend”. If it is that bad – stop looking at it? Don’t talk about it. Ever again.
So here’s your denialist problem – unlike the voluminous IPCC chapters (which, golly gee, do seem to have a lot in there about the PDO, which most people now think the IPCC has never heard of) – it’s what is not said, what is left out, often a lack of context, a lack of review, and more cherrypicks than the tree.
And as soon as someone starts a rave by quoting Galileo as a source of inspiration, it’s time to turn the page quickly. It’s inevitably going to be a sophistic rant to justify one’s position as holier than thou.
Nexus 6 says
(3) Satellites underestimating warming until errors corrected.
Most errors seem to underplay climate change. I suspect fraud.
Wes George says
Thank you, Jennifer. Your elucidation above is the best summation of the problem available online. You have made a difficult issue accessible to a broad audience. I wish you a wide readership.
Perhaps, in a later article you could shed more light on just how important proper scientific method really is and how it works—transparency, documentation, data archiving and reproducibility by those without a shared vested interest in the outcome. And how, after generations of strictly adhering to scientific method, the end results are often unappreciated ubiquities like our electrical grid, cars and even the comment section of a blog, where paradoxically those opposed to strict adherence to scientific methodology can post opinions.
Some posters here seem to be unfamiliar with the history of science and imagine science to be just another ungracious form of rhetorical coercion or that in some fuzzy post-modern way empirical science is relative to who is doing the judging, as if scientific method was just another subjective narrative. I wonder how they imagine the keyboard they are typing upon came about? Perhaps it leaped fully formed from the head of a sophist?
How wonderful it is that the innumerable benefits of the scientific paradigm are shared indiscriminately with both the friends and foes of reason.
Jennifer says
Thanks. But please note ‘the summation’ is from Aynsley Kellow. I have simply republished his speech.
Of course he has a great book that is also a must-read, available from Amazon: ‘Science and Public Policy: The Virtuous Corruption of Virtual Environmental Science’, 2007.
Louis Hissink says
“James Hansen, of NASA’s Goddard Institute, for example, wrote his doctoral thesis on the climate of Venus, and — contrary to what some of his critics might think — it’s clear he has never visited another planet”.
No wonder he thinks CO2 is a problem – Venus is hot for quite other reasons, not due to a runaway Greenhouse effect which Carl Sagan invented to counter Velikovsky’s deduction from historical data that Venus was very young.
And there you go, the deductive method in all its glory – make an untested assumption that a gas can act as a greenhouse and then deduce it’s a problem.
Walter Starck says
The idea that ongoing future behavior of vast incredibly complex interactive phenomena can be reliably predicted using rough estimates and assumptions plugged into unverified models that only crudely approximate reality is an intellectual conceit that would beggar belief were it not taking place. Even more unbelievable, it is those deemed to be intellectuals who seem to have succumbed most uncritically to the modeling cult.
Ian Mott says
What a breathtakingly ignorant and self serving statement made by Luke when he says, “there’s an organised campaign to tear the temperature record down – to fog and obfuscate. To make it uncertain”.
The facts have clearly established that the uncertainty was always in the data. This uncertainty was first exacerbated by the refusal of Hansen to provide his source material. And once it was provided then the continual scrutiny of it revealed more and more cause for uncertainty.
And as Hansen’s recent statements about the ambiguity of what temperature records actually mean confirm, the only organised campaign appears to have been the one that tried to mislead the public as to the integrity of the data, the quality of the analysis and the validity of the conclusions drawn.
Luke says
So what difference has it all made to your overall understanding? Zilch !
http://www.woodfortrees.org/plot/hadcrut3vgl/from:1979/offset:-0.146/mean:12/plot/uah/from:1979/mean:12/plot/rss/from:1979/mean:12/plot/gistemp/from:1979/offset:-0.238/mean:12
John Van Krimpen says
Personally, from what I’ve observed in the debate, the AGW-at-all-costs side loses when it goes into debate. No media attention; Gore won’t debate because he can’t.
It’s all of the same.
Good blog Jen, Paul and Neil – might even buy Neil a pair of shoes one day.
I would be interested in your site metrics one day, historiographically.
Ian Mott says
What Luke’s comment shows is the nature of his allegiances. If he was foremost a servant of the truth he would applaud all Steve McIntyre’s work for the improved data integrity. But no, his allegiance is to the edifice constructed around the idea of global warming. His concern over the data is for its implications for the credibility of the edifice.
Sad really.
Ender says
“His talk was entitled ‘All in a Good Cause: Framing Science for Public Policy’. He said:”
The Three Laws of the IPA.
First Law – A small minority of people have to continue to be allowed to make masses of money, because what is good for rich people is good for everybody (Catch-22).
Second Law – Corporations must be preserved except where that would conflict with the first law.
Third Law – The Free Market must be preserved except where that would conflict with the first and second laws.
Secondary and Unimportant Laws – the environment must be preserved except where that would conflict with the first, second or third laws or anything else that anyone in the IPA can think of.
Alarmist Creep (Lucy - the artist formerly known as Luke) says
Weak response Ian. Weak. You hadn’t pondered what it all meant had you?
McIntyre didn’t invent data integrity. But speaking of adjustments one way – I wonder why he hasn’t looked into that MSU data. I wonder. Don’t try to convince us you’re fair dinkum. Gee Aynsley is right on the one way business after all.
Steve says
I quite liked parts of Aynsley’s lecture, though I think his account of computer modelling errs far, far too much on the Luddite side of good sense.
It would be nice to see a sceptic who is able to talk intelligently and critically about computer modelling (which is useful if not essential in a huge range of engineering applications, not just in airy-fairy climate science) without coming up with a stale, nagging quote about garbage in, garbage out. When I hear that, I feel like I’m on the receiving end of a lecture from Brainy Smurf: “As Papa Smurf always says, garbage in, garbage out”.
Here’s some science and technology that hopefully will please everyone:
http://www.sciencedaily.com/releases/2008/05/080506124443.htm
King Canute says
Google “climate data integrity 2008” and we get Canada Free Press, Warwick Hughes, Heartland Institute, Climate Sceptic and so on. We can assume it’s the latest from the blog industry.
Does anyone wish to own the concept and the science for AGW?
rossco says
The article in the Australian points out that the University insists there will be no link between the source of funding and the research undertaken – the University will maintain its independence from the IPA. Is that the way you understand it, Jen? What if the research doesn’t confirm the views of the IPA and fellow sceptics? Will the research funding be maintained if the “right” results are not forthcoming? With 2–3 years being the norm for a PhD, we might have to wait a while to see how it all pans out.
JR says
All this quibbling about temperature rise and personal attacks on Hansen! And hardly ever a mention of the rapidly increasing melting of the Arctic ice. Summer Arctic sea ice is now predicted to be completely gone by 2013 – 100 years ahead of IPCC WORST CASE scenarios. What if the IPCC has seriously UNDER-estimated climate change?
Or do you think that there is something shonky about the satellite photos of the Arctic ice melt??? How about the rapidly accelerating melting of the Greenland ice? The release of methane from the melting permafrost? The decreasing capacity of the sea to act as a carbon sink as it becomes more acidic? And the big ice shelf that just broke off Antarctica?
Paul Borg says
JR, rarely have I seen such an amazing collection of alarmist clichés in one spot.
My congratulations.
Tim Curtin says
JR: have you ever glanced at Steve McIntyre’s blog, http://www.climateaudit.org? His latest thread begins: “Four of the past 5 months are “all-time” records for Southern Hemisphere sea ice anomalies, “unprecedented” since the data set began in 1979 as shown below…” In case you are not aware, the period referred to is the SH SUMMER.
He adds: “On a global basis, world sea ice in April 2008 reached levels that were “unprecedented” for the month of April in over 25 years. Levels are the third highest (for April) since the commencement of records in 1979, exceeded only by levels in 1979 and 1982.”
SJT says
The IPCC has shone the hard light of reason on the issue, and done quite a good job of it. For some reason it’s not good enough. 🙁
frank luff says
Thanks, Jennifer, for a well-written and important view of how “revelations” can be manipulated.
“climate science” as a science is new, modeling the same, and hardly yet a science, but
What harm does the reduction of emissions do?
I think the corruption of governments is an established fact. Corruption of science is only possible for a finite time.
Cleaner air can be the only result; if some then get funding they don’t deserve, we’ll still not be the poorer by much.
fluff4
SJT says
“”climate science” as a science is new, modeling the same, and hardly yet a science,”
The science behind AGW has been studied for well over a century. It’s getting a little more mature by now.
old construction worker says
Rossco, what does melting ice have to do with the CO2-drives-the-climate theory?
Observed facts: 1) CO2 lags temperature. 2) No hot spot in the upper troposphere. 3) Evaporating oceans do not cause heat-trapping clouds. 4) Oceans have been cooling for several years. 5) Global temperatures have been flat or cooling even though CO2 has been rising since 1998. So how many other parts of the CO2-drives-the-climate theory must be disproved before you open your eyes to the fact that CO2 does not drive the climate?
SJT says, “What harm does the reduction of emissions do?” First of all, define the emissions you are talking about. If you are talking about CO2, keep in mind that without CO2 we are dead. The money spent on regulating CO2 would be better spent on real environmental concerns.
As far as the IPCC is concerned, I hold the same view as the NCPA:
http://www.ncpa.org/pub/st/st308
“The authors of this study used these forecasting principles to audit the IPCC report. They found that:
Out of the 140 forecasting principles, 127 principles are relevant to the procedures used to arrive at the climate projections in the IPCC report.
Of these 127, the methods described in the report violated 60 principles.
An additional 12 forecasting principles appear to be violated, and there is insufficient information in the report to assess the use of 38.
As a result of these violations of forecasting principles, the forecasts in the IPCC report are invalid. Specifically:
The Data Are Unreliable. Temperature data is highly variable over time and space. Local proxy data of uncertain accuracy (such as ice cores and tree rings) must be used to infer past global temperatures. Even over the period during which thermometer data have been available, readings are not evenly spread across the globe and are often subject to local warming from increasing urbanization. As a consequence, the trend over time can be rising, falling or stable depending on the data sample chosen.”
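[A quick arithmetic check on the audit figures quoted above. Note the grouping of the remaining principles is my inference, not the NCPA’s: of the 127 relevant principles, 60 were violated, 12 appear violated and 38 could not be assessed, which leaves 17 the audit did not flag.]

```python
# Tally of forecasting principles from the NCPA audit quoted above
total_principles = 140
relevant = 127                 # judged relevant to the IPCC projections
violated = 60                  # clearly violated
apparently_violated = 12       # appear to be violated
insufficient_info = 38         # not assessable from the report

not_flagged = relevant - (violated + apparently_violated + insufficient_info)
print(not_flagged)  # prints 17 -- principles the audit did not flag
```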