MANY mainstream media science, economic and environmental journalists are not sufficiently trained to be aware of the limitations of models when they not only present climate-model output (computed projections) as data but also advocate this output as supposed proof of the threat posed by anthropogenic global warming, particularly with regard to runaway or catastrophic climate change. This disjunct between the scientific and the media presentation, when contained within the paradigm of advocacy, represents a threat to the integrity and falsifiability of science.
Science seeks the truth in knowledge; (some) media advocacy seeks to propagandise this knowledge. The impact is reinforced if a climate scientist/modeller is directly quoted as an expert, further blurring the line between science and advocacy. This has societal repercussions: the science of anthropogenic global warming (AGW) and the perceived impact of runaway or catastrophic climate change are so model-dependent that the citizenry is not always able to differentiate between the science and the advocacy – the implications of which, as regards policy development in terms of climate change mitigation, are likely to have a profound effect on society.
Climate models are used, in part, to determine future climate change scenarios related to anthropogenic global warming (AGW) and are described by the Intergovernmental Panel on Climate Change (IPCC) as “mathematical representations of the climate system, expressed as computer codes and run on powerful computers.”
Furthermore, the IPCC states that climate models:
“Are derived from fundamental physical laws (such as Newton’s laws of motion), which are then subjected to physical approximations appropriate for the large-scale climate system, and then further approximated through mathematical discretization. Computational constraints restrict the resolution that is possible in the discretized equations, and some representation of the large-scale impacts of unresolved processes is required (the parametrization problem).”
In other words a climate model is a numerical model or simplified mathematical representation of the Earth’s climate system, or parts thereof. It includes data from real world observations and creates parameters or variables for the unresolved or unknown processes.
The ability of a model to simulate interactions within the climate system depends not only on the level of understanding of the physical, geophysical, chemical and biological processes that govern the climate system but also on how accurately these processes are expressed as algorithms within the model, and how closely they represent real-world data. These models do contain some well-established science, but they also contain implicit and explicit assumptions, guesses and gross approximations, referred to as parameters (the parametrization problem mentioned above), mistakes in any of which can invalidate the model outputs when compared with real-world observations. In other words, computer models are just concatenations of theoretical calculations; as such they do not constitute evidence.
Climate models are data- and parameter-dependent. Data is based on direct or indirect observations of the environment; parameters (or parametrizations) are defined by the IPCC as:
“Typically based in part on simplified physical models of the unresolved processes . . . Some of these parameters can be measured, at least in principle, while others cannot. It is therefore common to adjust parameter values (possibly chosen from some prior distribution) in order to optimise [author’s italics] model simulation of particular variables or to improve [author’s italics] global heat balance. This process is often known as ‘tuning’.”
Tuning is considered justifiable if two conditions are met: that parameter ranges do not exceed observational ranges where applicable (though this does not necessarily constrain parameter values, which could lead to model output problems); and that adjusted (or tuneable) parameters are allotted fewer degrees of freedom than the observational constraints used in the model’s evaluation. The IPCC states that,
“If the model has been tuned to give a good representation of a particular observed quantity, then agreement with that observation cannot be used to build confidence in that model. However, a model that has been tuned to give a good representation of certain key observations may [author’s italics] have a greater likelihood of giving a good prediction than a similar model . . . that is less closely tuned.”
Herein lies a problem with modelling: that last sentence implies a subjective judgment on the part of the modeller regarding the greater likelihood of the model providing a good prediction than a less closely tuned similar model. In other words, there is the possibility that tuneable parameters can be used as a ‘fudge’ factor, either in model prediction or in hindcasting (making the model fit already-observed data).
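To make the tuning step concrete, here is a minimal sketch (a hypothetical one-parameter toy with invented numbers, not anything drawn from a real GCM) of a tuneable parameter being adjusted to optimise the fit to an observed series; the close fit it produces is built in by construction, which is why such agreement cannot then count as independent support for the model.

```python
import numpy as np

# Invented forcing series and invented "observations" for illustration only.
forcing = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
observed = np.array([0.00, 0.11, 0.19, 0.32, 0.41, 0.52])

def toy_model(sensitivity, forcing):
    """Simplest possible 'model': the response is linear in the forcing."""
    return sensitivity * forcing

# 'Tuning': scan candidate parameter values and keep the one that minimises
# squared error against the very observations the model is later said to match.
candidates = np.linspace(0.0, 2.0, 2001)
errors = [np.sum((toy_model(s, forcing) - observed) ** 2) for s in candidates]
tuned = candidates[int(np.argmin(errors))]

print(f"tuned sensitivity = {tuned:.3f}")
print("tuned model output:", np.round(toy_model(tuned, forcing), 3))
# The agreement with the calibration series is guaranteed by the tuning itself,
# so it says nothing about how the model performs on data it was not tuned to.
```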
Prominent climatologist Richard Lindzen, writing in “Climate Science: Is it currently designed to answer questions?” a paper he presented at the “Creativity and Creative Inspiration in Mathematics, Science, and Engineering: Developing a Vision for the Future” conference held in San Marino, August 2008, summarises the problem thusly:
“Data that challenges the [AGW] hypothesis are simply changed. In some instances, data that was thought to support the hypothesis is found not to, and is then changed . . . Bias can be introduced by simply considering only those errors that change answers in the desired direction. The desired direction in the case of climate is to bring the data into agreement with models, even though the models have displayed minimal skill in explaining or predicting climate. Model projections, it should be recalled, are the basis for our greenhouse concerns. That corrections to climate data should be called for is not at all surprising, but that such corrections should always be in the ‘needed’ direction is exceedingly unlikely. Although the situation suggests overt dishonesty, it is entirely possible, in today’s scientific environment, that many scientists feel that it is the role of science to vindicate the greenhouse paradigm for climate change as well as the credibility of models. Comparisons of models with data are, for example, referred to as model validation studies [author’s italics] rather than model tests.”
It needs to be kept in mind that computer climate models do not output data: their results are simply computations of the input data. Obviously, then, the accuracy or otherwise of the computed output is dependent upon the accuracy of the input data. Furthermore, a climate model’s output is only reliable to the degree that the model’s performance can be validated, not necessarily by comparisons with other models but by comparison with raw data recorded or observed in the real world. Of course, tuned parameter corrections may be legitimate, but only if they include both those corrections that bring observations into agreement with the model and those that do not – to exclude the latter is to obfuscate the model’s outcome through omission.
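Continuing the toy example above, a hedged sketch of the kind of out-of-sample check this paragraph calls for (again with invented numbers): the parameter is tuned on one period and the model is then scored only against raw observations it was never tuned to.

```python
import numpy as np

# Invented series, split into a tuning period and a held-out validation period.
forcing_tune = np.array([0.0, 0.2, 0.4, 0.6])
obs_tune = np.array([0.00, 0.11, 0.19, 0.32])
forcing_test = np.array([0.8, 1.0, 1.2])
obs_test = np.array([0.38, 0.44, 0.49])

# Tune on the first period only: least-squares slope through the origin.
tuned = np.sum(forcing_tune * obs_tune) / np.sum(forcing_tune ** 2)

# Validate on the second period: root-mean-square error against raw observations.
pred_test = tuned * forcing_test
rmse = np.sqrt(np.mean((pred_test - obs_test) ** 2))

print(f"tuned sensitivity = {tuned:.3f}")
print(f"out-of-sample RMSE = {rmse:.3f}")
# Skill on data the model never saw is worth more than any agreement with the
# series it was tuned to reproduce.
```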
In climate science the most notorious example of obfuscation through omission is what has become known as Mann’s Hockey Stick. Lindzen again:
“In the first IPCC assessment (IPCC, 1990), the traditional picture of the climate of the past 100 years was presented. In this picture, there was a medieval warm period that was somewhat warmer than the present as well as the little ice age that was cooler. The presence of a period warmer than the present in the absence of any anthropogenic greenhouse gases was deemed an embarrassment for those holding that present warming could only be accounted for by the activities of man. Not surprisingly, efforts were made to get rid of the medieval warm period . . . The most infamous effort was that due to Mann et al . . . which used primarily a few handfuls of tree ring records to obtain a reconstruction of Northern Hemisphere temperature going back eventually a thousand years that no longer showed a medieval warm period. Indeed, it showed a slight cooling for almost a thousand years culminating in a sharp warming beginning in the nineteenth century. The curve came to be known as the hockey stick, and featured prominently in the next IPCC report, where it was then suggested that the present warming was unprecedented in the past 1000 years. The study immediately encountered severe questions concerning both the proxy data and its statistical analysis.”
The Mann Hockey Stick has since been discredited by two independent assessments, both statistically and by reference to historical and archaeological records, though his initial claim that the current (late 20th-century) warming is unprecedented remains within the lexicon of adherents to the AGW hypothesis.
There is a problem here for the reliability of science when models fail, whether in prediction or in hindcasting, but are still given the same validity as observed or model input data. One could suspect that advocacy is overriding science in this instance. While advocates and politicians might think that the science of AGW is settled, scientists and climate modellers need to be able to, and be seen to, clearly separate what is science from what is advocacy; otherwise their research may be subjected to political manipulation.
The computed output of climate models, often used in conjunction with models from outside the field of climate science, has been used to construct climate change scenarios, commonly abbreviated as SRES after the Special Report on Emissions Scenarios. The SRES was developed by the IPCC to provide scenarios with which to analyse, according to the report itself:
“How driving forces may influence future [greenhouse gas] emission outcomes and to assess the associated uncertainties. They assist in climate change analysis, including climate modeling and the assessment of impacts, adaptation, and mitigation. The possibility that any single emissions path will occur as described in scenarios is highly uncertain . . . Any scenario necessarily includes subjective elements and is open to various interpretations.”
The outputs of SRES models, alternative views of how the future may unfold, are termed projections. Projections are often erroneously stated or implied to be forecasts, particularly in the media in connection with runaway climate change. This creates the impression that the SRES model output is new data, even proof, as opposed to being simply a projection of computed input data and parameters drawn from a number of sources within and beyond the field of climate science.
Multi-model SRES climate change scenarios are said to create an ensemble of climate change projections. Modellers then consider the spread of these SRES projections, upon which has been built the notion that a narrow spread justifies confidence in the projections, while a wide spread indicates uncertainty about the projections even though they may offer a range of possibilities.
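A minimal sketch (with invented numbers, not actual SRES output) of how such an ensemble spread is typically summarised; note that nothing in these statistics tests the projections against observations.

```python
import numpy as np

# Hypothetical end-of-century warming projections (degrees C) from several
# models run under the same scenario; the values are invented for illustration.
ensemble = np.array([1.8, 2.1, 2.4, 2.6, 3.1, 3.9, 4.4])

print(f"ensemble mean      : {ensemble.mean():.2f} C")
print(f"full spread        : {ensemble.max() - ensemble.min():.2f} C")
print(f"standard deviation : {ensemble.std(ddof=1):.2f} C")
# A tight cluster of projections can simply mean that the models share the same
# assumptions and parameterisations, not that any of them is right.
```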
The SRES approach is problematic: it is assumptive, prone to exaggerated errors, and unscientific. Firstly, assumptions are made about unresolved processes by using tuned parameters. Secondly, errors may be exaggerated by (i) treating the computational outputs as new ‘data’ and reintroducing that ‘data’ as inputs into a new model, upon which new projections are made, and (ii) assuming not only that a close spread within an ensemble raises confidence in the projection but also that a broad spread, rather than calling the models’ accuracy into question, merely indicates a range of possibilities, albeit with a lower (assumed) confidence. Thirdly, multi-model SRES outputs are based on so many assumptions that their use is inherently unscientific, because many of the model elements are not falsifiable. They are, nonetheless, a good tool for advocacy, especially when presented in the guise of science.
This SRES approach has no place in the scientific process as its outputs cannot be verified against real-world data: projections are not records and models are not data generators. As yet there is no scientific principle that says one can derive valid estimates from model outputs unless the model output resembles the observed, non-modelled data. The uncertainty of an ensemble of climate change projections will always depend on the accuracy of the raw input data, irrespective of the spread of projections.
This is not to say that there is no place for models in climate science: even though they are a tool and not data generators, there are many examples of statistical climate forecasting models providing good projections over short time frames. It is important that models, in this context, remain a tool of climate science and not a tool of advocacy.
A problem with climate modelling is that of replication and validation. Because models are tools used to calculate usually complicated data inputs, it is important that the computed outputs can be tested (the models are validated, or not) and repeated (the models’ results are replicated, or not). Doing so helps remove bias (the models’ outputs are easily influenced by data inputs, flux adjustments and parameterisation) and so increases confidence that the models do in some way represent what they are seeking to show. Not to do so means that the models’ computed outputs could be used for non-scientific purposes, such as advocating a predetermined position, without others, perhaps affected by this advocacy, being able to ascertain whether the models’ computed outputs actually represent what they seek to show.
This is especially true for climate science, given the repercussions that the computed outputs have on public policy as regards AGW, climate change, tipping points, emissions trading schemes, etc. Furthermore, as much climate science is publicly funded through government grants, it is even more imperative that the funders, ie the public, receive information that can be trusted. It is unfortunate, then, that some climate scientists and modellers are reluctant to share data, especially all the codes and algorithms used, which would allow the models to be fully replicated and, if necessary, their validity to be challenged. This reluctance goes to the core of scientific thought and process.
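A hedged sketch of the most basic replication check such sharing would make possible (the model, inputs and parameters below are placeholders invented for illustration): given the published code, input data and parameter settings, an independent rerun should reproduce the reported output to within numerical tolerance.

```python
import numpy as np

def run_model(inputs, params, seed):
    """Placeholder for a published model run: deterministic given the inputs,
    the parameter values and the seed used for any stochastic components."""
    rng = np.random.default_rng(seed)
    return params["scale"] * np.cumsum(inputs) + rng.normal(0.0, 0.01, len(inputs))

inputs = np.linspace(0.0, 1.0, 50)   # stand-in forcing series
params = {"scale": 0.3}              # stand-in parameter file

original = run_model(inputs, params, seed=42)   # the 'published' run
replica = run_model(inputs, params, seed=42)    # an independent rerun

print("replicated:", np.allclose(original, replica))
# Without the code, the input data and the exact parameter values, this check
# cannot even be attempted, which is the complaint about withheld material.
```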
******************
Notes and Links
Ian Read is a researcher, author and geographer with a special interest in climatology and vegetation. He has written over twelve books including The Bush: A Guide to the Vegetated Landscapes of Australia and Australia: The Continent of Extremes – Our Geographical Records.
This article was previously published at On Line Opinion on 23 June 2009 and is reproduced here with permission from the author.
Jack Walker says
What we can measure is a truth not a sole or lonely truth.
When our senses are not acute enough we build instruments of measurement. This we have done for thousands of years.
Computers do not measure. Computers allow us to interpolate and interpret, that is all. They allow us to test our theories algorithmically, that is all.
The data going into these algorithmic machines must be first class. We cannot and we must never send assessments into algorithmic machines, never for reliance.
We break the first rule of science, what we call the first truth.
In our science as we know it we measure on the instruments we have.
Jeremy C says
I must remember this piece of reasoning for the next time I get on an aircraft so I can tell the pilot that the computer model flying the aircraft doesn’t work with real world data so can he/she turn it off and fly the aircraft manually.
Jack Walker says
Jeremy,
The computer simulations that back up the pilot are the best the avionics industry has; they are tested to a level of human safety. The programs are not algorithmic, there is no attempt at artificial intelligence; they are programs hard-wired, so to speak.
The situational parameters in the programs are event parameters. For all intents and purposes, in the context of the flying, the computer is hardwired from a programming point of view. The pilot has no input to the parametrisation of the computer program. He calls his event and the computer, as hardwired as a chess program, is designed to respond to his input. The avionic computers deal with the situation as the instruments give it to them.
You anthropomorphise computers. They are not animals or humans; they are machines.
Humans are not hard-wired in their thinking; fear the pilot more than the program, the program will not get tired or drunk.
Your argument is specious. The machine is the fail safe not the pilot.
Pokemon dude buy a pet.
Neville says
Jeremy, what does an aircraft journey of several hours have to do with the science of climate change?
If the onboard flight computer had to encompass every variable for every flight for say 30 years it would fail.
The flight projection/prediction would be simple compared to the complex dynamics that make up our climate over the next 30 years.
wes george says
Jeremy Clueless is totally clueless about how an airline autopilot works; it’s not comparable to a 5 or 50-year climate model scenario in any way. It’s a sophisticated cruise control, like you might have in your car, with a lot of rapid second-by-second adjustments to make the ride smooth.
http://www.gadling.com/2008/05/02/plane-answers-when-do-pilots-use-the-autopilot/
Jeremy C says
So boys and girls give me some evidence that the basics of climate models on computers are different than the basics of computer models used in areas such as flying planes, modeling fluid dynamics, orbital mechanics, wind and wave models (e.g. WAM-Mike 21 etc), modeling chemical reactions and so on. What two words might you use to characterise such models?
Don’t forget, evidence boys and girls, evidence.
Michael says
What I have a great deal of difficulty with is why we have over 20 different models each predicting a different outcome. If the models were any good at predicting the future climate of this planet surely they would all by now be coming to a view which should eliminate all but one of the models – The model which demonstrates beyond doubt that the science is settled and the future is predictable on any time scale.
Michael says
Ok Jeremy C I will give it a shot but it may take more than two words. Firstly it is the number of “Degrees of Freedom” of each of the variables and secondly it is the number of variables. Both these conditions are unconstrained and in some cases poorly understood in climate models but are severely constrained and well understood in the models you wish to use as a comparison.
Jeremy C says
Michael,
Thanks for the thoughtful answer, but do you mean that overfitting results from all the variables in a climate model, and that no one has thought of this? And why are you suggesting that things are different in other models?
Gordon Robertson says
Jeremy C. “I must remember this piece of reasoning for the next time I get on an aircraft so I can tell the pilot that the computer model flying the aircraft doesn’t work with real world data so can he/she turn it off and fly the aircraft manually”.
This is the problem with you AGW zealots, you can’t tell the difference between a computer used as a model and one used as part of a control system involving feedback. EVERYTHING fed into an aircraft’s computer is known. The technology is totally known…there are no guesses whatsoever and absolutely no consensus. There is no tweaking (fudging) in an aircraft’s control system, it is all absolute.
A computer model is totally dependent on the designer’s understanding of the atmosphere and his/her ability to interpret the physics. According to Gerlich and Tscheuschner, their understanding of the physics is not good. Lindzen and Spencer concur that modelers have gotten the sign wrong with cloud feedback.
The computer in an aircraft is a servo-system. That means the computer gathers information from various sensors, in real time, obviously, and outputs control data to the mechanisms flying the plane. The results of the mechanism responses are compared to a standard and any errors are automatically corrected. That is not in the least like a computer model, where imaginary forcings like CO2 and aerosols are dreamt up to TWEAK the model so it will respond in the realms of the normal. If you flew a plane based on that nonsense it would crash.
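A toy sketch of the distinction being drawn here (arbitrary numbers, a proportional controller only, nothing resembling real avionics software): a servo loop measures the actual state every step and corrects towards a setpoint, whereas an open-loop projection just integrates its assumptions forward and lets any unmodelled error accumulate.

```python
setpoint = 10000.0    # target altitude, arbitrary units
sink = -2.0           # constant unmodelled sink rate per step

def closed_loop(steps=50, gain=0.2):
    altitude = 9000.0
    for _ in range(steps):
        error = setpoint - altitude            # measured in real time
        altitude += gain * error + sink        # correction applied every step
    return altitude

def open_loop(steps=50):
    altitude = 9000.0
    planned_climb = 20.0                       # assumed, never checked against sensors
    for _ in range(steps):
        altitude += planned_climb + sink       # errors accumulate uncorrected
    return altitude

print(f"closed loop after 50 steps: {closed_loop():.0f}")   # settles near the setpoint
print(f"open loop after 50 steps  : {open_loop():.0f}")     # misses by the accumulated sink
```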
Gordon Robertson says
Michael “What I have a great deal of difficulty with is why we have over 20 different models each predicting a different outcome”.
That doesn’t bother me nearly as much as the fact that the people designing and running the models are biologists, mathematicians, geologists, astronomers, etc. It seems they are anything but physicists, degreed climate scientists or meteorologists.
One would think an expert understanding of physics was required, but according to the book by Raymond Pierrehumbert, of realclimate fame, physics is something that can be absorbed along the way. The engineer, Jeffrey Glassman, took Gavin Schmidt, a mathematician/modeler, to task for his understanding of feedback. According to Glassman, Schmidt had no idea what feedback was in a practical sense. The fact that Lindzen and Spencer have declared, based on direct observation, that modelers have the sign wrong with cloud feedback comes as no surprise.
Even the physicists involved with the modelers at realclimate, like Stefan Rahmstorf, seem to have difficulties with the thermodynamics. According to G&T, who quote Rahmstorf, the latter thinks back-radiation that warms the surface to a higher temperature than solar radiation does not break the 2nd law of thermodynamics. He thinks it’s ok to add heat absorbed by GHGs from the surface (at a loss to the surface energy) to solar radiation to get a ‘net balance of energy’ that is positive. G&T responded that the 2nd law is not about adding energies but about heat flow. Such a mechanism as described by Rahmstorf represents a perpetual motion machine that creates energy rather than losing it through losses.
Someone should tell modelers there’s a real world out there that does not correspond to their model worlds based on idealized physics. In fact, John Christy of UAH tried to tell one that, but the modeler responded arrogantly that his model output was correct and the real satellite data was wrong.
Russell says
Nice article Ian,
I should state first that I really have never thought much about the evidence and/or the climate modelling sides of the debate on climate change. It’s all too complicated for me, too many variables, and my own opinion is that it does not really matter all that much one way or the other in terms of whether we accept that global warming is real or not. To me there are obvious benefits in reducing the average Australian’s energy footprint.
But I do have my own experience of models in the industry where I earn my living. The modelling of dredge plumes is a less complicated business than a climate model, but it still requires a series of inputs to provide forecasts of what the particles of sediment will do under various scenarios of dredging and weather and sea states. I am not a modeller, but I am very familiar with the interpretation of the outputs, and with how it’s possible to derive very different outcomes with just small tweaks of the model inputs or the weightings assigned to various inputs in the model algorithms.
The recent insistence by government regulatory authorities for the incorporation of ‘resuspension’ into the models has required models to provide ever more sophisticated outputs, without the benefit of ever more reliable input data.
I have seen advocacy first hand in relation to interpretation of modelling output, where the accuracy of the same output can be questioned by an individual wherever it forecasts no significant environmental effect, and yet is considered to be exact by that same individual wherever it forecasts significant environmental impact is likely.
We have also run a few model validation exercises recently, where the models used by two different companies have had their predictions ‘tested’ as part of a regulatory requirement. The testing sounds inherently simple enough, and is based on taking a series of real time measurements in and near the observed plume, and asking the model to predict what the plume should have looked like at those times, with inputs based on dredge logs, weather conditions, tidal states and so on. However, experience has demonstrated the need to be very careful when setting up the conditions of the test and comparing the model output with the real time measurements.
My interactions with the modellers have demonstrated they are always mindful of the many shortcomings in the modelling process, and they are always keen to investigate why their models may not have predicted the position, scale and intensity of a plume accurately, although many understand that near enough may be all that’s possible in this field. Their level of frustration with misuse and misinterpretation of the model outputs is also often on display in a regulatory framework where models are asked to do more than they may be capable of.
Luke says
More tedious ranting of little substance from the non-greenhouse theorists. Here’s Gordon again shamelessly ranting on the same old same old after having been medicated for his previous climate kookiness. Mate don’t quote Spencer as source – you disagree with him. But like all sceptic harlots – you’ll jump on anything.
SJT says
Thusly?
Gordon Robertson says
Luke “More tedious ranting of little substance from the non-greenhouse theorists”.
And still more rhetoric and non-science from the AGW contingent.
AGW = consensus + bafflegab = no science
Gordon Robertson says
SJT “Thusly?”
SJT is speechless, down to a one-word vocabulary. I guess all those 10 word responses wore him out.
cohenite says
“sceptic harlots”; that’s a bit colourful; personally I think the GCM is a fabulous concept; it’s just they have been politicised; the GCMs have 2 weaknesses; parametrics and uncertainties; in respect of parametrics this is the problem;
“Example of well constrained parameters
This would include things like the freezing point of water, direct radiative effect of CO2 and other items that are well constrained by lab experiments”
The “direct radiative effect of CO2”, for instance, has a logarithmic decline, in isolation; that is, in a test tube, a layer of increasing CO2 will absorb an exponentially declining amount to an asymptotic fraction of a constant source of IR at the relevant wavelengths; this in itself puts a severe limitation on the ‘heating’ capacity of increases in CO2. But outside the lab 2 further constraints limit CO2 heating; the first is the formation of local thermodynamic equilibriums at the surface/atmosphere interface; since reemission requires a temperature continuum, no reemission within the LTE takes place until that LTE is convectively moved upwards to the characteristic emission layer, where the temperature in the LTE equilibrates with the general external atmosphere; at the CEL isotropic emission is constrained by lower opaqueness so emissions are upwards.
The second profound constraint is water in the atmosphere, which overlaps the absorbing frequencies of CO2; but unlike CO2, water’s feedback is dependent on its form and location and can be either a positive or negative feedback; high cloud will let in insolation but block OLR; low cloud will reflect insolation, absorb upward IR and then reemit the IR from the top of the clouds; since more of the atmospheric water has been used to form the low clouds there is less high cloud to block the reemitted water IR; since water absorption overlaps CO2 this is a double -ve feedback to any heating trend.
Neither of these real-life factors, which differ from the lab experiments, is adequately dealt with by the GCMs, which is why the GCMs have such a lousy record of forecasting/projecting/predicting.
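For reference, a small sketch of the logarithmic relationship referred to above, using the widely cited simplified expression ΔF ≈ 5.35 ln(C/C0) W/m² from Myhre et al. (1998); this covers only the direct radiative term, not the LTE or water vapour effects discussed in the comment.

```python
import math

C0 = 280.0  # pre-industrial CO2 concentration, ppm

def direct_forcing(c_ppm, c0_ppm=C0):
    """Simplified direct radiative forcing of CO2 in W/m^2 (Myhre et al. 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 350, 420, 560, 1120):
    print(f"{c:5d} ppm -> {direct_forcing(c):5.2f} W/m^2")
# Each doubling adds roughly the same ~3.7 W/m^2, so the marginal effect of
# additional CO2 declines as the concentration rises.
```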
SJT says
“SJT is speechless, down to a one-word vocabularly. I guess all those 10 word responses wore him out.”
It’s an empty opinion piece, devoid of any actual content.
Eli Rabett says
First, almost all data is derived from models of instrument behavior. Second, just about all physics papers describe their results in terms of models. You have a point?
Graeme Bird says
“First, almost all data is derived from models of instrument behavior. Second, just about all physics papers describe their results in terms of models. You have a point?”
Why invoke the physicists? What is YOUR point?
The modelers don’t blur science with advocacy? They are a REBELLION against science. William Connolley and other science-challenged morons of this ilk. The modelers refuse outright to conduct science. When has a modeler tested the hypothesis that CO2 has a cooling effect? A neutral effect? When has a modeler gotten hold of Beck’s history of CO2 levels and tried to plug that in? When have they confessed out loud that they cannot do the job unless people get serious about getting together an accurate history of CO2 levels?
When have they used fuzzy logic algorithms for that matter? Is it all just computer-speed number crunching?
There is no need to suggest these guys are blurring science with something else. They are not conducting science in the first place. At least not where CO2 is concerned.
First, almost all data is derived from models of “instrument behavior.” What does that mean? One would have thought that they were modelling the climate. Not how the instruments behave. Is this a jargon term?
Gordon Robertson says
Eli Rabbett “First, almost all data is derived from models of instrument behavior. Second, just about all physics papers describe their results in terms of models. You have a point?”
Before you go asking if anyone has a point, what is yours?
What are ‘models of instrument behavior’? Is that more of the vague science you used in your attempt to discredit the G&T paper? Instruments are normally used to monitor and measure observed phenomena, not to model them. You are probably referring to software packages like Matlab, which are very good for modeling established science like electronics. There’s no way Matlab or any other software can model something as complex as the atmosphere.
We have been royally jobbed by modelers like the crew at realclimate. I was just reading about the modeling effort by Steig, Mann et al, trying to prove the Antarctic has been warming since the 1950’s. What a joke. Stations buried in snow, the use of software to fill in missing data, the emphasis on 35% of the stations located in the warmer peninsula with the rest near the periphery, and the choice of a minimalist statistical analysis all contributed to a highly misleading conclusion. As if it wasn’t bad enough that the NAS and Wegman had already discredited Mann, here he is participating in a study with more questionable science.
My point, if I am who you are targeting? Models as used in climate science are crap and they have been popularized by people like yourself fronting as experts. I have little doubt that modeling is useful, perhaps even accurate, in many disciplines. Let me ask you this. Would you find it acceptable if a biologist, a geologist, a mathematician or an astronomer ran a model to investigate complex electronics phenomena? If so, my estimation of you is worse than I thought. If so, why are people from those disciplines investigating atmospheric phenomena, and why are they being taken seriously?
Where’s your proof that just about all physics papers describe their results in terms of models?
Gordon Robertson says
Greame Bird “When has a modeler tested the hypothesis that CO2 has a cooling effect?”
Good stuff, Birdie, here’s a few more.
When have they even made the slightest attempt to consult with an expert on the atmosphere, like a meteorologist or a degreed climate scientist? Christy, Spencer and Lindzen have offered advice only to have it rebuffed and snickered at, by mathematicians and computer programmers. When are we going to wake up to this buffoonery? This would have made good Monty Python stuff.
When have they read a real textbook written by a real physicist and/or meteorologist? Reading through The Fundamentals of Atmospheric Radiation, by Bohren and Clothiaux, boggles the mind as to the complexity of radiation in the atmosphere. The concepts of scattering, phase shifts, polarization, vector/tensor analysis, to name a few, are completely unaddressed. In its place is an over-simplified model of heat transport in the atmosphere, the basis of which is two theoretical surfaces radiating against each other.
A moment’s deliberation gives rise to the huge problems involved. For one, the atmospheric gases thin out in a continuous gradient from the surface outward. Where is that atmospheric surface? For that matter, where is that greenhouse? Most of us are walking around breathing that surface. With reference to surface temperature, are we referring to the solid surface, or a layer of air above the surface? That’s what surface stations measure. As Bohren points out, solar radiation incident on a surface excites charges in the surface to radiate their own EM, which interferes with the solar EM, just like in a diffraction grating.
Incoming solar radiation contains just over 50% of its energy in the infrared. If GHG’s are an influence, they are being warmed on the way in and must behave like a surface. Where does that leave us? A surface radiating against a surface radiating against another surface, with all the surfaces intertwined? Where does it end?
This stuff is infinitely complex, yet we are being spoonfed trivia by mathematicians, biologists and astronomers with no background in theoretical physics.
Louis Hissink says
Eli Rabett
““First, almost all data is derived from models of instrument behavior”
Oh that is really interesting, data are not measurements but computer outputs of what the modeller thinks an instrument might measure.
This is fast becoming a farce – the climate equivalent of econometrics.
W. Pounder says
Can Jeremey C run those GCM assumptions past us or do we have to ask G Bird to deliver the goods?
Luke says
Rules for denialists
http://larvatusprodeo.net/2009/08/01/the-rules/
hahahahahahahahahahahahaha
Luke says
And you must admit Coho – you’d never see this here – but your fav even made the list.
http://rabett.blogspot.com/2009/07/best-of-worst-john-mashey-asks-maybe.html Comments even funnier.
HAHAHAHAHAHAHAHA
Larry Fields says
Louis wrote:
“Oh that is really interesting, data are not measurements but computer outputs of what the modeller thinks an instrument might measure.
This is fast becoming a farce – the climate equivalent of econometrics.”
You’re too charitable. And Ian has a lot more patience than I do. The Japanese scientists’ characterization of IPCC computer models as “ancient astrology” is more to the point.
Computer simulations have a limited role to play in scientific investigation. However masturbating with silicon does not even come close to proving the assertion that the ‘science’ of climate change is settled. Garbage-in-garbage-out computer models, with zero predictive value, are the ONLY support for the Alarmists’ central claim that GHGs played a significant role in the latest round of global warming that ended back in 1998. The Doomsday scenarios are a house of cards, predicated on this scrap of faith-based ‘science’.
Luke says
Larry – “Garbage-in-garbage-out computer models, with zero predictive value” – simply blather mate. You’re all just ranting. Blah blah blah blah … world conspiracy – Al Gore is fat – blah blah blah
“the ONLY support for the Alarmists’ central claim that GHGs played a significant role in the latest round of global warming that ended back in 1998.” – what an utterly STUPID comment
What moronic denialist scum.
Neil Fisher says
1. “Validated”
This means:
1) Demonstration that the discrete solutions converge to the continuous solutions; AND
2) Demonstration of numerical stability of the implementation.
2. “Verified”
This means that the model shows usefully predictive results: if we know the starting condition with the required resolution and accuracy, we can know the future state of the system with the desired resolution and accuracy.
If you know of any “climate model” that meets the validation criteria, let alone the verification criteria, I would appreciate it if you could tell me which one.
Note that a model cannot be verified until it has first been validated.
Note also that the models used by engineers for designing bridges, cars etc are all validated and verified finite element analysis tools – climate models, which use similar techniques, are not, to my knowledge, either validated or verified. That doesn’t mean we can’t use them, but we need to recognise the limitations.
SJT says
“Computer simulations have a limited role to play in scientific investigation. However masturbating with silicon does not even come close to proving the assertion that the ’science’ of climate change is settled.”
Have you ever even bothered to read the IPCC report? The supporting evidence for AGW is much more than just models. If all that they had was models, I wouldn’t accept it either.
Graeme Bird says
“Have you ever even bothered to read the IPCC report? The supporting evidence for AGW is much more than just models.”
No they don’t have any evidence. No use lying about that. If you think I’m wrong find any evidence at all for the likelihood of catastrophic warming.
cohenite says
luke; say hi to my old chum Eli for me; I agree with his number 5 but the others are good stuff; about Miskolczi, developments there should cause some consternation amongst the rabbit folk and some support for McLean will be presented next week; life goes on luke, nothing is forever and we should bask in the warmth while we can.
Luke says
Coho – give it away mate. McLean – shhhhh – easy does it – just very very quietly freeze, slowly back up, then RUN AWAY very quickly. Deny having ever seen it.
James Annan has now shot it up with another error.
http://julesandjames.blogspot.com/2009/07/editorial-standards-at-agu-journals.html
Why did McIntyre and Stockwell not tell us all this – as climate science guardians of the planet – they’re obviously not real sceptics. Otherwise they would have absolutely panned it.
You guys are faux sceptic sluts – you’ll jump on anything that takes your eye.
SJT says
“about Miskolczi, developments there should cause some consternation amongst the rabbit folk ”
The only consternation Miskolczi causes is that due to the number of people who can believe utter nonsense.
Eli Rabett says
http://www.temperatures.com/rtds.html for a start. You need a physics-based model of the response and noise in the measurement device for interpolation between calibration points, and that is true for any temperature measurement device. Think about how you define absolute zero, the one fixed point on the temperature scale. Think about why triple point cells are important in thermometry. Think. (Well, ok, Eli knows that above this blog’s URL stands the motto,
Abandon thought ye who enter here.)
You need physical models to even start to discuss temperature.
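As a hedged illustration of what a physics-based model of instrument behaviour looks like in practice, here is the standard conversion from the measured resistance of a PT100 RTD to temperature using the Callendar-Van Dusen relation (IEC 60751 coefficients, valid for 0 °C and above; below zero an additional term is required).

```python
import math

# IEC 60751 Callendar-Van Dusen coefficients for a PT100 (R0 = 100 ohm at 0 C).
R0 = 100.0
A = 3.9083e-3
B = -5.775e-7

def pt100_temperature(resistance_ohm):
    """Invert R(T) = R0 * (1 + A*T + B*T^2) for T >= 0 C."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - resistance_ohm / R0))) / (2.0 * B)

for r in (100.0, 107.79, 119.40, 138.51):
    print(f"{r:7.2f} ohm -> {pt100_temperature(r):7.2f} C")
# The instrument actually delivers a resistance; the temperature only exists
# once this model of the sensor's behaviour has been applied to it.
```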
Graeme Bird says
“This ridiculous paper has already been eviscerated by Tamino, RC, and mt, so I won’t waste too much time on it, but I have spotted one more error that no-one else has commented on so far before I get to the main point of my post.”
Why link to rubbish like this, Luke? The loopy tandem-riding Scotswoman has nothing to say about the science. And she erroneously claims that the anti-scientific lunatics like Tamino and the science frauds of real-climate have punched holes in it. Why not link to someone competent who at least admits to having a serious critique of it. The absolutely useless Jules starts off admitting up front that she has no science to set this matter straight.
Larry Fields says
SJT wrote:
“The supporting evidence for AGW is much more than just models.”
Yes, there’s evidence that atmospheric CO2 concentrations have been increasing. Yes there’s evidence that global temperatures increased to a peak back in 1998. However there’s ZERO physical evidence that the first contributed in a SIGNIFICANT way to the second.
Yes, CO2 is a weak GHG. So what? Yes, increasing atmospheric concentrations of the stuff probably contributed in a very small way to the recent round of ho-hum global warming that ended back in the last century. So what? Does SJT understand the concept of “physical evidence”? Does SJT understand the concept of “significant”?
Yes, you’ve trotted out your sweeping claim. How cute! Now, without appeals to authority figures, appeals to fear, and other emotional codswallop from the Alarmist bandwagon, please show ONE piece of physical evidence that supports your belief in the Flying CO2 Monster. And remember that correlation–by itself–does not prove causation.
Neil Fisher says
Luke, SJT & Eli: no pointer to a validated and verified model yet? Here, I’ll make it easier for you: point me to evidence that the discrete and continuous solutions of the 3D Navier-Stokes equations converge – since this is a major part of the underlying physics embodied in CAOGCM calculations, someone, somewhere, has evidence that these calculations are more than just numerical noise, right? So we *must* know the spatial and temporal scales where that convergence starts to fail, right? Because if we don’t have this information, the models would be COMPLETELY unreliable, wouldn’t they? Without that information, all the models are is a waste of time, effort and energy.
Eli especially – I know you have access to university resources and can easily find this information. You won’t because you CAN’T – it doesn’t exist, does it? Show me the cite; name the paper and authors and where and when it was published. It’s a simple, basic question on models and no-one has EVER shown this basic first step on the road to validating the models. No-one. Ever. So prove me wrong.
Louis Hissink says
Eli
“You need a physics based model of the response and noise in the measurement device for interpolation ”
I will think about that when I next use a mercury thermometer to indicate temperature.
SJT says
“Luke, SJT & Eli: no pointer to a validated and verified model yet? Here, I’ll make it easier for you: point me to evidence that the discrete and continuous solutions of the 3D Navier-Stokes equations converge – since this is a major part of the underlying physics embodied in CAOGCM calculations, someone, somewhere, has evidence that these calculations are more than just numerical noise, right? So we *must* know the spacial and temporal scales where that convergence starts to fail, right? Because if we don’t have this information, the models would be COMPLETELY unreliable, wouldn’t they? Without that information, all the models are is a waste of time, effort and energy.”
The models don’t pretend to predict how the atmosphere will behave precisely in the future; that is impossible. All they are doing is saying: for a given mix of climate “ingredients”, what can I expect the overall climate to be? That is why there is no point in entering the current weather as a starting point into the models. It only makes sense to give them a starting point of the average climate conditions.
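A toy illustration of that distinction (a chaotic logistic map standing in for ‘weather’; it has nothing to do with an actual GCM): two runs from almost identical starting points diverge within a few dozen steps, yet their long-run statistics remain close.

```python
def logistic_run(x0, steps=10000, r=3.9):
    """Iterate the chaotic logistic map x -> r*x*(1-x) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_run(0.500000)
b = logistic_run(0.500001)   # a one-in-a-million change to the initial condition

print("state at step 30 :", round(a[30], 3), "vs", round(b[30], 3))   # already decorrelated
print("long-run means   :", round(sum(a) / len(a), 3), "vs", round(sum(b) / len(b), 3))
# The individual 'weather' states become unpredictable almost immediately, while
# the 'climate' (the statistics of the attractor) barely notices the starting point.
```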
Gordon Robertson says
Eli Rabett “Think about how you define absolute zero, the one fixed point on the temperature scale”.
I understand what you’re getting at, and I think that way myself. In fact, I get ridiculed for it. I think that physics has been described by humans and passed off as reality, whereas real phenomena are quite separate from the definitions we impose, such as force, mass, momentum, especially time.
David Bohm described physics in such terms but I have never heard anyone else do it. Craig Bohren pointed out that momentum is a phenomenon and does not necessarily obey the law of p = mv. We humans described it as such and that’s why a photon can have momentum and no mass.
With respect to models, I can agree with you that humans have modeled reality as psychological concepts, however, using such an argument seems to take away from what is being done with models in climate science. I have no issue with using models to represent well understood theories, I just don’t think that’s the case with climate modeling. Although models have been used all along in teaching electronics, I distinguish between models that represent phenomena and those that represent large systems.
I have worked in electronics most of my life, studied electrical engineering at university and I have done some basic design. I had never heard of a model in electronics till about 10 years ago. Computers in the early to mid-1980’s simply did not have the power, and modeling software was not available. Electronic modeling software like pspice, simulink, etc., are recent ‘tools’ to aid in design, but the basics still have to be learned to use them properly and effectively.
I am the first to admit that having software to model electronic circuits is both a boon for design and a wonderful teaching aid. I would quickly amend that statement to emphasize the need to understand basic electronics (physics) before using such a model. There’s no point plugging electronics equations into a model if you don’t know what the equations represent in reality. I’m sure people can get away with doing so but they will never be good designers till they learn the physics. I think it’s eminently fair to draw the same parallel with people using climate models, when they don’t have the background in physics to fully understand the equations they are programming into the model, or the ability to interpret the theory of atmospheric physics.
I learned basic electronics the hard way, through studying theory, doing problems sets and running experiments. When I got out of school, another learning process began, that of fitting my theory into practice. It took about 20 years to get comfortable with the relationship between theory and practice. I think of Dick Lindzen, with his 40 years experience in atmospheric physics, being told by the mathematician Gavin Schmidt that he is ‘old school’. I have encountered the same problem with young bucks in electronics, thinking they know it all. Many of them are missing certain basics obtained by old-timers who learned from experience. Schmidt would be well-advised to drop his arrogance and consult with Lindzen.
We’ve had our models in electronics. There’s the primitive Bohr model of the atom, of course, but for many years there was the model of current in a circuit flowing from positive to negative. It is based on the positive test charge, a totally theoretical model based on how electrons flow in a circuit. It was presumed initially that electrons vacating a position in an atom’s valence shells left behind a positive charge which moved in the opposite direction as another electron moved into the hole position and the hole kept moving the other way. Later a hypothesis formed that the empty, moving hole had a mass. Electrical theory in engineering schools is still taught based on that phantom hole flow.
Shockley, who discovered the transistor, was quick to squelch that notion. He reminded us that such holes were imaginary and to be used only for visualization. It has been well established for a long time that electrons, through charge flow, are the only current carrier in a circuit, yet the notion of positive holes persists. In mainstream electronics, it is accepted that electric current flows from negative to positive, but universities, through their fetish with paradigms, still insist on teaching imaginary concepts through models.
From my own experience with transistors, I have never used the hole model. It is taught that working with PNP transistors is easier if one imagines hole flow instead of electron flow. I simply imagine the electrons flowing against the arrows in transistors and diodes, which is what they really do, whether NPN or PNP, and all is well. There’s no need for those models, whether they be holes or the inverse of resistance, which is conductance, another model. I don’t understand why people insist on using models when they are simply not necessary. If I hear one more person describing electrical current flow, modeling it with water flowing through a pipe, I won’t be responsible for my actions.
I would like to see universities take the advice of Richard Feynman. If you can’t explain the theory practically, don’t teach it. Offer it for thought, but don’t make it a paradigm or include it in exams as a certainty. Models should be tools only, and it should be against the law to use model theory without absolute verification to form government policy. I might add that we should emphasize early on that most of what we learn is modeled.
WRT your RTD link, I missed your point. I understand that temperature is an imaginary concept devised by humans to model the state of molecular energy. So we arbitrarily decided on a range with a point we think is zero. There is nothing arbitrary about an RTD since it represents the agitation of the atoms that make up the device. Its resistance is in direct relationship to that state of agitation. If I was working with an RTD in a circuit, I would not care about temperature, unless it blew up or overheated. I would go entirely on resistance, voltage and current.
Gordon Robertson says
Sometimes I marvel at my abuse of the English language, “It has been well established for a long time….”
Some Yogi Berraisms along the same line:
“This is like deja vu all over again.”
“You can observe a lot just by watching.”
“He must have made that before he died.” — Referring to a Steve McQueen movie.
“If you don’t know where you are going, you will wind up somewhere else.”
SJT says
“If I hear one more person describing electrical current flow, modeling it with water flowing through a pipe, I won’t be responsible for my actions.”
But it is, that is a very useful model to use for understanding electrical current.
Neil Fisher says
SJT wrote:
You avoid the question – where is the evidence that CAOGCM output is other than numerical noise? Where are the papers that define the spatial and temporal sizes required to correctly resolve the equations used in the models? If this research has been done (and I do not believe it has, but am willing to be shown otherwise), then we should have a conclusion along the lines of “spatial grid scales of less than 10km and temporal grid scales of less than 1 month show convergence” (obviously, those numbers are not real numbers – they are but an example of the sort of thing I’m looking for).
IF YOU DO NOT HAVE THIS INFORMATION, THAT IS FINE WITH ME, JUST SAY SO!
Eli Rabett is noticeably quiet in discussing this issue – because he KNOWS both that such information is currently not available and also that without it the models are just toys – useful for research purposes ONLY.
We are all now well aware of the value of unvalidated, unverified models – the global financial crisis is the result of people having faith in such models, yet you want us to radically change our entire economy based on similarly unproven models. I say, give us the same level of certainty for climate models as we insist upon for engineering models where people’s lives and lifestyles are at risk – anything less is completely unacceptable. IF WE DO NOT KNOW, THEN WE MUST ENSURE THAT EVERYONE KNOWS THAT WE ARE GUESSING!
Graeme Bird says
“The models don’t pretend to predict how the atmosphere will behave precisely in the future…”
Well of course they don’t you terminal twit because they are always proved wrong. They don’t forecast, hindcast or sing or dance or nothing. They are useless. They are not scientific. And the people running them are not scientists.
Now if they were being operated by people who had some sort of affinity for the scientific method some usefulness might come out of them. Until then they are a waste of taxpayers money and an ongoing advertisement for spending cuts.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
I see now that the earlier link of Luke’s was to James Annan. The other half of the Scots tandem-riding team. Not a scientist by a long shot. Someone so unsound as to use Bayesian analysis in a controversy already polluted by leftist politics.
SJT says
“We are all now well aware of the value of unvalidated, unverified models – the global financial crisis is the result of people having faith in such models, yet you want us to radically change our entire economy based on similarly unproven models.”
The models have been validated and verified on the past climate records. The components of climate cannot decide they will follow different laws of physics at some time in the future.
Eli Rabett says
Louis, if you thought about anything it would be a first, but how about thinking how the coefficient of volume expansion of the glass in a thermometer could make a difference if you dipped it completely into very hot liquid. Yes, thinking about things is a feature, not a bug.
Neil, you ignore the fact that models do not have to be complete descriptions to be useful.
MAGB says
The story is always the same for environmental models. Have a look at the ones predicting toxic air exposures from soil and groundwater contamination, or the ones predicting mass death and destruction from minuscule levels of air pollution. I’ve looked at them closely and they are so full of uncertainties and assumptions that the outputs are completely meaningless. They have been proven invalid by objective measurements, but that doesn’t stop green ideologues using them, and having them entrenched in law.
Even a cursory look at climate models shows exactly the same thing. Just count up the number of variables affecting climate, that you have to get right to make your model work – laughable.
Ian Read is correct – these models are invalid and should be ditched.
Louis Hissink says
Eli,
Your practical example seems a bit of a non sequitur – try again please, since you seem not to have thought it through.
You have used a mercury thermometer to measure temperature, no ?
And the coefficient of volume expansion of glass when dipped into a very hot liquid?
You seem to be overfull in erudition but empty in its comprehension.
cohenite says
OH, the pain; luke, your taunts, mercy; any distributive function, ie a trend, will be nullified in differencing, which is what McLean et al did; but differencing highlights a correlation such as between ^T and ^SOI, which is what McLean et al found with a 7 month lag; as I have tried to explain to the B&Ts at Deltoid, the nominated period of relevance for AGW [by Raupach and his wild bunch] is from 1950 onwards; for that period a dominant correlation, variable or linear or loop the loop, is also the dominant trend. I’ll put it to you luke and your hit and run mentor, eli; if the temperature increase since 1960 is 0.14C per decade how much of that is variability and how much is trend, and how would you isolate the two?
SJT says
“Even a cursory look at climate models shows exactly the same thing. Just count up the number of variables affecting climate, that you have to get right to make your model work – laughable.”
What variables. What model are you talking about?
Eli Rabett says
Well Louis, think of it this way, you could shove it where the sun don’t shine. Same effect.
Thank you for playing
Louis Hissink says
Eli,
What has been shown is your descent into ad hominems – and hence the bankruptcy of your argument.
Thanks for proving the AGW proselytiser’s intellectual vacuity yet again.
Gordon Robertson says
SJT “But it is, that is a very useful model to use for understanding electrical current”.
It might be in ‘Electronics for Dummies’, but not for the serious student of electronics. For one, that analogy applies very loosely to copper conductors, not to semiconductors or waveguides. In other words, it has a very limited scope.
A water pipe model leads students into visualizing current in a conductor as if electrons are a stream, like water. The thing that needs to be understood is that electric current travels via charges and not just electron motion per se. The charges travel much faster than the electrons, much like a row of marbles on a grooved ruler transfers the energy from a tapped marble at one end to dislodge the marble on the other end. None of the marbles in the middle move, but the end marble shoots off with good energy (oops…a model).
The point I’m trying to make is that many models are introduced supposedly to make learning easier. It has been my experience that many such models get in the way of understanding what is actually going on. I may come across as being opposed to climate models, but I’m not. I just don’t think we should be trying to understand atmospheric processes based solely on immature models, and certainly not through modelers lacking in physics theory or an in-depth understanding of atmospheric processes.
Gordon Robertson says
Eli Rabett “Thank you for playing…”
I get the impression that playing is all you’re doing. You’re the king of the castle, and if people agree with your POV, you’re a benevolent king. If anyone disagrees, you get your nose out of joint.
Ego imposes a severe limitation on intelligence, since both can’t exist in the same mental space at the same time. You need to decide which is more important, ego or intelligence. You can’t have both.
Actually, it’s not a choice, it’s awareness, which is the beginning of intelligence.
Neil Fisher says
SJT wrote:
Validation is a mathematical exercise and does not rely on observations of the system you are attempting to model. Verification involves comparisons to real-world data. As I stated, you cannot verify a model until it is validated. I would suggest that if you are ignorant of the process, then perhaps you should study it a little before you make such inane comments.
Eli wrote:
You are right that models do not need to be complete to be useful. They must be validated to be useful though – an incomplete model can be shown to be calculating exactly what it should be, yet due to incompleteness, fail verification. Can you show me a validated climate model?
For those who are lost in this conversation, as an example of why validation is required I will give the following example of the convergence test: take a high-resolution digital photo of, say, a person’s face. Run a “pixelisation” filter on it at various ratios of input to output pixels. At low values, the face still looks like a face, just blurry. But at some point it stops looking like a face and is just a pile of coloured blocks. This change is quite easily seen, because a small change in the value makes a huge change in the output.

In terms of climate models, we do not yet, as far as I am aware, know the value where that change takes place – in fact, while I do not have the references to hand, it is my recollection that there are published papers showing significant changes in model output for very minor changes in temporal step sizes. This would seem to indicate that those particular parameters are producing little other than numerical noise.

Surely, Eli, if we DO know these values, you can both quote them to me (or give a cite that shows them) and show model output using the same initial conditions but with slightly different spatial and temporal resolutions, and thus demonstrate that a sequence of minor changes produces very similar output – say, 100 km, then 90 km, then 80 km grid size with the same initial conditions should show simply more detail, not radically changed output. Ditto for the time step. Can you show this to me please, or cite a published paper that shows it? This would be an important and required step on the way to a demonstration that climate models are useful, but without it, they are simply too unreliable IMO for decisions involving trillions of dollars and the lives and lifestyles of billions of people. Probably still useful as a tool for understanding and/or learning though.
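For readers who want to see what a grid-convergence check looks like in practice, here is a minimal sketch. The “model” is a toy one-dimensional heat-diffusion solver, not a GCM, and every number in it is assumed; the point is only the procedure of refining the grid and confirming that the answer stops changing.

```python
# Toy grid-convergence test: solve 1-D heat diffusion u_t = a*u_xx on [0,1]
# with u(0)=u(1)=0 at several grid spacings and check that the solution
# stops changing as the grid is refined. This is NOT a climate model --
# it only illustrates the convergence procedure discussed above.
import numpy as np

def solve(nx, a=1.0, t_end=0.05):
    dx = 1.0 / (nx - 1)
    dt = 0.4 * dx**2 / a                  # explicit scheme, stability-limited
    x = np.linspace(0.0, 1.0, nx)
    u = np.sin(np.pi * x)                 # smooth initial condition
    for _ in range(int(round(t_end / dt))):
        u[1:-1] += a * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return x, u

prev_x, prev_u = solve(nx=21)
for nx in (41, 81, 161):
    x, u = solve(nx)
    # Compare the refined solution to the previous, coarser one.
    diff = np.max(np.abs(np.interp(prev_x, x, u) - prev_u))
    print(f"nx={nx:4d}  max change vs previous grid: {diff:.2e}")
    prev_x, prev_u = x, u
```

A real convergence study of a GCM would of course involve far more fields and far more expensive runs, but the logic of the test – refine, re-run, confirm the output settles down – is the same.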
SJT says
“Validation is a mathematical exercise and does not rely on observations of the system you are attempting to model. Verification involves comparisons to real-world data. As I stated, you cannot verify a model until it is validated. I would suggest that if you are ignorant of the process, then perhaps you should study it a little before you make such inane comments.”
You are ignorant, you mean. The models have been demonstrated to be useful and reasonably accurate. What you are talking about is an abstract ideal for models that is nice to know about, but not really relevant to the issue we are dealing with.
spangled drongo says
cohers,
A bit o/t but I wonder if Luke, eli et al have seen this great example of peer reviewed “robustness” at JGR?
http://climatesci.org/
toby says
Eli, you are clearly blinded by your faith… and, judging by so many of your posts, an egotistically driven one as well. Oh, and apparently not a very nice person either!
SJT says
“A bit o/t but I wonder if Luke, eli et al have seen this great example of peer reviewed “robustness” at JGR?”
It is not peer reviewed but it is bleeding edge. I think they have tried to push the technology too far with these medium-range forecasts, but that is immaterial to the issue of long-range climate change models. They do not pretend to claim the accuracy needed to make forecasts of actual weather into the long-range future. Pielke dodges their defence that their forecasts are still reasonably accurate for such new technology by focussing on a specific example. If he were being honest in his criticism he would acknowledge that:
* these forecasts are not related to climate modeling
* he has not looked at their overall success rate, only cherry-picked one that was wrong.
Neil Fisher says
SJT wrote:
Then educate me, sir!
I ask nothing more than for you to show me evidence that this is so.
Any engineer knows that such “ideals” are vital to their continuance in their chosen profession – they are held accountable for their decisions. Surely it is not too much to ask that the same information be available WRT climate? Surely you are not suggesting that climate change is of lesser importance than designing a car or a bridge? Standards of proof exist – I ask only that such standards be adhered to. Show me your evidence, and should it prove compelling, then you will have my support. Speculation and expert opinion are nice, but hardly at the level required for public policy, especially when you assert that the proof exists yet cannot or will not supply it.
spangled drongo says
“There are several other gratuitous claims and errors in Benestad and Schmidt’s paper. However, the above is sufficient for this fast reply. I just wonder why the referees of that paper did not check Benestad and Schmidt’s numerous misleading statements and errors. It would be sad if the reason is because somebody is mistaking a scientific theory such as the “anthropogenic global warming theory” for an ideology that should be defended at all costs.”
– Nicola Scafetta, Physics Department, Duke University
SJT,
That paper was refereed!
Benestad and Schmidt [as in God Gavin from RC] made many errors, including a simple one: misapplying an accepted algorithm.
This is a huge and obvious blunder by someone who considers himself to be above scepticism.
But don’t hold your breath waiting for apologies.
spangled drongo says
Not to mention the huge and obvious blunder by the referees whom you, Luke et al tout as being so essential to published cred.
cohenite says
Spangles; the Schmidt effort is par for the course; lucia has a recurring thread dealing with Schmidt’s statistical contortions; perhaps the most egregious peer-reviewed paper recently is this one, which simply says that because estimates of climate sensitivity have a high degree of uncertainty, climate sensitivity must be higher rather than lower;
http://wattsupwiththat.com/2009/07/19/insufficient-forcing/#comment-161653
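For context, the statistical argument being disputed here is usually framed through the feedback relation S = S0/(1 - f): a roughly symmetric uncertainty in the feedback factor f produces a sensitivity distribution with a long upper tail. A minimal sketch of that arithmetic follows, with purely illustrative numbers that are not taken from the linked paper.

```python
# Illustration of why symmetric uncertainty in the feedback factor f gives an
# asymmetric (high-tailed) spread in sensitivity S = S0 / (1 - f).
# All numbers are assumed for illustration, not from the paper linked above.
import numpy as np

rng = np.random.default_rng(1)
S0 = 1.2                                   # no-feedback sensitivity, C (assumed)
f = rng.normal(loc=0.65, scale=0.13, size=100_000)
f = f[(f > -1.0) & (f < 0.95)]             # drop unphysical / runaway samples

S = S0 / (1.0 - f)
lo, med, hi = np.percentile(S, [5, 50, 95])
print(f"5th/50th/95th percentile sensitivity: {lo:.1f} / {med:.1f} / {hi:.1f} C")
# The median sits near S0/(1-0.65) ~ 3.4 C, but the 95th percentile is pulled
# much further above the median than the 5th percentile falls below it.
```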
Luke says
OK Spanglers – you explain what they are debating? (SJT wait for crickets!)
SJT says
“Surely you are not suggesting that climate change is of lesser importance than designing a car or a bridge? Standards of proof exist – I ask only that such standards be adhered to.”
No, but it is not like designing a car or bridge. It is like modeling a climate.
Neil Fisher says
SJT wrote:
I did not ask if it was “like” it, I asked if it was of lesser importance.
Clearly, the evidence that I require – that the climate models have been validated – is not available, which is as I have said all along. In short, there is no evidence that these models are producing anything other than numerical noise. If such evidence exists, post it, or a link to it, or cite a paper that shows it. Put up or shut up – and you will notice that Eli, who in other forums incites people to violence and relies on ad hom attacks, has dropped this issue like a hot potato. Why? Because he knows I am speaking the truth and that no such evidence exists. In fact, the only published research on the convergence issue for the 3D N-S equations (Ye et al) finds that no such convergence exists. The evidence for numerical stability of the models is also negative – most require an “island” at the north pole or they simply cannot perform their calculations – this is unphysical, and the “fix” for such things as negative mass is to add to the code and “constrain” values to what’s physically possible. Further, you will never see such model output in absolute temperatures, only as “anomalies”. Why? Because they are off by up to 10 °C or more in absolute terms!
Climate models are clearly not fit for the purpose of public policy decisions where people’s lives, jobs and lifestyles are at stake. They may have some academic uses, but that’s it. If you wish to believe otherwise, then that is your concern – I do not and will not accept that these things are anything other than cute toys unless and until someone can show me that they have been formally validated. That any government would accept these “projections” as being in any way related to reality is disturbing and disgusting.
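On the anomalies point above: expressing output as anomalies removes any constant offset between a model and observations, which is why a model can carry a large absolute bias yet still produce an anomaly series that tracks the data. A toy illustration, with invented numbers rather than real model output:

```python
# Toy illustration of the anomaly vs. absolute-value distinction discussed
# above. The "model" series here is just the "observed" series plus a constant
# 5 C bias and some noise -- invented numbers, not real model output.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1980, 2010)
observed = 14.0 + 0.02 * (years - 1980) + 0.1 * rng.normal(size=years.size)
model = observed + 5.0 + 0.1 * rng.normal(size=years.size)   # warm bias

# Anomalies: subtract each series' own 1980-1999 baseline mean.
base = slice(0, 20)
obs_anom = observed - observed[base].mean()
mod_anom = model - model[base].mean()

print(f"absolute offset:           {np.mean(model - observed):5.2f} C")
print(f"offset after anomalising:  {np.mean(mod_anom - obs_anom):5.2f} C")
```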
Gordon Robertson says
spangled drongo…. re Benestad and Schmidt paper and Scafetta reply.
This closing commentary from Scafetta is telling:
“I just wonder why the referees of that paper did not check Benestad and Schmidt’s numerous misleading statements and errors. It would be sad if the reason is because somebody is mistaking a scientific theory such as the “anthropogenic global warming theory” for an ideology that should be defended at all costs”.
Roy Spencer has made similar accusations about the modern peer-review process, as has Lindzen. Spencer went so far as to claim the reviewer did not seem to understand what he was saying. This is also not the first time Schmidt has been called out over his grasp of basic principles. Jeffrey Glassman nailed him on his understanding of feedback and the solubility of CO2 in water.
http://www.rocketscientistsjournal.com/2006/11/gavin_schmidt_on_the_acquittal.html
Glassman concludes:
“Nowhere does Schmidt suggest that the models on which he relies to frighten the public might have been validated. He relies instead on an incompetent tutorial to support the AGW conjecture.
….The burden remains on the GCM operators and advocates to revise their models. They need to abandon CO2 as a forcing, and instead make the atmospheric CO2 concentration respond to global temperature as dictated by the solubility of CO2 in water. This should be a fatal blow to anthropogenic global warming”.
SJT says
“Clearly, the evidence that I require – that the climate models have been validated – is not available, which is as I have said all along. In short, there is no evidence that these models are producing anything other than numerical noise. If such evidence exists, post it, or a link to it, or cite a paper that shows it. Put up or shut up – and you will notice that Eli, who in other forums incites people to violence and relies on ad hom attacks, has dropped this issue like a hot potato. Why? Because he knows I am speaking the truth and that no such evidence exists. In fact, the only published research on the convergence issue for the 3D N-S equations (Ye et al) finds that no such convergence exists. The evidence for numerical stability of the models is also negative – most require an “island” at the north pole or they simply cannot perform their calculations – this is unphysical, and the “fix” for such things as negative mass is to add to the code and “constrain” values to what’s physically possible. Further, you will never see such model output in absolute temperatures, only as “anomalies”. Why? Because they are off by up to 10 °C or more in absolute terms!”
Just a bunch of rumours.
Neil Fisher says
SJT wrote:
Then you should have no trouble citing evidence that I am wrong. And yet you do not.
C’mon – show me a validation study on any climate model of your choice. Show me ANY peer-reviewed paper that demonstrates convergence between the discrete and continuous solutions to the 3D N-S equations. Show me ANY climate model output in absolute values that matches real-world measurements. It should be simplicity itself to show these things if, as you insist, I am “wrong” or spouting “rumours”.
The truth is that you cannot show these things to be wrong. The truth is that the models are not up to the standard required for public policy decisions, and what’s more, those who run such models are well aware of this fact, yet propose we change the whole basis of our economy based on their output! The reality is that any engineer who designed something based on an unvalidated model would lose accreditation even if the resultant artifact caused no problems, no deaths and no injuries. This would be so even if later studies showed the artifact to meet or exceed the design specifications in every respect. And yet, oddly enough, we seemingly need to meet no such standard for climate models, and we seem to be going down the road of having faith in these models and making huge changes based on their output – changes that affect more people in more ways than most engineers would dream their product could. And you stand around cheering them on! It boggles the mind.
SJT says
“Then you should have no trouble citing evidence that I am wrong. And yet you do not. ”
You are the one making the claim, you provide the evidence. All I have seen is a random collection of denialist rumours that are floating around.
Neil Fisher says
SJT wrote:
Alas, I am not making the claim – modellers are. *They* must provide the evidence, and I have asked you to cite it. You have not and cannot because it doesn’t exist. In any case, I *have* provided a cite re: convergence (Ye et al – 2005 IIRC). Do you have a counter-cite? Clearly not, or you would have provided it. AFAIK, it’s the ONLY published work in this area in more than 40 years.
All I have seen from climate modellers is numerical noise. If they wish to be taken seriously, they need to show the models meet standards. They have not and do not. What you believe is no concern of mine, except where it affects me. Provide evidence I need to change, or go away and leave me be.
SJT says
“All I have seen from climate modellers is numerical noise. If they wish to be taken seriously, they need to show the models meet standards. They have not and do not. What you believe is no concern of mine, except where it affects me. Provide evidence I need to change, or go away and leave me be.”
Model E is available for download. You can run a cut-down version on your PC.
Gordon Robertson says
Neil Fisher “All I have seen from climate modellers is numerical noise. If they wish to be taken seriously,…”
Unfortunately they are taken seriously, and by influential people who either don’t know better or who have agendas of their own. Sir John Houghton, co-chair of the IPCC’s scientific working group, has obviously had a major impact on the UK government. He’s an Oxford Old Boy, and from what I can gather his only background in climate science is as a modeler. The IPCC has been steered by modelers since its inception.
We have a serious problem in climate science. Traditional observational science has been pushed aside by virtual science, and for no good reason. The only apparent reason is an agenda of some kind that is more political than anything else. Greens and extreme environmentalists, who can’t get voted in on their platform, have found a way to manipulate science into doing their bidding.
I find it deeply disturbing that arrogant people would be so dishonest as to force their agendas on the public under the guise of science.