The risk of a climate crisis, like the risks associated with sub-prime mortgage securitisation, is calculated using complex computer models, and both are too complex for the average punter to understand.
As Graham Young wrote last week in a blog post entitled ‘Sub-prime and climate change’, these models were created by clever people with PhDs in maths and physics, but they are only as good as the information fed into them. GIGO (garbage in, garbage out) is how he described both the climate models and the models that helped create the current credit crisis.
According to Richard Mackey, a sceptic from Canberra, also writing on the issues of climate change and financial systems, a key limitation of both financial and climate models is the underlying false assumption that economic and climate systems are ergodic systems – that is, that they normalise to an equilibrium state.
Richard Mackey wrote:
“One of the lethal critiques of the United Nations’ Intergovernmental Panel on Climate Change (IPCC) models is that the climate system can never be at anything like an equilibrium state.
All the models assume that the climate system normalises to an equilibrium state, the state modelled. As the natural processes of the climate system are non-linear and non-ergodic, small variations may result in large changes. There are negative and positive feedback loops. There is randomness in the system. As a result, the simple deterministic computer simulations on which all climate change projections are based will have little to do with the real world.
The econometric models of the Treasury are also equilibrium models.
They too assume that the economic system normalises to an equilibrium state, the state modelled by those models.
As Nobel Laureate Douglass North has demonstrated, the real world is vastly more complex than the simulated world of the models and is never in an equilibrium state; more precisely, never anywhere near such a state.
He argued that we live in a non-ergodic world, and explained that an ergodic phenomenon has an underlying structure so stable that a theory can be developed and applied time after time, consistently.
In contrast, the world with which we are concerned is continually changing: it is continually novel. Inconsistency over time is a feature of a non-ergodic world. The dynamics of change of the processes important to us are non-ergodic. The processes do not repeat themselves precisely. Douglass North argued that although some aspects of the world may be ergodic, most of the significant phenomena are non-ergodic.
Douglass North stressed that our capacity to deal with uncertainty effectively is essential to our succeeding in a non-ergodic world. It is crucial, therefore, that the methodologies we use to understand the exceedingly complex phenomena measured in our time series, correctly inform us of the future uncertainty of the likely pattern of development indicated by the time series.” [end of quote]
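Mackey’s ergodic/non-ergodic distinction can be illustrated with a toy simulation (a sketch only, not a climate or economic model): a mean-reverting process has a stable long-run average that every run converges to, while a random walk has no such “equilibrium” – each run’s time average lands somewhere different.

```python
import random

def ar1(phi, n, seed):
    """Simulate x[t] = phi*x[t-1] + noise. phi < 1 gives a mean-reverting,
    ergodic-like process; phi = 1 gives a random walk (non-ergodic)."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
        path.append(x)
    return path

def time_average(path):
    return sum(path) / len(path)

# Time averages from five independent runs of each process.
ergodic_means = [time_average(ar1(0.5, 20000, seed)) for seed in range(5)]
walk_means = [time_average(ar1(1.0, 20000, seed)) for seed in range(5)]

# For phi = 0.5 the run averages cluster tightly around zero; for the
# random walk they scatter widely, so no single equilibrium describes them.
print("phi=0.5 run averages:", [round(m, 2) for m in ergodic_means])
print("phi=1.0 run averages:", [round(m, 2) for m in walk_means])
```

The point of the sketch is only that a model calibrated to an assumed equilibrium can look fine on an ergodic process and be meaningless on a non-ergodic one.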
***********
Additional Reading
In 1993 Douglass North, along with fellow economic historian Robert W. Fogel, received the Nobel Prize for Economics for pioneering work which resulted in the establishment of Institutional Economics, now a central school of modern economics. There is a substantial economic literature that identifies the fatal flaws in the neoclassical deterministic equilibrium models that the Commonwealth Treasury uses and that Ross Garnaut will rely on to tell the Australian Government of the (almost certain) economic consequences of the (almost certain) predictions of the equilibrium climate models.
North, D. C., 1999. Dealing with a Non Ergodic World: Institutional Economics, Property Rights, and the Global Environment. Duke Environmental Law and Policy Forum, Vol. 10, No. 1, pp. 1-12.
Professor North’s opening address at the Fourth Annual Cummings Colloquium on Environmental Law, at Duke University, April 30, 1999, is available online here: Global Markets for Global Commons: Will Property Rights Protect the Planet?
Classical time series analysis that features in the reports of the IPCC necessarily underestimates future uncertainty. Of great relevance here is that two scientists at the Department of Civil and Environmental Engineering, University of Melbourne, Dr Murray Peel and Professor Tom McMahon, have recently shown that randomness in the climate system has been on the rise since the 1950s. The authors used the time series analysis technique Empirical Mode Decomposition (EMD) to quantify the proportion of variation in the annual temperature and rainfall time series that resulted from fluctuations at different time scales. They applied EMD to annual data for 1,524 temperature and 2,814 rainfall stations from the Global Historical Climatology Network.
Peel, M. and McMahon, T. A., 2006. Recent frequency component changes in interannual climate variability, Geophysical Research Letters, Vol. 33, L16810, doi:10.1029/2006GL025670.
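The idea of apportioning variance by time scale can be sketched without full EMD, which requires iterative envelope “sifting”. As a crude stand-in, the toy below splits a synthetic annual series (hypothetical data, not the GHCN record) into a slow component via a centred moving average and a fast interannual residual, then reports each part’s share of the total variance:

```python
import math
import random

def moving_average(series, window):
    """Centred moving average; edges use only the available points."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def variance(series):
    m = sum(series) / len(series)
    return sum((x - m) ** 2 for x in series) / len(series)

def variance_shares(series, window=11):
    """Split a series into slow (decadal-scale) and fast (interannual) parts
    and report each part's share of variance. The shares need not sum to
    exactly 1 because the two parts are not perfectly uncorrelated."""
    slow = moving_average(series, window)
    fast = [x - s for x, s in zip(series, slow)]
    total = variance(series)
    return variance(fast) / total, variance(slow) / total

# Toy 150-year series: linear trend + 60-year cycle + year-to-year noise.
rng = random.Random(1)
series = [0.005 * t + 0.3 * math.sin(2 * math.pi * t / 60) + rng.gauss(0, 0.2)
          for t in range(150)]
fast_share, slow_share = variance_shares(series)
print(f"interannual share: {fast_share:.2f}, decadal+ share: {slow_share:.2f}")
```

Peel and McMahon’s finding amounts to the interannual share of such a decomposition growing over recent decades; EMD itself extracts the components adaptively rather than with a fixed window.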
Richard Mackey’s submission to the Garnaut Climate Change Review is entitled ‘Much more to the Earth’s climate dynamics than human activity’ and can be read here.
david says
>One of the lethal critiques of the United Nation’s Intergovernmental Panel on Climate Change (IPCC) models is that the climate system can never be at anything like an equilibrium state.
Richard has clearly never worked with a climate model. I’m also guessing he has no formal climate qualifications, and has never published a climate science paper in a peer reviewed journal.
Climate models do not assume equilibrium. That is self-evident from all those wiggly lines in the IPCC reports which show warming as you increase CO2.
jennifer says
Hi David,
Is it fair to assume with the IPCC models that carbon dioxide is a key driver of change away from an assumed ideal equilibrium state?
In which case Richard’s assessment is still valid.
oil shrill says
Excellent retort, Jennifer. There appears to be an implicit assumption that there is an optimum level of CO2.
The fact that the paleoclimatology record indicates that it varies widely would indicate otherwise.
http://www.nocarbontaxes.org
oil shrill says
more fully, that there is an optimum level of CO2 resulting in an optimum level of climate. The historical record indicates both vary widely and show little correlation.
http://www.nocarbontaxes.org
SJT says
“Is it fair to assume with the IPCC models that carbon dioxide is a key driver of change away from an assumed ideal equilibrium state?”
Absolutely not. It is not fair, it is a statement of ignorance.
SJT says
“The risk of a climate crisis, like the risks associated with sub-prime mortgage securitisation, are calculated using complex computer models and both are too complex for the average punter to understand.
As Graham Young wrote last week in a blog post entitled ‘Sub-prime and climate change’, these models were created by clever people with PhDs in maths and physics, ”
You have absolutely no idea what you are talking about. Maybe your first step should be to find out some evidence for your claim, then present that. All we have is opinion to date.
Gordon Robertson says
david said…”Richard has clearly never worked with a climate model”.
Maybe he hasn’t, but Dr. Joanne Simpson has. She was the first woman to earn a PhD in meteorology, and she worked at NASA until she retired. She said, “However, the main basis of the claim that man’s release of greenhouse gases is the cause of the warming is based almost entirely upon climate models. We all know the frailty of models concerning the air-surface system. We only need to watch the weather forecasts”.
Her original words are here:
http://climatesci.org/2008/02/27/trmm-tropical-rainfall-measuring-mission-data-set-potential-in-climate-controversy-by-joanne-simpson-private-citizen/
This is coming from a scientist who pioneered atmospheric and cloud modeling, yet she is skeptical about the ability of models to predict climate. She flew into hurricanes to get first-hand information and made major discoveries about them. This woman is no lightweight.
When you couple what she says with what Christy and Lindzen are saying about models, you can’t help but at least wonder, can you?
SJT says
“However, the main basis of the claim that man’s release of greenhouse gases is the cause of the warming is based almost entirely upon climate models. We all know the frailty of models concerning the air-surface system. We only need to watch the weather forecasts”.”
Some scientists who are not current with the state of the art seem to have this opinion. They are not modelling weather, they are modelling climate, they do not pretend they are modelling weather. The IPCC case uses several streams of research to come to its conclusion, as relying only on models would be incorrect. Read the report.
ad says
Small gripe. There is no Nobel prize for Economics. In 1969 they added a little sop award named ‘in honour’ of Nobel. See: http://en.wikipedia.org
[post edited, url shortened as the longer url was creating problems, sorry, jennifer]
Joel says
SJT – “Some scientists who are not current with the state of the art seem to have this opinion.”
This is one possible explanation. The other is that the models haven’t changed much in the past 20 years (as evidenced by the uncertainty over a doubling of CO2 being almost identical during that timespan). Greater computing power equals more parameterisations, as there is very little physics involved with the models, and greater grid resolution. Although grid sizes in the 200+ kilometre range are still pretty large.
Louis Hissink says
The climate models have to be designed to be in equilibrium because otherwise one could not change one parameter to see what it does. Models that don’t assume equilibrium are intrinsically unstable.
This means that David and SJT and the rest of the AGW people don’t understand modelling at all.
(A computer model not in equilibrium plainly goes in one direction).
Louis Hissink says
SJT
Climate is average weather by definition, so if they are not modelling weather, then they certainly cannot be modelling climate.
Unless you mean modelling a wether and then you should be in the department of agriculture?
jennifer says
In the context of the original note from Richard, it would be good if SJT, David, or someone else explained the key assumptions underpinning the models.
And I disagree with Louis; all sorts of states can be modelled, including, for example, ‘punctuated equilibria’.
SJT says
They are called GCMs, General Circulation Models, they are not called CCMs, Carbon Circulation Models. If the only intent was to model the action of CO2 on the atmosphere, they would be a waste of time, since there are so many components of the climate.
But there’s no point asking me for the details, I’m just a layman. If you want to know the specific details of how the models work, the assumptions, the calculations, the compromises, the development, what they claim to be able to achieve, you are going to have to read the papers that are publicly available.
Here is one paper I found.
http://www.mssanz.org.au/modsim07/papers/10_s61/CSIROmk3_s61_Collier_.pdf
jennifer says
SJT, A layman should be able to understand some of the key assumptions on which the GCMs are built. If GCMs don’t assume equilibrium at a constant level of atmospheric carbon dioxide, then what do they assume?
I understand that the early models (going back 20 years or so) while assuming an equilibrium state, in fact generated warming even at the same levels of carbon dioxide. Could David confirm this?
david says
The assumptions that underpin climate models are that energy and momentum are conserved.
The IPCC and climate scientists have published 10,000s of pages on this.
SJT says
“SJT, A layman should be able to understand some of the key assumptions on which the GCMs are built. If GCMs don’t assume equilibrium at a constant level of atmorpheric carbon dioxide, then what do they assume?”
What are you trying to understand?
Here is the wiki link.
http://en.wikipedia.org/wiki/Global_climate_model
I am just wondering at what level you want the detail. Luke has spent hours collating detailed information before, and it’s been completely ignored. General information is dismissed as not being science.
Gordon Robertson says
SJT said…”Some scientists who are not current with the state of the art seem to have this opinion. They are not modelling weather, they are modelling climate, they do not pretend they are modelling weather”.
Are you sure you’re not a bot, programmed to respond mechanically to any argument that counters AGW? Dr. Simpson only retired recently, in the last couple of years.
You claim they are not modeling weather but climate. What do you think climate is if it’s not weather? Weather is a short range, localized condition while climate is the long range version of the same. They are interconnected.
Don’t you think meteorologists keep track of their daily forecasts over the long term? Besides, meteorologists are not confined to predicting weather; there are research meteorologists like Dr. Simpson.
Climate is the long range weather ‘trend’. Guess what the IPCC said about that in TAR? They said it was not possible to predict future climate states. That’s still true. The weather can change daily and there’s no way to predict the long-term trend, which is climate.
It’s perfectly valid to use the argument that models cannot be relied on to predict short term weather, so how the heck can they predict long-term trends? The IPCC said they can’t. It’s much easier to predict a short-term weather environment than it is a global climate.
What more do you need? The IPCC is basing its ‘guesses’ on the ‘probability’ of different scenarios and many different computer models. That’s how they get such wide ranges in their guesses.
Hey…I can guess too. I’m guessing the AGW theory is wrong and that we have nothing to worry about.
SJT says
“It’s perfectly valid to use the argument that models cannot be relied on to predict short term weather, so how the heck can they predict long-term trends?”
Once again, the argument is “If I can’t understand it, you can’t prove it”. You just don’t understand the difference, do you? No one is going to be able to explain it to you, because you are unaware of your own intellectual limits.
I’ll try a simple analogy that I think is valid.
We know roughly the average height of an adult in Australia. If I were to sit in a room, and a door opened and someone walked in, I would not be able to predict the height of the next person to walk through that doorway. I would, however, be able to predict the average height of the people who walked in that door, if enough came through it.
The average height depends on nutrition and other factors. If the standard of nutrition of the people entering that room declined over a long period of time, the people who came through that door would get shorter. I would be able to predict that, but still not be able to tell you the height of the next person who came through that door.
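[The analogy checks out numerically. A minimal sketch, with an assumed population mean and spread (the specific figures are illustrative, not Australian statistics): guessing any one height carries an error comparable to the population spread, while the average of many arrivals is predictable almost exactly.]

```python
import random

rng = random.Random(42)
MEAN_CM, SD_CM = 172.0, 9.0   # assumed population figures, for illustration

# Heights of 10,000 people walking through the door.
people = [rng.gauss(MEAN_CM, SD_CM) for _ in range(10000)]

# Predicting one individual: the best guess is the mean, and the typical
# error is on the order of the population spread.
individual_errors = [abs(h - MEAN_CM) for h in people]
typical_individual_error = sum(individual_errors) / len(individual_errors)

# Predicting the average of everyone who came through: almost exact.
sample_mean = sum(people) / len(people)
mean_error = abs(sample_mean - MEAN_CM)

print(f"typical single-person error: {typical_individual_error:.1f} cm")
print(f"error of predicted average:  {mean_error:.2f} cm")
```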
ianl says
…sigh …
The GCM’s are neither initialised to current climate conditions nor regarded as predictors or forecasters – Trenberth (Lead Writer, IPCC)
These models are “what-if” scenarios obviously based on some critical assumptions, the most critical of which is perhaps the quantum of negative feedback from increased CO2 levels in the atmosphere.
One of the sharpest criticisms of these models is from Roy Spencer, who points out (Spencer & Braswell 2008) that the feedback quantum correlates very much better with ephemeral cloud cover than atmospheric CO2 levels.
Not one AGW proponent has even admitted that this critique exists, let alone addressed it.
So they drone on endlessly debating physics theory and blithely ignore empirical data – such as the satellite temperature differentials over the last decade.
Science is not a dialectic. Hypotheses are tested against empirical data and modified or abandoned if the data requires it.
SJT says
…sigh…
Hey, I can say that too.
ianl says
Try answering the questions too !!
TheWord says
I’ve said it before and I’ll say it again: computers cannot predict the future. The colossal amount of money wasted on these things, upon the premise that they can do so, is obscene and scandalous.
SJT says
“The GCM’s are neither initialised to current climate conditions nor regarded as predictors or forecasters – Trenberth (Lead Writer, IPCC)”
They aren’t initialised to current climate conditions because there is no point. They just come up with a broad, global forecast, with some implications for local conditions. It’s not a lot, but it’s a lot more than nothing.
If you want to take no guesses at the result of knowing that there is a physical basis for CO2 being a significant climate forcing, that’s up to you. I’d rather we used our “ultimate resource” to its capacity.
Louis Hissink says
Jennifer,
A punctuated equilibrium is in equilibrium by definition.
TheWord says
Trenberth speaks with forked tongue. GCM’s are promoted by AGWers and the cheerleading media as both being able to reproduce current conditions and predict the future.
What I’d prefer is that the entire AGWing fraternity signed a request to the worldwide media saying that GCM’s don’t predict current or future conditions and would the media be so kind as to stop suggesting that they do so.
The AGWers won’t do that, because they’re all dishonest gravy-trainers.
J.Hansford. says
SJT…. “As Graham Young wrote last week in a blog post entitled ‘Sub-prime and climate change’, these models were created by clever people with PhDs in maths and physics, ” ”
—————————————————-
No SJT…. The market is supply and demand, with people allowed to buy a bargain…. It’s no more complicated than that.
It gets complicated when Socialists start trying to regulate “greed”…. Nothing wrong with greed. It’s a subjective term. One person’s greed is another’s bare minimum.
Dishonesty is what destroys markets.
Socialist ideology is root and branch, basic dishonesty….. It is dishonest because it promises equality…… There is no such thing…. Just as there is no equilibrium in Climate or in any natural system.
Patrick Caldon says
“It’s perfectly valid to use the arguement that models cannot be relied on to predict short term weather, so how the heck can they predict long-term trends?”
There are very very many physical phenomena about which it is impossible to make short term predictions about precise behaviors but comparatively easy to make good long term predictions about broad scale behavior.
“In the context of the original note from Richard, it would be good if SJT, David, or someone else explained the key assumptions underpinning the models.”
Why not look at NCAR CCSM? It has a lengthy “scientific description” attached, which outlines the assumptions one by one. I don’t recall any assumptions of equilibrium, but I have only skimmed the document.
Alternatively I referred you to a textbook a month ago which does exactly that in a more readable fashion.
TheWord says
Patrick said:-“There are very very many physical phenomena about which it is impossible to make short term predictions about precise behaviors but comparatively easy to make good long term predictions about broad scale behavior.”
Oh? Such as?
And, pray tell, do select one or two examples which might just approach the levels of chaos and complexity inherent in the atmosphere.
cohenite says
It never fails to amaze me that every time the GCM defects are discussed we have to wade through an ensemble of half-baked epistemological justifications about what the GCMs are and are not doing. What they are doing is predicting; every spokesperson for AGW, from Garnaut to Wong, will refer to the catastrophes predicted by the GCMs to justify the measures proposed to combat ‘carbon pollution’. It is absolutely disingenuous and hypocritical for laymen like Will Robinson, or specialists like the cartel of concerned scientists who recently had their latest grotesque predictions of doom published in the msm, to defend GCMs and AGW on the basis that they don’t predict. They do predict; they predict badly, as Koutsoyiannis has shown; and they predict badly because the mechanisms used to explain AGW, CO2 increase and enhanced greenhouse, are scientifically wrong, as Spencer and Braswell and countless others have shown.
Patrick Caldon says
Some examples then:
1) Tossing a fair coin
Short term prediction about precise behavior: will the coin come up heads or tails in one particular toss?
Long term prediction about broad scale behavior: approximately 50% of the flips will come up heads.
2) Boiling water on a stove in a stovepot
Short term prediction about precise behavior: where precisely and at what instance in time will bubbles form in the pot
Long term prediction about broad scale behavior: How much gas is given off by the stovepot in some time period
3) A sequence of motor vehicles crashing into a wall at a speed of over 200km/h
Short term prediction about precise behavior: How precisely will the metal of the car be deformed in each particular crash
Long term prediction about broad scale behavior: any occupant of the car would die in the vast majority of crashes
4) Earth atmosphere and ocean
Short term prediction about precise behavior: What will be the weather in Albuquerque on 30 December 2008
Long term prediction about broad scale behavior: For the next 10 years the average temperature in Albuquerque for the three months DJF will be colder than the average temperature for the three months JJA. (this I think approaches your desired level of chaos and complexity)
I could go on in this vein, but perhaps you get the point. Note all of these can be modeled and the modeling will have the phenomena (short term precision hard, long term broad prediction possible) I just described.
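[Patrick’s first example is the easiest to check directly. A toy run with a simulated fair coin: any fixed guess for a single flip is right only about half the time, while the long-run heads fraction settles near 0.5 as the sequence lengthens.]

```python
import random

rng = random.Random(7)
flips = [rng.random() < 0.5 for _ in range(100000)]   # True = heads

def heads_fraction(n):
    """Fraction of heads in the first n flips."""
    return sum(flips[:n]) / n

# Precise short-term prediction is impossible: guessing "heads" every time
# is right only about half the time. Broad long-term prediction is easy:
# the heads fraction converges toward 0.5 as the run lengthens.
for n in (10, 100, 10000, 100000):
    print(f"after {n:>6} flips: heads fraction = {heads_fraction(n):.3f}")
```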
Patrick Caldon says
“instance” above should read “instant”
bazza says
The Word seeks ‘examples which might just approach the levels of chaos and complexity inherent in the atmosphere’. I have but one. Who could predict precisely the next short-term behaviour of the brain of The Word? I only have a sample of 2. But one can predict the broad-scale behaviour – the next comment will be evidence-free and unconstrained by any attempt to get across a little evidence or usefully contribute.
Will Nitschke says
SJT: “However, the main basis of the claim that man’s release of greenhouse gases is the cause of the warming is based almost entirely upon climate models. We all know the frailty of models concerning the air-surface system. We only need to watch the weather forecasts”.”
“Some scientists who are not current with the state of the art seem to have this opinion. They are not modelling weather, they are modelling climate, they do not pretend they are modelling weather. The IPCC case uses several streams of research to come to it’s conclusion, as relying only on models would be incorrect. Read the report.”
What are those other strands, please? I have read the report and have difficulty finding them. Paleoclimatology doesn’t lend support to the thesis, except for some biased and discredited studies… Ice core data doesn’t either (and AGW scientists have backed away from this claim now as well).
I would be very interested in looking at any empirical support for the theory if you know of any. Not really sure why it should be so hard to locate. Why not just list a few bullet points outlining what those are?
TheWord says
Patrick:-
1) Coin toss – simple, random but not chaotic, with a (very) limited number of variables and outcomes. Even then, you can only describe the probable shape of the distribution, not accurately deduce what number of heads vs. tails you will have after 10 or 10 trillion tosses. Failed example.
2) Boiling water – do tell, how much gas would be given off in a given timeframe? Are you boiling normal tap water, rain water, sea water, etc? Are you boiling it at sea level, on a mountain, on a dry or humid day, etc? Failed example.
3) Crash tests – c’mon, you didn’t even really try with this one. Are you looking at metal? No. of occupants killed? What are you really trying to say here? Failed example.
4) Atmosphere and ocean – you are kidding, aren’t you? You’re seriously trying to argue your case, based on the differences between winter and summer temperatures? And, no, it doesn’t even start to approach my chaos and complexity issues. It’s like saying that, because it’s warmer during the day than it is at night, we can accurately say something about mean temperatures in a decade from now. It’s a nonsensical argument. Failed example.
Face it, Patrick, computer models cannot predict the future.
(However, if you really do believe that they can, I’ve got a really great stock market prediction program. I’ve back-tested it and it makes a fortune, guaranteed! It’s yours for the bargain introductory price of $20,000. Interested?)
SJT says
“The Word seeks ‘examples which might just approach the levels of chaos and complexity inherent in the atmosphere’. I have but one. Who could predict precisely the next short term behaviour of the brain of The Word.? I only have a sample of 2. But one can predict the broad scale behaviour – the next comment will be evidence-free and unconstrained by any attempts to get across a litle evidence or usefully contribute.”
I can confidently predict he will sleep, eat, drink and excrete, as long as he is alive, barring any major medical issues. Not exactly when, but I know it will happen. This despite not having any idea what his brain will do in the next second.
Louis Hissink says
SJT,
Your prediction is based on a long period of observation of an object’s behaviour, hence your ability to predict that object’s behaviour forward in time.
In complete contrast is the earth, for which observation of its past gives no clue at all as to what its future climate state might be. The only climate data available to us is about 100 years, or three periods of 30 years.
In a climate sense these represent 3 discrete data points.
I think someone should toss you some water wings or a lifebelt because you are totally out of your depth on this one.
SJT says
I’ll raise you a homopolar motor, Louis.
Patrick_B says
Hi Jen,
Could you post some references to your major source’s work on climate change or modelling in general? Oh hang on, I note that he is “a sceptic from Canberra”. In that case he is to be believed on faith alone. If he had been a sceptic from Albury I’m sure you would have provided the necessary examples of peer-reviewed, published work to help support your opinion. Really this is a great laugh …
Patrick_B says
“As Graham Young wrote last week in a blog post entitled ‘Sub-prime and climate change’”
G. Young was also given a free kick on that model of balanced debate “Counterpoint” on RN last night. It’s a bit embarrassing to listen to two people with no qualifications or experience criticise the work of those who have both those commodities in abundance. Embarrassing to rational, critical thinkers. So most of the crew ’round here will probably find it insightful and nod sagely in agreement.
Louis Hissink says
Useful idiots 0
Patrick Caldon says
“1) Coin toss – simple, random but not chaotic, limited (very) number of variables and outcomes. Even then, you can only describe the probable shape of the distribution, not accurately deduct what number of heads vs. tails you will have after, 10 or 10 trillion tosses. Failed example.”
TheWord, you’ve essentially repeated my point and then put the words “failed example” on the end. You should read my earlier posts more carefully.
To repeat: the point is that predictions of the broad properties of sequences of fair coin flips can be made very successfully (i.e. the prediction will be accurate most of the time). Predictions of the precise properties of individual flips cannot be made with better than a 50% accuracy. Nothing you have written contradicts this.
The other examples are essentially similar, stochastic or chaotic processes with predictable long term behavior. You apparently don’t believe these things exist.
With (3) for instance, I can give you a lot more detail. Suppose we take 100 identical vehicles from a production line, and crash them at very high speed into a wall. We identify one particular point on the front of the vehicle body, and ask for a prediction to the nearest millimetre to where that point of metal will be deformed. Impossible. We ask if (on average) the occupant of the vehicle will survive. This latter question can be answered, even if you cannot precisely describe every crash.
SJT says
http://www.theage.com.au/national/september-driest-in-more-than-a-century-20080930-4r0m.html
Driest September on record in Melbourne. Flooding rains, right.
SJT says
“(However, if you really do believe that they can, I’ve got a really great stock market prediction program. I’ve back-tested it and it makes a fortune, guaranteed! It’s yours for the bargain introductory price of $20,000. Interested?)”
You still don’t understand the difference, do you? What are you modeling? What are you expecting your model to tell you?
The climate models don’t just run on the past then model the future. There is a long enough temperature record that you can test against, say, half the temperature record, get that correct, then let it run on the rest of the temperature record. It’s a lot more complex than you are led to believe.
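[The calibrate-on-half, validate-on-the-rest idea can be sketched in a few lines. This is a toy: the “record” is a synthetic trend-plus-noise series standing in for a temperature record, and the “model” is just a least-squares line, not a GCM. The point is only the out-of-sample test structure.]

```python
import random

rng = random.Random(3)
# Synthetic 140-step "temperature record": linear trend plus noise.
record = [0.01 * t + rng.gauss(0, 0.1) for t in range(140)]

half = len(record) // 2
train = record[:half]               # calibrate on the first half only

# Ordinary least-squares slope and intercept on the training half.
n = len(train)
xs = list(range(n))
xbar, ybar = sum(xs) / n, sum(train) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, train)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

# Validate: error of the extrapolated fit over the unseen second half.
errors = [abs((slope * t + intercept) - record[t])
          for t in range(half, len(record))]
mean_abs_error = sum(errors) / len(errors)
print(f"fitted slope: {slope:.4f} per step, hold-out MAE: {mean_abs_error:.3f}")
```

A hindcast that stays accurate on data the model never saw is evidence the model captured something real; a back-test tuned to the whole record proves nothing, which is the distinction at issue in the stock-market jibe above.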
TheWord says
Patrick,
Actually, I think I understood your earlier posts pretty well. Maybe you misunderstood my criticisms of GCM’s.
We are led to believe that, not only can computer programs accurately predict the sign (positive/negative) of future climate change up to 100 years into the future, but that they can do so to an accuracy of less than 1C per decade.
I say, that’s twaddle: they can’t even predict next week. However, such is the enormous dross masquerading as our education system, that people believe you can gloss over such failures and that the short-term failures don’t matter.
So, in other words, a continuing and unremedied sequence of failures to predict short-term outcomes can still magically translate into long-term, extreme accuracy.
Two wrongs don’t make a right, but 1,000 wrongs do?
Just to address your crash test scenario, for a moment: we can’t crash test the next 100 years of climate over and over and over again until we understand it to within engineering tolerances, can we? You are conflating real-life experimentation and subsequent analysis, with purely computer-based extrapolations of the future, based upon an incompletely and poorly understood environment, variables, forcings, etc, etc, etc.
How many myths would be busted on the TV show, if all they did was muck around with computer simulations?
As I have also said before, I think that attempting to understand these things is admirable. Even attempting to model them is a good and worthy pursuit. What I object to are the claims being made, which are exactly the opposite of what should be said.
Rather than telling the uninformed population and body politic that the models can predict the future, a true scientist would warn regularly and prominently that this is not like gravity or the path of light, but an entirely less certain subject and one cannot predict the future. They would also not brook the hyperbole of journalists looking for a good scare.
But, then, the gravy-trainers of AGWing sold their scientific ethics down the river a looong time ago.
TheWord says
SJT said:-“You still don’t understand the difference, do you? What are you modeling? What you are expecting your model to tell you?
The climate models don’t just run on the past then model the future. There is a long enough temperature record that you can test against, say, half the temperature record, get that correct, then let it run on the rest of the temperature record. It’s a lot more complex than you are led to believe.”
No, it’s you that doesn’t understand, mate. You’ve just shown that, by your statement. We have an even longer stockmarket record than we do a climate record.
The stockmarket has fewer variables than the climate, is more accurately documented, and has been studied more intently, by many more people, for far longer.
And are we any closer to predicting its future, even with the advent of supercomputers? Nope, not one bit.
So, why do you reckon we have a better chance with a much more complicated, less studied thing like climate?
Maybe, you don’t really understand what you’re on about, SJT.
Richard Mackey says
If any of the readers of this discussion are from the Bureau of Meteorology, I urge them to get Geoff Love or someone else with sufficient authority in the BoM to commission Demetris Koutsoyiannis to visit the BoM and run workshops in which he could present his critiques of the IPCC/GCM models. Better still, invite Demetris to not only conduct the workshops, but to recommend at least two other world experts to join him and help the BoM work through those critiques in a thoroughly scientific manner.
Demetris could not only help the BoM significantly improve its analysis and use of geophysical time series, he could also help BoM in its water resource management role. I refer readers to Demetris’ website on which you can read about the Athenian approach to water resource management in which he played a key role.
As a result of disastrous droughts and the need for a reliable and plentiful supply of water for the 2004 Olympic Games, the Government of Greece invited Professor Koutsoyiannis to apply the understanding of hydrological phenomena and time series outlined above to the management of the water storage and distribution systems of Athens. The Athenian water resource management system is the result (Koutsoyiannis (2006) and Koutsoyiannis et al (2007)).
Koutsoyiannis, D., 2006. A new stochastic hydrological framework inspired by the Athens water resource system, Invited lecture, Atlanta, School of Civil and Environmental Engineering, Georgia Institute of Technology, 2006.
Koutsoyiannis, D., Efstratiadis, A. and Karavokiros, G., 2007. Theoretical documentation of model for simulating and optimising the management of water resources “Hydronomeas”, Integrated Management of Hydrosystems in Conjunction with an Advanced Information System (ODYSSEUS), Department of Water Resources, Hydraulic and Maritime Engineering, National Technical University of Athens, January 2007.
These are both available from Demetris’ website http://www.itia.ntua.gr/dk/
The main features of the Athenian water resource management system are:
• prediction of future water supply is based on the knowledge that regular episodes of drought and flood are as much a part of the future as they have been a normal part of the past;
• stochastic simulation and forecasting models of hydrological processes are used because climate records, especially the hydrologic ones, are too short; and,
• the use of an adaptive method for the release of water that takes into account the recent past and the likely near future of both supply and demand.
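The stochastic-simulation point in the second bullet can be sketched in a few lines of Python. This is a hypothetical toy, not the Hydronomeas model: it generates synthetic annual inflows with lag-one persistence so that a record far longer than any observed one can be explored, and every parameter value is invented for illustration.

```python
import random

def synthetic_inflows(n_years, mean=100.0, sd=30.0, rho=0.7, seed=42):
    """Toy AR(1) generator of synthetic annual inflows with lag-one
    persistence; real operational models preserve far richer structure
    (including Hurst behaviour), but the principle is the same."""
    rng = random.Random(seed)
    flows, z = [], 0.0  # z is the standardised anomaly
    for _ in range(n_years):
        z = rho * z + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)
        flows.append(max(0.0, mean + sd * z))
    return flows

# A 1000-year synthetic record, far longer than any observed one.
flows = synthetic_inflows(1000)
print(len(flows), round(sum(flows) / len(flows)))
```

Planning against many such synthetic traces, rather than against the single short historical record, is what lets droughts and floods be treated as a normal part of the future rather than as surprises.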
As previously mentioned, it has been known for decades that ENSO regulates Australia’s climate generally and patterns of rainfall and drought in particular.
Professor Franks of the University of Newcastle and his colleagues have shown how this knowledge can be used to better manage Australia’s water resources and bush fire risks. However, this knowledge has not been generally applied by the relevant authorities.
I would be willing to participate in any such workshops. However, I would be more interested in having a workshop or two with BoM staff in which I went through the detail of my submission to the Garnaut review. This submission was based on a talk I gave to the European Geosciences Union General Assembly 2008 in Vienna in April this year (see http://www.happs.com.au/downloaders/SolarVariability_Report.pdf; it's a large file (6.6 MB), so it takes a little while to download, and the browser window may appear blank, as if nothing is happening, until the file displays).
I encourage the BoM to re-establish the capacity it once had to do serious meteorologically relevant research into relationships between the Sun and climate. I refer, of course, to the 1920s and 30s when Dr Edward Kidson headed up the BoM’s research division and was ably assisted by Mr E. T. Quayle. In 1913 Mr Quayle jointly authored with his BoM colleagues Henry Hunt and Griffith Taylor the first and definitive meteorological treatise The Climate and Weather of Australia. Kidson and Quayle published several papers in the 1920s and 30s that reported solar activity/climate relationships. In 1938 the Bureau published one of Mr Quayle’s reports that noted:
“A rough generalisation from the winter rainfall over northern Victoria would suggest that when the new solar cycle begins with a rapid rise to a definite peak then the heaviest rains are in the early years, but when the solar activity begins more gradually and takes four or more years to reach a low or moderate maximum, then comparatively poor seasons may be expected in the early part.”
This report updated one of Quayle’s 1925 papers, the first scientific paper published in Australia about the relationship between the sunspot cycle and climate. Quayle’s ‘rough generalisation’ has been corroborated by recent research. Leading solar physicists are predicting precisely this pattern of gradual rise to a very moderate maximum for the emergent Sunspot Cycle No. 24. Australia’s Bureau of Meteorology was one of the first government agencies in the world to publish a report linking solar activity and climate.
A first step towards the establishment of this capacity would be having a workshop in which I and BoM staff explore the vast amount of published science that documents the extensive and profound relationships between solar activity (gravitational, electromagnetic, plasma and irradiance) and the Earth’s climate dynamics.
Malcolm Hill says
Richard Mackey,
That is a good idea. Good luck.
If you do succeed perhaps you could add this paper as well. This also indicates a poor fit between models and actual measurements by the AIRS satellites.
http://www.agu.org/journals/gl/gl0817/2008GL035022/
bazza says
Malcolm Hill’s ‘poor fit of models’ has been the subject of much study in many disciplines, but psychology has the best insights. For example, models do better on physical systems than on systems involving human behaviour. Unaided intuition (which underpins most comments from AGW sceptics) has been shown to be generally inferior to models that assemble evidence in a logical way. Check out Wikipedia on decision analysis: “Several studies conclusively show how even the simplest decision analysis methods are superior to unaided intuition.” The circular irony is that the evidence for more formal models of decision analysis is often ignored by the primitive half of our brain. If you don’t believe me, read on.
ianl says
SJT
“They just come up with a broad, global forecast,”
No, they do not. GCMs run “what-if” scenarios, not forecasts.
“If you want to take no guesses at the result of knowing that there is a physical basis for CO2 being a significant climate forcing”
Straw man … the point here is the quantum of negative feedback from CO2. Using the word “significant” presumes the quantum and actually avoids answering the question.
Still no comment on counter-models such as that from Spencer & Braswell? …tsk tsk. It’s been months now.
SJT says
“If any of the readers of this discussion are from the Bureau of Meteorology, I urge them to get Geoff Love or someone else with sufficient authority in the BoM to commission Demetris Koutsoyannis to visit the BoM and run workshops in which he could present his critiques of the IPCC/GCM models. Better still, invite Demetris to not only conduct the workshops, but to recommend at least two other world experts to join him and help the BoM work through those critiques in a thoroughly scientific manner.”
He’s addressing a problem that everyone already knew about: the resolution of the models is too coarse to make specific predictions for small regions, such as Greece. It’s the CSIRO he needs to be talking to for some education; they create the models.
DHMO says
You guys, why don’t you believe the assigned priest SJT for this thread? He has proclaimed the word, and the word is GCM. Just believe as he does, o ye of little faith. Stop this prattle of dissent about things your tiny minds cannot possibly understand. SJT does not understand either, but he has faith, and that is what matters; the prophecy has been made, so repent and bow down before the computer before it is too late. A computer can be programmed to foretell the future: that is what is being said, no more, no less. Why wouldn’t any sane person believe that? But beware false gods: could a GCM be faked? That is worth pondering. Thirty years’ experience in software development makes me certain I could fake one, and I doubt the difference could be detected. You all have faith in GCMs having some worth for telling us how we should live. We should be talking about how we start burning Bibles (oops, I meant GCMs). One last thing, SJT: if you proclaim the word, how about reading your own link, “GCM: Even the degree of uncertainty is uncertain”? How are we to keep the flock in check with that? It would be better to say something like, “If you wish to find the true way and know more about the prophecy, then study the works of Hansen, Mann and Schmidt.” And whatever you do, don’t show these simple-minded people empirical data such as http://www.woodfortrees.org/notes; the damn temperature is about the same as it was in 1980, and we’ll keep that to ourselves.
Patrick_B says
“For example models do better on physical systems than on systems involving human behaviour.”
A very salient point with regard to those who continue to bang on about the stock market. They appear to be laboring under the misapprehension that predictive models of the stock market (or any market for that matter) are as valid as models of physical systems. What total bollocks!
As for faith, DHMO, well, I’d say that the only fans of revealed truth are those who cling to outmoded ways of thinking in the face of substantial evidence to the contrary. They are in essence flat earthers. What I still can’t work out is what’s in it for them; surely there isn’t a market for stupidity and ignorance? Hell yeah, it’s called conservative politics, and there’s a nice little earner in contrariness. Sure, the pond life around here are only fodder for the big fish, but hey, Jen’s on a wage, Andrew Bolt’s doing OK, and people like Lois and G.Bird provide the fatuous filler. Nice little racket: no pesky qualifications required, just a 2-year-old’s temperament and a retiree’s (or IPA “fellow’s”) free time. Good luck to ya.
Patrick Caldon says
“We are led to believe that, not only can computer programs accurately predict the sign (positive/negative) of future climate change up to 100 years into the future, but that they can do so to an accuracy of less than 1C per decade.
I say, that’s twaddle: they can’t even predict next week.”
TheWord, I am obviously not being clear. Let me spell it out.
You appear to be applying the following syllogism:
If I cannot predict the results of X, then I cannot predict the average result of doing X a great many times.
This syllogism is false; if we substitute “tossing a coin” for X this is clear.
Therefore the statement “If I cannot predict the weather in a month, I cannot predict the average weather for the next hundred years” (essentially substituting the phrase “the weather” for X) does not follow.
Given the syllogism is false, you need to come up with some argument beyond recourse to the broken syllogism to establish your point.
Here you’re attempting a different argument: “So, why do you reckon we have a better chance with a much more complicated, less studied thing like climate?”
The answer is that we have physical laws (e.g. the ideal gas law, law of conservation of momentum) which describe things (like atmospheric gases) much better than models of idealized rational people describe real people.
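The coin-toss distinction being drawn here (single outcomes versus long-run averages) is easy to demonstrate. A minimal sketch, assuming nothing more than an idealised fair coin:

```python
import random

rng = random.Random(0)

# Any single toss is a 50/50 guess, but the average of many tosses
# is predictable to high accuracy (the law of large numbers).
n = 100_000
heads_fraction = sum(rng.random() < 0.5 for _ in range(n)) / n
print(round(heads_fraction, 2))  # very close to 0.5
```

None of this settles whether the climate behaves like a fair coin; it only shows that "can't predict one instance" does not by itself imply "can't predict the average of many".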
Patrick_B says
Oh and I just noted the side bar:
“A branch of the Victorian Liberal Party has been attacked for holding …” Ha, ha ha … is that being held as part of the comedy festival … oh man that’s a killer …
SJT says
“You guys why don’t you believe the assigned priest SJT for this thread? ”
I’ve been promoted?
SJT says
“Still no comment on counter-models such as that from Spencer & Braswell? …tsk tsk. It’s been months now.”
It wasn’t that long ago that Spencer and Christy were saying that the models were wrong and the satellites were right. It turned out the satellites were wrong, and the models were right. The satellite data is not some pristine source of temperature, the data has to be adjusted and manipulated to compensate for several factors, such as drift, height, etc. The models aren’t perfect, but they should be respected as one more useful tool we have in understanding climate and climate change.
Jan Pompe says
SJT “I’ve been promoted?”
Looks like it. Congratulations.
cohenite says
“The other examples are essentially similar, stochastic or chaotic processes with predictable long term behaviour. You apparently don’t believe these things exist.”
That’s because they don’t. Hurst scaling of LTP can only establish correlative relationships; that is why it is statistically useful. But it cannot distinguish between periodicity and perturbations within a secular event. For example, even if the Hockey Stick were real and unique, it could still be predicted, with sufficient interpolation, to happen again without the correlative presence of CO2 levels; and the historical record shows it has happened in the past without those levels of CO2. In fact, this is why the HS, if real, would be unique: not because it is caused by CO2, but because it is stochastic. It can be predicted, but its cause cannot. This is also why AGW is bunk, and the models are useless.
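For readers unfamiliar with Hurst scaling of long-term persistence, the rescaled-range (R/S) statistic behind it can be sketched as follows. This is a toy estimate run on uncorrelated noise, where the Hurst exponent H should come out near 0.5; strongly persistent series push H towards 1. The function name, seed and sample sizes are illustrative only.

```python
import math
import random

def rescaled_range(series):
    """Classic R/S statistic: adjusted range of cumulative deviations
    from the mean, divided by the standard deviation."""
    n = len(series)
    mean = sum(series) / n
    devs = [x - mean for x in series]
    cum, s = [], 0.0
    for d in devs:
        s += d
        cum.append(s)
    sd = math.sqrt(sum(d * d for d in devs) / n)
    return (max(cum) - min(cum)) / sd

rng = random.Random(7)
noise = [rng.gauss(0, 1) for _ in range(4096)]

# R/S grows roughly like n**H; the slope of log(R/S) against log(n)
# estimates the Hurst exponent H (about 0.5 for uncorrelated noise).
sizes = [64, 128, 256, 512, 1024, 2048, 4096]
points = []
for n in sizes:
    chunks = [noise[i:i + n] for i in range(0, len(noise), n)]
    avg_rs = sum(rescaled_range(c) for c in chunks) / len(chunks)
    points.append((math.log(n), math.log(avg_rs)))

# Least-squares slope of the log-log points gives the estimate of H.
mx = sum(x for x, _ in points) / len(points)
my = sum(y for _, y in points) / len(points)
H = sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x, _ in points)
print(round(H, 2))
```

Note that, as the comment says, H is a purely statistical description of persistence in a series; it says nothing about what caused the persistence.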
Peter says
SJT: “The satellite data is not some pristine source of temperature, the data has to be adjusted and manipulated to compensate for several factors, such as drift, height, etc.”
Are you seriously suggesting that those long-known factors weren’t considered right from the design stage?
“The models aren’t perfect, but they should be respected as one more useful tool we have in understanding climate and climate change.”
Or, it could be argued, for mis-understanding climate and climate change. However, I’ll concede that they are a useful tool: for governments to justify taxing us to death.
DHMO says
I am glad you agree, SJT: you are promoting a religion. Where would you place yourself in your religion, choir boy? For the rest of you, go and read SJT’s wiki link on GCMs. I don’t think SJT did. Have a look at the graphs: why do they only go to 2000? Why does the link “woodfortrees” show quite different results? Wonder if Crystal Balls are actually being used.
Eyrie says
Patrick_B “A very salient point with regard to those who continue to bang on about the stock market. They appear to be laboring under the misapprehension that predictive models of the stock market (or any market for that matter) are as valid as models of physical systems. What total bollocks!”
The stock market is the price of companies that provide goods and services, which are physical things. The GCMs start with physics and rapidly degenerate into “parametrisations”, making them no more physically based than the stock market.
I have seen the stock market described as “a random walk modified by insider trading”(Charles Sheffield). The Earth climate system may be the same but without the insider trading.
While the future “average” state of some physical systems can be predicted even though the fine details cannot be, even for short periods, you need to prove that this is so for a particular physical system, either mathematically or, much less certainly, by weight of observation over time.
It is a nice hypothesis that GCMs can do this but no more than that.
As for your sneer at people’s qualifications: Jen is a biologist (take a look at the biological processes modelled by GCMs), Louis a geologist, and I was a professional meteorologist from 1971 to 1977. Not that any intelligent layman shouldn’t be able to understand the science of this issue, raise relevant questions as to the validity of the various arguments and data, and come to his or her own conclusions.
I know I’m not motivated to comment here because I’m retired (I’m not) or unqualified (see above), but I am worried that technical civilisation may cease to exist because of the spread of the irrational “burn the witch” meme, which will give more power to the people who get their kicks by holding guns to people’s heads to make them do what they say.
So what are your qualifications and your job?
Louis Hissink says
Patrick_B
In addition to Eyrie’s post: do you really just regard me as fatuous filler?
I’ve just noticed that our comrades have started to dispense with the politeness and gone back to normal lefty mode – Gavin has raised his colours on the last post, and Patrick B has raised his, here.
“As to behavioral matters, defenders of climate change orthodoxy display the same characteristics as one finds within the mother of all pseudo-skeptical organizations, CSICOP, the Committee for Scientific Investigation of Claims of the Paranormal, and among its associated groups. Most members of CSICOP are science groupies, not scientists — and similarly some of the most extreme climate change vigilantes are economists, psychologists, and the like. Among those with technical scientific credentials, the vast majority have not themselves contributed anything of much note — for obvious reason: as Bernard Shaw remarked long ago, “Those who can, do” — it’s the low achievers who spend (waste) their time attacking characters and denigrating open-mindedness. The orthodoxy-defenders reveal deep personal insecurity, behaving as though it were life-threatening if everyone doesn’t agree with their views.”
Describes you, Gavin, SJT and the rest of your fellow travellers quite well, don’t you think?
(Thanks to Henry Bauer for the inspiration)
John F. Pittman says
Comment from david, September 30, 2008 at 10:33 am:
“The assumptions that underpin climate models are that energy and momentum are conserved.
The IPCC and climate scientists have published 10,000s of pages on this.”
Gerald Browning Says:
16 May 2008 at 1:45 PM
Well, I addressed Anthony Kendall’s comment (#127) and appear to have been answered by Gavin. A rather interesting set of circumstances. Now let us see why the responder refused to answer the direct questions with a yes or no, as asked.
Is the simple linear equation that Pat Frank used
to predict future climate statistically a better fit than the ensemble of climate models? Yes or no.
[Response: No. There is no lag to the forcing and it would only look good in the one case he picked. It would get the wrong answer for the 20th Century, the last glacial period or any other experiment. – gavin]
So in fact the answer is yes in the case that Pat Frank addressed, as clearly shown by the statistical analysis in Pat’s manuscript.
Are the physical components of that linear equation based on
arguments from highly reputable authors in peer reviewed journals?
Yes or no.
[Response: No. ]
The references that Pat cited in deriving the linear equation are from well known authors and they published their studies in reputable scientific journals.
So again the correct answer should have been yes.
Is Pat Frank’s fit better because it contains the essence of what is driving the climate models? Yes or no.
[Response: If you give a linear model a linear forcing, it will have a linear response which will match a period of roughly linear warming in the real models. Since it doesn’t have any weather or interannual variability it is bound to be a better fit to the ensemble mean than any of the real models. – gavin]
Again the correct answer should have been yes. If the linear equation has the essence of the cause of the linear forcing shown by the ensemble of models
and is a better statistical fit, the science is clear.
Are the models a true representation of the real climate given their unphysically large dissipation and subsequent necessarily inaccurate parameterizations? Yes or no.
[Response: Models aren’t ‘true’. They are always approximations. – gavin]
The correct answer is no. A simple mathematical proof on Climate Audit shows that if a model uses an unphysically large dissipation, then the physical forcings are necessarily wrong. This should come as no surprise because the nonlinear cascade of the vorticity is not physical. Williamson et al.
have clearly demonstrated that the parameterizations used in the NCAR atmospheric portion of the NCAR climate model are inaccurate and
that the use of the incorrect dissipation leads to the wrong cascade.
Does boundedness of a numerical model imply accuracy relative to the dynamical system with the true physical Reynold’s number?
Yes or no.
[Response: No. Accuracy is determined by analysis of the solutions compared to the real world, not by a priori claims of uselessness. – gavin]
The answer should have been no, but the caveat is misleading given Dave Williamson’s published results and the simple mathematical proof cited.
Given that the climate models do not accurately approximate the correct dynamics or physics, are they more accurate than Pat Frank’s linear equation? Yes or no?
[Response: Yes. Stratospheric cooling, response to Pinatubo, dynamical response to solar forcing, water vapour feedback, ocean heat content change… etc.]
The correct answer is obviously no. All of those supposed bells and whistles in the presence of inappropriate dissipation and inaccurate parameterizations were no more accurate than a simple linear equation.
What is the error equation for the propagation of errors for the climate or a climate model?
[Response: In a complex system with multiple feedbacks the only way to assess the effect of uncertainties in parameters on the output is to do a Monte Carlo exploration of the ‘perturbed physics’ phase space and use independently derived models. Look up climateprediction.net or indeed the robustness of many outputs in the IPCC AR4 archive. Even in a simple equation with a feedback and a heat capacity (which is already more realistic than Frank’s cartoon), it’s easy to show that error growth is bounded. So it is in climate models. – gavin]
The problem is that Monte Carlo techniques assume random errors. Pat Frank has shown that the errors are not random and in fact highly biased. If you run a bunch of incorrect models, you will not obtain the correct answer.
Locally errors can be determined by the error equation derived from errors in the dissipation and parameterizations. Given that these are both incorrect,
one cannot claim anything about the results from the models.
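The distinction being argued here, between zero-mean random errors that average out under Monte Carlo sampling and a shared bias that does not, can be illustrated with a toy simulation. The numbers below are invented purely for illustration; this is not a claim about the magnitude of any actual model bias.

```python
import random

rng = random.Random(1)
truth, n_runs, noise_sd, shared_bias = 1.0, 2000, 0.5, 0.3

# Runs with zero-mean random error: the ensemble mean homes in on the truth.
unbiased = [truth + rng.gauss(0, noise_sd) for _ in range(n_runs)]
# Runs sharing a common bias: averaging leaves the bias fully intact.
biased = [truth + shared_bias + rng.gauss(0, noise_sd) for _ in range(n_runs)]

mean_unbiased = sum(unbiased) / n_runs
mean_biased = sum(biased) / n_runs
print(round(mean_unbiased, 1), round(mean_biased, 1))
```

However large the ensemble, the second mean stays offset by the shared bias: averaging only cancels errors that are independent and centred on zero.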
I continue to wait for your proof that the initial-boundary value problem for the hydrostatic system is well posed, especially given the exponential growth shown by NCAR’s Clark-Hall and WRF models.
Jerry
SJT says
“to happen again without the correlative presence of CO2 levels; and historical proof shows it has happened in the past without the correlative levels of CO2;”
CO2 is not the only forcing.
cohenite says
Will, you’re being disingenuous again; CO2, or carbon pollution, is the nominated (en)forcer by the IPCC and AGW supporters; they also do not recognise -ve forcings, such as by clouds and biology; so your comment is misleading.
TheWord says
Patrick said:-“You appear to be applying the following syllogism:
If I cannot predict the results of X, then I cannot predict the average result of doing X a great many times.
This syllogism is false; if we substitute “tossing a coin” for X this is clear.
Therefore a statement: “If I cannot predict the weather in a month, I cannot predict the average weather for the next hundred years” (essentially substituting the phrase “the weather” for X)”
A few things here. First, you are correct in your assessment of what I argue. However, you are incorrect in assuming that my argument is false.
To take the coin toss example: we cannot predict the outcome of the next coin toss. However, we also cannot predict the outcome of tossing the coin many times.
We can only say something about the probability of the future outcomes. Indeed, there is a very small but non-zero chance that tossing a coin 100 times will result in 100 heads.
And, of course, all of this assumes that we are tossing an idealized coin consisting of a circular disk of zero thickness, without any bias in method of tossing, number of rotations, initial side facing up….
Do you begin to see the problems? Even coin tossing requires idealized, non-physical circumstances for the “model” distribution of expected outcomes to arise. In the real world, tossing coins isn’t so simple!
Now, look at climate. How little do we understand? How little is predictable? How many variables are we missing? What are the initial conditions? …The list goes on and on and on. There is no point in trying to defend GCMs – they are wholly indefensible as anything other than interesting toys.
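The coin-toss numbers invoked above are easy to make concrete. For an idealised fair coin (exactly the idealisation this comment questions), the chance of 100 straight heads is real but astronomically small, while the spread of the heads-fraction over 100 tosses is known exactly:

```python
import math

# Probability of 100 heads in 100 tosses of an idealised fair coin.
p_all_heads = 0.5 ** 100            # about 7.9e-31
# Standard deviation of the heads-fraction over 100 tosses.
sd_fraction = 0.5 / math.sqrt(100)  # 0.05

print(p_all_heads)
print(sd_fraction)
```

So "cannot predict the outcome of many tosses" is true only in the strict sense: the exact sequence is unknowable, but the distribution of aggregates is tightly pinned down, provided the idealisation holds.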
Will Nitschke says
Cohenite: “Will, you’re being disingenuous again; CO2, or carbon pollution, is the nominated (en)forcer by the IPCC and AGW supporters; they also do not recognise -ve forcing, such as by clouds and biology; so you’re comment is misleading.”
I think you risk damaging your credibility by positing such a hard-line view and then imposing it on everyone, as if the IPCC was a single individual.
There are those AGW ‘supporters’ who believe that we will see 1000m sea level rises within decades and other global devastations. There are others who hold the view that the most probable outcome will be a .05-1.0c increase in temperature within 100 years, with no major impacts as a result, and then everything in between. To assert that every AGW ‘supporter’ is a fanatic who believes CO2 is the primary driver of climate change is to represent your own position as being fanatical. Which is disappointing if you label yourself a sceptic. Perhaps some emotionality has to be taken out of your reflections?
Gordon Robertson says
SJT said…”It turned out the satellites were wrong, and the models were right. The satellite data is not some pristine source of temperature, the data has to be adjusted and manipulated to compensate for several factors, such as drift, height, etc”.
I’m slowly beginning to see your problem. You skim biased reports from RC without verifying the facts for yourself.
Let me help you. Once again, here’s the ten year satellite data from 1998 to 2007:
http://www.weatherquestions.com/Roy-Spencer-on-global-warming.htm#satellite-temps
See Figure 9. Here’s a first-hand explanation of the minor errors in satellite data you have turned into a non sequitur:
http://www.uah.edu/News/climatebackground.php
BTW…Spencer is an authority on satellite telemetry and he works with Christy.
A quote: “Spencer and Christy discovered three of the four major problems that have been identified — orbital drift, instrument body warming and inter-instrument calibration — found solutions to those problems and published their results in peer-reviewed journals.
The fourth problem, orbital decay, was identified by Dr. Frank Wentz, et al., and a correction technique similar to one that he and his colleagues developed has been applied to the UAH dataset”.
Four things to note:
1) the errors were corrected, and the correction was in the neighbourhood of tenths of a degree C ‘in the tropics’;
2) the two main sources of satellite data sets are UAH and RSS. Errors were found in the RSS data set that showed it was reading high. Both sets are now in step, and the graph in the first link is valid. It shows about an average 0.25 C warming in the atmosphere. How does the atmosphere get cooler than the surface, if it is warming it? And how can surface stations be called that when they are all located at least 5 to 15 feet up in the atmosphere?
3) there is a direct correlation between satellite data and weather balloon data. I know the IPCC has tried to discredit the balloon data as well, but why are they in step if they are so grievously in error? What are the chances of that happening?
4) the RSS data set was used by the IPCC initially to discredit the UAH data. Unlike other back-biting disciplines, the UAH and RSS teams worked together to find solutions to their problems. They are now both in agreement and everyone is happy but the IPCC, who have more egg on their faces, à la Mann’s hockey stick and the cherry-picked pre-industrial CO2 density of 270 ppmv.
SJT…do you always back losing horses?
Here’s two links on the equalization of the UAH and RSS satellites:
http://www.worldclimatereport.com/index.php/2008/01/08/musings-on-satellite-temperatures/
http://www.worldclimatereport.com/index.php/2008/02/07/more-satellite-musings/#more-306
The IPCC and NAS acknowledged the discrepancy between satellite data and models in TAR. In AR4, Trenberth was the lead author of the section dealing with satellites, sondes, etc. He was the reason hurricane expert Chris Landsea resigned from the IPCC over what Landsea saw as a conflict of interest. A lead author is a peer-reviewer and should be impartial. Landsea claimed Trenberth was participating in pro global warming rhetoric, particularly making claims that severe hurricanes were caused by global warming. The expert, Landsea, and his whole hurricane crowd of experts say there’s not enough evidence.
Trenberth seems to have been on a mission to discredit satellites and sondes for some reason. He did not claim there were serious errors in satellite data and he did not dispute the satellite data was showing the atmosphere was cooler than the surface. He merely said the atmospheric warming was in-step with the surface. It seems enough to him and the IPCC that an acknowledgement has been made that the atmosphere is showing a warming, even if it is only 0.25 C on average and completely out of whack with the AGW theory that the atmosphere should be warmer than the surface.
Gordon Robertson says
DHMO said…”But beware false gods could a GCM be faked”?
You’re not suggesting someone would actually exaggerate findings to keep their funding, are you? Shocking!!
Patrick Caldon says
“To take the coin toss example: we cannot predict the outcome of the next coin toss. However, we also cannot predict the outcome of tossing the coin many times.”
If I can make a correct prediction 99% of the time (as I can with fair coin tosses), in common parlance one would say “I can predict the outcome”. Indeed the IPCC and suchlike always put modifiers such as “likely” and “very likely” in front of their statements.
As to the “real world”: in the real world, casinos make money. Lots of it. I suspect the predictability of stochastic/chaotic processes has something to do with that.
“That’s because they don’t.”
cohenite, you’re apparently claiming here that things like fair coin tosses being long-term predictable, and water boiling in a predictable way, just don’t happen in the real world. How do casinos make money? How do chemical engineers design their plants? In the real world, casinos make money; you are essentially claiming that they do not.
TheWord says
Oh dear, Patrick. Not all of them make money (what’s this got to do with GCMs, anyway?). Google “casino” and “bankrupt”:
Trump Hotels & Casino Bankrupt
The Donald’s bid to save the company from bankruptcy has failed, forcing the organization to restructure.
Tuesday, August 10, 2004
Glenn Haussman
ATLANTIC CITY, NJ — It looks as if the Donald is unable to roll anything but craps with his casinos.
Since April, Trump has been trying to salvage the money-bleeding Trump Hotels & Casino Resorts (NYSE: DJT), but now is taking the drastic step of entering Chapter 11 bankruptcy next month simply to keep the company afloat as it drowns under a torrent of debt.
The company’s debt alone is more than $200 million. Under this voluntary plan, from which he hopes to emerge within a year, Trump will be out as majority shareholder and give up his trademark rights to his likeness and name for any marketing associated with the casinos.
The company’s stock plummeted 72.97% in trading today, dropping from $1.35 to $0.50 a share.
As first reported by Hotel Interactive this spring, Trump’s company has been floundering for years. It was even in bankruptcy once before, in 1992. Its Atlantic City properties — the Taj Mahal, Trump Marina and Trump Plaza — haven’t turned a profit in seven years and lost $87 million in 2003 alone. In 2002 that loss was $12 million, and $25.3 million in 2001. At the end of 2003, the company had a working capital deficit of $46.9 million.
Gordon Robertson says
SJT said…”the argument is “If I can’t understand it, you can’t prove it”. You just don’t understand the difference, do you”?
Don’t understand what? You often reply obtusely and don’t form arguments that can be replied to.
I don’t understand computer models? Not really. I don’t understand the limitations of computers, programming, mathematics, mathematicians and why the models can’t predict accurately? Well…that one I do understand fairly decently.
You see, I have a whole lot of expertise in electronics, computers and electrical theory, and I have a decent grounding in the type of mathematics they use in the models. The brain of any computer is a slab of silicon which is about as intelligent as a sack of hammers. When powered up, it is extremely fast, but its intelligence comes largely from its program. It has a certain amount of potential intelligence built in, in the way of logic and math operations, but its base intelligence is zilch.
The program is only as good or as smart as the programmer. Why don’t scientists like Spencer, Christy, Lindzen, Michaels, et al, use computer models? They have no need for them although they acknowledge the potential usefulness of models eventually. They study the atmosphere directly using instruments designed to do that. Furthermore, Michaels uses temperature history to predict the future and so far he’s been right. His method is superior to computer models IMHO.
Some models are very useful, but not in climate science. In electronics, they can model circuits very accurately, but that science is well understood and it’s easy to tell when the model is wrong. No one would build an industrial complex that used high voltages and currents based on a model without an independent verification system. The IPCC seems comfortable excluding that kind of verification, even overlooking it. I find that to be amazing in itself.
Arguably, the most important parameter of a model is its resolution. There’s no point using a model that only understands amperes if you’re building a system that uses microamperes. The base resolution of a GCM in longitude and latitude is something like 250 miles square. If you give it 3D, you have a vertical column above that base square. Remember that much of that vertical space is over the oceans.
Now you have to populate the dead space with clouds, rainfall, wind, GHG’s, etc. Darn…I forgot the ocean has depth too, with temperature gradients and all that. Oh, and I forgot, many of the phenomena used to populate the 3D space are poorly understood. In the case of CO2 warming, it’s not understood at all.
How good do you think these computers are? The resolution is limited by computing power and there isn’t enough to get it below a 250 mile square area.
How do you populate your resolution space with a computer program? If you have ever studied differential equations, you’ll know they are used to model many natural phenomena. Problem is, the solution to many differential equations begins with a guess. There are no convenient differential equations that come marked “for climate”. So, modelers have to find some kind of differential equation, with boundary conditions, impulse responses and steady state conditions.
As far as I’m concerned, the word ‘forcing’ comes from differential equation theory and that should give you a clue to what it’s about. It’s not about atmospheric physics, it’s about mathematics.
There are no equations available, so they guess at one and it doesn’t work. Not only that, they invent their own kind of math and physics along the way. It’s typical that a guessed equation does not work right off and it’s not a big deal. You have to refine. The thing to be clear about, however, is that you are trying to refine in the direction of modeling natural phenomena and not trying to get the phenomena to fit the equation.
Climate modelers are introducing fudge factors to adjust the models and making rash claims that something must be wrong in the atmosphere because it doesn’t agree with the models. For example, the models are reading far too high, so they introduce the red-herring of aerosols as an explanation why it’s cooler than the models predict. Instead of looking at what might be wrong with the model, they assume it must be something hidden in nature, or that a highly accurate telemetry device is wrong.
What I have just simplistically described are the severe limitations of the modeling environment itself. On a smaller scale, you can get reasonably accurate weather forecasts for the short term, but the forecasts go awry as we know. Why is that? We don’t know much about weather variability, that’s why. It’s far too complex. It can be steady, then it gets whimsical. Patrick Michaels points out in his books that forecasters often have two models disagreeing with each other and they literally are forced to look out the window to see what is going on.
You should read some of the accounts from mountaineers who climb the likes of Everest and K2. They are often in touch with weather bureaus because they can die from exposure at altitude if a sudden storm arises. Several people died in a tragedy on Everest in 1996 when an unexpected storm arose within an hour. If you read The Perfect Storm, you’ll get first-hand information on how weather forecasters get completely baffled by weather phenomena.
None of that can be programmed into climate models, and just recently it has become clear that the oceanic oscillations are playing an unheard-of role in global warming, with a variability of decades. Climate models don’t understand El Nino, never mind the Atlantic and Pacific Oscillations. So, climate, which is weather over a time span, can vary naturally due to the oscillations.
The entire premise of climate models is that a trace gas can warm the climate significantly. As Spencer points out, there is no instrumentation that can measure those tiny perturbations, and the theory is based on mathematical manipulations.
You’re claiming I don’t understand. I think it is you who has an unfounded faith in the unlikely. What you should be wondering about is why the IPCC is so adamant about ignoring satellite and sonde data and embracing computer model theory. That’s what I don’t understand.
Louis Hissink says
Gordon: “Weather forecasters get completely baffled by weather phenomena because they are using the wrong theory for weather”.
Simple as that.
Patrick Caldon says
Gordon:
“There are no equations available, so they guess at one and it doesn’t work. Not only that, they invent their own kind of math and physics along the way. It’s typical that a guessed equation does not work right off and it’s not a big deal. You have to refine. The thing to be clear about, however, is that you are trying to refine in the direction of modeling natural phenomena and not trying to get the phenomena to fit the equation.”
This isn’t the way it works. For instance the equations for a GCM gas movement are derived directly from Newton’s laws, ideal gas laws, conservation of mass and energy. This work was done in the 1920s. Please don’t trust me on this, go get a text on the subject and look up the “primitive equations”. I can almost guarantee the first equation you’ll see is Newton’s second law.
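For reference, one standard textbook statement of those primitive equations, written in pressure coordinates — notation varies between texts, so treat the symbols here as illustrative rather than as any particular model's exact formulation:

```latex
\begin{align}
  \frac{D\mathbf{v}}{Dt} &= -f\,\hat{\mathbf{k}}\times\mathbf{v}
                            - \nabla_p \Phi + \mathbf{F}
    && \text{horizontal momentum (Newton's second law)} \\
  \frac{\partial \Phi}{\partial p} &= -\frac{RT}{p}
    && \text{hydrostatic balance (via the ideal gas law)} \\
  \nabla_p \cdot \mathbf{v} + \frac{\partial \omega}{\partial p} &= 0
    && \text{continuity (conservation of mass)} \\
  \frac{DT}{Dt} &= \frac{\alpha\,\omega + J}{c_p}
    && \text{thermodynamic equation (conservation of energy)}
\end{align}
```

Each line corresponds to one of the conservation principles Patrick names; none of them is a "guessed" equation.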
ianl says
SJT
“It turned out the satellites were wrong, and the models were right.” The models make no predictions, remember – they cannot be right or wrong.
You use straw men a lot, don’t you (yes, that’s a demonstrable statement, not a question). I never claimed satellite measurements were perfect, you just made that up to indulge yourself in a silly answer.
Satellite temperature differentials have not been going well for you for about a decade – so of course they must be wrong. But what about the previous 10-20 years: were they wrong then too?
The bleak silliness of your answers has shown me the futility of asking pointed questions of you. And you have still gone nowhere near the Spencer & Braswell 2008 paper, except to say that Spencer makes mistakes. That is a smear, of course. Glass houses and stones … etc
cohenite says
Will Nitschke; my comment was in response to SJT, aka Will Robinson’s usual disingenuous comment about CO2. In any event, I’m not sure of the validity of your comment about divergent viewpoints about climate effects within the AGW fraternity; that isn’t the point; the point is, that there is universal agreement within AGW that anthropogenic CO2 is behind the ‘troubles’; the issue of +ve feedbacks and the consequences of ACO2 is irrelevant to that point; as to IPCC; it puts out a unified opinion, and, if the last few years of this debate have established anything, it is that this unified, orthodox opinion does not tolerate heresy; it is not me, or sceptics generally, who should be accused of being fanatical.
Patrick; that is a misunderstanding of what I said; coin tosses and boiling bloody saucepans are not stochastic; nor can they be simplified by Hurst scaling; a boiling saucepan is not a manifestation of LTP; with these examples there is a definitive cause and effect. My point is that the elements of climate cannot be so defined in terms of causation; they can be reasonably predicted, but their cause cannot be defined. The models pretend they can define a climate cause and control it; they cannot. I don’t want to get into decoherence and quantum probability, but your betting analogy is right for the wrong reasons (just like Mann; no, he was wrong for every reason). For a sequence of races, or coin tosses, the physical events are quite different from the odds that may be quoted; no event has a probability of one; even the probability of the sun rising tomorrow is less than 1; the coin toss is as close as we can get because there are only 2 possibilities. What probability assumes is that over enough tosses there will be a predictable split; in effect, this is saying that each toss plays a part in the accumulating odds for the next toss. If you like, this is akin to Borges’ “garden of forking paths” where, no matter how minuscule, one event plays a part in a subsequent event’s occurrence; the butterfly effect, where small events have their consequences magnified by time. This exponentially increasing process is qualified by the infinite butterfly variables that interact; each event or stage is both determined by what has preceded it and also unencumbered to create a new fork in the branching. Hugh Everett’s work says the future is an infinity of potential branches, only one of which will be perceived, and the past is an infinity of potential causes. For simple events we can statistically reduce these infinities to manageable odds; the boiling pan is caused by the flame underneath it; the coin lands on heads because the toss was a certain way; but each simple event plays a part in the next simple event, so that over enough time the causation is blurred; for complex events like the climate the cause is beyond prediction to begin with; think Brownian motion taken to near infinity; GCM’s are not only theoretically wrong, they take the fun out of life as well.
Will Nitschke says
Cohenite: In any event, I’m not sure of the validity of your comment about divergent viewpoints about climate effects within the AGW fraternity; that isn’t the point; the point is, that there is universal agreement within AGW that anthropogenic CO2 is behind the ‘troubles’
To me it appears there is a political component, an ideological or social component, and a scientific component. It is very difficult to pin the AGW scientists down on this claim by reading their papers. If you read the activist websites such as Real Climate, you will find that they do not argue such a simple position.
Louis Hissink says
Will Nitschke: “that there is universal agreement within AGW that anthropogenic CO2 is behind the ‘troubles'”
And here is the real problem – if there were solid scientific evidence that CO2 was behind the AGW, then even the sceptics would be on board.
But there isn’t – AGW is a consensus conclusion based on what some people think increasing CO2 does, echoed by many others; it’s deductive reasoning at its best.
The fact that there has to be universal agreement rather than compulsion from experimental evidence shows that AGW is first class pseudoscience.
And Real Climate is indeed an activist website – it has hijacked science for political purposes.
But under no circumstances can AGW be considered a science.
DHMO says
Louis I thought AGW was as scientific as homeopathy!
Peter says
Patrick: “If I can make a correct prediction 99% of the time (as I can with fair coin tosses) in common parlance one would say “I can predict the outcome”. Indeed the IPCC and suchlike always put modifiers such as “likely” and “very likely” in front of their statements.”
If you do a million lots of 100 coin throws, you can state with some certainty that the average number of heads in each lot will be 50. But, if you only do one lot of 100 coin throws, you cannot predict with any amount of confidence how many heads there’s going to be in that particular lot of 100 throws. And we’ve only got one lot of throws to predict – one Earth, one climate system – and there’s an awful lot more variables than just heads or tails.
SJT says
“Will, you’re being disingenuous again; CO2, or carbon pollution, is the nominated (en)forcer by the IPCC and AGW supporters; they also do not recognise -ve forcing, such as by clouds and biology; so you’re comment is misleading.”
CO2 is not the “nominated” forcing. I wonder where you get your ideas from; it’s not from the scientists. There are several forcings on the climate throughout history. CO2 just happens to be the forcing of the moment, since the others are stable at present. I have no doubt that will change with time.
The IPCC also recognises positive and negative feedbacks. The new generation of models is being developed for several reasons, one of which is to model clouds better to determine their effects on the change, positive and negative.
TheWord says
Peter,
And I would also add that, if Patrick believes he “can make a correct prediction 99% of the time (as I can with fair coin tosses)…” then, he was obviously drunk at the time of writing, or flipping a two-headed coin.
SJT says
“3)there is a direct correlation between satellite data and weather balloon data. I know the IPCC has tried to discredit the balloon data as well, but why are they in-step if they are so grievously in error? What’s the chances of that happening?”
That would be with or without the RAOBCORE correction? The Douglass paper uses RAOBCORE itself, although it is an earlier version.
The reason I referred to the satellite errors, and it’s nice of you to bring up the weather balloons as well, is that in each of those cases it was the models that were correct, and they led to corrections of errors in the measurements.
SJT says
“If you do a million lots of 100 coin throws, you can state with some certainty that the average number of heads in each lot will be 50. But, if you only do one lot of 100 coin throws, you cannot predict with any amount of confidence how many heads there’s going to be in that particular lot of 100 throws. And we’ve only got one lot of throws to predict – one Earth, one climate system – and there’s an awful lot more variables than just heads or tails.”
I can. You are going to get close to 50 heads and 50 tails.
SJT says
I’m not going to be able to predict the sequence it takes to get there, but I can tell you, pretty accurately, the final result.
cohenite says
From AR4, chp 2 Executive Summary. p131;
“Increasing concentrations of the long-lived greenhouse gases (CO2, CH4, N2O, SF6) have led to a combined RF of 2.63 (±0.26) W m-2. Their RF has a high level of scientific understanding…. The global mean concentration of CO2 in 2005 was 379ppm, leading to an RF of 1.66 (±0.17) W m-2. Past emissions of fossil fuels and cement production have likely contributed about 3/4 of the current RF.”
Little Will should sit in a corner and channel George Washington… I must not tell lies, I must not tell lies, and so forth.
SJT says
“coin tosses and boiling bloody saucepans are not stochastic; ”
Coin tosses are stochastic. You cannot determine what the next result will be, based on the previous result.
TheWord says
SJT said:-“I can. You are going to get close to 50 heads and 50 tails.”
The Donald needs you and your money in his casinos!
SJT says
“Increasing concentrations of the long-lived greenhouse gases (CO2, CH4, N2O, SF6) have led to a combined RF of 2.63 (±0.26) W m-2. Their RF has a high level of scientific understanding…. The global mean concentration of CO2 in 2005 was 379ppm, leading to an RF of 1.66 (±0.17) W m-2. Past emissions of fossil fuels and cement production have likely contributed about 3/4 of the current RF.”
?? I don’t see your problem with that.
SJT says
“The Donald needs you and your money in his casinos!”
Casinos always make sure there’s an edge. Roulette has 0 and 00.
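That edge is easy to make concrete. On an American wheel an even-money bet (say, red) wins on only 18 of 38 pockets, because 0 and 00 are neither red nor black. A minimal Python sketch of the expected value (the function name is mine, for illustration):

```python
from fractions import Fraction

def even_money_edge(pockets=38, winning=18):
    """Expected return per unit staked on an even-money roulette bet.

    An American wheel has 38 pockets (1-36 plus 0 and 00); an
    even-money bet such as red wins on 18 of them and loses on 20.
    """
    p_win = Fraction(winning, pockets)
    return p_win * 1 + (1 - p_win) * (-1)

print(even_money_edge())  # -1/19: the house keeps about 5.3% of every bet
```

Whether any one spin wins is unpredictable; the roughly 5.3% take over thousands of spins is not — which is the point being argued about predictable aggregates of unpredictable events.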
TheWord says
SJT said:-“Coin tosses are stochastic.”
No. Theoretical coin tosses are stochastic. Real-world coin tosses always have biases. However, we suffer from an information asymmetry, in that we don’t know the number of biases, their respective extents, or their effects on the outcomes in advance.
IOW, we know there will be biases, however we have no means to quantify them. For most coin toss purposes, these relatively small statistical differences might be difficult to discern. However, repeating the experiment a few times a day for 100 years might show up those biases.
Add to our lack of understanding of all of the variables, a lack of certainty that we are including all of the variables in our models, and you (or at least someone with a passing familiarity with reality) begin to understand the difficulties which face those who would predict the future.
Idealized, textbook coin tosses are stochastic.
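TheWord's point about repetition can be put in numbers: the spread of a fair coin's observed head-fraction shrinks like 1/√n, so a small bias invisible in a hundred tosses stands out over a million. A rough back-of-envelope sketch — the 3-standard-error cutoff is my own rule of thumb, not a formal hypothesis test:

```python
from math import sqrt

def detectable(bias, n, sigmas=3):
    """Rough check: is a coin with P(heads)=bias distinguishable from
    fair in n tosses? The observed head-fraction of a fair coin has
    standard error 0.5/sqrt(n); call the bias detectable when it
    exceeds `sigmas` standard errors."""
    return abs(bias - 0.5) > sigmas * 0.5 / sqrt(n)

print(detectable(0.505, 100))        # False: se is 0.05, bias only 0.005
print(detectable(0.505, 1_000_000))  # True:  se is 0.0005
```

So both sides are right in a sense: the bias exists, and whether it matters depends entirely on how many tosses you aggregate over.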
TheWord says
Another reason why climate is different to coin tossing – each coin toss is a new event. However, each day’s weather/climate has a feed into the next day’s.
Days are part of a continuum, not the sum of a sequence of discrete events.
Patrick Caldon says
“If you do a million lots of 100 coin throws, you can state with some certainty that the average number of heads in each lot will be 50. But, if you only do one lot of 100 coin throws, you cannot predict with any amount of confidence how many heads there’s going to be in that particular lot of 100 throws. And we’ve only got one lot of throws to predict – one Earth, one climate system – and there’s an awful lot more variables than just heads or tails.”
Having done the arithmetic, I can state (with confidence) that if I toss a fair coin 100 times I will have 50 +/- 10 heads 97% of the time, and 50 +/- 5 heads 72% of the time.
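Patrick's two figures can be checked exactly rather than by simulation; a short Python sketch using only the standard library (the helper name is mine):

```python
from math import comb

def prob_within(n, k):
    """Exact P(|heads - n/2| <= k) for n tosses of a fair coin,
    summed directly from the binomial distribution."""
    lo, hi = n // 2 - k, n // 2 + k
    return sum(comb(n, i) for i in range(lo, hi + 1)) / 2 ** n

print(prob_within(100, 10))  # about 0.96 -- Patrick's "97%"
print(prob_within(100, 5))   # about 0.73 -- Patrick's "72%"
```

"Predicting the outcome" of the aggregate is routine even though no single toss is predictable.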
Peter says
SJT: “I can. You are going to get close to 50 heads and 50 tails.”
Bravo Sierra!!!
Peter says
Patrick: “Having done the arithmetic, I can state (with confidence) that if I toss a fair coin 100 times I will have 50 +/- 10 heads 97% of the time, and 50 +/- 5 heads 72% of the time.”
But you only have one time, one shot at it. And the results of that one time could be anything from 0 to 100. OK, so the probability is against it, but probability means nothing after the fact. There was a microscopically small probability of the Titanic sinking on her maiden voyage, but it happened.
Peter says
Models are, at best, misleading and, at worst, useless – unless all the parameters and processes are thoroughly understood and quantified, and the input data beyond reproach. Especially when feedbacks are assumed without even knowing their sign.
The Ptolemaic model of planetary motion, with its complicated epicycles, held sway for ~1500 years.
Nobody seriously questioned it because it agreed perfectly with observations. The Copernican model wasn’t accepted because it could not accurately predict the observed positions of the planets. The Ptolemaic model was subsequently shown to be completely wrong when Kepler came along.
gavin says
Our GCM is only a tool.
A trowel by itself cannot build a brick wall. However it takes very little creativity to work out where the trowel fits in, good or bad.
Peter says
Gavin: “Our GCM is only a tool. ”
Then what other tools do you have which predict a ~4C rise by 2100?
SJT says
“No. Theoretical coin tosses are stochastic. Real-world coin tosses always have biases. However, we suffer from an information asymmetry, in that we don’t know the: number of biases; their respective extents; nor their effects on the outcomes, in advance.”
Yes, the bias could be 49.5 to 50.5, it’s still damn close to 50. You are just nitpicking.
SJT says
“Models are, at best, misleading and, at worst, useless – unless all the parameters and processes are thoroughly understood and quantified, and the input data beyond reproach. Especially when feedbacks are assumed without even knowing their sign.”
Science would not exist without models. E=mc^2 is a model.
Jan Pompe says
SJT, E=mc^2 is a relationship between energy and mass. Schroedinger’s equation might be considered a model.
SJT says
It’s a model; all maths is a model. It’s our representation of something else.
SJT says
“but each simple event plays a part in the next simple event, so that over enough time the causation is blurred; for complex events like the climate the cause is beyond prediction to begin with; think Brownian motion taken to near infinity; GCM’s are not only theoretically wrong, they take the fun out of life as well.”
You still don’t get it.
Jan Pompe says
SJT “It’s a model, all maths is a model. It’s our representation of something else.”
Nothing quite like a fallacy of equivocation is there?
If that’s the way you want to look at it, then all language models life.
It’s what is being represented by the mathematical language that matters: it should be realised that Einstein’s equation represents a relationship between mass, energy and the speed of light, while Schroedinger’s wave equation represents a wave-like model for matter.
Gordon Robertson says
Louis said…”…if there were solid scientific evidence that CO2 was behind the AGW, then even the sceptics would be on board”.
Some of them really try. Michaels, the nemesis of Hansen in the old days, has offered that much of the warming is ‘probably’ due to the extra CO2. I like Christy’s acknowledgement that CO2 ‘should’ warm the atmosphere but his satellite data is not indicating that.
My favourite response comes from Tim Ball. “Rubbish”!!
Gordon Robertson says
Patrick Caldon said ….”…the equations for a GCM gas movement are derived directly from Newton’s laws, ideal gas laws, conservation of mass and energy…”
That’s even more amazing, then. My understanding is rather primitive, but I’d think the modelers would be working from a changing system, which implies the differential. From what you are saying, they are creating a small atmosphere and implying it models the real one. It seems they are playing God rather than observing what is going on.
Anyway, what’s all this about Newton? I’d think they would have applied the wave equation to it the way they are talking about probabilities. 🙂
Patrick Caldon says
Peter:
“But you only have one time, one shot at it. And the results of that one time could be anything from 0 to 100. OK, so the probability is against it, but probability means nothing after the fact, There was a microscopically small probability of the Titanic sinking on her maiden voyage, but it happened.”
You’re absolutely correct. However this doesn’t detract from my point, that the statement “It’s perfectly valid to use the argument that models cannot be relied on to predict short term weather, so how the heck can they predict long-term trends?” is nonsense. It is based on the idea that because we cannot predict the result of a single event, we cannot predict the results of a series of them.
There are many systems (e.g. coin tosses) where I cannot predict short term behavior with any accuracy (e.g. results of a single coin toss) but I can make a weaker statement about the results of many events with great accuracy.
You’re right, we only have one shot. So what? If I’m asked: what’s most likely to happen after a single run of 100 coin tosses? most people would say an answer like “50 +/- 10 heads will come up” captures the outcome pretty well. Many people would bet their super on it for instance.
In the AGW context, we’re not going to get absolute certainty for a one shot event. But so what? As cohenite (I think) was pointing out in his ramble above, there is a sense in which there are no future events with a probability of 1, we don’t know if the sun is going to rise tomorrow. We’ll get to a situation where enough people are prepared to “bet their super” (so to speak) on the outcome.
Gordon,
I’m not sure what you mean by “implies the differential”. The (partial differential) equations are transformed into difference equations and solved by finite difference analysis.
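As a toy illustration of what "transformed into difference equations" means, here is the 1D diffusion equation du/dt = alpha * d2u/dx2 under an explicit finite-difference scheme. This is a sketch of the numerical method only, nothing like a real GCM; the names and grid are illustrative:

```python
def diffuse(u, alpha, dx, dt, steps):
    """March the 1D diffusion equation du/dt = alpha * d2u/dx2 forward
    with the explicit update u[i] += r * (u[i+1] - 2*u[i] + u[i-1]),
    holding both ends fixed (Dirichlet boundaries).

    The scheme is stable only when r = alpha*dt/dx**2 <= 0.5.
    """
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme would be numerically unstable"
    u = list(u)
    for _ in range(steps):
        u = ([u[0]]
             + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                for i in range(1, len(u) - 1)]
             + [u[-1]])
    return u

# A hot spike in the middle of a cold rod smooths and decays over time.
u0 = [0.0] * 4 + [1.0] + [0.0] * 4
u = diffuse(u0, alpha=1.0, dx=1.0, dt=0.25, steps=50)
```

The differential equation itself comes from conservation of energy; the judgment calls are confined to discretisation choices like dx, dt and the scheme — which is Patrick's distinction between derived equations and guessed ones.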
Gordon Robertson says
Patrick Caldon…you were right regarding the ‘primitive equations’, however, they are a set of non-linear differential equations. Today’s science is amazing, you plug in the numbers and out comes the climate.
Here’s a critique of models by Koutsoyiannis:
http://www.itia.ntua.gr/en/docinfo/850
who concludes: “…model outputs at annual and climatic (30 year) scales are irrelevant with reality”. He also makes some interesting comments on the IPCC:
1)The GCM outputs of AR4, as compared to those of TAR, are a regression in terms of the elements of falsifiability they provide, because most of the AR4 scenarios refer only to the future, whereas TAR scenarios also included historical periods.
2)According to IPCC AR4 (Randall et al., 2007) GCMs have better predictive capacity for temperature than for other climatic variables (e.g. precipitation) and their quantitative estimates of future climate are particularly credible at continental scales and above. However this did not prevent IPCC to give regional projections, not only of temperature but also of rainfall and even of runoff.
Here’s a discussion of the paper at climateaudit:
http://www.climateaudit.org/?p=3086
SJT says
“Here’s a critique of models by Koutsoyiannis:”
He doesn’t understand the regional limitations of the current models.
SJT says
“However this did not prevent IPCC to give regional projections, not only of temperature but also of rainfall and even of runoff.”
Everyone wants to know how AGW will affect them. The IPCC gave its projections within its limitations.
http://www.kosisland.info/new_news.php?action=fullnews&id=167
He’s tilting at windmills.
Greece is having to import water to many of its islands due to prolonged drought.
“Charalampos Theopemptou, the Cypriot Commissioner for the Environment, insists the Mediterranean will be the worst affected by climate change because of the increase in temperatures and the decrease in rainfall.
“What we are seeing in Cyprus and the rest of the Mediterranean is extreme weather conditions and drought is one of them,” says Theopemptou.
“It is not just that we do not have enough rainfall to fill up our dams and rivers for irrigation but we are also seeing a 70 percent reduction in the replenishment of the aquifer and this has had a catastrophic effect on agriculture.”
In the height of the tourist season, many of the Greek islands, however, are not receiving the imported water the government has promised.
“There was an agreement to transport water twice a week during the entire month of August, but until now we have received only one shipment of water and this is clearly not enough,” the mayor of the tiny island of Nisyros, Nikos Karakonstantinos, was quoted by the daily newspaper Ta Nea as saying.”
gavin says
Peter may not have appreciated my brick wall analogy but that was a simple point of the kind I use all the time. We could have gotten into more arguments that included the basic spirit level and a host of other implements we employ to build a city but that would have assumed some practical experience on the part of posters here.
For me it all starts with a stick that we can use to draw plans in the sand. The same stick can be used to measure timber in the forest and so on. That’s how we eventually got round to using a crystal detector for radio communications then started searching for other signals with more complex devices.
In my day I was a master in promoting practical outcomes with measurement systems for a wide range of industries. I can say that most of these arguments against a warming world are just fancy rhetoric from diehard sections in the business world where profit growth is the only important outcome.
Several weeks ago I looked down on a familiar combination of sheltered and exposed coastlines from a 737 and concluded all the beaches had virtually disappeared under water. The view was the same a week later. There is no way this planet is cooling so get on your bike and start pedalling backwards.
DHMO says
I have read a great deal about the difficulties of surface-based temperature measurement. The only measurement we have that can be relied on is satellite. The record can be found here: http://woodfortrees.org/notes#wti GCMs do not come near matching it since 1979. For our purposes they are useless and we should not be discussing them in this way. It gives them credence that they might be able to foretell the future someday. How about we discuss the accuracy of the Bible? It is the same sort of argument.
SJT says
“I have read a great deal about the difficulties of surface based temperature measurement. The only measurement we have is satellite the can be relied on.”
It was models that made the satellite temperatures suspect, don’t forget. Satellites are an important record, but to discard one in favour of another is foolhardy.
Patrick Caldon says
Gordon,
If you take two GCM model runs of the same model with slightly different initial conditions you’ll get results similar to the differences between a single model run and what K found.
If you take two runs with slightly different initial conditions of a numerical simulation of an aircraft flying and examine (i.e. graph) the wind direction/pressure in a turbulent region you’ll get vastly different answers.
You can still show the aircraft flies with numerical methods.
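Patrick's aircraft point — individual trajectories diverge while aggregate statistics stay stable — can be reproduced in a few lines with the logistic map, a standard chaotic toy system (nothing to do with any real GCM): two runs whose starting points differ in the ninth decimal place end up in completely different states, yet their long-run means agree.

```python
def logistic_map(x0, r=4.0, n=100_000):
    """Iterate the chaotic logistic map x -> r*x*(1-x).
    Returns (long-run mean of the trajectory, final state)."""
    x, total = x0, 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += x
    return total / n, x

mean_a, xa = logistic_map(0.2)
mean_b, xb = logistic_map(0.2 + 1e-9)  # a ninth-decimal perturbation

print(abs(xa - xb))          # final states: wildly different
print(abs(mean_a - mean_b))  # long-run means: nearly identical
```

Which single state the system ends up in is unpredictable; the statistics of where it spends its time are not.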
Louis Hissink says
Gavin: “Several weeks ago I looked down on a familiar combination of sheltered and exposed coastlines from a 737 and concluded all the beaches had virtually disappeared under water. The view was the same a week later. There is no way this planet is cooling so get on your bike and start pedalling backwards.”
Unsubstantiated rhetoric.
gavin says
“Rising sea levels threaten buildings
Posted Wed Jun 4, 2008 –
Roches Beach at Lauderdale is one of the erosion hotspots. (ABC News: Phil Long)
Map: Lauderdale 7021
A geoscientist has told an inquiry that some of the state’s coastal buildings will be under water in the future if there is no action taken to stop coastal erosion.
A Joint Tasmanian Parliamentary Committee is meeting in Hobart as part of its investigation into coastal erosion and its link to rising sea levels.
Geoscientist Chris Sharples recommended that the committee halt further development around coastal towns prone to erosion.”
http://www.abc.net.au/news/stories/2008/06/04/2264806.htm
gavin says
“The discovery of 160 year old records in the archives of the Royal Society, London, has given scientists further evidence that Australian sea levels are rising.
Observations taken at Tasmania’s Port Arthur convict settlement 160 years ago by an amateur meteorologist have been compared with data from a modern tide gauge”
http://www.sciencedaily.com/releases/2003/01/030122072142.htm
gavin says
December 02, 2007 12:00am
“UNPRECEDENTED high tides are forcing scores of shorebirds to seek refuge on the main road into South Arm, where they are ending up as road kill”
http://www.news.com.au/mercury/story/0,22884,22855068-3462,00.html.
Louis Hissink says
Oh The John Hunter – John Daly issue.
And newspaper reports – by a media that refuses to report anything but climate alarmism.
Glib rhetoric I am afraid.
gavin says
Louis: We can invite anyone to google “rising sea level tasmania 2008” and see why you are also full of crap
Patrick Caldon says
Damn, that should read “wind direction/pressure at a particular point in the turbulent region” above. Sorry, it’s misleading as written.
gavin says
“Over 20 per cent of the Tasmanian coastline will be at risk from sea level rise and more severe storm surges associated with climate change.
Within the next 50-100 years, 21 per cent of Tasmania’s coast is at risk of erosion and recession from sea-level rise, affecting 17,000 coastal buildings”
http://www.climatechange.gov.au/impacts/publications/fs-tas.html
that’s the official view Louis
Gordon Robertson says
SJT said…”It was models that made the satellite temperatures suspect, don’t forget. Satellites are an important record, but to discard one in favour of another is foolhardy”.
Sources please. I gave you the source at UAH in which they explained quite clearly that UAH found three of the errors and the RSS team provided the other.
It was Trenberth who brought up the problem in the tropics, and if you want to credit that to a model, that’s up to you. However, that error was on the order of 1/10th °C. Why don’t you address the error of more than 1.0 °C between the model-predicted atmospheric temperature and the surface? That’s the real issue that the AGW crowd and the IPCC avoid. In other words, the atmosphere is supposed to be warmer than the surface, but it is cooler by a long shot.
Gordon Robertson says
Test…I’m having trouble with this post only. I am posting in two parts to see if one part has formatting the system doesn’t like.
Part 1
SJT said…”Greece is having to import water to many of its islands due to prolonged drought”.
New buzzword…Arctic Oscillation
http://nsidc.org/arcticmet/patterns/arctic_oscillation.html
This site gives an interesting take on it:
The positive polarity of the Arctic Oscillation is characterized by a strengthening of the polar vortex from the surface to the lower stratosphere. The stronger polar vortex of the Arctic Oscillation’s positive phase brings cool winds across eastern Canada, while North Atlantic storms bring rain and mild temperatures to northern Europe and drought conditions prevail in the Mediterranean. Stronger trade winds result in the eastern Atlantic.
During the negative polarity of the Arctic Oscillation, the weaker polar vortex allows cool continental air to plunge into the Midwestern United States and Western Europe, while storms bring rainfall to the Mediterranean region. Weaker trade winds result in the eastern Atlantic.
SJT says
“Sources please. I gave you the source at UAH in which they explained quite clearly that UAH found three of the errors and the RSS team provided the other.”
They found the errors after publishing a paper claiming they were right and the models were wrong, with the global temperature being lower than was claimed. The new version of the paper has a vastly reduced scope, the tropical troposphere.
Louis Hissink says
Gavin: “That’s the official View, Louis”.
And to be ignored from a scientific perspective.
Louis Hissink says
Gavin: “why you are also full of crap”
Gavin, you have lost the point by descending to ad hominems, not having anything else left to throw at the stage.
Gordon Robertson says
SJT said…”They found the errors after publishing a paper claiming they were right and the models were wrong, with the global temperature being lower than was claimed. The new version of the paper has a vastly reduced scope, the tropical troposphere”.
Still waiting for your sources. You’re just parroting what I’m saying, which is rather troll-like behavior.
DHMO says
SJT, you’re joking; an actual record is not to be considered important because a virtual-reality GCM disagrees! I notice Trenberth is mentioned here a number of times. I suggest that anyone who thinks he is credible look at the disagreement between him and Landsea. Solomon gives the details in his book The Deniers. Personally I would quote Trenberth on anything.
DHMO says
I meant.
Personally I would NOT quote Trenberth on anything.
SJT says
“Still waiting for your sources. ”
http://www.realclimate.org/index.php/archives/2005/08/et-tu-lt/
cohenite says
Will Robinson; you are being especially twisty on this thread; raising the RSS/UAH discrepancy is really rolling under the culvert; it’s of no consequence; corrected and moved on. Why don’t you critique the Fu paper on tropospheric warming and stratospheric cooling? You also link to RC, who feature a Sherwood paper on radiosonde deficiencies; this is really crook; this is the same Sherwood who, with Allen, wrote one of the 10 worst pro-AGW papers of all time (to use Garnaut’s terminology), about thermal winds; in that paper the radiosonde data was rejected for temperature but accepted for the purpose of deducing that thermal shearing was present, so that model predictions about temperature could be made which contradicted the radiosonde temperature data. And to top it off, you are repudiating Koutsoyiannis because the GCMs are not set up to produce regional climate predictions! Will Robinson, that does not compute! And if, as you assert, a coin toss is stochastic, how would you make it predictable by Hurst scaling?
Gordon Robertson says
SJT said…”Still waiting for your sources”.
Come on man, you’re citing a mathematician (Schmidt at RC) who programs models and taking his word over two atmospheric scientists (Christy & Spencer) who operate the satellites. Not only that, he cites a computer programmer (Connolley) who gives his interpretation of the satellite data.
In the same article, he refers the reader to a site lampooning Fred Singer, who has more credibility in atmospheric physics in his fingernail than Schmidt has in his entire body. At least Singer had the good humour to show up and accept his Flat Earth Award and joke about it. That’s the sign of an intelligent individual.
I can just picture you sitting in the Church of RC, in the front pew, with your mouth agape, taking in every bit of drivel that Schmidt and Connolley can stuff into you.
There’s no point attacking them ad hominem, so I’ll tell you what I’m on about. Schmidt acknowledges that the satellite data and sonde data back each other up. He doesn’t say they are wrong, he implies it. As I put it to you before, what are the chances that both are in agreement and in error overall? Do you understand? If the satellite has grievous errors, and the sonde has them as well, what are the chances that both err in exactly the way needed to agree with each other? It’s not only highly unlikely, it did not happen. The record is sound.
As Lindzen points out, why was the satellite data so untouchable until the IPCC and the AGW crowd needed to get rid of it? Christy was given an award for the data-capture machinery, and the data has been highly useful to others.
The MSUs on the satellites read correctly, as do the thermometers on the sondes. That’s not the issue. The problems have been slight variations in things like orbit, and discontinuities in the record when satellites are changed and sondes are updated. We’re talking minuscule errors, which Schmidt steers away from in his typical ad hoc presentation. In other words, he’s pulling the wool over your eyes with one hand while he strokes your ego with the other.
Please tell me you’re not that naive. Show me anywhere in the article where Schmidt or Connolley address the issues.
The main issue is that the atmospheric warming trend stopped in 1998. Schmidt was furiously trying to come up with reasons for that, last I heard. Do you think he’d be jumping through those hoops if the satellite data were wrong? Schmidt knows the 0.25 °C average over the past ten years is accurate, and the IPCC admitted that the satellites had been corrected before the decadal data indicating that came out.
So, after all the corrections, the satellite data is indicating the atmosphere is cooler than the surface, by a long shot. That’s how things should be, temperatures decrease with altitude, in general. It was the model theory that claimed a hot spot in the atmosphere and it’s not there.
When Schmidt talks about the satellites indicating a warming in the atmosphere that is in line with what the models predicted, he is understating the fact completely. He’s also working you. How long do we have to wait for that promised warming that will make the atmosphere warmer than the surface?
The warming trend has gone flat, and the May 2008 study (Keenlyside et al) warned us not to expect any warming till 2016, due to the oceanic oscillations. Another study, by Tsonis et al, who are mathematicians as well, collated the data from all the oceanic oscillations and found they work together for decades, then work against each other. Lindzen claims the oscillations alone can account for global warming.
What Keenlyside missed is that those oscillations are likely the cause of global warming, not CO2.
Gordon Robertson says
DHMO said…”Personally I would quote Trenberth on anything”.
The guy is an enigma. He was the professor in charge of John Christy’s graduate studies, yet the two of them disagree on the climate. Trenberth seems to twist in the wind. He comes out and says the climate science is not settled, that it’s only beginning, yet he is an advocate of the AGW theory. He even said models are not reliable, then he forces Landsea to resign. It’s incredible…he asked Landsea to join AR4 as a hurricane expert, then he goes to an event and publicly declares that severe hurricanes are due to global warming.
I see Trenberth’s hand in AR4 alright. I think the dubious wording that the satellites and sondes had been corrected, and are now in step with surface warming, is either his doing or a compromise. I get the feeling he has an axe to grind with Christy. Could it be professional jealousy…his former graduate student is in the limelight over his satellite fame? Who knows?
Gordon Robertson says
Just a follow up to my previous post to SJT. Here’s some information on the Keenlyside study of 2008:
http://www.sciencenews.org/view/generic/id/31635/title/Heat_relief
It states: “The new model developed by Keenlyside and his colleagues differs from others because during its run-up period, the researchers constantly adjust the sea-surface temperatures predicted by the simulation to match those actually observed by oceanographers and satellites. Other models don’t perform such adjustments, Keenlyside notes. The researchers validated their new model using extensive ocean temperature and weather data gathered worldwide between 1955 and 2005”.
Now why do you suppose Keenlyside used satellite data? Is he a heretic? They seem to think so over at RC (or do they use it as well?), although they don’t quite seem to know how to deal with his study. On the one hand, it gives them breathing room till 2016 to come up with more excuses as to why the warming trend has stopped. On the other, they have to admit to natural influences, besides CO2, controlling the climate and warming. Quite a conundrum, I’d say.
As if that’s not bad enough, in the same article, someone from Hadley drops them right in it:
“Although the team’s model predicts climate well in some regions, in others it performs rather poorly, says Richard Wood, a climate modeler at the Met Office Hadley Centre in Exeter, England. “There’s a long way to go before a climate model can produce accurate results in all regions,” he notes”.
Doh!! Isn’t that what my earlier post from Koutsoyiannis alleged, that models are decent at predicting continent-wise but kind of useless regionally?
BTW…what exactly is a continental prediction? Is that like a global temperature?
Gordon Robertson says
This is a piece about James Hansen, the head of NASA GISS where Gavin Schmidt of realclimate works as a climate modeler:
http://www.cato.org/pub_display.php?pub_id=9510
I ask a simple question. Do you think Schmidt would be employed at GISS for long if he disagreed with Hansen?
According to this article by Michaels, and he should know since he has been opposing Hansen almost single-handedly since the 1980’s, Hansen has made some grievous errors in climate predictions along the way.
Michaels says of him, “Hansen’s 1988 predictions were flatly wrong about the extent of global warming. Yet on the 20th anniversary of his original testimony, Hansen said that people “should be tried for high crimes against humanity and nature” for spreading doubts about the promised global warming holocaust. He named names, too: the CEOs of ExxonMobil and Peabody Energy”.
I’m not claiming Schmidt shares Hansen’s politics but it seems reasonable to me that much of the current scientific knowledge he espouses comes from Hansen’s lab. I don’t see how a mathematician could come up with such theories on his own.
If Hansen has been as wrong as Michaels claims, then how right can Schmidt and his RC cronies be?
DHMO says
Gordon I corrected that I said
“Personally I would NOT quote Trenberth on anything.”
SJT says
“and to top it off you are repudiating Koutsoyiannis because the GCM’s are not set up to produce regional climate predictions! ”
He criticises them for their limitations in modelling regional changes, when it has never been claimed they could do that, given the limitations in computer power to date.
SJT says
http://en.wikipedia.org/wiki/Satellite_temperature_record
“Since 1979, Microwave Sounding Units (MSUs) on NOAA polar orbiting satellites have measured the intensity of upwelling microwave radiation from atmospheric oxygen. The intensity is proportional to the temperature of broad vertical layers of the atmosphere, as demonstrated by theory and direct comparisons with atmospheric temperatures from radiosonde (balloon) profiles. Upwelling radiance is measured at different frequencies; these different frequency bands sample a different weighted range of the atmosphere.[11] Channel 2 is broadly representative of the troposphere, albeit with a significant overlap with the lower stratosphere (the weighting function has its maximum at 350 hPa and half-power at about 40 and 800 hPa). In an attempt to remove the stratospheric influence, Spencer and Christy developed the synthetic “2LT” product by subtracting signals at different view angles; this has a maximum at about 650 hPa. However this amplifies noise,[12] increases inter-satellite calibration biases and enhances surface contamination.[13] The 2LT product has gone through numerous versions as various corrections have been applied.
Records have been created by merging data from nine different MSUs, each with peculiarities (e.g., time drift of the spacecraft relative to the local solar time) that must be calculated and removed because they can have substantial impacts on the resulting trend.[14]
The process of constructing a temperature record from a radiance record is difficult. The best-known, though controversial, record, from Roy Spencer and John Christy at the University of Alabama in Huntsville (UAH), is currently version 5.2, which corrects previous errors in their analysis for orbital drift and other factors. The record comes from a succession of different satellites and problems with inter-calibration between the satellites are important, especially NOAA-9, which accounts for most of the difference between the RSS and UAH analyses [15]. NOAA-11 played a significant role in a 2005 study by Mears et al. identifying an error in the diurnal correction that leads to the 40% jump in Spencer and Christy’s trend from version 5.1 to 5.2.[16]
For some time, the UAH satellite data’s chief significance was that they appeared to contradict a wide range of surface temperature data measurements and analyses showing warming in line with that estimated by climate models. In April 2002, for example, an analysis of the satellite temperature data showed warming of only 0.04 °C per decade, compared with surface measurements showing 0.17 +/- 0.06 °C per decade. The correction of errors in the analysis of the satellite data, as noted above, have brought the two data sets more closely in line with each other.”
So what version are they up to now? 5.2? The satellite data is important, but don’t think it hasn’t had to be adjusted and tuned over time, like the surface record, and like the models.
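One of the adjustments mentioned above, the diurnal correction, is easy to illustrate with a toy calculation (all numbers invented, not real MSU data): if a satellite's observation time drifts through the day over the years, its raw readings pick up part of the daily temperature cycle as a spurious "trend", even when the underlying climate is perfectly flat.

```python
import math

def true_temp(local_hour):
    """A flat climate with only a daily cycle: warmest mid-afternoon."""
    return 15.0 + 3.0 * math.sin(2 * math.pi * (local_hour - 9.0) / 24.0)

days = list(range(0, 3650, 10))          # roughly ten years of samples
# The observation time drifts from 14:00 towards 17:00 over the decade.
raw = [true_temp(14.0 + 3.0 * d / 3650.0) for d in days]
fixed = [true_temp(14.0) for d in days]  # after correcting to a fixed hour
spurious = raw[-1] - raw[0]              # nonzero despite a flat climate
```

Removing artefacts like this is what the jump between product versions is about; the correction changes the derived trend without any new measurement being taken.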
SJT says
“Doh!! Isn’t that what my earlier post from Koutsoyiannis alleged, that models are decent at predicting continent-wise but kind of useless regionally?”
Koutsoyiannis could have just asked the modellers themselves; they would have told him that and saved him the trouble of writing up his redundant paper. The resolution of the models to date has been limited by the hardware. New supercomputers should allow much finer resolutions, and much improved modelling of clouds.
SJT says
“The MSU’s on the satellites read correctly as do the thermometers on the sondes. ”
You need to read the Wikipedia article on how the satellite data is used to create a temperature reading. It’s quite complex and indirect, and no more reliable than the process used to create a surface temperature record.
Lazlo says
Ah… supercomputers (beer). Faster processing of garbage. But, SJT, carbon neutral?
Lazlo says
SJT: ‘you need to read the article on wikipedia..’ Are you serious about this unadulterated dross…?
Lazlo says
‘The resolution of the models to date has been due to limitations of the hardware.’ Pure crap… the resolution of their BS assumptions to a point they can be re-ingested up their cavities.
Peter says
SJT: “Greece is having to import water to many of it’s Islands due to prolonged drought. etc etc”
So what’s new? Greece has suffered from severe droughts since ancient times.
Peter says
SJT: “Science would not exist without models. e=mc2 is a model.”
How droll! You know full well what’s meant by ‘model’ in this context.
Peter says
Patrick: “In the AGW context, we’re not going to get absolute certainty for a one shot event”
But, unlike a series of coin tosses, where the outcome of each one is completely independent of the others, climatic events are highly dependent on what went before. As such, the climate over the next century, say, can be likened to one very long coin toss, with the coin taking 100 years to land. Unless you have a thorough understanding of exactly how the coin is going to spin, a highly accurate measurement of the starting conditions, and a highly accurate quantification of all the factors which will affect the spin, it’s impossible to predict which way it will land.
Peter says
Gavin: “Peter may not have appreciated my brick wall analogy”
A trowel is a trowel is a trowel. No mistaking one and what it does. A trowel is a well-designed tool which does exactly what it says on the box. A trowel does not make mistakes. It does not mislead.
A GCM is.. well, er….
DHMO says
SJT: “you need to read the article on wikipedia on how the satellite data is used to create a temperature reading. It’s quite complex and indirect, and no more reliable than the process used to create a surface temperature record”. In your dreams. You obviously know nothing about the problems of measurement using thermometers in white boxes. Surveys have been done of these in the USA. The system was created by weather bureaus for short-term gross accuracy; the readings must be considered unreliable at the level of tenths-of-a-degree anomalies. Look, take a sabbatical and really study GCMs; their application is nonsense. You believe in them as an extreme religious fanatic believes in the bible or god.
gavin says
Peter: “A trowel is a trowel is a trowel. No mistaking one and what it does. A trowel is a well-designed tool which does exactly what it says on the box. A trowel does not make mistakes. It does not mislead.
A GCM is.. well, er….”
I repeat; Peter may not have appreciated my brick wall analogy. Firstly, a trowel does not come in a box with specifications or instructions. Sure, it may be well designed for a particular job, but that won’t stop it being misused or abused.
Re: “A trowel does not make mistakes. It does not mislead. A GCM is.. well, er..”, just like my trowel, Peter.
I am now sure Peter is inexperienced in practical matters when it comes to handling (any) tools.
Likewise, it seems DHMO suffers from a lack of experience with thermometers. These things too, like well-designed trowels, were only ever a guide.
cohenite says
Will; your comments are indicative of a malaise in regard not just to AGW but to the ‘technology’ behind this flawed ideology; it is plain that the disciples of AGW regard the GCMs as oracles or avatars; this is sad, but I don’t see why the rest of us should pay the price for the willful delusions of others. You persist in misrepresenting Koutsoyiannis; his study replicated what TAR and AR4 had done, which was to make regional predictions; and you can’t take refuge in the idea that the GCMs are better at predicting global trends; that is just a contradiction in terms, because the global climate is a myth, predicated on the notions of average global temperature, radiative balance and the semi-infinite atmosphere; these are all failed concepts, and the GCMs cannot overcome the fact that even a fully understood dynamical system (and the climate system is nowhere near being understood) is unpredictable over long-term horizons; when you marry that with incorrectly assumed dynamics, such as how much CO2 can ‘heat’ and the enhanced greenhouse, it does not matter how many extra petaflops you add; all that will be produced is junk. Expensive junk.
TheWord says
Gavin said:-“I repeat; Peter may not have appreciated my brick wall analogy. Firstly, a trowel does not come in a box with specifications or instructions. Sure it may be well designed for a particular job but that won’t stop it being missused or abused.”
Are you being purposely obtuse? Or is this just some innate gift?
SJT says
“In your dreams obviously you know nothing about the problems of measurement using thermometers in white boxes. ”
I appreciate the problems of surface measurements; I don’t think many appreciate the problems of satellite measurements. The troposphere temperatures are the result of interpolation and manipulation of raw data, just as the surface measurements are.
gavin says
Gift? Call it what you like, but I have to try much harder on Jen’s blog to get traction with concepts, measurements and personal perspectives than I ever had to in industry technology, science and research.
Temperature change in fluids is about energy levels. As individuals we read that best via some form of transducer and we communicate our observations via some standard form.
Last weekend I sold a barometer with a large dial and set point made in Japan. It also had a separate alcohol thermometer and a hygrometer. All in all this was a fine instrument in its day but we can be amused by the “change” predictions as they were originally calibrated.
Routine environmental monitoring is something I once did well, with a sling psychrometer, every day and often in extreme conditions where drying or saturation rates were critical. The science of moisture movement in a variety of manufactured products depended on it.
The applied science of measurement is primarily a craft that many posters here simply don’t understand.
gavin says
Thinking back, I can also say gas flow, mass flow etc. is a bit like calibrating the pinch point in a stockyard, where every sheep that jumps out of the mob can be counted directly, for a time at least.
There are many pinch points, random samples and transducers employed in our calculations today, but none are absolute.
TheWord says
gavin,
As someone who says they appreciate and understand the subtleties of measurement, I can only assume that you don’t fully understand what’s going on in relation to the abuse of measurement by the AGWing crowd.
Either that, or you’ve drunk so deeply of the green kool-aid that you’re having trouble reading the barometer.
SJT says
“that is just a contradiction in terms because the global climate is a myth, predicated on the notions of average global temperature”
It’s not a myth, it’s whatever you define it to be. Once you have defined it (if you have half a brain and have made a decent definition), you can then see what changes are happening. It has limited use, but it is a good ‘dashboard’ instrument for getting the big picture of what is happening. If it goes up, it’s because your sampling is detecting a rise in more areas than falls; likewise if it goes down.
Plenty of skeptics seem happy to use averages. Christy and Spencer give us an average for the troposphere, but the troposphere isn’t all the same temperature. What is the “tropical troposphere” temperature, for example? It’s not all one temperature; it varies with height, distance from the equator, season, and time of day.
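The "define it, then track it" point can be made concrete with a small sketch. One common definition of a global (or tropospheric) mean weights each latitude band by its area, which is proportional to cos(latitude). The band temperatures below are invented purely for illustration.

```python
import math

def global_mean(bands):
    """bands: dict mapping latitude (degrees) -> mean temperature of band.
    Weight each band by cos(latitude), proportional to the band's area."""
    num = sum(t * math.cos(math.radians(lat)) for lat, t in bands.items())
    den = sum(math.cos(math.radians(lat)) for lat in bands)
    return num / den

# Toy bands: warm tropics, cold poles.
bands = {-80: -30.0, -40: 10.0, 0: 26.0, 40: 12.0, 80: -25.0}
weighted = global_mean(bands)              # area-weighted mean
naive = sum(bands.values()) / len(bands)   # unweighted mean
# The naive average is dragged down by the small-area polar bands.
```

Whether such an index is useful is exactly the argument in this thread, but the arithmetic itself is only a weighted average: change the definition (the weights, the bands sampled) and you change the number.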
Will Nitschke says
Cohenite: “From AR4, chp 2 Executive Summary. p131;”
What about “I must not distort the facts, I must not distort the facts” 🙂
Please reread my original post re: “AGW Scientists”. The Executive Summary wasn’t peer reviewed, the last time I checked. I can quote some AGW alarmist on a podcast somewhere declaring that sea levels will rise by 1000 metres in 100 years (the Science Show on ABC, for instance), but the person making that statement is not quoting peer-reviewed scientific research.
cohenite says
The tropical troposphere is at a height of about 300mb, spanning the northern and southern tropical latitudes. Its temp is predicted by AGW and the IPCC to increase at a faster rate than the surface temp, on the assumption of a static level of RH; RH has declined 21.5% at this level since 1948; hence no EG (enhanced greenhouse), or temp increase.
SJT says
“The tropical troposphere is at a height of 300mb above the north and south tropical latitude extent. Its temp is predicted by AGW and IPCC to increase at a faster rate than the surface temp on the assumption of a static level of RH; RH has declined 21.5% at this level since 1948; hence no EG, or temp increase.”
But it’s not all the same temperature at that level, is it?
cohenite says
Will; two things. Will is SJT, so named in honour of Will Robinson from Lost In Space; to save confusion I will address you as Mr Nitschke. In respect of your assertion that the Executive Summary is not peer reviewed, I think Professor Karoly would disagree with you.
Will Nitschke says
It’s been my observation that ‘Deniers’ often claim that the Executive Summary is not pure science but a mix of science with activist and political elements, and hence lacks credibility in many respects.
However, when it suits the argument, everything in the Executive Summary apparently meets the high standards of a proper review process; hence some of its less-than-credible claims must be taken with full seriousness as exactly what the scientists in this field claim to be asserting.
Gordon Robertson says
SJT said…”The troposphere temperatures are the result of interpolation and manipulation of raw data, just as the surface measurements are”.
There’s a big difference. Satellites cover 95% of the Earth, so their average is far more accurate. If you’re going to live by averages, which suits the modellers, you can’t get more accurate than the satellite data.
As Singer said, he’ll go with the accuracy of the satellites over the questionable surface temperatures. As far as I’m concerned, the satellite data is all we need. It tells us the planet has barely warmed in a century, and there’s no need for all the carbon nonsense.
The IPCC, run by environmental activists, doesn’t like that.
Gordon Robertson says
SJT said…”you need to read the article on wikipedia on how the satellite data is used to create a temperature reading”.
I already told you what I think of William Connolley, except that in the spirit of keeping it clean on this blog I am not able to express myself fully. Connolley is an editor on Wikipedia, and he ensures that nothing gets on there re climate that wouldn’t show up on realclimate.org. I have read several of his distortions and refuse to read anything Wikipedia has to say about the subject.
SJT says
“There’s a big difference. Satellites cover 95% of the Earth so their average is far more accurate. If you’re going to live by averages, which suits modelers, you can’t get more accurate than the satellite data.”
Which, as you have now learned, has its own problems. The satellite data is not a direct reading of the troposphere, as the satellites can only read the troposphere through the higher levels of the atmosphere. What they have to do is take readings at different angles and manipulate the data to arrive at an approximation of the temperature. I think it’s wonderful they can do that, but the satellites and weather balloons are not some source of impeccable data that is pristine in its raw state.
If there are any issues you think the wiki article is incorrect on, please point them out. From what I can tell, it seems to be a good representation of the issue.
gavin says
TW thought he had my number too with that kool-aid crack; however, my next comment will apply to others as well who think they can be flippant at my expense.
Consider for a moment the difficulties of measuring the moisture content (RH) in a fierce recycled air stream inside A/C ducting, right beside the heat exchanger, with a hand-held thermometer, then calculating the drying rate after measuring air speed with a pitot tube. Let’s say our instruments are never steady. Note too that these manual instruments are placed in the turbulent environment being measured, and the heat source can’t be touched either in this case. Also, heat calculations may not give accurate evaporation rates at the other end of the system, due to various lags or losses.
Why those manual spot checks? Automation quickly fails where it can’t be maintained in a hazardous environment.
Next, consider your satellite data and work out how the instruments are calibrated to read ST and SST from a great distance, while remembering that the surrounding surface air moves constantly. As SJT points out, these are only indirect measurements. I say our raw satellite data must be constantly interpreted too, and in the end it will reflect the old surface station temperatures, as there is no other reference system.
The claim that satellites support anti-AGW arguments is premature. These newer instruments have only been around for a very short time. Sure, their coverage is probably better, but in my book they depend on, rather than negate, the older information.
Patrick Caldon says
Peter,
“But, unlike a series of coin throws – the outcome of each one being completely independent of the others, climactic events are highly dependent on what went before. As such, the climate over the next century, say, can be likened to be one very long coin toss – the coin taking 100 years to land. Unless you have a thorough understanding of exactly how the coin’s going to spin, a highly accurate measurement of the starting conditions, and a highly accurate quantification of all the factors which will affect the spin, it’s impossible to predict which way it’s going to land.”
Peter, that’s just wrong. We can make lots of statements about the climate in 100 years’ time with close to absolute certainty: e.g. Hadley cells will exist, the poles will be cooler than the tropics, the tropopause will exist, and any number of statements in a similar vein. On the tropopause, the argument is not trivial induction (i.e. the tropopause has always existed, therefore it must always exist, much like the sun rising); what tells us it will be there in 100 years is modelling, using exactly those equations that we use in a GCM. It’s not like tossing a coin at all. The essentially random fluctuations happening in the meantime are a better fit for a coin-tossing analogy.
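Patrick’s distinction between unpredictable individual events and stable long-run structure is the standard behaviour of chaotic systems. A minimal sketch using the logistic map (a textbook chaotic system, standing in for nothing climate-specific):

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a standard toy chaotic system."""
    return r * x * (1.0 - x)

# Sensitivity: two starting states a hair apart become completely different.
x, y = 0.3, 0.3 + 1e-10
max_gap = 0.0
for _ in range(100):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))
print(f"max divergence: {max_gap:.3f}")   # the 1e-10 difference grows to order one

# Yet long-run statistics stay stable: for r = 4 the time average of any
# typical trajectory settles near 0.5, regardless of the starting point.
def time_mean(x0, n=200_000):
    x, total = x0, 0.0
    for _ in range(n):
        x = logistic(x)
        total += x
    return total / n

m1, m2 = time_mean(0.3), time_mean(0.7123)
print(f"time means from two different starts: {m1:.3f}, {m2:.3f}")
```

The individual trajectory is unpredictable (the coin toss); the statistical behaviour is not, which is the sense in which statements like “the tropopause will exist” are defensible.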
Gordon Robertson says
SJT…OK. I held my nose and took a look at your Wiki article. Again…William Connolley is a computer programmer. He has no business editing valid scientific work done by people like Spencer and Christy because he thinks they are wrong.
Firstly, look at the graph on page 1 of the Wiki article. It is plainly wrong; it has been modified. The warming trend shown for UAH is what Connolley thinks it should be, not what it is. Besides, much of the discussion under the ‘Discussion’ tab on the site is from 2005/2006.
Here’s the real graph…once again:
http://www.weatherquestions.com/Roy-Spencer-on-global-warming.htm#satellite-temps
See Figure 9 near the bottom of the page. Now show me the linear trend line with the positive slope that is shown from 1979 to 2008 on the Wiki graph. What I’m seeing is a flat trend from 1979 until the major El Niño event of 1998. After that there was a slight average warming, which has levelled off through 2008.
On this page:
http://www.worldclimatereport.com/index.php/2008/01/08/musings-on-satellite-temperatures/
“The trend in the UAH derived temperatures of the earth’s lower atmosphere for the most recent 10-year period (January 1998 through December 2007) is a positive 0.04ºC/decade (although it is not statistically significant)”.
That’s a flat trend over 10 years and that is not what is shown by Connolley, the computer programmer, on the Wiki graph. If you doubt the veracity of Spencer, Christy and Michaels, remember they are all bona fide atmospheric scientists while Connolley is not.
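For readers wanting to see what “positive but not statistically significant” means in practice, here is a sketch of the standard least-squares trend test on a synthetic ten-year monthly series. The 0.2 °C noise level and the series itself are illustrative assumptions, not the actual UAH data:

```python
import math
import random

def ols_slope_t(y):
    """OLS slope of y against time index 0..n-1, plus its t-statistic."""
    n = len(y)
    xs = range(n)
    xbar, ybar = (n - 1) / 2.0, sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (v - ybar) for x, v in zip(xs, y))
    slope = sxy / sxx
    resid = [v - ybar - slope * (x - xbar) for x, v in zip(xs, y)]
    se = math.sqrt(sum(r * r for r in resid) / ((n - 2) * sxx))
    return slope, slope / se

random.seed(0)
months = 120                     # a ten-year window of monthly anomalies
trend_per_month = 0.04 / 120.0   # a 0.04 C/decade trend, as quoted
series = [trend_per_month * m + random.gauss(0.0, 0.2) for m in range(months)]
slope, t = ols_slope_t(series)
print(f"fitted trend {slope * 120:+.3f} C/decade, t-statistic {t:.2f}")
```

When |t| sits well below ~2, the fitted trend is statistically indistinguishable from zero over a window this short, which is exactly the caveat in the quoted passage.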
This page:
http://www.worldclimatereport.com/index.php/2008/02/07/more-satellite-musings/
shows the RSS and UAH data completely in step. Yes, there were slight errors in both data sets, but the people who make the data sets are the experts, not the modelers at RC who want them modified. Connolley insinuates in the ‘Discussion’ tab that the RSS data set supports the RC POV. Maybe it did before it was corrected downward, but it doesn’t anymore.
With reference to the ‘Discussions’ tab, Connolley makes this statement about Fred Singer: “Yep, thats part of the hopelessly biased bit”. In case you missed it, this is a computer programmer associated with realclimate.org calling an atmospheric scientist biased. Connolley seems to have a hate-on for Singer, possibly because Singer is eminently more qualified than Connolley.
Later under the ‘Discussions’ tab, Connolley snidely responds to this query: “It would be useful if someone could better explain why and how the satellite data was adjusted”.
Connolley responds, “You’re suggesting we try to explain how come Spencer and Christy managed to extract a cooling trend for so long from a dataset that when properly examined shows warming? Well we could… William M. Connolley”.
Once again, we have a computer programmer talking down his nose at two eminently qualified atmospheric physicists. He’s talking as if major adjustments were made to the satellite data record when in fact the adjustments were on the order of a tenth of a degree C. And who is the “we” he refers to? Are he and his RC buddies now the authority?
A read through the “Discussions’ tab of the Wiki article shows how smug and arrogant Connolley can be. He has nothing to be arrogant about, he’s a computer programmer.
RC in general has nothing to offer. They have the geologist Michael Mann whose hockey stick graph was demolished. They have Schmidt and Connolley, computer modelers with backgrounds in math and computer science respectively. They have Rahmstorf, a physicist who studies the oceans. He embarrassed himself in a debate with Lindzen, but he’s the closest they come to actual expertise in a science related to the atmosphere. The rest are geophysicists and the likes.
You live in a small world SJT. Why not get out more and give RC and Wiki a rest?
Gordon Robertson says
SJT said…”the satellites and weather balloons are not some source of impeccable data that is pristine in its raw state…”
Where do you think the GCMs get their raw data? Do you think they guess at what’s going on in the atmosphere? They use satellite and sonde data to start off their contraptions then come back and claim it is wrong.
Gordon Robertson says
DHMO said…”Gordon I corrected that I said “Personally I would NOT quote Trenberth on anything.”
I noticed. 🙂 I was going to correct it for you when I replied but I didn’t think it was a big deal. Anyone with basic intelligence knew what you meant.
SJT says
“shows the RSS and UAH data completely in step. Yes, there were slight errors in both data sets, but the people who make the data sets are the experts, not the modelers at RC who want them modified. Connolley insinuates in the ‘Discussion’ tab that the RSS data set supports the RC POV. Maybe it did before it was corrected downward, but it doesn’t anymore.”
If you look at the next figure, they are in step now to a large extent, but before there was a significant difference, with UAH being lower than RSS. The agreement is much more in line with what the models were expecting, that is, higher.
SJT says
“Where do you think the GCMs get their raw data? Do you think they guess at what’s going on in the atmosphere? They use satellite and sonde data to start off their contraptions then come back and claim it is wrong.”
I think you misunderstand how the models work.
SJT says
“Connolley responds, “You’re suggesting we try to explain how come Spencer and Christy managed to extract a cooling trend for so long from a dataset that when properly examined shows warming? Well we could… William M. Connolley”.
Once again, we have a computer programmer talking down his nose at two eminently qualified atmospheric physicists. He’s talking as if major adjustments were made to the satellite data record when in fact the adjustments were in the order of a 1/10th C. And who is the “we” he refers to? Is he and his RC buddies now the authority?”
Spencer and Christy had it wrong, even though they were telling everyone the models were wrong and they were right.
SJT says
“Once again, we have a computer programmer talking down his nose at two eminently qualified atmospheric physicists. He’s talking as if major adjustments were made to the satellite data record when in fact the adjustments were in the order of a 1/10th C. And who is the “we” he refers to? Is he and his RC buddies now the authority?”
It was the difference between the claim that it was cooling vs warming.
Will Nitschke says
Both sides in the debate have to accept the best data available OR point to very credible evidence of its inaccuracy, i.e., peer-reviewed papers or assessments by independent scientific researchers or statisticians. If you don’t like the data, don’t ramble on about how it’s not ‘accurate’ because it was gathered using a proxy. The bottom line is that every scientific tool is a proxy in one form or another, as we do not have direct access to the ‘real world out there’.
If you don’t like the empirical data (whether you’re a ‘Denier’ or an ‘Alarmist’) then include a link to an authoritative source (further opinions by someone of like mind don’t count) that is in agreement with your viewpoint. Otherwise you’re doing nothing more than revealing your biases.
SJT says
I do have to say, though, that WCR is a much higher standard of reference than I am used to finding here. I heard Michaels was booed by deniers when he said that there has been warming. Is that true?
gavin says
Gordon; I stopped by your first Spencer link and concluded that he too was cherry-picking. In fact, going by those web pages, I reckon he is just another science writer who thinks he knows it all. The giveaway for me was Fig 3.
Fig 9 had a dubious dependence on Mt Pinatubo, which I consider unfortunate when trying to explain all the other deviations shown.
As I said earlier, one must sample the recycled air stream directly to know anything about evaporation, precipitation, thermal efficiency and feedback. Most importantly, we must use data from near the surface, where the initial exchange takes place. Try this out any time outside with your naked body, in and out of the shade, then try a comparison in moonlight.
Going back to basics; the jolly old sea level is still rising, noticeably enough for those old enough to know, which tells me Roy is short on other temperature measurements and magnitudes.
Will Nitschke says
Spencer’s pros:
“Received his Ph.D. in meteorology at the University of Wisconsin-Madison in 1981. Before becoming a Principal Research Scientist at the University of Alabama in Huntsville in 2001, he was a Senior Scientist for Climate Studies at NASA’s Marshall Space Flight Center, where he and Dr. John Christy received NASA’s Exceptional Scientific Achievement Medal for their global temperature monitoring work with satellites. Dr. Spencer is the U.S. Science Team leader for the Advanced Microwave Scanning Radiometer flying on NASA’s Aqua satellite. His research has been entirely supported by U.S. government agencies: NASA, NOAA, and DOE.”
Spencer’s cons:
something of an eccentric and maverick, with strange and wrong ideas on evolutionary theory. This unfortunately casts a shadow on the value of his opinions.
William M. Connolley
Pros: None that I’m aware of.
Cons: Political and social activist. No recognised qualifications. Very high editorial control over Wikipedia topics on climate change, resulting in distortions and misrepresentations.
Will Nitschke says
Gavin: “stopped by your first Spencer link and concluded that he too was cherry picking. In fact by those web pages I reckon he is only just another science writer who thinks he knows it all”
Gavin, if you don’t know who Spencer is, do you really feel you are qualified to defend a position for which you’ve only studied one side of the arguments?
Your lack of knowledge discredits the ‘Alarmist’ position in much the same way that certain ‘Denier’ cranks make fools of themselves. I realise you feel the ‘Alarmist’ position needs a defender, but possibly your lack of knowledge is having the opposite effect?
SJT says
“Connolley holds a Bachelor of Arts in mathematics and Doctor of Philosophy from the University of Oxford for his work on numerical analysis.[1] Connolley has authored and co-authored many articles in the field of climatological research. It is his view that there is a consensus in the scientific community about climate change topics such as global warming, and that the various reports from the Intergovernmental Panel on Climate Change (IPCC) summarise this consensus.[2]”
He seems to have some qualifications, and he is critical of some of Hansen’s work.
Will Nitschke says
OK, I apologise if my remarks can be interpreted as implying that William Connolley is uneducated. I did not mean that. My understanding is that he works as a computer programmer.
I’m not sure I would be confident that he has the qualifications or objectivity to be critical of Hansen’s work.
gavin says
Will; I am currently very confident with my reference to sea level as the yardstick for global temperatures past and present because that is something I can see in a private moment and relate to anytime.
For many years I was hired to troubleshoot and maintain a variety of instrument systems in support of our industrial technology. It was a long career that often required me checking on international advances in order to advise local service managers on fresh developments and likely standards.
On the way I was tutored by some very bright sparks and crafty engineers in the practice of measurement and control. It’s these people I owe most, not the writers of science or history. In the end, a PhD in this or that discipline doesn’t count for much on the leading edge.
Knowing how to avoid quacks in any industry is a big part of our security and survival habits. I say that only in relation to posters on here who don’t frighten me with their idea of science.
SJT says
Connolley on Hansen
http://scienceblogs.com/stoat/hansen-v2.pdf
Louis Hissink says
Gordon: “They have the geologist Michael Mann whose hockey stick graph was demolished. ”
Michael Mann is not a geologist.
Peter says
Louis: “Michael Mann is not a geologist.”
That’s right, he’s actually a hockey player. 🙂
Demetris Koutsoyiannis says
Dear SJT, I appreciate your criticism about my work. Here is my response, mostly in the form of questions triggered by your comments.
1. “He’s addressing a problem that everyone already knew about, the resolution of the models is too large to make specific predictions for small regions, such as Greece.”
Please take a look at the paper,
http://www.itia.ntua.gr/en/docinfo/864/. “Greece” appears in the text two or three times: first time in the authors’ address and second time in Table 1, to indicate that one of the eight stations examined (Athens) is in Greece. (There is also a cross reference to an additional Greek station). So, the paper is not about “small regions, such as Greece”.
The (eight at this phase) stations we studied are spread all over the world.
Also please take a look at the web pages linked from the above site (about 150 links). My question is: how does it happen that all these people are discussing our findings if, as you say, “everyone already knew about” it?
2. “He doesn’t understand the regional limitations of the current models.”
You mean that there are additional (regional) limitations than we failed to describe in the paper?
3. “He’s tilting at windmills.”
Do you think this is a problem? Shouldn’t some of us tilt at windmills?
Or do you really think that it is useless?
4. “Greece is having to import water to many of it’s Islands due to prolonged drought.”
Prolonged droughts have been the rule rather than the exception in Greece throughout history. So, do you find it a surprise that we now have a (2-year) drought after a (~15-year) period of water abundance, which had followed a persistent and very severe (~7-year) drought that started in the 1980s?
5. “Koutsoyiannis could have just asked the modelers themselves, they would have told that and saved him the trouble writing up his redundant paper.”
I prefer to discover things myself. It is much more fun and I strongly recommend it to you as well. Here is one interesting problem to study:
Basic information:
a. Climate models are numerical prediction models integrating differential equations over a spatial and temporal grid.
b. “As everyone knows”, their performance at the grid points is not good.
c. However, their performance becomes good when we aggregate the grid outputs over continental and global scales.
d. Given that everyone is interested in the climate at their local scale rather than the “global climate”, grid-point outputs are further processed (downscaled) to give good performance at local scales.
Questions:
i. What additional presuppositions are needed to obtain propositions (c) and (d) from propositions (a) and (b)?
ii. Can you give proofs for (c) and (d) given (a) and (b) and your own presuppositions?
(Note: Invoking of magical model behaviour is not allowed).
6. “The resolution of the models to date has been due to limitations of the hardware. New supercomputers should allow resolutions that are much finer, and much improved modeling of clouds.”
This is another good problem to study, hopefully without asking the modellers. So, do you have a proof that finer resolution will result in better performance in terms of the climate of 2100?
Demetris Koutsoyiannis
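One way to make the presupposition behind (c) in the comment above concrete: spatial aggregation can only improve skill to the extent that grid-point errors are unbiased and mutually uncorrelated. A toy sketch with hypothetical numbers, not model output:

```python
import math
import random

random.seed(1)

def rms(xs):
    """Root-mean-square of a list of error realisations."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

n_cells, n_trials, sigma = 400, 2000, 1.0

# Case 1: independent grid-cell errors. The error of the area mean
# shrinks roughly like sigma / sqrt(n_cells).
indep = [sum(random.gauss(0.0, sigma) for _ in range(n_cells)) / n_cells
         for _ in range(n_trials)]

# Case 2: one fully shared (correlated) error component. Averaging over
# cells cancels nothing, no matter how many cells there are.
shared = [random.gauss(0.0, sigma) for _ in range(n_trials)]

print(f"independent errors, rms of area mean: {rms(indep):.3f}")  # ~0.05
print(f"shared error,      rms of area mean: {rms(shared):.3f}")  # ~1.0
```

So proposition (c) needs, at minimum, an assumption about the correlation structure of the grid-point errors; whether real model errors satisfy it is exactly the open question the comment raises.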
Patrick Caldon says
Demetris Koutsoyiannis,
Who does (d) (local-scale modelling) and describes the performance as “good” without substantial caveats? Most attempts at (d) are heavily hedged with statements as to the uncertainty and the weakness of the approach.
If you could point me to a paper of someone attempting (d) and unambiguously stating that the result is “good”, I’d be appreciative.
As for (c), what standard of “proof” do you feel is appropriate? I’d refer you to something like AMIP or CMIP3 (which I’m sure you’re aware of), the results of both of which give a reasonably detailed understanding of individual model strengths and weaknesses.
Gordon Robertson says
Demetris Koutsoyiannis…you’ll have to wait while SJT consults with the high-priests over at realclimate.org 🙂
Gordon Robertson says
Louis Hissink said…”Michael Mann is not a geologist”
Sorry Louis, I wasn’t trying to demean geologists. I’m just going on his bio at RC as cited below:
“Dr. Mann received his undergraduate degrees in Physics and Applied Math from the University of California at Berkeley, an M.S. degree in Physics from Yale University, and a Ph.D. in Geology & Geophysics from Yale University”.
They claim he got his Ph.D in Geology and Geophysics.
Gordon Robertson says
Damn!! I gave myself away. Now everyone will know I’ve been on RC. My ‘incredibility’ has deepened, as Groucho might say.
Gordon Robertson says
Will Nitsche said…”Spencer’s Con’s:
something of an eccentric and maverick, with strange and wrong ideas on evolutionary theory.
Ah!!…at last, a genuine climate scientist. 🙂
Patrick Caldon says
And to 6: again, this depends on your standard of proof. However, ocean modelling in particular works better with a finer grid, and there is a history of finer gridding in ocean models spontaneously producing qualitative features in the simulation. One example is the POP ocean model, both in the Arctic and for the Agulhas ring current; in this particular model, various real-world currents spontaneously appear as the grid gets finer.
You might argue: so what if qualitative features appear, how does this affect climate modelling as a whole? For one example, in the Arctic this leads to better modelling of sea ice, and a straightforward argument based on ice albedo suggests that sea-ice extent is a reasonably large climate driver; in this particular way (among others) we can be reasonably sure that better ice modelling will lead to better climate modelling. This experience of finer gridding leading to more qualitative features seems to be common in ocean modelling, so it’s reasonable to assume that finer gridding will lead to more accurate simulation of the ocean, and thus of the climate as a whole.
Gordon Robertson says
gavin said…”one must be sampling the recycled air stream directly to know anything about evaporation, precipitation, thermal efficiency and feedback”.
Christy and Spencer specialize in satellite temperature data sets. Both, however, have degrees in climate science as applied to atmospheric physics. That is, both are experts on the atmosphere and weather systems.
The satellites use a telemetry device (MSU) as described here:
“Two deep-layer tropospheric temperature products, one for the lower troposphere (T2LT surface to about 8 km) and one for the midtroposphere (T2 surface to about 15 km, thus including some stratospheric emissions), are based on the observations of channel 2 of the MSU. The basic measurement utilized is the intensity of the oxygen emissions near the 60-GHz absorption band, which is proportional to atmospheric temperature”.
The MSU is a microwave receiver that picks up all microwave band emissions from oxygen between the surface and 8 km, and the coverage of the atmosphere is 95%. You can’t get any more accurate temperature measurement in the atmosphere than that. That data is backed up by weather balloon data sets.
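Conceptually, this kind of retrieval amounts to a weighted vertical average of the temperature profile. The sketch below uses a made-up weighting function and a crude lapse-rate profile purely to illustrate the idea; it is not the real MSU channel-2 weighting:

```python
import math

# Illustrative only: both the temperature profile and the weighting
# function are invented shapes, not the actual MSU channel-2 ones.
levels_km = [0.5 * i for i in range(41)]      # 0 .. 20 km in 0.5 km steps

def temp_profile(z_km):
    """Crude profile: 6.5 K/km lapse rate, isothermal above ~11 km."""
    return 288.0 - 6.5 * min(z_km, 11.0)

def weight(z_km):
    """Hypothetical weighting function peaking near 4 km altitude."""
    return math.exp(-(((z_km - 4.0) / 3.0) ** 2))

w = [weight(z) for z in levels_km]
t = [temp_profile(z) for z in levels_km]
tb = sum(wi * ti for wi, ti in zip(w, t)) / sum(w)
print(f"simulated brightness temperature: {tb:.1f} K")
```

The point of the sketch is simply that the instrument sees one weighted number per column, not a full profile, which is why the choice and validation of the weighting function matters so much in the disputes discussed here.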
The arguments against the satellite data are extremely frail. They are based on minor perturbations, and they have all been worked out. Look at one of the most recent (October) posts in the blog, a paper by Lindzen, to see the amount of corruption involved in stifling the satellite data.
It’s all described in this 2000 paper:
http://www.ncdc.noaa.gov/oa/climate/research/uah-msu.pdf
Since I made the post to which you refer, I have done a lot of reading on climate models. I am now convinced they are essentially useless as a means of predicting future climate, and I take heart in knowing Kevin Trenberth agrees.
http://sciencepolicy.colorado.edu/ogmius/archives/issue_22/part_2.html
This link is from a series, in response to this first part of the series by Chase:
http://sciencepolicy.colorado.edu/ogmius/archives/issue_22/part_1.html
Trenberth’s comments are followed here in part 3:
http://sciencepolicy.colorado.edu/ogmius/archives/issue_22/part_3.html
In part 3, Hulme refers us to this intense piece by Ravetz:
http://www.nusap.net/downloads/articles/modelsasmetaphores.pdf
It’s heavy reading but well worth it. It explains the psychology behind models and how models are used as metaphors to beat people over the head.
I got a chuckle out of the reference by Ravetz to the metaphor related to evolution, “natural selection”. People use terms all the time in science to make inferences, such as “the vast majority”.
Finally, here’s a really in-depth description of climate models:
http://www.climatescience.gov/Library/sap/sap3-1/final-report/sap3-1-final-all.pdf
What they are trying to do is incredibly complex. While I applaud them for trying, and I can see the benefits in the future, the claim at this stage of their development that they are capable of emulating the complexities of climate and predicting warming with any degree of confidence is complete bunk.
Gordon Robertson says
Here’s a direct link to the Lindzen article in my last post:
http://arxiv.org/ftp/arxiv/papers/0809/0809.3762.pdf
In the middle of that post I may have confused people with the reference to the Lindzen article. That link was the one about my previous reference to Christy.
SJT says
“The MSU is a microwave receiver that picks up all microwave band emissions from oxygen between the surface and 8 km, and the coverage of the atmosphere is 95%. You can’t get any more accurate temperature measurement in the atmosphere than that. That data is backed up by weather balloon data sets.
The arguements against the satellite data are extremely frail. They are based on minor perturbances and they have all been worked out. Look at one of the most recent (October) posts in the blog, a paper by Lindzen, to see the amount of corruption involved in stifling the satellite data”
I have never argued against it; I just don’t see why there is an aura of pristine accuracy about it, when there clearly isn’t. The troposphere temperatures have to be inferred, because the satellites have to read through the stratosphere. Hence RSS and UAH currently disagree, again, on what the correct temperature is.
SJT says
“something of an eccentric and maverick, with strange and wrong ideas on evolutionary theory.”
Are you a creationist too?
SJT says
““Greece” appears in the text two or three times: first time in the authors’ address and second time in Table 1, to indicate that one of the eight stations examined (Athens) is in Greece. (There is also a cross reference to an additional Greek station). So, the paper is not about “small regions, such as Greece”.
The (eight at this phase) stations we studied are spread all over the world.”
My bad. It is, however, about small regions; blame me, not the sources I used to formulate my reply. I assumed it was Greece; it was in fact small locations no larger than Greece.
The argument still stands, IMHO. You have to criticise and assess the models on what they claim to address. People want to know how climate change will affect them individually. If the IPCC has stretched the capabilities of the models too far to meet that demand, then the IPCC is remiss.
John F. Pittman says
Comment from SJT Time October 5, 2008 at 3:00 am says
“My bad, it is, however, about small regions, blame me, not the sources I used to formulate my reply. I assumed it was Greece, it was small locations no larger than Greece.
The argument still stands, IMHO. You have to criticise and assess the models on what they claim to address. People want to know, how will climate change affect them individually. If the IPCC has stretched the capabilities of the models too far to meet their demands, then the IPCC is remiss.”
Let’s be honest about SJT’s claim #6 about future computing. As indicated on RC (by a sceptic, yes, yes, I know…), to do the PDEs in a differencing model, and to be absolutely sure, if and only if, that you get the math and physics right, you would need atom- and molecule-sized grids, OR mole-sized grids plus a parameterisation assumption. The mole-sized grids already hand the modellers a 10^23 lumped parameter. Even so, you have to go from 200 km x 200 km by 20 levels (about 24 km of vertical height in a typical GCM) down to mole-sized events, at roughly 7.132 moles/m^3.
Let’s do the math; I have given the modellers a 10^23 advantage. Now how much more computing power do we need?
= 200 km x 1000 m/km x 200 km x 1000 m/km x 24 km x 1000 m/km / 20 levels in current GCMs / 0.14 m^3/mole = 3.48e+13. Doubling our computing power every 2 years, that means we will have this capability (even with the 10^23 head start we gave them) in not quite 90 years: the ability to do the math for vorticity without a lumped parameter, assuming moles can be modelled at all.
And GCM modellers make fun of those who use lumped parameters!!!! Even though, mathematically, it has been shown that they do too. After all, the reality is that friction, vorticity and eddies are molecular/atomic phenomena in the absolute sense.
In the general sense you have to use Reynolds averaging for the ODEs and Navier-Stokes for the PDEs in a differencing matrix. Go to RC and read what Gerald Browning said there, from about post 168 until GS banned his posts on this. Just add http //www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/
But check my math; I probably made a simple mistake. As a professional, I always ask someone to check my math. Makes you wonder why the so-called “Hockey Team” does not welcome S. McIntyre or others to check their work, doesn’t it?
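Taking up that invitation, here is a quick recomputation of the arithmetic above with the same inputs. The ratio comes out nearer 3.4e14 than the 3.48e+13 quoted, which stretches the doubling estimate to roughly 97 years rather than 90; the qualitative conclusion is unchanged:

```python
import math

# Same inputs as the comment above.
cell_volume = 200e3 * 200e3 * (24e3 / 20)  # one grid box at one of 20 levels, m^3
mole_volume = 0.14                         # m^3 per mole-sized parcel (1/7.132)

ratio = cell_volume / mole_volume          # parcels needed per grid box
doublings = math.log2(ratio)               # power-of-two steps to cover the ratio
years = 2.0 * doublings                    # at one doubling every two years

print(f"parcels per grid box: {ratio:.3g}")      # comes out ~3.43e14, not 3.48e13
print(f"years of doubling needed: {years:.0f}")  # ~97, close to the ~90 claimed
```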
cohenite says
Patrick Caldon; you raise the key to GCM inadequacy, which is the lacuna between regional climate and the attempt to create global average climate indices that can be used to measure anomalies both regionally and globally, and so justify AGW at both levels. In some ways Koutsoyiannis’s paper states the obvious: there can be regional anomalies without deviations from a global climate. One reason for this was examined in a recent paper by Pielke et al;
http:climatesci.colorado.edu/publications/pdf/R-321.pdf (// excluded)
Pielke applies the regional variation in Stefan-Boltzmann to conclude that regional climate fluctuations can occur without perturbing the global radiative balance. Even with variations in the external source of energy (solar), manifestations of that fluctuation can occur regionally without a commensurate global perturbation;
http:www.giss.nasa.gov/research/briefs/shindell_06/ (// excluded)
This is a NASA paper which looks at the LIA; it concludes that a minuscule change in insolation, 0.025%, produced a change in global temperature of about 0.02C but produced regional cooling over Europe and North America 5-10 times larger, due to a shift in atmospheric winds. That is, a global effect of minor consequence produced a regional effect of much greater consequence; presumably it could work the other way round as well. But the problem for AGW is how to extrapolate from regional events to a global conclusion. In this paper by Runnalls and Oke, Hurst rescaling is used to overcome micro-climate effects at a particular station;
http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2FJCL13663.1
AGW’s method of overcoming micro-climate effects is to apply knees to approximate step functions or, as Mann does for example, to relate proxies at a particular grid point to instrument data anywhere. Both methods falsify regional trends. And without regional validity it is no wonder the GCMs are so far out, both regionally and globally.
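For reference, Hurst rescaling of the kind invoked here can be sketched with the classical rescaled-range (R/S) statistic. This toy version is the textbook estimator, not the Runnalls and Oke procedure:

```python
import math
import random

def rescaled_range(seg):
    """Classical R/S statistic for one segment of a series."""
    n = len(seg)
    mean = sum(seg) / n
    cum, lo, hi, ss = 0.0, 0.0, 0.0, 0.0
    for x in seg:
        cum += x - mean
        lo, hi = min(lo, cum), max(hi, cum)
        ss += (x - mean) ** 2
    return (hi - lo) / math.sqrt(ss / n)

def hurst(series, sizes=(16, 32, 64, 128, 256)):
    """Slope of log(R/S) against log(n): ~0.5 for white noise,
    noticeably higher for persistent series."""
    pts = []
    for n in sizes:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        avg = sum(rescaled_range(c) for c in chunks) / len(chunks)
        pts.append((math.log(n), math.log(avg)))
    xb = sum(x for x, _ in pts) / len(pts)
    yb = sum(y for _, y in pts) / len(pts)
    return (sum((x - xb) * (y - yb) for x, y in pts)
            / sum((x - xb) ** 2 for x, _ in pts))

random.seed(2)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
ar1, x = [], 0.0
for _ in range(4096):                # persistent AR(1) series, phi = 0.9
    x = 0.9 * x + random.gauss(0.0, 1.0)
    ar1.append(x)

print(f"H estimate, white noise: {hurst(white):.2f}")  # near 0.5-0.6
print(f"H estimate, persistent:  {hurst(ar1):.2f}")    # clearly higher
```

A persistent series scores well above the white-noise baseline, which is why the statistic is used to separate long-memory station behaviour from uncorrelated noise. (Note the classical estimator is known to be biased slightly upward at small segment sizes.)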
cohenite says
That Runnall and Oke link is;
http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2FJCLI3663.1
Gordon Robertson says
SJT said…”I have never argued against it, I just don’t see why there is aura of pristine accuracy about it…”
It’s not how pristine the data is, it’s the low reading over 30 years. They managed to massage a slight warming out of the atmospheric data, but it’s nowhere near what was predicted. Have you read one of the most recent posts in the blog, about Christy’s August 2008 paper?
Gordon Robertson says
SJT said…”Are you a creationist too”?
Not in the Biblical sense. I don’t see a problem with the notion that an intelligence of some kind is behind the universe.
Doesn’t it ever mystify you, as you sit there thinking idly, how you are able to do it? Don’t you ever wonder why you have consciousness and awareness? I mean, it’s one thing to connect all the neurons together and quite another to create consciousness out of it. What is that stuff?
Do you think it’s reasonable that the uniformity of the human mind just happened? Do you think it was a fluke joining of primeval DNA?
I don’t know what to think, but I’m not hung up on evolution as it stands. I can see the sense in it at times; then it looks like a load of cobblers.
gavin says
Gordon: Let’s return to my Oct 3, 6:08 post: “I stopped by your first Spencer link and concluded that he too was cherry picking & The give away for me was fig 3”. Fig 3 shows the monster MWP and LIA imports.
Now I can admit to being way out of touch with all this current data fixing, but you can’t convince me that this MSU stuff is in any way more truthful than the handful of BoM-type stations we had here and there, with their long histories of testing the elements of the weather for all of us surface dwellers.
What are our MSU instruments referenced to these days? Not tree rings, not sea levels, but other instruments? There is still much to learn, starting here –
http://ams.allenpress.com/perlserv/?request=get-document&doi=10.1175%2F1520-0426(2003)20%3C613%3AEEOVOM%3E2.0.CO%3B2&ct=1#I1520-0426-20-5-613-CHRISTY3
“Unfortunately, no instrumentation provides a “perfect” measurement of the bulk atmospheric temperature, especially over a 23-yr period, so there is difficulty in determining the error of one system by comparison with another, which itself is subject to error”
hey; I could have written that slide about my work ages ago!
Unfortunately I must dine and then catch something on TV as w/ends here are busy. Sorry for my delay in reply.
Cheers
Richard Mackey says
In the comment on 5 October, Cohenite raises a significant point about the regional/global issue. He identifies another of the lethal failings of the IPCC theories and modelling methodologies.
The following is an extract from a paper I’m completing in which I summarise findings of Alexander Ruzmaikin and Joan Feynman and Kovaltsov and Usoskin.
This summary, together with Cohenite’s comment, gives us a glimpse of the nuances of climate dynamics that you get by viewing climate dynamics through the lens of non-linear complex systems instead of the linear, radically simplified models of the IPCC et al. This lens enables us to see how relatively small changes in solar phenomena (whether gravitational, electromagnetic, plasma or irradiance, or an interaction of some or all four) result, having regard to the relevant time lag, in dramatic regional climate dynamics which might not be detected by the coarse-grained global modelling and analysis of the IPCC et al.
Feynman and Ruzmaikin amongst many others (e.g. the three Brian Fagan (http://en.wikipedia.org/wiki/Brian_Fagan) books about climate dynamics as a determinant of human history) argue that it was the regional changes to our climate dynamics that have influenced human history. To the extent that these arise from factors external to the Earth and the Moon, they arise from the Sun – directly or indirectly – or the Earth’s orbit around the Sun.
EXTRACT QUOTE
Ruzmaikin (2007) explained that linear and non-linear systems respond differently to external forces. The response of a linear system is simply linearly proportional to the applied external force. Non-linear systems respond in a conceptually different way: they have internally defined preferred states known mathematically as attractors.
The response of a non-linear system to an external force is a change in its residency in the preferred states (i.e. the attractors) and in the transitions between them. The issue is not the magnitude of the response to an external force, as with linear systems, but one or more of:
• a change of state;
• a change in the time spent in different states; and/or,
• the rate of oscillation between states.
Ruzmaikin (2007) considered that the impact of solar variability is to change the probability of the duration of particular climate patterns associated with cold conditions in some regions and warm conditions in other regions. These consequences are far more important, he argues, than changes to average global temperatures.
According to Kovaltsov and Usoskin (2007), the Earth’s climate is not formed or modulated uniformly over the planet. It is largely determined by conditions in some specific key regions. These, in turn, affect larger regions or global climate features. They argued that the global climate can be affected via changes not only of global atmospheric or ocean parameters, but also via local changes related to such key regions. Kovaltsov and Usoskin (2007) explained that regional variations of solar origin overlay the temporal variations in the amount of solar irradiance, which are synchronous at every location on Earth.
According to Feynman (2007), there is now general recognition within the scientific community that the traditional definition of “climate change” in terms of the global average temperature is too restricted to be useful for an understanding of the response of the Earth’s surface to changes in the climate drivers. For example, low solar output results in a cold region that extends across northern Europe and Asia. However, it also results in a warm anomaly off the south west coast of Greenland. At the same time, Northern Africa and the Middle East will remain warm, while the temperatures in the western United States will be largely unaffected.
Feynman (2007) found that, in response to variable solar activity, the real change in global average temperatures has been smaller than the change in the regional temperatures. Significantly, it was the regional temperatures that have influenced human history.
END OF QUOTE
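Ruzmaikin’s point about attractors can be made concrete with a minimal numerical sketch of my own (a textbook double-well system, not a climate model): a nonlinear system’s answer to a push is not a proportional deflection but, past a threshold, a change of state.

```python
# Double-well toy system dx/dt = x - x**3 + F, with attractors near
# x = -1 and x = +1. A weak force leaves the system in its current
# preferred state; a stronger one flips it into the other attractor.

def settle(x, force, steps=20000, dt=0.01):
    """Integrate dx/dt = x - x**3 + force and return the final state."""
    for _ in range(steps):
        x += (x - x**3 + force) * dt
    return x

weak = settle(-1.0, force=0.1)    # stays in the left well
strong = settle(-1.0, force=0.5)  # jumps to the right well
```

The threshold here is the saddle-node point at F = 2/(3√3) ≈ 0.38: below it the response is a small shift within the same well; above it the well itself disappears.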
These considerations lead me to conclude that:
(a) What we see around us and throughout our history is non-ergodic climate dynamics, with highly variable climatological features at any one time;
(b) The Sun-Earth system is a complex, electrodynamically and gravitationally coupled system dominated by nonlinear interactions;
(c) The Sun, as a complex dynamic system, generates a wide range of complex perturbations affecting the climate system as a complex non-linear, non-stationary system; and
(d) It is misleading to think in terms of unique global temperature or pressure variations, to be characterised by a unique solar variability curve valid for the whole of the Earth’s surface.
As a result:
There is no one measure of the climate system.
The climate system is in different states regionally (dry; wet; cold; hot; windy; still).
It is characterised by a variable duration in any one state, and variable fluctuations between states.
Changes in parameters (e.g. temperature, pressure, precipitation) of regional states are greater than those of the climate system as a whole.
Its analysis requires analytic methodologies that do not assume linearity or stationarity in the time series.
The impact of a complex perturbation applied to a complex non-linear, non-stationary climate system is best measured by:
• strange attractors;
• phase synchronisation;
• resonant amplification; and
• the complexity matching effect.
Linear systems have one measure, e.g. average temperature
The linear response of linear systems to applied forces is readily measurable.
I suggest that it is more important to understand the regional nature of the planet’s climate dynamics. The IPCC’s obsession with global indices, in part the result of conceptualising the climate system as a linear system, is not helpful for the development of the most effective and efficient national policies.
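The global-index problem can be seen in a toy calculation (all numbers invented for illustration, loosely echoing Feynman’s regional pattern): large, opposite regional anomalies nearly cancel in an area-weighted global mean.

```python
# Regional temperature anomalies (K) and rough area fractions. The
# area-weighted global mean hides almost all of the regional action.

regions = {
    "northern Eurasia":   (-2.0, 0.10),
    "SW of Greenland":    (+1.5, 0.02),
    "N Africa / Mideast": (+0.8, 0.05),
    "rest of the planet": (+0.2, 0.83),
}

global_mean = sum(a * w for a, w in regions.values())
largest_regional = max(abs(a) for a, _ in regions.values())
# global_mean comes out at a few hundredths of a kelvin; the largest
# regional anomaly is 2 K, some fifty times bigger.
```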
As Essex, McKitrick and Andresen (2007) argue, the idea of a global temperature is most likely a waste of time. They use physical, mathematical, and observational analyses to show that there is no physically meaningful global temperature for the Earth in relation to global warming. They point out that while it is always possible to construct statistics for any given set of local temperature data, an infinite range of such statistics is permissible mathematically if physical principles provide no explicit basis for choosing among them. Distinct and equally valid statistical rules can and do show opposite trends when applied to the results of computations from physical models and real data in the atmosphere. A given temperature field can be interpreted as both ‘‘warming’’ and ‘‘cooling’’ simultaneously, making the concept of warming in relation to global warming physically ill-posed.
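The Essex, McKitrick and Andresen point is easy to reproduce with a toy pair of station records (my own numbers, in kelvin, not theirs): two equally defensible averages of the same data trend in opposite directions.

```python
# Two valid "mean temperature" statistics applied to the same data.

def arithmetic(a, b):
    return (a + b) / 2

def harmonic(a, b):
    return 2 / (1 / a + 1 / b)

before = (250.0, 330.0)  # year 1: two regional temperatures, K
after = (220.0, 362.0)   # year 2: mean up a little, spread up a lot

warming = arithmetic(*after) - arithmetic(*before)  # +1.0 K
cooling = harmonic(*after) - harmonic(*before)      # about -10.8 K
```

Both are legitimate means; nothing physical picks one over the other, which is exactly the authors’ complaint.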
References
Essex, C., McKitrick, R., and Andresen, B., (2007) “Does a Global Temperature Exist?” Journal of Non-Equilibrium Thermodynamics, Vol. 32, No. 1, pp. 1–27. DOI: 10.1515/JNETDY.2007.001.
Feynman, J., 2007. Has solar variability caused climate change that affected human culture? Advances in Space Research doi:10.1016/j.asr.2007.01.077.
Kovaltsov, G. A., and Usoskin, I. G., 2007. Regional cosmic ray induced ionization and geomagnetic field changes. Advances in Geosciences, 13, 31-35; published 13-08-2007.
Ruzmaikin, A., 2007. Effect of solar variability on the Earth’s climate patterns. Advances in Space Research doi:10.1016/j.asr.2007.01.076; published online 3 March 2007.
John F. Pittman says
Richard M
Is there a reason you did not include Demetris Koutsoyiannis’s recent paper?
Richard Mackey says
Good morning John
The reason why I didn’t cite Demetris’ work in the extract I posted is that the extract is from the part of the paper in which I’m drawing together the threads from a review of a whole bunch of papers reporting solar/climate relationships. This is where I present a ‘big picture’ account of solar/climate relationships, which I’ve summarised as (a), (b), (c), and (d) in the post
These papers cover the totality of the relationships: gravitational, electromagnetic, plasma and irradiance and the interactions between them in relation to all the main climate dynamic processes over many time frames. Apart from de Jager (2005) and Versteegh (2005), and the papers I’ve mentioned, I can’t find others that attempt to draw a ‘big picture’ inference.
De Jager concluded that the role of the Sun is significant, but as it depends on latitude and longitude, it is incorrect to hypothesise a uniform measure of the Sun’s impact on the Earth’s surface. Versteegh (2005) noted the variable nature of Sun-climate relationships in relation to latitude and longitude and that the Sun induces a non-linear response at any given location. He observed that this complicates the assessment of Sun-climate relationships and requires the nonlinear analysis of multiple long and high resolution records at the regional scale. He reported that the field of non-linear analysis of Sun-climate relationships is somewhat underdeveloped even though the dynamics of major climate configurations such as ENSO, the NAO and the AO are non-linear. He considered that more research is required to establish relationships between the lunisolar tides, geomagnetism and climate.
I discuss Demetris’ work in another part of the paper which is about time series analysis and the nature of the geophysical variables used in climatology.
References
de Jager, C., 2005. Solar Forcing of Climate. 1: Solar Variability. Space Science Reviews, 120, 197–241.
Versteegh, G. J. M., 2005. Solar Forcing of Climate. 2: Evidence from the Past. Space Science Reviews, 120, 243–286.
Gordon Robertson says
gavin said…”you can’t convince me that this MSU stuff is in any way more truthful than a handful of BoM type stations…”
I understand your skepticism but I work in the field of electronics and computers. I was trained as a technologist, which is between an engineer and a technician. Part of the program I studied was in microwave theory and instrumentation. I wrote a paper on Masers, which is about microwave amplification, although I’ve forgotten more about that than I have learned.
Furthermore, I studied astrophysics for a year at university and part of that course was about measuring microwave and light radiation from space. If they can measure microwave emissions accurately from light years away, what would be the problem measuring microwave emissions from a few miles?
The MSU units use very sensitive receivers and scan the atmosphere as they pass, sweeping much like a radar. They cover the entire Earth as it rotates under them while they move in a polar orbit. The MSUs can cover 95% of the atmosphere.
The surface stations are too few and too scattered. Some of them cover vast areas and it’s very tough to cover the ocean. We know as well that many of them have to be adjusted for various reasons and some of them in Russia have doubtful data since they were abandoned for some time.
I have a lot of confidence in the methodology of MSU units because I know how sensitive microwave receivers can be. The microwave emissions from atoms are also well understood. I can’t think of a better way to detect these emissions, from the surface up, than by MSUs.
I am impressed the most by the fact the satellite data corresponds so well with the radiosonde data. Christy doesn’t bs the issue, he explains the problems. He has done a lot of work, however, correlating the satellite and radiosonde data. It’s all laid out for anyone who wants to dispute it. Anyone who has disputed has come up with nothing more than minor errors.
You have to understand that the difference between the MSU data and the corrected surface data is only a few tenths of a degree. The significance of the MSU data is that it is not showing the anticipated atmospheric warming due to increases in CO2 density.
Gordon Robertson says
SJT said…”I heard Michaels was booed by deniers when he said that there has been warming. Is that true”?
First of all, I don’t know what is meant by denier. I have yet to read any scientist in the global warming debate who denies warming has occurred. Many, like Michaels, Lindzen, Christy and Spencer, at least acknowledge the possibility of a human contribution.
Michaels was the first to openly oppose Hansen back in the 1980’s. At that time, he was receiving no funding from oil companies, he just didn’t like the implications being made by Hansen that we were headed for climate tragedy. I have read nothing to the effect that anyone has booed him for his support of warming, anthropogenic or otherwise.
Michaels’ point is that the catastrophe aspect is completely overblown. He doesn’t think 1 or 2 degrees C will cause any problems to which we can’t adapt. Furthermore, he has studied the history of CO2 vs. temperature over the past 60 years and has seen negligible sensitivity. He doesn’t see why that should change in the future.
It amuses me that certain people set up a science, separate from mainstream science and based on simulations of the atmosphere, then claim that anyone who doesn’t support their superficial studies is a ‘denier’ or ‘skeptic’. That becomes somewhat irksome when the people making the claims are mathematicians or from disciplines well outside climate science.
I don’t know why you haven’t clued into the rhetoric behind names like denier or skeptic.
Gordon Robertson says
gavin said…”What are our MSU instruments referenced to these days, not tree rings, not sea levels but other instruments”?
They are referenced to the known emission levels of oxygen in the 60 GHz microwave band. It is known what intensity of 60 GHz radiation an O2 molecule emits at a given temperature.
I wouldn’t call that a proxy study in the sense of tree rings or ice cores. We have means of directly corroborating the results in real time. We can compare the MSU data to radiosonde data, which uses thermistors in lieu of thermometers. Thermistors are just as accurate as thermometers.
Proxy data from tree rings and ice cores is inferred when you go back past the temperature record, but why would you need it in the era of the temperature record? What you are saying is that thermometers are no good unless you have another means of corroborating them. MSU units have been calibrated to read the microwave radiation from atmospheric O2, and it is known from lab studies what to expect. Comparing the MSU data to sonde data is merely a backup, not a requisite.
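For what it’s worth, the calibration Gordon describes is a two-point scheme: on each scan the radiometer also views cold space (about 2.73 K) and an on-board warm target of measured temperature, and the Earth-view counts are interpolated between the two anchors. A simplified straight-line sketch (hypothetical counts of my own; the real instruments also carry a small nonlinearity correction):

```python
T_COLD = 2.73  # cosmic microwave background, kelvin

def calibrate(counts_earth, counts_cold, counts_warm, temp_warm):
    """Map raw Earth-view counts to a brightness temperature."""
    gain = (temp_warm - T_COLD) / (counts_warm - counts_cold)
    return T_COLD + gain * (counts_earth - counts_cold)

# Hypothetical scan: cold view 100 counts, 290 K warm target 9000
# counts, Earth view 7000 counts -> roughly 225 K brightness temperature.
tb = calibrate(7000, 100, 9000, 290.0)
```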
gavin says
Gordon; I see a couple of fresh replies, thanks. The one at 11.25 am had me looking up some old papers from the past. I found my post trade grades including instrument technology ended in 1969 then I got a certificate for some 30 units a decade later but my physics studies got lost in between.
Three decades ago it seems I had recovered somewhat from a memory crash that still frightens me. Maths was again my top subject but do you know, I fudged my way through years of electronics, industrial systems and radio communications including terrestrial microwave and satellites without remembering hardly any of it?
What I got paid for though was some routines that provided certainty in the minds of other people too busy or too remote to do their own investigations into a range of technical issues, the last being a broad range of interference across the microwave spectrum. That was more than a decade ago now.
While considering this reply I went for a cuppa and noticed the two page feature deliberately spread on the kitchen table. In 1972 I lost two valuable acquaintances in a campaign to save a most important lake out in the wilderness. The pilot had for a brief time been my mentor on its natural evolution in recent glacial deposits. The formation of large sand dunes there from this shallow creek fed pool system between the ranges in a relatively short geological time frame was suddenly a big interest for both science and the public. I quickly became involved in assisting round the clock observations of its pending destruction by a new water storage project. Although the physical damage to the dune formations was less than expected in some camps the event was spectacular none the less.
People I encouraged to stay behind got quite frightened at times, especially as the larger trees on top came down at night. I hope we won’t see this happen along the coast.
gavin says
Gordon: “Thermistors are just as accurate as thermometers”
I guess you can imagine that I have also done a lot with electronics as applied to industrial/scientific instruments, telemetry, labcraft etc. In this case of remote sensing we are stuck with the fact that the O2 radiation is only proportional to its gas temperature. Fixing that with a temperature at some altitude is also a problem, considering how hard it is to do that with modern surface stations.
Gordon: Have you ever taken temperatures in an air stream? Today we could start with a thermometer like the latest medical types attached to a helium balloon with altimeter and recorders, then send it aloft. How fast would this contraption rise, and would the thermometer respond truly to its changing environment in time? Alternatively we could hang it outside our aeroplane while reading the customer display on the seat in front. BTW, during my last trip I noticed that our 737 height indicator was gaining 10 m in as many seconds while standing and waiting for aircraft movement control at the extreme end of the runway. Rising 100 m or so while standing still is a bit of a novelty, hey.
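Gavin’s balloon question has a quantitative side worth sketching (all numbers invented for illustration): a sensor with a first-order time constant, carried up through a steady lapse rate, effectively reads the air from several seconds ago, i.e. it runs systematically warm.

```python
LAPSE = -0.0065   # K per metre of ascent (standard lapse rate)
ASCENT = 5.0      # metres per second, assumed balloon rise rate
TAU = 10.0        # assumed sensor time constant, seconds

def ascent_error(seconds, dt=0.1):
    """True air temperature minus sensor reading after an ascent."""
    t, true, reading = 0.0, 288.15, 288.15
    while t < seconds:
        true = 288.15 + LAPSE * ASCENT * t
        reading += (true - reading) / TAU * dt  # first-order sensor lag
        t += dt
    return true - reading

# Settles near TAU * LAPSE * ASCENT = -0.33 K: the reading stays about
# a third of a degree warmer than the air actually around the sensor.
err = ascent_error(300.0)
```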
Back in about 1972 we realized what a terrific instrument the latest camera was for recording details in events. Retreating from my personal failures to produce significant evidence was not a novelty, so I rapidly became dependent on the work of others again. That was about the time we started systematically photographing and publishing the changing landscape for a wider public appreciation. Interpreting the value of a shoot had also become a vital area of consideration. In terms of quality control, much was dumped. Does that sound familiar?
John F. Pittman says
Richard Mackey
Time October 6, 2008 at 9:44 am
Thanks. Did you include any discussion of “On possible drivers of Sun-induced climate changes” by Cornelis de Jager and Ilya Usoskin?
Gordon Robertson says
Gavin “Have you ever taken temperatures in an air stream”?
If you’re saying what I think you’re saying, hopefully the temperature sensor would be protected from elements like air streams. That’s why I can’t understand the fuss made about thermistors being affected by direct sunlight. Surely that was considered when the telemetry was installed.
I know that Spencer is an authority on MSU units. He worked with them at NASA. I also know that the MSU unit was not designed for global warming research, it was put up there for weather forecasting. If they didn’t work, one would think we’d have a lot of lousy weather forecasts, but the opposite seems to be true. Christy and Spencer were awarded medals for their work. You would think their work had been closely scrutinized.
It was Christy and Spencer who approached NCAR to see if they could get the walloping amounts of data they had stored from the MSU units. NCAR was happy to oblige since they were doing some data collating at the time. NCAR set up Christy and Spencer with the data they had on hand, and the latter went to work on it.
All was well during most of the 1990’s. Christy was reporting that the satellites were not in agreement with the computer models, but he was brushed off as a pest. It wasn’t till around 1998, when politicians were trying to convince the nations on Kyoto, that the satellites became unbearable to them. One US politician came right out and said, “we have to do something about the satellite data”.
In TAR, the IPCC admitted the satellites were in disagreement with the models and NAS concurred. Somewhere between TAR and AR4, serious efforts were made to either discredit the satellite data or look at it more closely, depending on how you see things. It’s my feeling that a concerted effort was made to discredit them, because after the smoke cleared, the two major satellite data set creators, UAH and RSS, were in agreement with themselves and the sondes.
A lot of work was done in the atmosphere earlier than 1979 by meteorologists like Dr. Joanne Simpson. They flew planes right into hurricanes, and did direct research on clouds. They must have done direct temperature measurements. If the sondes and satellites were not measuring correctly, I’m sure that earlier atmospheric data could be used to see if they were in the ballpark.
I think everyone should be skeptical about any science, and the satellite and sonde technology are no exception. I’m wondering if there would have been such a hue and cry about that data if we didn’t have this global warming thing. I am very skeptical about claims that the data, accepted widely in the 1990’s, has suddenly become tainted.
Gordon Robertson says
gavin “I fudged my way through years of electronics, industrial systems and radio communications including terrestrial microwave and satellites without remembering hardly any of it”?
What amazes me is going back to study theory I thought I knew and realizing I didn’t understand it at all. I certainly don’t remember a lot of microwave theory. I was reading some waveguide theory the other day and still haven’t got it down right.
I don’t know if you are up on current data cabling, but twisted pair has made a dramatic comeback. When I first studied electronics, twisted pair was garbage cable. Now we have people talking about CAT 5 (Category 5) cable as if it can do what advanced waveguides do.
Nothing has changed about twisted pair, it’s the technology that has changed. We now live in a largely digital world with square waves which can be recovered even when the signal has deteriorated immensely. On top of that, we have learned how to send much more information at lower frequencies by piggybacking it (quadrature modulation).
So, the same old stuff we used to call ‘bell wire’ has now become sophisticated because it is more carefully twisted and packaged. People claim it can transmit signals at 1 gigahertz, which is nonsense, but technology has become such that the good old fundamentals have been sacrificed for hipness. It doesn’t matter that you have forgotten so much, you’ve probably forgotten more than modern techs learn.
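To put a number on the “piggybacking”, here is a minimal sketch of the quadrature idea (illustrative only): 16-QAM sends 4 bits per symbol by choosing one of four amplitudes on each of two carriers 90 degrees apart (I and Q), so the same symbol rate carries four times the bits of simple on-off keying.

```python
LEVELS = [-3, -1, 1, 3]  # four amplitude levels per axis

def qam16_symbol(bits):
    """Map 4 bits to an (I, Q) constellation point."""
    i = LEVELS[bits[0] * 2 + bits[1]]  # in-phase carrier amplitude
    q = LEVELS[bits[2] * 2 + bits[3]]  # quadrature carrier amplitude
    return (i, q)

data = [1, 0, 1, 1, 0, 0, 0, 1]
# 8 bits go out as just 2 symbols on the wire.
symbols = [qam16_symbol(data[k:k + 4]) for k in range(0, len(data), 4)]
```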
I studied electrical engineering for a few years but I’d been out of university for 10 years and my basics were severely lacking. It’s really tough to study engineering when your math has holes in it. Anyway, many of the guys I studied with planned on failing up to three courses per year. They would go back and re-write them in the summer break. That begs the question as to how well anyone learns at university. There are a few students of genius level who no doubt have high retention, but I think the average student forgets an awful lot.
gavin says
Gordon; there is no substitute for practical experience in my book. At various times I had to entertain some undergraduate engineers on the job during their semester breaks. As you probably realize by now, I don’t hold academia in great awe. I owe much more to those few bright buddies who could cut it both ways given time. More to the point, I learned never to depend on management policy or routines.
Fear does something too in regard to what we can recall in an instant. As a skinny apprentice I got sent through the 36 or 42” circumference cast iron manhole in our wet steam rotary driers to recover broken condensate scoops during plant shutdowns.
One morning I heard steam cracking through the main line while I was still inside, so I immediately dived out, leaving both gum boots and the bottom half of my overalls behind. Problem No. 2: there was a padlock key to the dry steam factory supply valve in my overall pocket on the wrong side of the drum, and of course the main valve was leaking under pressure as the boilers half a mile away fired up. It took everybody standing by quite a while to find a solution, but I never went back inside again.
Apparently people were known to swell up while stuck in such a manhole and that got me thinking about all policy that sends a kid in to do a man’s work. Policy was I had one padlock key and the company safety officer had the other however I decided on the day to keep both as added insurance and hoped to finish the job early as the only light weight in the crew for good measure. Ever since; I take great note of those who have actually been there before me.
Pressure vessels were the easy bit as I went virtually freelance through major industry and its research. Electrical power installations, nasty chemicals and radiation became the new hazards on a daily basis. By the time I got into microwave networks and their analysis I was very dependent on other practical people for my operational style.
The key to that experience was listening very carefully to a departmental veteran who had hit the male equivalent of the glass ceiling and was between jobs at the time of my first communications interview. He gave me the smart answers to likely fancy questions about the more covert work in modern radio technology. I was up to the leading edge again in a jiffy. One outcome was my support for CDMA in its infancy, another was our acceptance of leaky cables being installed everywhere.
All in all; it’s most important to gather and run with the practical side of what’s possible after new developments rather than proclaim their design potential as we may have done with say CDMA v TDMA in the first instance. After all, signal propagation is only one part of the equation.
My purpose in any post is usually about avoiding this frantic nonsense about evidence or lack of it. In real life I pay scant regard to individual papers and experiments. In scaling up from the lab bench and models there is always a lot of wise counselling to be done before the full implementation of some fresh technical excellence can be achieved. In technical developments we are often beyond general experience too. Standards and mutual agreements flow later and are sometimes decades behind the practice. I say this thinking about the petro-chemical industry in particular. Climate R&D and its management is no exception. The only problem is getting everybody on board in time.
Richard Mackey says
Hello John
Yes I have examined the de Jager & Usoskin paper. Those two chaps are very eminent scientists who have accomplished much and their papers always warrant careful study.
In this paper, their findings are at the phenomenological level rather than the level of the physical processes by which cosmic rays or ultraviolet radiation induce climate dynamics. Their findings are exploratory rather than conclusive; aggregate rather than regional.
The paper’s main, if tentative, result is that, considering only the relative contributions of cosmic rays and solar UV to tropospheric temperature variations, solar UV is more likely to affect temperature variations in the atmosphere around us than cosmic rays, other things being equal. This is a useful working hypothesis.
Richard
John F. Pittman says
Richard, I do like reading these works. It takes me awhile though.
In this thread’s start it says “According to Richard Mackey, a sceptic from Canberra, also writing on the issues of climate change and financial systems, a key limitation with both financial and climate models is the underlying false assumption that economic and climate systems are ergodic systems – that is they normalise to an equilibrium state.”
The reason I brought up this de Jager & Usoskin paper was their use of the Moberg et al. (2005) temperature reconstruction. At http://noconsensus.wordpress.com/ Jeff has some interesting work showing the problems with the selection criteria that are used in temperature reconstructions. I believe that David Stockwell did as well in this http://landshape.org/enm/blogs-on-random-temperature-reconstruction/ . I was curious if you had some discussion on this. I have seen several comments along the lines that a problem in one area of AGW would not affect the validity of another. I disagree. In reading the de Jager & Usoskin paper and reading Jeff’s and some of David’s work, I would be concerned that the bias in reconstructions indicated by Jeff and David could lead to incorrect phenomenological conclusions, such as invalidating a conclusion that something is a useful hypothesis.
Thanks in advance for any discussion.
gavin says
Let’s offer a word of caution here.
Through the 1960s and 70s I witnessed a lot of effort around the problems associated with moving our industry from batch to continuous processing. Unique to Australian conditions after automation was the need for systems to remain versatile in terms of our small market volumes. That is, each plant could be expected to run a variety of products using the same equipment. Production capacity could also be changed rapidly with demand.
If we looked at a few examples, there would be kettles and cooking, evaporators, digesters and reactors, webs and coatings, also sludge. With early attempts at control we got “hunting” as the instruments tried to normalize the process to some design criteria. Frantic damping of signals usually followed. Given time I worked on phase shift in feedback, then we properly experienced the necessary steps of all functions in our records.
One obvious change that could result in stability was process time or web speed. Another was turbulence in mixing. Web strength and uniformity depended very much on ingenious devices employed at the point of delivery, just like our choice of paint brush at home. A painted finish is after all only the settled product, and that depends on many things including all its manufacturing stages before delivery to our wall.
Equilibrium is a much desired feature, both in nature and in man-made machines. Turbulence can be employed to provide particle randomness in all directions. Streams, on the other hand, are essential to the continuous process, as is in-phase feedback, if we want a settled environment.
One test before control could be established with certainty was a stepped function. Abrupt changes can be managed but in practice from personal experience much of what we build is fragile. I became an expert at tiptoeing around furnaces, gas plants and refineries. It’s a fool’s paradise to expect not to be directly involved with the environment at every level. One spark in the wrong place and bang… How natural is that?
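The “hunting” and the step test go together, and a toy loop of my own shows both (not any particular plant): an integrating process under proportional feedback whose measurement arrives a few steps late. The delay is pure phase shift, so past a critical gain the loop chases its own tail instead of settling.

```python
def step_response(gain, delay, steps=400, setpoint=1.0):
    """Trajectory of an integrating process after a setpoint step."""
    y = [0.0] * (delay + 1)
    for _ in range(steps):
        error = setpoint - y[-1 - delay]  # controller sees stale data
        y.append(y[-1] + gain * error)
    return y

gentle = step_response(gain=0.05, delay=5)  # settles near the setpoint
hunting = step_response(gain=0.6, delay=5)  # oscillation grows: hunting
```

Damping the signal, as described above, amounts to pulling the loop gain back under that critical value.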
gavin says
In a comment on the news early today I caught the word “equilibrium” being used again in relation to the global financial crash. A senior editor from Iceland speaking to Radio National noted that we may reach some sort of strange equilibrium as everyone’s currency is falling. Can’t find the quote but this is the issue.
http://www.abc.net.au/news/stories/2008/10/10/2387129.htm?section=justin
As I said, a step function is the test, and we naturally look for respite in the next steady state.
Richard Mackey says
Dear John
I am familiar with those debates. I also realise that a lot of the time series in use in climate research, including those used to report the dominant role of the Sun, are problematic, as are the techniques of time series analysis in use in many of the published papers. Given that almost all traditional data analysis methodologies are based on linear and stationary assumptions, the analysis is bound to yield equivocal results.
Demetris Koutsoyiannis is one of the few scientists who have taken seriously the challenge to use methods that let the data speak.
Norden Huang developed the methodology called Empirical Mode Decomposition (EMD). Unlike most statistical methodologies for analysing time series, EMD makes no assumptions about the linearity or stationarity of a time series. EMD lets the data speak more directly, revealing its intrinsic functional structure more clearly. It does not have the restrictive assumptions of linearity and stationarity that the familiar Fourier-based techniques have, because it uses Hilbert, not Fourier, transforms.
Huang et al (1998) have also highlighted the need to use analytic methodologies that reveal clearly any nonlinear relationships (that may also contain intrinsic trends) when analysing time series of natural phenomena. Huang et al (1998) showed that, necessarily, misleading conclusions will be drawn from the uncritical use of time series analytic techniques that assume relationships within the time series are linear, stationary and devoid of intrinsic trends.
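For the curious, the core of EMD is the “sifting” step, which can be sketched in a few lines (a compressed sketch only: the full method of Huang et al. iterates this with cubic-spline envelopes and stopping criteria, whereas linear interpolation stands in here):

```python
import numpy as np

def sift_once(t, x):
    """One sifting pass: x minus the mean of its extrema envelopes."""
    maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] < x[i + 1]]
    upper = np.interp(t, t[maxima], x[maxima])  # envelope through maxima
    lower = np.interp(t, t[minima], x[minima])  # envelope through minima
    return x - (upper + lower) / 2

t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * t) + 0.5 * t  # an oscillation riding on a trend
h = sift_once(t, x)                  # oscillation with the trend removed
```

No linearity or stationarity is assumed anywhere: the envelopes come from the data’s own extrema.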
Cohn and Lins (2005) brought attention to the nonlinear, non-stationary nature of climate time series data. Cohn and Lins (2005) concluded:
These findings have implications for both science and public policy. For example, with respect to temperature data, there is overwhelming evidence that the planet has warmed during the past century. But could this warming be due to natural dynamics? Given what we know about the complexity, long-term persistence, and nonlinearity of the climate system, it seems the answer might be yes. Finally, that reported trends are real yet insignificant indicates a worrisome possibility: Natural climate excursions may be much larger than we imagine. So large, perhaps, that they render insignificant the changes, human-induced or otherwise, observed during the past century.
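Cohn and Lins’ “naturally trendy” point is easy to demonstrate numerically (illustrative parameters of my own choosing; an AR(1) process with phi = 0.98 stands in for long-term persistence): strongly persistent noise with no built-in trend routinely produces runs that look like trends.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitted_trend(phi, n=1000):
    """OLS slope fitted to one trendless AR(1) realization."""
    x = np.zeros(n)
    shocks = rng.standard_normal(n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + shocks[k]
    return np.polyfit(np.arange(n), x, 1)[0]

white = [abs(fitted_trend(0.0)) for _ in range(200)]
persistent = [abs(fitted_trend(0.98)) for _ in range(200)]
# The persistent series shows apparent "trends" tens of times larger
# on average, with no trend mechanism in sight.
ratio = float(np.mean(persistent) / np.mean(white))
```

Whether a fitted trend is “significant” therefore depends entirely on the null model of natural variability one assumes, which is exactly the authors’ worry.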
Demetris’ work is without doubt the most thorough and comprehensive. It is informed by an understanding that builds on Poincaré, Birkhoff, Kolmogorov, Hurst, Mandelbrot and some insightful workers in hydrology.
I haven’t got behind most of the published work that use problematic time series and problematic time series analysis methodologies. That would be too big a project!!
In my written work about climate dynamics I try to make the most use of papers whose analysis is commensurate with the nonlinear, nonstationary nature of the natural processes. The IPCC and related work is not in this category. As I read the science, authors in the IPCC AGW/GHG school of thought stubbornly refuse to recognise that nature is nonlinear and non-stationary and that all of their time series analysis is problematic.
Refs
Cohn, T. A., and Lins, H. F., 2005. Nature’s style: Naturally trendy. Geophysical Research Letters, 32, L23402.
Huang, N. E.; Shen, Z.; Long, S. R.; Wu, M. C.; Shih, H. H.; Zheng, Q.; Yen, N. C.; Tung, C. C.; and Liu, H. H., 1998. The empirical mode decomposition and Hilbert spectrum for nonlinear and non-stationary time series analysis. Proceedings of the Royal Society of London Series A: Mathematical, Physical and Engineering Sciences, 454, 903–995.