I’VE never met Jo Nova in person. But she can write a good headline, and she cares about the truth. Her blog has never received any government funding, but it includes a lot of interesting facts and evidence on science-related topics.
Meanwhile ‘The Conversation’ is a government-funded blog for university researchers that received $6 million just a couple of years ago to employ staff to spout opinion.
Yesterday The Conversation was, as always, short on evidence, but its government-funded scribes could vouch for the Bureau of Meteorology. In particular, Andy Pitman and Lisa Alexander, both part of the global warming industry, authored a long piece about how you should trust them and the Bureau. But the discerning reader might be left wondering: why?
Because, as Jo Nova explains at her blog today, they didn’t actually explain how and why it was necessary to change a cooling trend at Rutherglen into a warming trend.
Meanwhile, I’ve been reading a peer-reviewed paper by Blair Trewin, which details how the homogenisation technique employed by the Bureau is meant to work. The only problem is, the methodology as detailed in this paper published in the International Journal of Climatology (Volume 33, Pages 1510-1529) doesn’t actually seem to accord with the methodology as implemented by Dr Trewin at the Bureau of Meteorology. What I mean is, the peer-reviewed paper says one thing, but the output from the homogenisation technique shown in the ACORN-SAT reconstructions suggests something entirely different.
Something worth noting in the paper is this comment from Dr Trewin: “but negative adjustments are somewhat more numerous for minimum temperatures, which is likely to result in ACORN-SAT minimum temperatures showing a stronger warming trend than the raw data do.” What he is saying, in plain English, is that ACORN-SAT may exaggerate the warming trend somewhat as a consequence of artificially dropping down the minimum temperatures. In fact, as I explained with reference to the Rutherglen temperature trends, the Bureau progressively drops down the minimum values from 1973 back through to 1913. For the year 1913 the difference between the raw temperature and the ACORN-SAT temperature is a massive 1.8 degrees C.
The apologists, Pitman and Alexander, in their conversation suggest that, “the warming trend across Australia looks bigger when you don’t homogenise the data than when you do”. But this is not what the peer-reviewed literature says. And yet the take-home message from their article is: believe only this same peer-reviewed literature.
More on Rutherglen, Pitman and Alexander in today’s The Australian.
Another Ian says
From comments at Steve Goddard
“Gail Combs says:
September 1, 2014 at 2:52 pm
BTW, over at IceAgeNow I saw this interesting comment:
August 30, 2014 at 3:24 pm
I remember hearing a BOM spokesman claiming it was reasonable to delete the hottest temperature ever recorded in Australia from before the second world war on the grounds it may not have been recorded using standard practice.
Without any proof at all he claimed it must have been recorded with the thermometer directly in the sun and not in shade.
They are so arrogant about their infallibility they announce they are deleting “inconvenient” extreme heat recordings from the past…..
WOW what ARROGANCE!”
Jennifer Marohasy says
this makes me smile… read between lines… think Nineteen Eighty-Four…
“THE Bureau of Meteorology’s rewriting of historic temperature records has been defended by leading climate scientists from the ARC Centre of Excellence for Climate Systems Science at the University of NSW.
In an online article, centre director Andy Pitman and chief investigator Lisa Alexander said homogenisation of raw temperature data was an “essential process in improving weather data by spotting where temperature records need to be corrected, in either direction”.
They said data homogenisation was used to varying degrees by many weather agencies and climate researchers worldwide.
“Although the World Meteorological Organisation has guidelines for data homogenisation, the methods used vary from country to country, and in some cases no data homogenisation is applied,’’ Dr Pitman and Dr Alexander said…
spangled drongo says
BoM researcher Blair Trewin’s explanation:
The data set was developed using a technique, the percentile-matching algorithm, that applies differing adjustments to daily data depending on their position in the frequency distribution.
“This method is intended to produce data sets that are homogeneous for higher-order statistical properties, such as variance and the frequency of extremes, as well as for mean values,” the paper said.
“The PM algorithm is evaluated and found to have clear advantages over adjustments based on monthly means, particularly in the homogenisation of temperature extremes.’’
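For readers who want a feel for what percentile-based adjustment means, here is a toy sketch in Python. It is emphatically not BOM’s code: the bin count, the overlap handling and the function name are all my own assumptions, purely to illustrate the idea of applying a different offset to different parts of the frequency distribution.

```python
import numpy as np

def percentile_matching_adjust(candidate, reference, n_bins=10):
    """Toy illustration of percentile-based adjustment.

    For each percentile bin of the candidate series, estimate the
    offset between candidate and reference over an overlap period,
    then apply a bin-specific correction. Real schemes (e.g.
    Trewin's PM algorithm) are far more elaborate.
    """
    candidate = np.asarray(candidate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Percentile edges of the candidate's own distribution.
    edges = np.percentile(candidate, np.linspace(0, 100, n_bins + 1))
    adjusted = candidate.copy()
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        in_bin = (candidate >= lo) & (candidate <= hi)
        if in_bin.any():
            # Offset estimated for this part of the distribution only.
            offset = reference[in_bin].mean() - candidate[in_bin].mean()
            adjusted[in_bin] = candidate[in_bin] + offset
    return adjusted
```

The point of the quoted passage is visible here: the offset applied to a cold-tail value can differ from the offset applied to a warm-tail value, which is why the method can alter variance and the frequency of extremes, not just means.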
Considering what CAGW is costing us all, one would think that previous governments would long since have audited this procedure, in which more than a thousand previous recording sites have been reduced by over 90% and then “homogenised” to show a warming barely in excess of natural variation.
When the raw data is adjusted in this way, it takes only a very small change to convince governments that doom is approaching, and the temptation for trough feeders may be too great to resist.
For the trough feeders not only to be the gatekeepers sans auditing, but to also claim that they are the only ones who can possibly decide is patently absurd and a responsibility that any rational person in their position would not wish for.
It has now reached the point where a royal commission is probably the only credible solution.
Two problems with homogenization in climate science:
1.) Label your units, just like we all learned in 6th grade. If you’re reporting “homogenized degrees C”, then you’re not reporting “degrees C”. No homogenized temperature report or graph should ever be labeled “degrees C” (or “degrees F”), with no further qualification. Economists have a similar problem with inflation adjusted dollars, hence the use of labels like “2012 dollars” or “nominal dollars”. For whatever reason, climate “scientists” seem prone to ignore this basic need.
Under a valid homogenization procedure, I should be able to take a 2014-homogenized temperature and convert it to a 1999-homogenized temp and vice versa. In climate science (AFAIK), that’s not possible.
2.) Under a valid homogenization procedure, if station A/year X is warmer than station B/year Y after homogenization in 1999, then station A should still be warmer than station B after homogenization in 2014. If, over time, the same homogenization procedure switches the ordering of two station/time pairs, instead of just altering the scaling, then there’s something wrong with the procedure.
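That ordering property is simple enough to state as a mechanical check. A toy sketch (the station/year pairs in the usage are hypothetical, and this tests only the property described above, not any agency’s actual procedure):

```python
def order_preserved(vintage_a, vintage_b):
    """Check that two homogenisation vintages rank the same
    (station, year) pairs in the same order.

    Each vintage is a dict mapping (station, year) -> adjusted temp.
    Returns True if no pair of keys swaps ordering between vintages.
    """
    keys = sorted(set(vintage_a) & set(vintage_b))
    for i in range(len(keys)):
        for j in range(i + 1, len(keys)):
            diff_a = vintage_a[keys[i]] - vintage_a[keys[j]]
            diff_b = vintage_b[keys[i]] - vintage_b[keys[j]]
            if diff_a * diff_b < 0:  # opposite signs: ordering flipped
                return False
    return True
```

For example, if station A in 1950 was warmer than station B in 1950 under a 1999 homogenisation, a 2014 re-homogenisation that leaves A warmer passes the check; one that makes A cooler than B fails it.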
Without spending a lot of time digging into the numbers, I’m not sure whether the people at “The Conversation” have a valid argument or not, but neither are they. Their article amounts to, basically, a statement of faith.
Well here’s some more evidence about the temp record and the length of the pause or hiatus. This is from the open thread.
This is Ross McKitrick’s new study about the hiatus or pause in temp over the last 19 years.
He calculates the pause to be 19 years at the surface and 16 to 26 years in the lower troposphere. BTW there are hints that UAH will soon use a new method that will bring it closer to RSS measurements of temp. If true this will be another blow to the extremists.
Judith Curry is discussing the new McKitrick study. http://judithcurry.com/2014/09/01/how-long-is-the-pause/
It might, barely, make sense to create a synthetic station that “homogenizes” the data from each contributor into a new record tied, for example, to the centroid of a convex hull defined by the geographic area delimited by the stations employed in each “homogenization” process. The new record should never be imputed to any of the real, contributing stations. But even granting that under some circumstances homogenization might make a little sense, where could the trend of “adjustments” come from? A change in method or equipment might impose a step change on raw data, but not a non-zero trend. So how can it make methodological sense to create one through adjustment and homogenization?
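Just to make the suggestion concrete, here is a minimal sketch of such a synthetic record, assuming inverse-distance weighting toward the centroid (the weighting scheme and the function name are my own assumptions, not any published method):

```python
import numpy as np

def synthetic_station(coords, series, power=2.0):
    """Build a composite record at the centroid of a set of stations.

    coords: (n, 2) array of station (lat, lon) pairs
    series: (n, t) array of each station's temperature series
    The composite is an inverse-distance-weighted average at the
    centroid, and is never imputed back to any real station.
    """
    coords = np.asarray(coords, dtype=float)
    series = np.asarray(series, dtype=float)
    centroid = coords.mean(axis=0)
    dists = np.linalg.norm(coords - centroid, axis=1)
    dists = np.maximum(dists, 1e-9)    # guard against divide-by-zero
    weights = 1.0 / dists**power
    weights /= weights.sum()
    return centroid, weights @ series  # weighted composite series
```

Note the composite is a new record for a new (synthetic) location; nothing here justifies writing the adjusted values back over a real station’s raw data, which is the practice being criticised above.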
Jen and all. It is indeed interesting that anything or anyone that doesn’t subscribe to ‘your’ particular point of view is fair game to ridicule. For instance, “Yesterday The Conversation was, as always, short on evidence, but its government-funded scribes . . . . “. If you were a regular reader of the ‘The Conversation’ you would appreciate that there is much criticism of governments. Could it be that the ‘governments’ in question currently happen to be on the right side of politics, and that justifies immediate opposition to any organisation (i.e. The Conversation) that takes them to task? Hardly objective or ‘scientific’!
Why not leave data alone and just express in simple language why it may be dubious? Some think it’s important to have vivid graphs and national temps etc. Why? How much temp has simply been dictated by cloud? How sustained was the heat/cold etc? Why connect profoundly different climate zones because they happen to fall within certain political boundaries? What’s Albany got to do with Cape York? (For that matter, what’s Rutherglen got to do with Cabramurra?)
You had some freakishly high minima in NSW in 1950. When you’ve got cloud hanging over the northern tableland all through winter – something it rarely does – your minima go through the roof. Someone ignorant of the northern tableland comes along sixty years later – maybe they work in Boulder or East Anglia – and decides something based on that temp anomaly – or lets a computer decide it. What a total waste of time!
People want more from numbers than numbers can give. If you can’t match numbers to what actually happened you’ll be fretting over high minima and low maxima which only came about because there was lots of cloud about. You’ll declare a year to be a certain way because of a weather event which was a one-off or only barely within the time frame. Our biggest tornado only just made it into the 1970s by a few hours. But someone will be able to turn that into a number to show that Australian tornadoes were more severe in the 1970s than in the 1960s. It’s just a stupid number, for god’s sake.
Just accept data for the shabby thing it is. It’s handy stuff if you can use your loaf, worse than useless if you can’t.
Beth Cooper says
Jen says, ‘Think 1984.’
Also comes to mind, ‘The Hollow Men.’
Between the idea
And the reality
Between the motion
And the act
Falls the shadow.
I don’t visit The Conversation regularly. But I do go there every so often. Also my brother sometimes sends me links to articles he has had published there or finds interesting. So I am not unfamiliar with the site.
What I have noticed is that the stories, whether they are on the Murray River, economics, agriculture, fisheries, or climate change, are as predictable as Tony Jones on the ABC.
That is, it’s mostly, as I see it, populist propaganda rehashed over and over. I don’t know why the taxpayer should pay for such a space.
spangled drongo says
It’s gone beyond the failed science now and the alarmists realise that ideology has to be their last chance.
You would think the IPCC would be too embarrassed to indulge in this snake oil:
“If you were a regular reader of the ‘The Conversation’ you would appreciate that there is much criticism of governments.” Examples please, marc. Then this:
“Could it be that the ‘governments’ in question currently happen to be on the right side of politics, and that justifies immediate opposition to any organisation (i.e. The Conversation) that takes them to task? Hardly objective or ‘scientific’!”
Of course there were no criticisms of the previous Rudd/ Gillard governments [sic].
@ spangled drongo
I’ve laid out the ACORN-SAT methods at JN here (#38.2):
I think it is worth noting an important distinction: there are 2 methods.
1) Homogenization Method – M&W09
2) Adjustment Method – PM95
Further down from that link above, after looking at the cases of Amberley and Bourke by Marohasy, Abbot, Stewart, and Jensen, Jen’s ‘Rutherglen’ and ‘Rewriting the history of Bourke Part 2’, and Jo’s ‘Big adjustments? BOM says Rutherglen site shifted, former workers there say “No”’ that I referred him to, Mikky commented at #39.1.1:
“What may have been lacking is human intervention to question what the software has done”
I think automation of the methods above is a better description than “software” but I replied in agreement and I’ve also brought the discussion forward to Jo’s ‘Pitman says BOM don’t “fiddle” with data’ at #28.1:
This was a follow-on from #28 where I stated:
“We want to scrutinize the statistical AND physical (i.e. local conditions) rationale (we already know the methodology) for specific applications of the ACORN-SAT homogenization method (M&W09) and adjustment method (PM95) in the cases of breakpoints at Rutherglen, Amberley and Bourke – for starters.”
In that second comment linked (#28.1) is the case of Rutherglen from Jo’s ‘BOM finally explains’ (my clarifications):
“Jo says: Let’s check out those [near] neighbors (Deniliquin, Wagga Wagga, Sale, Kerang, Cabramurra) [instead of BOM’s remote locations]”
The graph is linked, after which I’ve made my point (in essence similar to Mikky’s):
“There’s something immediately wrong with either or both of BOM’s homogenization and adjustment rationales and/or applications of them (but not necessarily the actual methodology) just by this case alone i.e. it is not necessarily the methodology that is at fault it is the application of it. Mikky identified this in ‘Hiding something’ (#39.1.1):……(Mikky’s quote above)….”
So you imply that the procedure is at fault by your demand for an audit of it. I (and Mikky) am in effect echoing that, but have suggested a particular aspect of that procedure: Trewin et al should also make intelligent hands-on interventions when their automation of M&W09 and PM95 (neither necessarily faulty, but the automated application of them could be) returns what looks a lot like garbage, as at Rutherglen.
That is my main thrust and I would be interested in what others may think of this (Jennifer?) but there is another important point that both Bernd Felshe at #38.1.1 and myself at #38.2.1 have picked up on in the ‘Hiding something’ thread (top link above). The adjustment method (PM95) is a statistical model method (see #38.2.1) for which actual data is replaced by “parameter estimates”, and, according to Klugman (2002) “there is no guarantee that the equations will have a solution, or if there is a solution, that it will be unique” (see #38.2.1).
There’s a long way to go getting to the bottom of all this, I think, but I suspect the culprit will be found to be the lack of sensible human intervention in an automated process that accesses comparator data, rather than the respective methodologies (M&W09, PM95), which, although probably valid in concept, were automated inappropriately (the root human cause in the first instance, not picked up), e.g. the automation accesses REMOTE comparators rather than LOCAL ones when, in a given selection of datasets, local ones are missing. NIWA even did that manually, in contravention of R&S93, for their NZT7.
There’s a similar situation to ACORN with BEST in NZ. For Hamilton City and the Waikato region, BEST pulls in datasets from Auckland and Bay of Plenty regions (different climates) completely overlooking the local datasets e.g. Ruakura in Hamilton City. BEST NZ profiles don’t match NIWA NZT7 profiles at the same location. Again, the humans aren’t thinking. Similarly in Australasia with GISS. GISTEMP profiles don’t match NZT7 or ACORN profiles (e.g. Rutherglen) even though the GISS process is similar to ACORN in respect to selection of remote comparator stations rather than nearby. The GISS methods obviously have some differences to BOM’s other than what looks to be similar, and also inappropriate, comparator selection.
But probably better just to go with BOM, ACORN, Amberley, Rutherglen, and Bourke for now.
Richard, my impression is that the methodology of the BOM, if we can find how and what that is, is simply designed to be complicated and recherché. It is designed to put off and confuse, and this is what I found when I looked at the NIWA court case. These temperatures are fitted to an ideology, and that fact is hidden behind unnecessary technicality.
It’s a con job.
I’m inclined to think Pitman and Alexander really don’t know what this is all about at the detail level. But they would like everyone to be left with the impression they do.
They say homogenization is an “essential process” in the quote from The Australian above. We already know that, but could they elucidate the details of the ACORN-SAT process? I doubt it. There’s a lot going on. I’ve been looking at BOM’s (read Trewin’s) homogenization method (M&W09) and adjustment method (PM95). That’s not even the process, but do Pitman and Alexander know that, or the respective methods in detail? I doubt it. I know I don’t yet; PM95 is heavy going.
The “process” is the application of those methods. That’s been automated from what I can gather (but still learning). If the process is essentially software automation (I don’t know yet, is it?), are Pitman and Alexander completely familiar with the coding, what it accesses, what the embedded method algorithms are from above, and how it could possibly return garbage? I doubt that too.
I don’t think Pitman and Alexander will dare step in to the details of Amberley, Rutherglen, and Bourke. It would expose their ignorance.
‘If you were a regular reader of the ‘The Conversation’ you would appreciate that there is much criticism of governments.’
I think you will find it left leaning, don’t remember seeing AGW theory put under an intolerable light.
Fairfax and aunty are on the same bandwagon, but you are oblivious to the fact they are using sophisticated propaganda. Their strategy is the sin of omission so you can’t really be held responsible for being ignorant on the subject.
I’ve got some disagreements with your comment cohers.
>”the methodology of the BOM, if we can find how and what that is,”
There are 2 and we know what they are (M&W09, PM95 – see the links above).
>”is simply designed to be complicated and recherché.”
I don’t get that impression. It’s just that for example, PM95 is advanced statistics. It was difficult to even find a tutorial but I found one eventually (see links above). That statistical methodology is used in health policy and by actuaries at least, and it has been applied by Trewin to temperature series. M&W09 is the basis for NOAA’s USHCN, it’s more straightforward from what I’ve skimmed although I haven’t crunched it in detail yet (to do).
>”It is designed to off put and confuse and this is what I found when I looked at the NIWA court case.”
Again, no. The established and appropriate method for NZ was R&S93, that’s straightforward. NIWA’s NZT7 was loosely based on it but they departed from it similar to BOM i.e. they used remote comparators rather than local/nearby/neighbouring. NZCSET adhered to R&S93 on the other hand. Documented here (R&S93 method in Appendix A):
‘Statistical Audit of the NIWA 7-Station Review’
The contention was (still is) over NIWA’s very loose (ad hoc) application of R&S93 which they convinced the Judge was internationally recognized. Well it is but only because CRU incorporates the NZT7 in CRUTEM4 (as it does BOM’s HQ). Thing is, the Judge dismissed Bob Dedekind’s affidavit because he wasn’t a climate scientist (he took the word of Dr Brett Mullen, climate scientist, instead) and with it went the ‘Statistical Audit’. But the ‘Statistical Audit’ above was reviewed by 3 independent professional statisticians. The Judge ignored that by his dismissal of the affidavit.
>”These temperatures are fitted to an ideology and that fact is hidden behind unnecessary technicality.”
I don’t think it is quite like that, but close. I think it is a default, status-quo situation. In both cases (BOM, NIWA), necessary but inappropriately applied technicality returns a result that fits very nicely with their ideological business of the day, man-made climate change; i.e. their results, dodgy as they are (though they’ll never admit that), are advantageous to them, so they desperately want to uphold them. They like their results, they want to preserve their credibility and the impression that their work is high quality, and they can’t and won’t see that their results are garbage because they’ve got too much prestige and business riding on them.
Their motivation then is to defend their results by all means because to cave in to questions of impropriety would be to lose both prestige and business, the latter ideologically driven but business is business.
spangled drongo says
Thanks for your efforts, Richard. It certainly needs investigating. As with NIWA.
Thanks for your unique insights.
I tend to agree with the thrust of your assessments – scientific and also the position Pitman, Alexander and others find themselves in.
I can also see that what is written in the technical literature by Trewin bears little resemblance to the ACORN-SAT output. You sort of explain why… but perhaps not quite convincingly enough…
Also, what is perhaps missing from all of this is:
1. Agreement on the overriding principles that should govern homogenisation.
2. Agreement on the need to start the record from the beginning. That is for places like Bourke the temp record can be extended back at least another 30 years to begin in 1880, if not earlier. And once this is done even the gross distortions potentially introduced by the two step homogenisation process you describe become somewhat… well the hot years of the late 1800s are quite spectacular and overwhelming…
handjive of climatefraud.inc says
@marc September 2, 2014 at 9:50 am #Quote:
“Jen and all. It is indeed interesting that anything or anyone that doesn’t subscribe to ‘your’ particular point of view is fair game to ridicule.”
marc. unlike the conversation, your comment will not be deleted, nor will your account be blocked here, though it would qualify applying conversation standards.
The conversation is a sheltered workshop, where rude, abusive favoured resident trolls are protected from any robust conversation or debate.
It is a cul-de-sac of progressive group think.
As for ridicule, maybe you should go and stand on a street corner yelling the end is nigh whilst wearing a sandwich board that says same.
Yes, you and your failed ‘97% consensus settled doomsday climate science’ deserve ridicule.
spangled drongo says
All of the above plus it’s taxpayer [over] funded.
It is interesting that the only reason the so-called homogenization seems to exist is to enhance the claims of a climate crisis.
It is also interesting that it seems the only way climate catastrophe promotion can exist is by way of controlling the conversation. In robust open discussion, climate crisis promotion does not hold up under scrutiny.
spangled drongo says
Jen, is it possible to get any more detail on this from Steven Goddard?
>”You sort of explain why… but perhaps not quite convincingly enough… ”
Yeah. I haven’t convinced myself yet either. There’s an enormous amount of reading, understanding, and thinking to do (TR049, M&W09, PM95, data) that has to be worked on over time, as I’m sure you are well aware because I see you are doing the same. Clear, concise explanations from me are a way off unfortunately, but I’ll progress – hopefully.
I’m really only scratching the surface but I’ve probably got a head start on most folks. So I’m trying to communicate the important stuff I’ve managed to grasp over several years now for others to more easily catch up if they’re interested (those that are actually behind that is) i.e. create more informed awareness in discussions.
My rudimentary stats knowledge and ability could fall short for understanding the advanced PM95 method in depth (maybe not). I provoked the initial investigation that led to the ‘Statistical Audit’ of the NZT7 by Dedekind et al but bowed out when no longer useful for statistical application of R&S93. The likes of Dedekind and Ken Stewart are more able than me when it comes to data manipulation. I’m hoping at least one statistician will turn up in this probing of ACORN-SAT to help me progress because although formal stats is not my strength I can certainly understand the concepts even if I struggle with actual application of them. None I know of so far unfortunately so I’ve had to wing it myself.
Temperature series and the stats involved are just another item of solar, radiative heat transfer, legal, models, IPCC etc in the climate debate that I try to keep up with. My strengths being heat and economics (from engineering science and business studies), and an analytical ability to assemble the key elements (some legal study helps).
I still don’t know how BOM implements M&W09 and PM95 from reading TR049. Is it by bespoke software code? If so, does the software have a title? I haven’t come across mention or discussion of this anywhere, so I’m starting to ask. Do you or anyone know? I have enough IT nous to delve into code. For example, I’ve had a good look at GISS ModelE GCM code without difficulty.
I think this human rationale combined with method automation aspect is the key to what is producing these bizarre adjustments wrt ACORN (I’ve found I’m not alone too). NZT7 is easier because it is manually done but the remote comparator vs neighbouring contention is much the same.
>”1. Agreement on the overriding principles that should govern homogenisation.”
Didn’t Readfern link to an organization working on this?
>”2. Agreement on the need to start the record from the beginning.”
Maybe the earlier data could form a separate series i.e. a non-SScreen series followed by, but not spliced to, the SScreen series? Why bother changing the non-SS data? Just leave as is. There’s already too much raw data alteration.
My reply to Griss at JN:
>”They seem to homogenise all local sites in exactly the same direction, sort of like a round-about.. chicken and the egg scenario.”
A bit like BEST’s “Raw Data relative to Expected Monthly Means” and “Difference from Regional Expectation”.
Here’s HUME RESERVOIR closest to Albury (16.24km):
Which is pulled in with all the others to produce Albury (near Rutherglen):
RUTHERGLEN RESEARCH, 60.89km away, is one of the others pulled in:
Except Albury is dependent on the “regional expectation” of HUME RESERVOIR and the others.
How can there be an “expectation”? Chicken, egg?
And BEST Albury looks nothing like ACORN Rutherglen around the early 1970s (top graph red dots):
And BEST Rutherglen raw is oddly different to BOM Rutherglen raw (note the differing length of series and the record gap breakpoint).
It would be interesting to look at the BEST breakpoints for Amberley and Bourke too (or any ACORN sites for that matter). BEST use their own “scalpel” method for breakpoint identification.
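For anyone curious what a “scalpel”-style breakpoint search involves at its simplest, here is a toy step-change detector. BEST’s actual method is far more sophisticated (proper statistics and significance testing); this only illustrates scanning for the split that maximises the difference of segment means:

```python
import numpy as np

def find_breakpoint(series, min_seg=5):
    """Toy step-change detector: return the index that best splits
    the series into two segments with different means.

    Scans every admissible split and scores it by the absolute
    difference of segment means; real methods use proper test
    statistics (e.g. SNHT, penalised likelihood) plus significance
    testing before declaring a breakpoint.
    """
    x = np.asarray(series, dtype=float)
    best_idx, best_score = None, 0.0
    for k in range(min_seg, len(x) - min_seg):
        score = abs(x[:k].mean() - x[k:].mean())
        if score > best_score:
            best_idx, best_score = k, score
    return best_idx, best_score
```

Even this crude version makes the contention above concrete: a record gap with a level shift either side will be flagged as a breakpoint whether or not anything physically changed at the site, which is exactly why human scrutiny of each flagged split matters.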
Jennifer Marohasy says
Spangled, Update from Steve Goddard… after I queried the number of stations…
“I made a mistake in my Australia analysis. I wasn’t properly differentiating between stations which had temperature data and those which only had precipitation data back to 1895. The number of stations with temperature data back to 1895 is much smaller.
My apologies. I have the code fixed now and am working on better analysis.”
Pity the Bureau couldn’t admit when it made a mistake.
Jennifer Marohasy says
This morning Graham Lloyd wrote in The Australian that the BOM was justifying its homogenisations at Rutherglen with reference to 17 neighbouring stations.
Ken Stewart has had a look at them… already! His analysis suggests that after homogenisation Rutherglen becomes an outlier. Crazy stuff they do at the BOM!
Check out Ken’s latest post here… http://kenskingdom.wordpress.com/2014/09/02/rutherglen-spot-the-outlier/
>”I still don’t know how BOM implements M&W09 and PM95 from reading TR049. Is it by bespoke software code?”
Someone is now on to this. Mikky answered my question at JN (‘Explain this’ #4). Here’s the relevant report:
The Australian Climate Observations Reference
Network – Surface Air Temperature (ACORN-SAT)
Bureau of Meteorology response to recommendations of the Independent
Peer Review Panel
15 February 2012
C2, page 7 pdf:
C2. The computer codes underpinning the ACORNSAT data-set, including the algorithms and protocols used by the Bureau for data quality control, homogeneity testing and calculating adjustments to homogenise the ACORN-SAT data, should be made publicly available. An important preparatory step could be for key personnel to conduct code walkthroughs to members of the ACORN-SAT team.
Agreed. The computer codes underpinning the ACORN-SAT data-sets will be made publicly available once they are adequately documented. The Bureau will invest effort in improving documentation on the code so that others can more readily understand it.
spangled drongo says
Meanwhile NOAA adjusts the heat:
While UAH show us the other story:
But it’s not a conspiracy 😉
Comment (mine) at JN ‘Explain this’ #16
>”This long running rural record that looks ideal apparently had “unrecorded” station moves found by thermometers miles away. Already we have found Bill Johnston who worked at Rutherglen who confirmed that the station did not move.”
OK, and BEST appears to confirm the 1980 Rutherglen Research breakpoint was NOT NECESSARILY a station move – it is a “Record Gap” only:
BEST also make a very large adjustment for that 1980 gap to the mean (ACORN adj is to Max and Min separately but which should coincide – ‘nuther story). BOM makes a very large adjustment but hasn’t come up with the exact reason why yet (doesn’t know yet?). It’s a record gap with some data in it but that is all that is known.
I cannot see why, unless something known actually happened at the site during the 1980 gap, that an adjustment be made. There is some data in the gap – leave it as is.
Data can be infilled but this is contentious because it is making up “data”. But that is a minor contention compared to the large adjustment. The gap seems to me to be an infilling exercise – not an adjustment exercise.
Data infilling is covered in C6, page 8 pdf of this report (h/t Mikky #4)
C6. The Panel notes the intention of the Bureau to consider ‘infilling’ data gaps in a small number of stations’ data records. The Panel strongly recommends that, if the Bureau proceeds with this work, the processes should be carefully documented, and the infilled data should be flagged and maintained separately from the original.
Agreed. There appears to be a little confusion about this issue. At this stage, the Bureau only intends to publish and disseminate the ACORNSAT data-set without infilling (i.e. as a composite and homogenised data-set). Should an infilled data product be published, the Bureau will clearly identify it as such and keep it separate from the core ACORNSAT data-set
# # #
BEST’s RUTHERGLEN RESEARCH series starts much later than in ACORN but BEST also use the earlier RUTHERGLEN POST OFFICE:
RUTHERGLEN POST OFFICE: 225 [mths], Jan 1903, Nov 1921
RUTHERGLEN RESEARCH: 525 [mths], Jan 1965, Oct 2013
Seems odd that BEST doesn’t access the long-running RESEARCH data.
Bill Johnston says
The software does have a title and if you ask the authors nicely, you will probably be able to obtain a copy. The software is RHtestsV4. Its manual is available here:
Thanks Bill. Nice.
I don’t think RHtestsV4 is the ACORN-SAT software. It’s a different implementation, Quartile Matching, not Percentile Matching for a start. It’s probably a similar example but I can find no reference to it at BOM or TR049 or searching ACORN-SAT RHtestsV4 on the web.
Brendon, please supply a valid email address to comment here, & I also suggest you comment on the science instead of falling back on the old alarmist slogans about “peer reviewed science”. Anyone can publish peer reviewed articles if they stick to the accepted line run by the alarmist establishment. e.g. Look at the output of John Cook & Stephan Lewandowsky
The world is obviously not warming, the ice is not melting, they fake those pictures. The glaciers are being photoshopped, sea level fall is a global coverup, atmospheric temps as measured by satellites are in decline but the scientists involved, including the pretend skeptic, Roy Spencer, are covering that up too! The scientists at NOAA are also falsifying OHC data to make it look like the oceans are warming (as an increase in heat trapping gases would cause). Plants and animals are migrating, no, I mean being forced to migrate, polewards and to higher altitudes in response to this fake warming.
And now, yes Jennifer, it’s a world wide conspiracy to prevent YOU from publishing breakthrough science that proves ALL current climate scientists wrong.
Yes, all of that is being scripted by thousands of scientists, I mean we proved that when we got their emails and one mentioned “the decline”!!
So yes, either thousands of people are involved in a co-ordinated effort to fool the planet, or you are a complete looney.
A bit of warming post 1980 maybe? Levelling off lately, maybe? Same old dribble of sea level rise as has been going on since the 1700s? Global sea ice extent fractionally below the sat average, after being above the average – yes, Brendon, above! – just a few weeks ago?
All of that may give reason to believe we’re in another balmy period of the Holocene (one of many). And none of it has anything to do with the subject of fiddling data.
Robert, cherry picking is not good scientific practice. Not sure why you focus only on the Antarctic sea ice increase and ignore everything else. No I take that back, I know exactly why you do it. Fooling yourself is easier than fooling me.
Brendon, I was referring to TOTAL – ie GLOBAL sea ice. Antarctic sea ice, which I did not mention, has been running way above average for a long time. It is not doubted that Arctic ice has been very low at September minimum in recent years, with 2007 and 2012 lowest in the post-1979 sat record. Whether extent is lower than e.g. the 1920s or just after the Napoleonic Wars is hard to say, but it’s quite possible.
Some coolists are of the opinion that this September’s “recovery” is a sign of more ice to come, but, being a total skeptic, I don’t see why that should have to happen. Nor do I think we should regard the advance of ice or of glaciers as more than yet another climatic problem for humans. In fact, I think it’s silly to cheer for a recovery of Arctic ice, just as it’s silly to cheer for the sun to rise. Arctic ice has been up-and-down like Berlusconi’s trousers since forever. Stable Arctic is a fantasy, like stable climate.
You mean, you really weren’t aware of the global sea ice extent in recent months? I’m no scientist, but I check out of interest. It also doesn’t take long to find out that those melty western bits of the Antarctic are also the VOLCANIC bits. What is surprising is that this is an old, old fact that nobody would ever have bothered to deny or obscure – until you-know-what!
No, I don’t check the global sea ice extent looking to cherry-pick one day every now and again. I look at all the data.
To look at global sea ice without considering the cause (winds, flow of glacial melt) of change to the sea ice is simplistic, but that serves your purpose.
The Antarctic ice is mostly land-based glaciers and that mass is in decline.
Why do you ignore this? Cherry picking!
Also why do you ignore the Ocean Heat Content since more than 90% of the warming is accumulating there? Cherry picking!
As for your claim that subterranean volcanoes match the melt, well, you might want to check your sources, because the maps don’t align very well, with the two separated by thousands of kilometers in some cases. But to the casual observer, I can understand how you might have allowed yourself to be fooled.
Oh, and there are numerous peer-reviewed papers showing why this climate change is caused by GHG emissions, but of course you’ll cherry-pick which science to believe based upon your own predetermined outcome.
Nothing cherry picked, Brendon. All true and of consequence. There is a lot of sea ice in the world, and record extent in the south. None of this is daily or flukey. You should have been checking.
And, yes, low extent in the north (I’ve checked regularly for years; it takes a few seconds a week), just like a hundred years ago and two hundred years ago – though I can’t give you satellite readings for those past minima (duh). Of course, navigability does not involve extent of ice but position of ice. At present you can slip a ship through above northern Siberia, which you could not do in 2007, even though there was less Arctic ice at minimum in that year. Ice is volatile, which should be obvious.
Rest easy, Brendon, none of this means much. Sea level rise and ocean heat content (as far as one can sloppily measure them) are consistent with real global warming, both the sort you get in interglacials and the upward wobbles within those longer-term warmings. Bass Strait was dry a mere ten thousand years ago. Ask yourself how that could be. Widen your graphs and you’ll see that nothing is soaring.
West Antarctica is volcanic. That does not have to explain changes in ice extent or glacial loss, since all sorts of things can be causes. Ice is VOLATILE. There is no need for a “match”. But western Antarctica is VOLCANIC over a vast area, and around Thwaites in particular.
Why is there so much Antarctic ice overall in recent years? When we have actual climate science we may know more. We may even get to know why Arctic temps did that plunge in the 1960s, and why there was enough Arctic ice in the 1970s to help cause the global cooling alarm. (Yes, there was a global cooling alarm in the 70s. Many are trying to “disappear” it, just like a lot of people in future decades will try to say they were never sold on the present warming hysterics.)
And we are all cherry picking Brendon, since we live on a hot ball and know little about its innards and about the deep hydrosphere. The way “science” is going in this barbarous era of publish-or-perish, it may be some time till we do know.
Now say “cherry-picking” ten more times.
Sea ice in the Arctic is not normal, Robert.
“These results reinforce the assertion that sea ice is an active component of Arctic climate variability and that the recent decrease in summer Arctic sea ice is consistent with anthropogenically forced warming.”
As for your reference to Milankovitch cycles, Robert, please show me any peer-reviewed climate research supporting your claim that they might be to blame for the current warming. These cycles are currently in a slow cooling phase expected to last for at least another 10,000+ years. But please go ahead and cite the climate science to prove me wrong.
I agree ice is VOLATILE, as it responds to warming. Why has Antarctic sea ice expanded? Because warmth is not the only factor. The change in polar winds and the greater flow of colder water from melting glaciers have provided an environment for greater growth of surface sea ice. That’s how climate science currently understands it.
Brendon, I agree with you that Arctic sea ice is not normal. There is NO normal. The 1970s – or the period from the 1870s to around 1917 – were no more of a norm than the periods of meagre ice in the 1920s or just after the Napoleonic Wars.
As to why any of this happens, when we have climate science we will know a bit… which is better than the almost-nothing we have now. It does not matter what “all the data” indicate when the data are hopelessly inadequate.
There is no normal or stable climate. There is no flat line of temps. After the highs of the Optimum we have had a succession of warmings and coolings, none of them remarkable. There were Minoan, Roman, Medieval and Modern warmings, among others. The attempts in very recent years to flatten these out – for what seem to be purposes of politics and dogma – have been verging on outrageous. It was the original hockey stick (Mark One) from the 90s that started me on this skeptical track. That, to me, was the real denial of climate change.
The present warming is common and it is humdrum. The sea level rise (which has continued in unspectacular fashion since the 1700s) is only surprising in places where subsidence or post-glacial rebound makes it look like a big deal. (In Juneau and Stockholm sea levels are falling – but I promise not to “cherry-pick” them to prove that sea level rise is not real. Sea level rise is real, but it’s a dribble and it’s not new at all.)
I do not imagine that things happen without causes and consequences. But saying that you or “climate science” are in the know is what got us into this expensive mess. Having to rush in with a reason for Antarctic sea ice growth is for flim-flammers like Christian Turney or the SkS crowd. You start by admitting you don’t know, not by christening some theory or journal “climate science” and saying that climate science knows, or that’s how climate science understands. What you don’t know, you don’t know. Publish-or-Perish can churn out any amount of trash which is obsolete after a few years (so others can publish then perish). But when you don’t know something…you don’t know it!
I’m a total skeptic at present. I believe nobody has a clue about what the climate will be three months from now. That doesn’t mean future climate is not knowable. It means future climate is not known. I promise to heap praise on the first person who successfully and consistently tells me, over a decade, what the climate will be in three months time from each prediction. I will lobby for them to get a real Nobel, and not one of those Norwegian Emmies they call Nobels and give to drongos.
And if anyone comes out with “peer-reviewed climate research” offering facile explanations for how Milankovitch cycles are “to blame” for the present warming I will treat it skeptically, like the “peer-reviewed climate research” of Briffa and Mann. It would make more sense than the battle of human GHGs versus human aerosols, but it would still be a big call. Chinese researchers reckon the MWP was warmer than the present, but, even if that is so, who knows if the present warming is over? It may have further to go. How can you make predictions about a planet you have hardly examined?
Two things are certain. The plasticky ball called Earth is mostly very hot, and it remains largely inaccessible and unexamined. Does that ever make you wonder?