Over the weekend, my post criticising the new paper by Chris de Freitas, Manfred “Bob” Dedekind and Barry Brill, which claims that warming in New Zealand’s temperature records is only one third of that calculated by the National Institute of Water and Atmospheric Research (NIWA), attracted a flurry of attempted ripostes at Richard Treadgold’s Climate Conversation blog. One — by Bob Dedekind — sets out to be a rebuttal of my original post. Sadly for Bob and his co-authors, he has only managed to dig himself into an even deeper hole.
For the sake of the record, therefore, I have taken the time and trouble to deal with each of his points in detail. The results of my researches do not make pretty reading for De Freitas, Dedekind, Brill, or the editorial team, reviewers and publishers of Environmental Modelling and Assessment.
Pal review
Dedekind kicks off his attempt to deal with my criticisms by repeating the silly claim — made on the basis of a very selective parsing of some emails stolen from the Climatic Research Unit at the University of East Anglia in 2009 — that climate scientists had colluded to get an innocent Chris de Freitas fired from his position as an editor at Climate Research in 2003.
Unfortunately for Dedekind, the truth of the matter — extensively documented by John Mashey in his 2011 Pal Review document — is that de Freitas spent years abusing his position at the journal, ushering poor papers by his climate sceptic mates, notably Patrick Michaels, through to publication on the basis of weak or inadequate peer review. CdF’s behaviour eventually led to a mass resignation by other editors, and ultimately to his own resignation. Here are the main points uncovered by Mashey’s diligent research:
- From 1990 to 1996, Climate Research published no papers by any of the following sceptic “pals”:
Sallie Baliunas, Robert Balling, John Christy, Robert Davis, David Douglass, Vincent Gray, Sherwood Idso, PJ Knappenberger, Ross McKitrick, Pat Michaels, Eric Posmentier, Arthur Robinson, Willie Soon, and Gerd-Rainer Weber.
- de Freitas became an editor at CR in 1997 and then accepted 14 papers in the period up to 2003 from authors with whom he had close ties via US far right lobby groups and climate denial organisations.
- Papers from the “pals” accounted for half of his editorial workload.
- de Freitas acted as editor on seven papers by Patrick Michaels, half of Michaels’ publication record over the period. Mashey describes Michaels as “king of the pals”.
- After de Freitas resigned his editorial role in 2003, publications from the pals stopped appearing in Climate Research.
Given de Freitas’ track record, it is unsurprising that I queried the peer review process at Environmental Modelling and Assessment. Dedekind may choose to live in a parallel universe where white is in fact black, but the rest of us will accept the colours we see at face value.
Source of 7SS
One of the straightforward falsehoods in dFDB 2014 that I pointed out in my original post is this, from the abstract:
Current New Zealand century-long climatology based on 1981 methods produces a trend of 0.91 °C per century. Our analysis, which uses updated measurement techniques and corrects for shelter-contaminated data, produces a trend of 0.28 °C per century.
Dedekind fulminates:
Suffice it to say that there is zero evidence to show that the pre-2010 7SS was ever based on a correct application of RS93, apart from the assertions of some at NIWA.
Let me pose a question. What does Dedekind think Rhoades and Salinger were doing in their 1993 paper? Indulging in a purely theoretical exercise? In fact, they developed their techniques by working on what became the Seven Station Series (7SS), and from 1992 onwards the 7SS was compiled using RS93 methods properly applied.
At least one of the authors of dFDB 2014 should be aware of that simple fact. During the discovery process before the High Court proceedings, Barry Brill and Vincent Gray examined a set of storage boxes at NIWA — dubbed the “Salinger recall storage boxes” — that contained (amongst other things) all of Jim Salinger’s original calculations for the 1992 reworking of the 7SS.
Perhaps Brill and Gray didn’t look at Salinger’s calculations, or if they did, didn’t realise what they showed.
Two other critical references proving that between 1992 and 2009 the 7SS was based on RS93, properly applied, are given below in the section on “Periods for comparison”.
Ignoring NIWA’s work
Here Dedekind goes completely off the rails:
Difficult to untangle the confusion apparent on this one. Firstly, the current 7SS uses the old technique, based on Salinger’s 1981 thesis. We applied a new technique (RS93) to it for the first time.
As I’ve just shown, that simply isn’t true, and Dedekind and his co-authors should be aware of that fact because they were given access to the “Salinger recall storage boxes” and should have read and understood the papers referring to the RS93 method’s application to the 7SS post 1992.
Further proof that dFDB 2014’s authors should have known that the latest 7SS does not use “old” techniques comes from the “Technical Notes” behind each station report prepared by NIWA’s scientists. These are not secret, but they are very technical, and NIWA has judged them not suitable for putting on its website — but they were all supplied to Barry Brill in July 2011. (Hint: if anyone wants copies of these Technical Notes, all they have to do is ask. If you want them quickly, ask a NIWA climate scientist, and don’t mention the Official Information Act. I asked, and as an example you can download the Notes for the Dunedin adjustments here. If you don’t mind waiting, then ask for them under the OIA — the request will go straight to the lawyers (it’s the legal requirement for Crown Research Institutes).) The Technical Notes are basically just tables of intermediate calculations with very little contextual explanation, but they show without any doubt that:
- Shifts to maximum and minimum temperatures were calculated by NIWA for the 2010 Review;
- The statistical significance of all shifts was calculated too. The significance tests were done relative to each comparison (reference) site, rather than evaluating an overall significance level after combining sites as RS93 did.
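To make that second point concrete, here is a minimal sketch of what per-site significance testing of a shift looks like in practice. This is my own illustration, not NIWA’s code: the station names and data are invented, and I have assumed a Welch t-test on the before/after difference series.

```python
# Minimal sketch of testing a site-change shift for significance against
# each comparison (reference) site separately. All names and numbers are
# invented for illustration; this is not NIWA's code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 48                                    # 24 months either side of the change
candidate = 14.0 + rng.normal(0, 0.5, n)
candidate[n // 2:] += 0.4                 # artificial site-change shift

references = {
    "reference_A": 13.5 + rng.normal(0, 0.5, n),
    "reference_B": 12.8 + rng.normal(0, 0.5, n),
}

for name, ref in references.items():
    diff = candidate - ref                # difference series removes shared climate signal
    before, after = diff[:n // 2], diff[n // 2:]
    t, p = stats.ttest_ind(after, before, equal_var=False)   # Welch's t-test
    print(f"{name}: shift = {after.mean() - before.mean():+.2f} °C, p = {p:.3f}")
```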
The Technical Notes were also supplied to the Australian Bureau of Meteorology climate team in 2010 as part of the peer review process and BOM’s scientists would have had no trouble understanding them. The same may not be true for the authors of dFDB 2014.
Dedekind should, therefore, be well aware that NIWA did not use “old” techniques for the new 7SS, and that they calculated adjustments for maximum and minimum temperatures as well as mean temperatures. If Dedekind has not seen these Technical Notes, then he should ask his co-author Barry Brill why these inconvenient truths were withheld from him.
Workings or SI
I shall bow to the views of Steve McIntyre (yes, that Steve) at Bishop Hill (comment on Nov 2, 2014 at 12:58 PM) on dFDB 2014’s lacklustre support for anyone wishing to reproduce their results:
I strongly recommend that the authors provide turnkey code showing their results.
[…]
Some readers, if not most readers, are only semi-interested in the controversy, but insufficiently interested to try to code the results and figure out how to access the data from NIWA. The authors should place the NIWA versions as used in their own FTP location and provide the code by which they obtained their results. The advantage of placing the code online is that interested readers can see exactly what was done without having to parse and interpret the methodology in the article – though clear methodology is equally important in seeing what was done.
Nor is it a sufficient reply for the authors to complain about their own prior mistreatment by NIWA. Most of the climate community will be sympathetic to NIWA and unsympathetic to the authors. So they need to go the extra mile.
Quite so. Extraordinary claims — and let’s be clear, dFDB 2014’s assertion that warming in NZ is one third of that previously calculated by experts is an extraordinary claim — require extraordinary proof.
Periods for comparison
Dedekind makes the following statements:
Any assertion that makes the claim that RS93 does not use one or two year periods is false. Any assertion that RS93 uses four year periods is false.
Of course, it’s more than likely that Gareth’s vision is somewhat blurry on this point. Perhaps he is confused whether it’s two years before and after a change or four years in total? Who knows? But if he wants to wriggle out via that tunnel, then he should be aware that he would be confirming the two-year approach.
As for the claim that no professional working in the field would use a shorter period, then is Gareth now claiming that Dr Jim Salinger (the co-author of RS93) is not a professional, since he clearly uses it in section 2.4 of RS93? What about Dr David Rhoades? Should we write and tell them that?
Just to be clear, when I said in the original post that the use of one or two year periods is not adequate, I was using the RS93 terminology of k=1 and k=2; that is, k=2 means 2 years before and after a site change (so 4 years in total, but a 2-year difference series which is tested for significance against another 2-year difference series).
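A hypothetical helper makes that convention explicit; the function name and the assumption of monthly data are mine, not RS93’s:

```python
# Illustration of RS93's k parameter: for a site change at monthly index
# `change`, k = 2 means two years of the candidate-minus-reference
# difference series on each side (4 years of data in total). Hypothetical
# helper, not code from RS93 or dFDB 2014.
def k_windows(diff_series, change, k):
    m = 12 * k                            # k years of monthly values
    return diff_series[change - m:change], diff_series[change:change + m]

# k = 1 -> two 12-month difference series; k = 2 -> two 24-month series;
# k = 4 -> two 48-month series (the longer comparison NIWA also computed).
```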
Dedekind claims that NIWA never considered k=4. He is wrong, and should know he is wrong, because he has certainly had sight of the following documents:
- Page 3 in the 1992 NZ Met Service Salinger et al report (single page scan here). The final paragraph clearly states k=2 and k=4 were used. The full paper (pdf here) was available to the NZCSET, but was not amongst the “exhibits” supplied to support their evidence to the High Court. One wonders why not…?
- Top of page 1508 in Peterson et al 1998: “Homogeneity adjustments of in situ atmospheric climate data: a review”, International J. Climatology, 18: 1493-1517 (pdf here). Clearly states k=1, 2 and 4 were considered. The paper is cited in dFDB 2014. Perhaps the authors didn’t read it.
Direct evidence that calculations based on k=4 were made is also in the “Salinger recall storage boxes” inspected by Brill and Gray.
Minimum and maximum temperatures
As I pointed out in my original post, dFDB 2014’s failure to consider maximum and minimum temperature adjustments is the paper’s most critical flaw. Dedekind — as is becoming all too clear — is simply wrong when he states:
If this is the most critical flaw in our analysis, then why, in NIWA’s Review of the 7SS, did they not do this? Why did they use the mean, as we did? We followed their lead, after all.
By the way, nothing in anything we’ve done precludes NIWA doing their own RS93 analysis. Why have they not done this yet?
As I’ve already shown above, Dedekind should be aware that NIWA did consider max and min temperatures — which is essential if adjustments are only to be applied when they achieve statistical significance. The evidence is there in the Technical Notes supplied to his co-author Barry Brill two years before dFDB 2014 was submitted to EMA. It’s even in the 7SS Review document NIWA produced explaining the process used to create the latest 7SS. The Review may emphasise the mean temperature shifts, but NIWA obviously had to have calculated the max and min shifts for the Review to mention them at all. Mullan (2012) also considers max and min temperatures when applying RS93, and shows why it is important to do so.
Missing data
Dedekind takes issue with my comments on his infilling of missing temperature data for May 1920 in Masterton:
We use the average anomaly from surrounding reference sites to calculate our missing anomaly. So if Gareth wants to criticise our paper’s technique, he criticises NIWA at the same time.
Estimating anomalies is certainly the correct approach, rather than falling back on climatology. But it doesn’t appear Dedekind has done this for Masterton in dFDB 2014. Table 3 in the paper shows no adjustment made for the 1920 site move, but if you apply RS93 with k=2 — their preferred method — this would change to −0.3°C, and the adjustment would have to be applied because it meets their statistical significance test. Unfortunately this would double the current NZCSC trend for Masterton, and therefore might not be ideologically acceptable.
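For completeness, the anomaly-infilling arithmetic itself is simple to sketch. The numbers below are invented purely to show the calculation, and are not the actual Masterton or reference-site values:

```python
# Toy sketch of infilling a missing month from the mean anomaly of
# neighbouring reference sites. All values invented for illustration.
import numpy as np

may_climatology = {"masterton": 11.2, "ref_1": 10.9, "ref_2": 11.8}
may_1920        = {"ref_1": 11.5, "ref_2": 12.3}   # observed at the neighbours

anomalies = [may_1920[s] - may_climatology[s] for s in ("ref_1", "ref_2")]
estimate = may_climatology["masterton"] + np.mean(anomalies)
print(f"Estimated Masterton May 1920 mean: {estimate:.1f} °C")
```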
The 11SS
Dedekind tries to hand-wave away the 11SS as having been “thoroughly debunked elsewhere”, but doesn’t link to any debunking. The fact is that the raw station data from rural sites with long records that require no adjustments show strong warming. In other words, the warming seen in the 7SS is not an artefact of site changes or urban warming. That is an important matter, and it should have been addressed in dFDB 2014.
Mullan 2012
In my original post, I noted that Brett Mullan’s 2012 paper Applying the Rhoades and Salinger Method to New Zealand’s “Seven Stations” Temperature series (Weather & Climate, 32(1), 24-38) deals with the correct application of the methodology described in Rhoades and Salinger’s 1993 paper. It is not cited in dFDB 2014 — itself a sign of shoddy scholarship in a paper claiming to make the first use of that methodology with respect to the 7SS. In his attempted rebuttal to my post, Dedekind makes this odd statement:
“Mullan (2012) is far from a refutation of RS93.”
Well, no, since it is entirely about the proper application of Rhoades and Salinger’s methodology — but it is a direct problem for what dFDB 2014 calls RS93 — a misapplication of that methodology.
At the very least, dFDB 2014 should have addressed the existence of Mullan’s paper, and explained why the application of RS93 in that paper is not preferable to their interpretation of it. Making no reference to it is a sign either of not knowing the basic literature of the field in which you are attempting to publish (one of academe’s greatest sins), or of trying to avoid uncomfortable issues. In either case, it is a clear example of how the peer review process at EMA failed. Knowledgeable reviewers would have insisted that the authors address the issues raised in Mullan 2012.
Sea surface temperatures (SST)
Dedekind makes much of the fact that the paper does refer to one paper on SSTs around New Zealand — but he skips over the essential point: the SST evidence confirms that warming is occurring faster than they calculate. A hand-wave from the authors to “there is low confidence in the data in the crucial pre-1949 period” is hardly a serious argument — especially given the strong warming shown in the raw station data, and the corroborating warming seen on offshore islands and in the loss of ice in the Southern Alps.
Parting shot
Dedekind closes with a little snipe at me for pointing out that he had no publication record. Perhaps I should have added “relevant” or “in the field” to the sentence in my original post, but in making an appraisal of his expertise I was greatly assisted by Justice Venning’s judgement on the matter in NIWA v Cranks:
Mr Dedekind’s general expertise in basic statistical techniques does not extend to any particular specialised experience or qualifications in the specific field of applying statistical techniques in the field of climate science. To that extent, where Mr Dedekind purports to comment or give opinions as to NIWA’s application of statistical techniques in those fields, his evidence is of little assistance to the Court.
Dedekind and Treadgold’s reaction to my criticism of dFDB 2014 — and their whole approach to NIWA and the NZ temperature record — demonstrates just how divorced from reality the climate crank position has become over the five years of their attack on NIWA. Their whole campaign only makes sense in a strange world where New Zealand’s climate scientists have been conspiring to create the impression of warming where none exists. Remember Treadgold’s impassioned bleat when he launched their effort in November 2009?
We have discovered that the warming in New Zealand over the past 156 years was indeed man-made, but it had nothing to do with emissions of CO2—it was created by man-made adjustments of the temperature. It’s a disgrace.
Now that dFDB 2014 has been published, and the NZCSC’s partial and political misapplication of climate statistics has been revealed, the enormous mismatch between the little fantasy world they’ve lived in for the last five years and the harsh reality of a world that’s warming fast has become all too obvious. Such is the nature of cognitive dissonance, however, that we cannot expect reason to prevail in their camp. The deluded will continue in their delusion, and continue to try to twist the world to match their own expectations. And they will continue to fail, miserably.
[The Marvelettes, Danger! Heartbreak Dead Ahead.]
NIWA’s page of references on 7SS clearly cites Rhoades and Salinger 93 as the source of the methodology, as well as Fouhy et al 92 on site details.
It’s hard to understand Dedekind missing that…
It seems to me that the likes of de Freitas, Brill and Dedekind are just scam artists, pure and simple. Their “paper” isn’t meant to make sense, merely to look “sciencey” enough to give the impression of a debate amongst experts, and be cited on denialist blogs to delude the unwary.
Indeed Rob, it’s the ol’ Wizard of Oz routine – try to bamboozle punters with their spoof science.
Come back New Zealand glaciers, it was all just a scam!
As seen in Salinger et al (1992), comparisons “were also made using annual values for the entire overlapping period of record with a neighbour station (Salinger 1981)”. An example of this is given in Rhoades and Salinger (1993), Table V, method A. This seems to have been missed. RS93 also describe yet another method that was used (see section 3.4). Sorry to be technical, but this is taken directly from RS93 for the stats geeks:
“3.4. Finding the most prominent change-points

Meteorological series usually contain a number of apparent change-points, where the level seems to change from one value to another. The more prominent a change-point associated with a site change is, the more confidence one has that it is a real site change effect. It is therefore of interest to identify the times of the most prominent change-points in a series and to see if the site changes are amongst them. Let {e_t : t = 1, …, n} be the series of residuals of {x_t} from a time-of-year mean. Then, to fit the most prominent k change-points, we seek a partitioning set of integers n_1 < n_2 < … < n_k = n and values μ_1, μ_2, …, μ_k such that the residual sum of squares

RSS = Σ_{j=1..k} Σ_{t=n_(j−1)+1..n_j} (e_t − μ_j)², with n_0 = 0,

is minimized. For a given partitioning set, RSS is minimized when each μ_j is the mean of the residuals in the j-th segment. For a given k, the optimal partitioning set is efficiently determined by dynamic programming, as discussed by Seward and Rhoades (1986).”
These two methods have been totally overlooked by dFDB etc.
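For anyone who wants to see what that dynamic programme looks like in practice, here is a compact Python sketch of the RS93 section 3.4 idea. It is my own toy implementation, not Seward and Rhoades’ code, and the function name and test data are invented for illustration.

```python
# Toy implementation of RS93 section 3.4: fit the k most prominent
# change-points by minimising the residual sum of squares (RSS) over all
# partitions, via dynamic programming. Illustration only, not the
# original Seward & Rhoades (1986) code.
import numpy as np

def best_partition(e, k):
    """Split residual series e into k constant-mean segments minimising
    total RSS; returns the k-1 interior change-point indices."""
    n = len(e)
    prefix = np.concatenate([[0.0], np.cumsum(e)])
    prefix2 = np.concatenate([[0.0], np.cumsum(np.square(e))])

    def seg_rss(i, j):
        # RSS of e[i:j] about its own mean, computed from prefix sums
        s, s2, m = prefix[j] - prefix[i], prefix2[j] - prefix2[i], j - i
        return s2 - s * s / m

    cost = np.full((k + 1, n + 1), np.inf)   # cost[j, end]: best RSS with j segments
    cut = np.zeros((k + 1, n + 1), dtype=int)
    cost[0, 0] = 0.0
    for j in range(1, k + 1):
        for end in range(j, n + 1):
            for start in range(j - 1, end):
                c = cost[j - 1, start] + seg_rss(start, end)
                if c < cost[j, end]:
                    cost[j, end], cut[j, end] = c, start

    # Trace the optimal segment boundaries back from the end of the series
    cps, end = [], n
    for j in range(k, 0, -1):
        end = cut[j, end]
        cps.append(end)
    return sorted(cps)[1:]                   # drop the leading 0

# Two-segment test: a 0.8 °C level shift at index 30, plus noise
rng = np.random.default_rng(1)
e = np.r_[np.zeros(30), np.full(30, 0.8)] + rng.normal(0, 0.2, 60)
print(best_partition(e, 2))                  # expect a change-point near 30
```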