Roger Pielke Sr. is no longer blogging, and has been travelling in Europe lately, so when I reached him by e-mail he was unaware of the latest twists and turns in the Watts/McIntyre saga, in which they attempt to refute the AGW thesis by re-cutting GISTEMP data in light of Watts's survey of U.S. surface stations. Mr. Pielke declined to respond directly, explaining that this was "not his specialty", but was kind enough to refer me to frequent co-authors Xiaomao Lin, Ken Hubbard, and John Nielsen-Gammon. Of these, Mr. Lin has promised to look at the CA work and attempt a response over the course of the next couple of weeks, and Mr. Nielsen-Gammon has responded below with some concrete advice as to how Steve's squad of auditors ought to be proceeding.
Here is Mr. Nielsen-Gammon's original e-mail:
I appreciate the attempt by Watts to classify stations on the basis of siting quality.
The impact of poorly-sited stations on the trends is not known ahead of time, despite people's expectations, and should be much more sensitive to changes in siting than to poor siting itself.
Given a sufficient number of stations, I would trust the trends from well-sited stations much more than from poorly-sited stations.
I am eager to see the results of screening the stations by siting quality. I do not know whether the well-sited stations will show more of a trend, or less of a trend.
Until the results are properly adjusted for variations in the geographical distribution of stations, it is not possible to draw any conclusions. I haven't seen anything posted online yet that does this.
In response to this, I asked:
the first and second passes at the data after classifying about a third of the network (340 or so stations) into "good" and "bad" stations are here [links to CA].
In most cases the good and bad stations are pretty much in sync. Do you think 300 plus is still not enough?
To which Mr. Nielsen-Gammon responded:
The number is sufficient. At least, the error bars probably won't overwhelm any important difference in trends.
So there is no need to stop the analysis, as McIntyre et al. have done. In fact, a brief run-through of CA comments here and here suggests that this work has been stopped in panic because it has not yielded the desired conclusion: the "good" and "bad" stations refuse to go out of sync. As Anthony Watts writes:
I do have the feeling though that comparing USHCN/GHCN data to GISS will yield similar curves no matter what, since the data has already been adjusted at the USHCN level, and that adjustment persists in the data through to GISS data.
Anyway, I wrote further to Mr. Nielsen-Gammon:
They've been working on [adjusting for variations in the geographical distribution of stations]. Surfacestations has been slowly working its way into the U.S. Midwest. There's a map here.
And in response Mr. Nielsen-Gammon suggested some concrete procedures for the CA crowd:
You don't need uniform coverage of quality estimates, you can do this with the data that's already available. You just need to bin the data by location, for example by computing the difference between good and bad stations within every 5 degree by 5 degree square over the US and average the results. Even better, if you're interested in the century-long trend impacts, compute the difference between the station trends and the smooth map of linear trends. Either approach would eliminate the confounding effect of spatial variations in the trends. Maybe someone will have this done within two weeks or so, it's not that hard, and it's fairly standard scientifically.
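For anyone tempted to take up Mr. Nielsen-Gammon's suggestion, his first approach is simple enough to sketch in a few lines. Below is a minimal, hypothetical illustration of the binning step: assign each station to a 5° × 5° lat/lon square, compute the mean trend difference between "bad" and "good" stations within each square that contains both kinds, then average those differences across squares. The station coordinates and trend values here are made up for illustration; they are not real USHCN data.

```python
from collections import defaultdict

# Hypothetical stations: (lat, lon, trend in degC/century, siting quality).
# These values are invented purely to demonstrate the procedure.
stations = [
    (40.2, -105.1, 0.6, "good"),
    (41.7, -106.8, 0.9, "bad"),
    (35.3, -97.4, 0.5, "good"),
    (36.1, -95.9, 0.8, "bad"),
]

def grid_cell(lat, lon, size=5.0):
    """Map a station to its 5 x 5 degree square (floor division handles
    negative longitudes correctly)."""
    return (int(lat // size), int(lon // size))

# Collect trends by square and by siting quality.
bins = defaultdict(lambda: {"good": [], "bad": []})
for lat, lon, trend, quality in stations:
    bins[grid_cell(lat, lon)][quality].append(trend)

# Only squares containing both good and bad stations contribute a
# difference; this is what removes the spatial confounding.
diffs = []
for cell, groups in bins.items():
    if groups["good"] and groups["bad"]:
        good_mean = sum(groups["good"]) / len(groups["good"])
        bad_mean = sum(groups["bad"]) / len(groups["bad"])
        diffs.append(bad_mean - good_mean)

mean_diff = sum(diffs) / len(diffs)
print(f"Mean (bad - good) trend difference: {mean_diff:+.2f} degC/century")
```

If the well-sited and poorly-sited stations really do track one another, `mean_diff` should come out near zero once real data is fed in; a substantial positive value would indicate the poorly-sited stations are inflating the trend.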
So get at it, lads (and you too KB the Denying Munchkin). There's no need to wait for more stations to be classified. You should be able to give a solid answer one way or another within the month.
Hopefully Mr. Lin will respond in the next little while. If so, I will post his remarks. I have also e-mailed Mr. Hubbard, and perhaps he will reply as well.
Sorry for the long post, especially to my more politically inclined readers who probably don't give a shit.