James goes on to point out one of the two reasons why this null-hypothesis framing is a bad approach:
A recent comment of [Pielke's] provides a nice opening:

As you well know much of science works through hypothesis-falsification. Across climate science the null hypothesis used to guide research has been that a human signal is NOT present, and research is then done to falsify this hypothesis. In some cases such research has been done allowing attribution of climate effects to an anthropogenic forcing. This null hypothesis is chosen because of Occam's razor, it is a simpler explanation for what is observed.

He is essentially posing the question as initially one of detection - can we show that AGW has had an effect, and that the observations are not just the result of climate variability? - before moving on to attribution - how much of a change can we describe as being due to this particular cause?
There is, however, an entirely different but equally valid approach that could also be used from the outset, which is: what is our estimate of the magnitude of the effect? The critical distinction is that the null hypothesis has no particularly privileged position in this approach.
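The distinction between the two framings can be made concrete with a toy example. The sketch below (my illustration, not James's or Roger's; the series, trend value, and prior are invented for the demo) simulates a noisy "temperature" record with a small warming trend, then asks the detection question (can we reject the null hypothesis that the trend is zero?) and the estimation question (what is the posterior distribution of the trend's magnitude?):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented toy "temperature anomaly" series: a small trend buried in noise.
years = np.arange(50)
true_trend = 0.02            # degrees per year, assumed for illustration
noise_sd = 0.15
temps = true_trend * years + rng.normal(0.0, noise_sd, size=years.size)

# Detection (frequentist): test the null hypothesis "trend = 0".
fit = stats.linregress(years, temps)
print(f"slope estimate: {fit.slope:.4f}, p-value vs zero-trend null: {fit.pvalue:.3g}")

# Estimation (Bayesian): posterior for the trend magnitude itself,
# using a conjugate normal prior on the slope and known noise variance.
x = years - years.mean()                    # centred predictor
prior_mean, prior_sd = 0.0, 0.05            # weakly informative prior on the slope
like_prec = (x @ x) / noise_sd**2           # precision contributed by the data
post_prec = 1.0 / prior_sd**2 + like_prec
ols_slope = (x @ (temps - temps.mean())) / (x @ x)
post_mean = (prior_mean / prior_sd**2 + like_prec * ols_slope) / post_prec
post_sd = post_prec ** -0.5
lo, hi = post_mean - 1.96 * post_sd, post_mean + 1.96 * post_sd
print(f"posterior trend: {post_mean:.4f} (95% credible interval {lo:.4f} to {hi:.4f})")
```

The detection answer is a yes/no verdict on the null; the estimation answer is a magnitude with uncertainty attached, which is the quantity a damage calculation would actually need.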
It is amusing to see Roger, very much at the sharp end of policy-relevant work, promoting the scientifically "pure" but practically less useful detection/frequentist approach rather than the more appropriate estimation/Bayesian angle.
The business interests who will be hurt by action to address emissions know who they are, but the people who will be hurt by inaction don't know who they are. Damage estimates will help those people recognise that they are being affected, making them more likely to counterbalance the polluters in policy development.
The other reason why estimates are valuable is climate change litigation. If we can estimate how much damage polluters are causing, we're much closer to figuring out what they owe to society. I'm certain this is something that fossil fuel lawyers and lobbyists are thinking about, and at some point they're going to want a deal: in return for immunity from litigation, they'll give a concession on climate change legislation. The more scared they are of litigation, the bigger the concession they'll offer.
Back to James' post - in the comments, Roger shows up to say he mostly agreed with James on policy, and that he was not advocating the IPCC approach, just "noting" it. Fast forward to more recently:
The attribution of health impacts to greenhouse gas emissions relies on work done by the WHO. Here is how the WHO describes its own analyses:
In 2002 the WHO explained (at p. 26 in this PDF):
Climate exhibits natural variability, and its effects on health are mediated by many other determinants. There are currently insufficient high-quality, long-term data on climate-sensitive diseases to provide a direct measurement of the health impact of anthropogenic climate change, particularly in the most vulnerable populations. Quantitative modelling is therefore the only practical route for estimating the health impacts of climate change.
And by quantitative modeling, they mean a model with assumptions included for the effects of GHG-driven climate change.
Roger calls this "non-empirical" and says estimates can be altered to say anything, so only "empirical" data should be used. I assume he means requiring a disproof of the null hypothesis, which he had previously said isn't a good way to do policy. Maybe I've misunderstood....
Worth noting that this WHO work is the same work that Stoat says is badly done. And I'll concede that taking the additional step of estimating the human impact, after first estimating the AGW-driven climate damage, adds more squishiness to the calculation. But Roger says it shouldn't even be attempted, which is a different claim from saying the attempt was flawed.
(And I'll just put in a marker that I'm not convinced that William's right about the WHO, but I'm going to have to look at that more closely to see if I have anything intelligent to say....)