Sunday 31 August 2008

Ben Goldacre on the MMR-autism hoax

Ben Goldacre has written an absolutely excellent article on the history of the MMR-autism scare, with a particular focus on the media's responsibility for the thousands of cases of childhood illness, disability, and death that the scare has caused. Particularly interesting is his potted history of vaccine scares in other countries. The United States is rolling through its own scare at the moment, helped along ably by its endemic antivaccination movement.

The MMR scare bobbed into view while I was in high school, and it never smelled of anything but quackery, mostly because the media coverage degenerated quickly into a mess of emotionally exploitative bullshit and instigator Wakefield seemed more interested in getting off on said media exposure than actually being a scientist. And I just don't get any less angry about it. We have a responsibility as scientists to present our work accurately and modestly, to consider the consequences of our errors, and to not take hundreds of thousands of pounds from lawyers to make shit up from the most tangentially related, hokey data. The media and the public look to us for answers, and if we say something which gets their attention, it's going to be taken as the truth. For my part, I did my best to present the situation as it really stood to my relatives, and to make sure I had the best possible basis for believing that.

It was an important lesson at an early stage of my career. Anyway, check out Goldacre's article. It's wonderfully thorough and well-written. And remember, don't trust what you read in the papers.

Tuesday 26 August 2008

More realism? Yes please.

One of my coworkers has passed "Predicting Molecules — More Realism, Please!" by Roald Hoffmann, Paul von Ragué Schleyer, and Henry Schaefer onto me. It's potent stuff, so much so that the journal elected to print the reviewers' comments (one negative, from Frenking, and the others positive, from Bickelhaupt, Koch, and Reiher) alongside it. I like it.

As the title suggests, the article is mostly on the subject of computationally designing molecules, and the often woolly criteria that theoreticians use to justify these compounds' existence or usefulness. Hoffmann et al. propose a set of criteria for demonstrating that a substance is "viable", meaning likely to survive in a reasonable lab environment, or at least "fleeting", meaning that it exists long enough to be detected but may not be isolable. It's all common sense (stability with respect to polymerisation or oxidation, charge transfer, etc.), but it's refreshing to see it laid out so plainly. I imagine that a lot of computational "synthesis" papers will model themselves after this methodology.

I'm also sure that the criteria will be argued about and adjusted. Frenking gets us started with nit-picks about rules which are utterly inapplicable to astrochemistry (a compound which is unstable on Earth may exist by the megaton in the vacuum of space), and makes the invaluable observation that theoretically unisolable compounds may be stabilised in the lab by forming complexes with equally unorthodox or artificial bedfellows. Or to put it another way, the environment is just as important as the compound in determining stability. Reiher points out that some of the criteria for stability are too generous: the molecular dynamics criterion, for example, would drop only explosives into the "unstable" pile. Koch observes that the criteria are generally biased towards organic chemistry (perhaps inevitably, given the authors!). These comments are as valuable as anything in the Hoffmann paper itself, and hopefully they'll be just as widely discussed. Each reviewer seems to want to pull the criteria towards a specific field, so the main concern is probably that they'll turn into a piecemeal mess.

The rest of the paper is spent calling out computational chemists for things like quoting bond lengths to seven decimal places of an angstrom: values well beyond anything physically measurable, and likely so sensitive to the computational methodology as to be meaningless. On this subject, the authors have less to say. This is probably the section that provoked Frenking's remarks about "comments which are in reality neither helpful nor do they make realistic suggestions for an improvement". The paper says little about accuracy (how closely results match reality), precision (how reproducible results are), or the related matter of significant figures (how many decimal places to quote), beyond pointing out common mistakes and telling us to exercise "common sense", but it's good that someone brought it up. This section was pretty thought-provoking, but if you're not interested, you may want to skip to the bottom of my ramble on it.

Precision, and with it the number of significant figures, is a tough one. In an experiment, the data gathered on each run are slightly different due to subtle variations in the experimental setup or sheer chance. These can then be averaged to give a final result. The precision can be determined by calculating the "standard error" in the results, which indicates how uncertain that average is. It's reasonable to say that anything smaller than the standard error is not significant. If a result comes out as 1.004434 +/- 0.02 grams, then quoting it as 1.00 +/- 0.02 g is fair. Also, there's the very basic point that you can't calculate your results to more decimal places than your original measurements.
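
To make that concrete, here's a minimal sketch in Python; the repeat measurements are invented for illustration.

```python
# Minimal sketch: the standard error of the mean tells you which decimal
# places of an averaged measurement are actually meaningful.
from math import sqrt
from statistics import mean, stdev

masses = [1.004, 0.982, 1.021, 0.995, 1.013, 0.978]  # grams, hypothetical repeat runs

avg = mean(masses)
sem = stdev(masses) / sqrt(len(masses))  # standard error of the mean

print(f"raw mean: {avg:.6f} g, standard error: {sem:.6f} g")
# Anything below the standard error isn't significant, so you'd quote
# something like "1.00 +/- 0.01 g" rather than all six decimal places.
```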

Computational chemistry is often quite absurdly precise by comparison. Although the purely mathematical approximations involved do add some run-to-run, computer-to-computer, and program-to-program variation, this is usually vanishingly small. The paper observes that this is not the case for DFT methods, something I'd not been aware of. The authors call for those performing DFT calculations to quantify their precision by repeating calculations with different software and on different computers, and then actually calculating the precision. I'm not sure how well this will be received, as it doesn't sound incredibly practical.
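
The calculation itself is trivial, at least; a sketch with invented DFT bond lengths from three hypothetical programs:

```python
# Sketch: the spread of one (made-up) DFT bond length for the same
# molecule and method, computed with different programs.
from statistics import mean, stdev

results = {
    "program_a": 1.34245,  # angstroms; all values are invented
    "program_b": 1.34251,
    "program_c": 1.34238,
}

values = list(results.values())
print(f"mean: {mean(values):.5f} A")
print(f"spread: {max(values) - min(values):.5f} A, stdev: {stdev(values):.5f} A")
```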

In most cases, though, it'd be entirely reasonable to say that a computed bond length for substance X is 1.3424545 +/- 0.0000001 angstroms: an insanely precise result. But that high precision isn't actually a meaningful statement. We can't quote our result to seven decimal places, because the value we've obtained may be sensitive to the particular computational method we used (which I'm going to call "method A" for now). With a different computational method, we may obtain the value 1.3464354 angstroms (let's call this method B), or 2.3453453 angstroms (with method C). Those results may be very precise, and not vary from run to run at all, but they're obviously suspect. It no longer seems sensible to quote any of these values to seven decimal places. They're clearly very sensitive to the method chosen, and for all we know aren't even accurate (i.e. they could be nothing like the real value).
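
One way to put a number on the problem is to ask how many decimal places the methods actually agree to. A toy sketch using the made-up values above:

```python
# Toy sketch: to how many decimal places do methods A, B, and C agree?
def agreed_decimal_places(values, max_places=7):
    """Largest number of decimal places at which all values round the same."""
    for places in range(max_places, -1, -1):
        if len({round(v, places) for v in values}) == 1:
            return places
    return None  # they don't even agree on the integer part

bond_lengths = {"A": 1.3424545, "B": 1.3464354, "C": 2.3453453}  # angstroms

print(agreed_decimal_places(bond_lengths.values()))   # None: C is way off
print(agreed_decimal_places([1.3424545, 1.3464354]))  # 1: A and B agree to one place
```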

The authors don't provide much advice except to quote Pople's guidelines, about using three decimal places for distances in angstroms and so on. I'd suggest using the accuracy of a method to determine the number of decimal places. For example, if I was working on compound X, and the similar compound Y had been well studied in the experimental and computational literature, I might look up Y in the CCCBDB and see how well it was described by methods A, B, and C. If method C gives a result which matches experiment to two decimal places (for example, 2.3923 vs 2.3912), then it seems reasonable to use that number of decimal places in quoting my own method-C result for X (2.35).
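
A sketch of that benchmark-first recipe, with all compound names and numbers invented:

```python
# Sketch: count how many decimal places method C's result for the
# well-studied compound Y matches experiment, then quote compound X's
# method-C result to that many places. All values are invented.
def matching_decimal_places(computed, experimental, max_places=7):
    for places in range(max_places, -1, -1):
        if round(computed, places) == round(experimental, places):
            return places
    return 0

benchmark_calc, benchmark_expt = 2.3923, 2.3912  # compound Y, method C vs experiment

places = matching_decimal_places(benchmark_calc, benchmark_expt)
print(places)                    # 2: calculation and experiment agree to two places
print(round(2.3453453, places))  # 2.35: compound X's method-C result, quoted to match
```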

I could also suggest choosing the number of decimal places based on the attainable experimental precision, because that's what the results will be compared to. This may be useful if the method chosen isn't well benchmarked, but a lot of experimental work has been done. For example, if the best experimental study on compound Y had a precision of +/- 0.1 angstroms, then I would quote the bond length in X as 1.3 angstroms.
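
That heuristic fits in a couple of lines; the +/- 0.1 angstrom uncertainty is the hypothetical one from above:

```python
# Sketch: round a computed value to the best attainable experimental precision.
from math import floor, log10

def round_to_uncertainty(value, uncertainty):
    """Keep decimal places down to the leading digit of the uncertainty."""
    places = -floor(log10(abs(uncertainty)))
    return round(value, places)

print(round_to_uncertainty(1.3424545, 0.1))  # 1.3
```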

What about compounds or methods which aren't well-studied? Recall that computational chemistry is in the business of approximation; luckily, it's got a toolbox of well-understood approximations. For example, there's a series of methods that goes "MP2, MP3, MP4, MP5...", each more mathematically thorough than the last and therefore providing an increasingly accurate approximation. By going through this series, the calculations gradually approach the impossible dream of an exact description of the chemical system, which shows up as the results settling down on a single value. If a series of increasingly accurate calculations gives the results 3.43, 1.02, 2.48, 1.67, 2.24, 1.83, 2.12, 2.04, 2.09, 2.08, 2.11, then it seems fair to say that the result has converged to one decimal place, and to quote it to that number of places.
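
Here's one way to automate that judgement; how many of the final results to compare (three, below) is my arbitrary choice:

```python
# Sketch: decide how many decimal places a series of increasingly
# thorough calculations has converged to.
def converged_places(series, tail=3, max_places=7):
    """Decimal places to which the last `tail` results all round the same."""
    last = series[-tail:]
    for places in range(max_places, -1, -1):
        if len({round(v, places) for v in last}) == 1:
            return places
    return None  # not converged even to the integer part

results = [3.43, 1.02, 2.48, 1.67, 2.24, 1.83, 2.12, 2.04, 2.09, 2.08, 2.11]

places = converged_places(results)
print(places, round(results[-1], places))  # 1 2.1: converged to one decimal place
```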

Ramble over. To sum up, this paper and the accompanying comments deserve to be read by everyone in the field, and I expect some sort of conference presentation's going to come out of it. I'd love to be around for the questions after that.

(Yes, I'm deliberately lumping "basis set" and "computational method" together here.)