Wednesday 31 December 2008

Time's arrow

As we prepare to lurch into 2009, Roger Ebert has posted a review of The Curious Case of Benjamin Button which makes some interesting comments (with his usual wit) about the importance of the "arrow of time" in a film narrative, and how logically perverse and alien Button's life becomes as a result.

From physics, it seems that the only aspect of the universe which is asymmetric with respect to time is the entropy of a closed system (such as the universe): it must always increase during any process. I half-remember something from "A Brief History of Time" on that subject (I read it about half a decade ago). The key point is that the very concept of time is an artefact of our existence as chemical beings, driven inexorably by thermodynamics. Ebert's review doesn't go into these sorts of technicalities, but it makes me wonder about the psychology of life (would we even call it life?) without a sense of time. It would be as staggeringly alien as the planet at the centre of Stanislaw Lem's "Solaris", I suspect.

Wednesday 5 November 2008

Stop the communis!

Via one entry in Switched's interminable parade of "top X" gallery articles, the Sydney Morning Herald reported back in September that the "ratio communis, a key region of the brain, was malfunctioning" during mobile phone texting or GPS usage. Oh no! This has been somewhat credulously re-reported by Switched (with the embellishment that "UK" researchers made the finding), and God knows how many other blogs, with the usual concerns about what radio is doing to our brains. Scary stuff, for as the article says, the ratio communis keels over completely: "Instead of fluorescing on brain scans, it flickered, grey and dull." Indeed, there have been "fatalities when this key decision-making part of the brain failed". This amazing research led the American College of Emergency Physicians (ACEP) to issue warnings against texting while rollerblading, driving, and so on.

Not to spoil all the fun, but it's nonsense, and either someone pranked the SMH, or the SMH have pranked us. Switched was sceptical on principle, but didn't spend the ten minutes required to confirm their suspicion. "Ratio communis", which the article thinks means "common sense" in Latin, actually means something closer to "common scheme" or "common meaning" in most of the instances I can unearth. It does not appear to refer to any part of the brain, even as a neologism, for it does not appear anywhere on the entirety of PubMed, the go-to archive of medical research (the words appear, separated, in 95 articles). On the intertrons, it only appears in the appropriate neurological context in blogs parroting the SMH story. The ACEP press release is conspicuous by its absence from their website - the warnings the SMH article refers to are from a separate media release which went out back in August.

I can't help but wonder about the "scientific literature" that crossed the desk at the Sydney Morning Herald and which prompted their article. Either the literature, or the article itself, is a finely-crafted satire of our complete stupidity when a blinking semiconductor box is placed in front of us. (I've contacted the paper to try and discern which.) The reference to fMRI, suggesting that scientists can watch our "common sense centre" shut down as we tap away at our gadgets, was just the icing on the cake which, alas, gave it a sheen of veracity and led to a fair few people taking it seriously. Inadvertently, it's satirised our tendency to take anything shown by "science" at face value. I'm glad that it drew less attention than George Carlo's epic fail of a press release about how wi-fi causes autism, at least.

Friday 31 October 2008

Review: Bad Science by Ben Goldacre

Ben Goldacre's Bad Science column in The Guardian, and the namesake blog, are rigorous in scholarship, precise in analysis, and refreshingly reluctant to demonise. An oasis of genuine optimism, Goldacre engages with fad, misconception, and systematic bias in medical and scientific research, and offers up thoughtful solutions to the deep problems in medicine, both mainstream and alternative. Eager to teach, Goldacre peppers his writing with clear explanations of how things work and why some things just ain't so, carefully steering away from the oh-so-tempting, mind-closing "you don't know what you're talking about, here's why" approach to sceptical discourse. Instead, he supposes that we could all understand and engage with the science and medicine headlines, if we were armed with the right tools and relieved of a few popular misconceptions. So I had high hopes for his book, also named Bad Science. (I was lucky enough to get a signed copy. Thanks Michael!)

Bad Science reminds me of Darrell Huff's classic How to Lie with Statistics, which was a humorous primer for spotting statistical cons in advertising and the media, and also Steven Poole's Unspeak, which repurposed close reading as a bullshit-detection system. Increasingly subtle or technical slip-ups are used as worked examples against which increasingly subtle or technical mental tools are deployed. Armed with these tools, we can recognise deception or error in our day to day lives, and we can understand stories in the media in more depth. Goldacre opens with "detox footbath Barbie", introducing the idea of a controlled experiment, and leads us on a brisk but clear tour through the scientific method and medical statistics. Even sophisticated topics like meta-analysis are dealt with honestly and unpatronisingly.

Never accepting a simple answer, Bad Science also takes some time out to elaborate on some of the surprising revelations along the way. The need for experimental controls in medicine (in the context of homeopathy, a subject which Goldacre appreciates has been done to death) is underlined by the amazing power of the placebo effect, and Goldacre takes a brief detour to discuss the potential for this controversial therapy. His discussion of the "antioxidant paradox", that antioxidant supplements seem to be actively harmful while high blood antioxidant levels are protective, is much more than a simple debunking. Instead he takes the time to discuss the larger implications of this finding for our understanding of how the human body works. (This is reminiscent of the writing in Bad Medicine, Wanjek's thoughtfully scattershot romp through medical misconception.)

Goldacre also lets rip about some of his pet hates, but is careful to lay the blame with systems and cultures rather than individuals. The idea of science as an intimidating and arbitrary authority figure is prevalent in the media, and is a projection of journalists' own fears, he argues. By depicting science as an ivory tower sending down pronouncements, the media does its best to discourage us from thinking about the stories. On the other hand world-changing studies and daring pioneers make for easy headlines, and the media are all too willing to loft figures like Andrew Wakefield or Deepak Chopra up as new authorities. The real tragedy, he argues, is that the media shakes off any blame when the wind changes, hanging the former heroes out to dry as quacks or shrugging their shoulders about those crazy scientists, always changing their minds.

Mainstream medicine arguably takes even more heat than "alternative", a step up from "bumbling" to "dangerous", and Goldacre goes as far as to describe pharmaceutical industry behaviour as "evil". Mercifully few authorities take homeopathy seriously, for example. However, catastrophic failures or misuses of science and statistics deceive doctors and policy-makers about the effectiveness of drugs and treatments, while the publication system fails to properly combat these errors. It's here that the book gets most technical, far removed from the sort of material most readers are likely to encounter, and this part of the book offers a striking lesson: perhaps shooting down waffle about distance healing isn't the best use of a sceptic's time.

Personal empowerment is a recurring theme of the book. Science need not be an intimidating authority figure, because we could all potentially understand the process. Likewise some problems can only be dealt with at a personal level, not with a magical sciencey-sounding pill or exercise program. When it keeps these goals in mind, Bad Science is a great success. Later chapters quickly turn into stand-alone case studies in an effort to impress on us the importance of understanding science and medicine - we've all got an interest in staying healthy, after all - and come across as more fragmented. It's here that the book's origins in a succession of blog posts and columns begin to shine through. Even so, it's always wittily, clearly, and precisely written. Bad Science is a superb guide to understanding and engaging with the science and medicine in our everyday lives, and I wholeheartedly recommend it.

(Book links help support Cooking Fiasco.)

Friday 19 September 2008

More from my .log files

The Gaussian03 fortunes continue to amuse:

"THE ACADEMIC HIERARCHY"

THE PRESIDENT:
LEAPS TALL BUILDINGS IN A SINGLE BOUND,
IS MORE POWERFUL THAN A LOCOMOTIVE,
IS FASTER THAN A SPEEDING BULLET,
WALKS ON WATER,
GIVES POLICY TO GOD.

THE VICE PRESIDENT FOR ACADEMIC AFFAIRS:
LEAPS SHORT BUILDINGS IN A SINGLE BOUND,
IS MORE POWERFUL THAN A SWITCH ENGINE,
IS JUST AS FAST AS A SPEEDING BULLET,
WALKS ON WATER IF SEA IS CALM,
TALKS WITH GOD.

PROFESSOR:
LEAPS SHORT BUILDINGS WITH A RUNNING START AND FAVORABLE WINDS,
IS ALMOST AS POWERFUL AS A SWITCH ENGINE,
CAN FIRE A SPEEDING BULLET,
WALKS ON WATER IN AN INDOOR SWIMMING POOL,
TALKS WITH GOD IF SPECIAL REQUEST IS APPROVED.

ASSOCIATE PROFESSOR:
BARELY CLEARS A QUONSET HUT,
LOSES TUG OF WAR WITH LOCOMOTIVE,
MISFIRES FREQUENTLY,
SWIMS WELL,
IS OCCASIONALLY ADDRESSED BY GOD.

ASSISTANT PROFESSOR:
MAKES HIGH MARKS ON WALLS WHEN TRYING TO LEAP TALL BUILDINGS,
IS RUN OVER BY LOCOMOTIVES,
CAN SOMETIMES HANDLE A GUN WITHOUT INFLICTING SELF INJURY,
DOG PADDLES,
TALKS TO ANIMALS.

GRADUATE STUDENT:
RUNS INTO BUILDINGS,
RECOGNIZES LOCOMOTIVES TWO OUT OF THREE TIMES,
IS NOT ISSUED AMMUNITION,
CAN STAY AFLOAT WITH A LIFE JACKET,
TALKS TO WALLS.

UNDERGRADUATE AND WORK STUDY STUDENT:
FALLS OVER DOORSTEP WHEN TRYING TO ENTER BUILDINGS,
SAYS, "LOOK AT THE CHOO-CHOO,"
WETS HIMSELF WITH A WATER PISTOL,
PLAYS IN MUD PUDDLES,
MUMBLES TO HIMSELF.

DEPARTMENT SECRETARY:
LIFTS TALL BUILDINGS AND WALKS UNDER THEM,
KICKS LOCOMOTIVES OFF THE TRACKS,
CATCHES SPEEDING BULLETS IN HER TEETH AND EATS THEM,
FREEZES WATER WITH A SINGLE GLANCE,
IS GOD.

Monday 15 September 2008

Alex's Law^H^H^H Guess of Phone Run Times

If we assume that the listed talk time for a phone is the time taken to drain the battery completely by making one continuous phone call, and that the listed standby time is the time taken to drain the battery completely with the phone powered on and the radio active, but otherwise idle, then we can conclude that:

If you use the phone for half the rated talk time, you will have half the rated standby time available.

e.g. if my phone is rated for 400h (a bit over two weeks) on standby, and 4h of talk time, in practice I can use the phone for a week if I make 2 hours of calls.

This is a useful rule of thumb for sizing up phone manufacturers' ratings, which use preposterously unlikely fringe cases. To remove the deception factor, just cut the rated standby and talk times in half. I'm sure I'm not the first to realise this, but it seemed worth writing down. Of course, it assumes the phone is not used for other functions. If surfing the web or sending text messages strains the phone as much as calling, then those can just be lumped into the talk time pile. If they drain it more or less, then there's some kludge factor involved.
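If you prefer the rule as code, here's a minimal sketch of the arithmetic (the function name and numbers are mine, purely for illustration): the battery is assumed to drain linearly, at 1/talk_rating per hour of calls and 1/standby_rating per hour otherwise.

def standby_remaining(talk_rating, standby_rating, talk_used):
    """Hours of standby left after talk_used hours of calls."""
    battery_left = 1.0 - talk_used / talk_rating
    return battery_left * standby_rating

print(standby_remaining(4, 400, 2))   # 200.0 hours, i.e. about a week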

LIGOland



As promised, a report on my trip to LIGO Hanford Observatory (1:30pm, second Saturday of the month, entry free, approx. 2.5 hours). Our tour opened with an explanatory movie, the gist of which I'll relate for clarity. LIGO (the Laser Interferometer Gravitational-Wave Observatory) is an experiment to detect gravitational waves, the shadowy warpings of space and time given out when massive bodies interact. The textbook example is that of binary neutron stars, pairs of absurdly massive dead stars which circle each other, losing energy by emitting gravitational waves until they collide in a final burst of energy. The LIGO team hope that by measuring the gravitational waves from these sorts of events, in combination with conventional electromagnetic (light, radio, X-ray) signals, they can get a better understanding of how gravity works on a large scale. The oh-so-topical Large Hadron Collider is chasing gravity too, from the opposite end (the tiny subatomic interactions which give rise to gravity in the first place).

Each of LIGO's observatories consists of a pair of 4km arms, meeting at one end to form a right angle. These form a gigantic Michelson interferometer. A single laser beam is generated at the intersection, split, and sent out along the arms. The beams reflect off mirrors at the far ends, return to the intersection, and are then recombined and sent to a detector. If the arms are of equal length, the recombined beams interfere to zero. If the arms are different lengths, they fail to interfere completely and there's an appreciable signal. A passing gravitational wave distorts space-time so that the arms are of different lengths (or equivalently, time passes at different rates along them), and thus there's a measurable signal. (Caltech's own info page.)
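For the mathematically inclined: with the instrument sitting on a dark fringe, the fraction of light reaching the detector goes as sin²(2πΔL/λ), where ΔL is the arm length difference. A toy sketch (the 1064 nm infrared laser wavelength is my assumption, not something from the tour):

import math

def output_intensity(arm_difference_m, wavelength_m=1064e-9):
    """Fractional light at the detector for a given arm length difference."""
    return math.sin(2 * math.pi * arm_difference_m / wavelength_m) ** 2

print(output_intensity(0.0))      # equal arms: 0.0, the beams cancel
print(output_intensity(266e-9))   # quarter-wavelength difference: full signal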




These are impressively sensitive instruments. After the film, our tour guide demonstrated an interferometer maybe a foot across, tugging on the metal frame with a string. The minute warping of the apparatus caused the laser pointer to fade in and out, while more delicate motions, such as nearby footsteps, were made plain by connecting up a light sensor in the laser's path to a loudspeaker. By scaling up the instrument, even more minute changes can be picked up, an elegant bit of "big science". In principle, LIGO is sensitive down to less than the radius of a proton.

Like the footsteps of our guide, any number of mundane things can change the lengths of the arms, or just wiggle the mirrors around, and our tour seemed to revolve around their efforts to wipe out this unwanted noise. First and foremost, there are two LIGO observatories, one at Hanford in Washington state, in the northwest of the United States, and one in Livingston, Louisiana, in the southeast. Therefore any signal detected at one and missed at the other can be assumed to be background noise. (Another upshot is that the two detectors lie in slightly different planes in space, so the relative strengths of the two signals can be used to triangulate the source of a gravitational wave, at least in principle.)



Secondly, and perhaps most obviously from my perspective as a visitor, they built one of the observatories at Hanford. As my earlier post relates, Hanford lent itself to the atomic weapons program due to its utter isolation (and water supply, of course). It took us about twenty unnerving minutes of driving through the nuclear reservation to get out there. This helps to cut out traffic noise. Presumably out of necessity rather than anything else, the Louisiana counterpart is surrounded by logging, and unlike the Washington sands, the local ground carries vibrations of just the right frequencies to mess with the mirrors, so that site needs fancier stabilising equipment.

That's where things get less elegant, more ingenious. The movie related that the mirrors hang from wires, which attach to vibration-isolated platforms, and sets of teeny electromagnets can prod the mirrors back and forth to counterbalance any unwanted wiggles. The control systems are designed to anticipate certain kinds of wobbles and damp them out. Another example of the device's sensitivity: the atoms in the suspension wires jiggle around because of their own heat, making the wires vibrate at their resonant frequencies like violin strings. On the opposite end of the scale, they brought in a freshman to write an application which would automatically account for the one-foot tidal warping of the Earth's surface as the sun and moon circle overhead.




The opening preamble ran a little long for my taste, although it was a great chance to quiz a member of the LIGO staff. Next we got a wander across the site to the intersection of the beams themselves. Did I mention that we're in the desert? And that the arms are 4km long? As much as I would've liked to see the mirrors up close, that's a hell of a hike, and the drop-in tour doesn't go there. Our tour guide was happy to relate anecdotes and field even more questions, in spite of the weather. Honestly, we were suffering a little by the end. Come in the winter, if you insist on the afternoon drop-in like we did.



Our tour concluded in the control room, an appealingly dimly-lit, NORAD-esque chamber, ablink with readouts and spectra from LIGO. There aren't any genuine signals yet. At its current 25 million-light-year range (bigger than the entire Local Group of galaxies), they're only expecting to see colliding neutron stars every ten years, give or take, but upgrades set for the next couple of decades should expand its range tenfold, raising the frequency of detections a thousand times (the volume covered goes up with the cube of the range), so that certain events should be detected daily. To be honest, I think LIGO isn't an operational instrument yet, so much as in a prolonged setup phase until these upgrades are complete. Until then, they've certainly built the world's most sensitive seismometers; we got to watch individual trucks popping up as blips on the control room readouts.

Where next? The obvious way to escape all those rumbles is to put the detectors in space. That's the idea behind LISA, whose arms will be laser links between satellites in orbit around the Sun. This will allow them to study gravitational waves of frequencies which are inescapably concealed by noise on Earth. I wish them all the best of luck. As for the tour, it was perhaps a little too limited in terms of what we got to see on the site, and there was much repetition between the tour and the video, but if you're keen to ask questions (or one of the other visitors is), there's a lot to learn. (The tour guide, if anything, was too keen to talk, apparently oblivious to the blasting heat.) And it's not every day you get to clamber around on top of a scientific tool. (My fiance was disappointed when it turned out there was not actually any Lego, though.)

Sunday 7 September 2008

Geographic nominative determinism?

I have to wonder if there's any cosmic significance to a new orgasmic response study coming out of Paisley of all places.

Domo Arigato

Much of today was spent at the CREHST* Museum in Richland, WA. It's mostly a history of the Hanford Site and its effects on the surrounding area. Hanford produced plutonium for the Manhattan Project and continued to produce it through the Cold War. Now it's being decommissioned and cleaned up. The museum's pretty small, but the exhibits are excellent: little reproductions of parts of the site, cutaways, and what appeared to be an original conceptual model for one of the constructions. The once-top-secret project is well-illustrated with old photos and, downstairs, a short documentary. (When I say top secret, apparently one kid thought the project was making toilet paper, because his dad brought back two rolls every day.) I appreciate anywhere which has old dosimeters up for grabs in the gift shop. The best part was this, though:



The last time I enjoyed stacking blocks this much, Button Moon was probably on TV.

LIGO Hanford is also inside Hanford's borders, and I'm hopefully going to get out there in the next week or so. ZOMG UPDATES

*Columbia River Exhibition of History, Science, and Technology

Sunday 31 August 2008

Ben Goldacre on the MMR-autism hoax

Ben Goldacre has written an absolutely excellent article on the history of the MMR-autism scare, with a particular focus on the media's responsibility for the thousands of cases of childhood illness, disability, and death that the scare has caused. Particularly interesting is his potted history of vaccine scares in other countries. The United States is rolling through its own scare at the moment, helped along ably by its endemic antivaccination movement.

The MMR scare bobbed into view while I was in high school, and it never smelled of anything but quackery, mostly because the media coverage degenerated quickly into a mess of emotionally exploitative bullshit, and instigator Wakefield seemed more interested in getting off on said media exposure than actually being a scientist. And I just don't get any less angry about it. We have a responsibility as scientists to present our work accurately and modestly, to consider the consequences of our errors, and not to take hundreds of thousands of pounds from lawyers to make shit up from the most tangentially-related, hokey data. The media and the public look to us for answers, and if we say something which gets their attention, it's going to be taken as the truth. For my part, I did my best to present the situation as it really stood to my relatives, and to make sure I had the best possible basis for believing that.

It was an important lesson at an early stage of my career. Anyway, check out Goldacre's article. It's wonderfully thorough and well-written. And remember, don't trust what you read in the papers.

Tuesday 26 August 2008

More realism? Yes please.

One of my coworkers has passed "Predicting Molecules — More Realism, Please!" by Roald Hoffmann, Paul von Ragué Schleyer, and Henry Schaefer on to me. It's potent stuff, so much so that the journal elected to print the reviewers' comments (one negative, from Frenking, and the others positive, from Bickelhaupt, Koch, and Reiher) alongside it. I like it.

As the title suggests, the article is mostly on the subject of computationally designing molecules, and the often woolly criteria that theoreticians use to justify these compounds' existence or usefulness. Hoffmann et al propose a set of criteria for showing that a substance is "viable", meaning likely to survive in a reasonable lab environment, or at least "fleeting", meaning that it exists for long enough to be detected but may not be isolable. It's all common sense (stability with respect to polymerisation or oxidation, charge transfer, etc.), but it's refreshing to see it laid out so plainly. I imagine that a lot of computational "synthesis" papers will model themselves after this methodology.

I'm also sure that the criteria will be argued about and adjusted. Frenking gets us started with nit-picks about rules which are utterly inapplicable to astrochemistry (a compound which is unstable on Earth may exist by the megaton in the vacuum of space), and makes the invaluable observation that theoretically-unisolable compounds may be stabilised in the lab by forming complexes with equally unorthodox or artificial bedfellows. Or to put it another way, the environment is just as important as the compound in determining stability. Reiher points out that some of the criteria for stability are too generous; molecular dynamics, for example, would drop only explosives into the "unstable" pile. Koch observes that the criteria are generally biased towards organic chemistry (perhaps inevitably, given the authors!). These comments are as valuable as anything in the Hoffmann paper itself, and hopefully they'll be just as widely discussed. Each reviewer seems to want to pull the criteria towards a specific field, so the main concern is probably that it'll turn into a piecemeal mess.

The rest of the paper is spent calling out computational chemists for things like quoting bond lengths to seven decimal places on the angstrom - values well beyond the physically measurable, and likely so sensitive to computational methodology as to be meaningless. On this subject, the authors have less to say. This is probably the section that provoked Frenking's remarks about "comments which are in reality neither helpful nor do they make realistic suggestions for an improvement". The paper has little to say on accuracy (matching results to reality), precision (being confident in your results) and the related matter of significant figures (providing your results to X decimal places) except to point out common mistakes and state that we should exercise "common sense", but it's good that someone brought it up. This section was pretty thought-provoking, but you may want to skip to the bottom of my ramble on it.

Precision, and with it the number of significant figures, is a tough one. In an experiment, the data gathered on each run are slightly different due to subtle variations in the experimental setup or sheer chance. Those runs can then be averaged to give a final result. The precision can be determined by calculating the "standard error" in the results, which indicates how widely the individual results are spread around the average. It's reasonable to say that anything smaller than the standard error is not significant. If a result comes out as 1.004434 +/- 0.02 grams, then quoting it as 1.00 +/- 0.02 g is fair. There's also the very basic point that you can't calculate your results to more decimal places than your original measurements.
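In code, the whole procedure is a few lines (Python, with made-up repeat measurements):

import statistics as stats

runs = [1.004, 0.983, 1.021, 0.992, 1.017, 1.009]  # hypothetical repeats, grams
mean = stats.mean(runs)
sem = stats.stdev(runs) / len(runs) ** 0.5         # standard error of the mean
print(f"{mean:.3f} +/- {sem:.3f} g")               # quote nothing finer than the error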

Computational chemistry is often quite absurdly precise by comparison. Although the purely mathematical approximations involved add some run-to-run, computer-to-computer, and program-to-program variation, this is usually vanishingly small. The paper observes that this is not the case for DFT computational methods, something I'd not been aware of. The authors call for those performing DFT calculations to quantify their precision by performing calculations using different software and different computers, and then actually calculating the precision. I'm not sure how well this will be received, as it doesn't sound incredibly practical.

In most cases, though, it'd be entirely reasonable to say that a computed bond length for substance X is 1.3424545 +/- 0.0000001 angstroms, an insanely precise result. The high precision quoted isn't actually a meaningful statement. We can't quote our result to seven decimal places, because the value we've obtained may be sensitive to the particular computational method we used (which I'm going to call "method A" for now). With a different computational method, we may obtain the value 1.3464354 angstroms (let's call this method B), or 2.3453453 angstroms (with method C). Those results may be very precise, and not vary from run to run at all, but they're obviously suspect. It no longer seems sensible to quote any of these values to seven decimal places. They're clearly very sensitive to the method chosen, and for all we know aren't even accurate (i.e. they could be nothing like the real value).

The authors don't provide much advice except to quote the guidelines of Pople, about using 3 decimal places for distances in angstroms and so on. I'd suggest using the accuracy of a method to determine the number of decimal places. For example, if I was working on compound X, and similar compound Y had been well studied in the experimental and computational literature, I might look up Y in the CCCBDB and see how well it was described by methods A, B, and C. If method C gives a result which matches experiment to two decimal places (for example, 2.3923 vs 2.3912) then it seems reasonable to use this number of significant figures in quoting my own results (1.35).
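Here's that heuristic as a scrap of Python; the helper and the numbers are hypothetical, lifted straight from the example above:

def matched_decimals(computed, experiment, max_places=7):
    """Largest number of decimal places at which two values agree."""
    for places in range(max_places, -1, -1):
        if round(computed, places) == round(experiment, places):
            return places
    return 0

places = matched_decimals(2.3923, 2.3912)  # benchmark Y: method C vs experiment
print(round(1.3464354, places))            # quote X as 1.35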

I could also suggest quoting the number of decimal places based on the attainable experimental precision, because that's what the results will be compared to. This may be useful if the method chosen isn't well-benchmarked, but a lot of experimental work has been done. For example, if the best experimental study on compound Y had a precision of +/- 0.1 angstroms, then I would quote the bond length in X as 1.3 angstroms.

What about compounds or methods which aren't well-studied? Recall that computational chemistry is in the business of approximation. Luckily, it's got a toolbox of well-understood approximations. For example, there's a series of methods that goes "MP2, MP3, MP4, MP5...", each more mathematically thorough and therefore providing a more accurate approximation. By going through this series, the calculations gradually approach the impossible dream of an exact description of the chemical system, which shows up as the results settling down on a single value. If a series of increasingly-accurate calculations gives the results 3.43, 1.02, 2.48, 1.67, 2.24, 1.83, 2.12, 2.04, 2.09, 2.08, 2.11, then it seems fair to say that the result has converged to one decimal place, and to quote it to that number of places.
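That "settling down" test is easy to automate, too. A sketch, using the illustrative numbers above (the choice of checking the last three values is mine):

def converged_places(values, tail=3, max_places=7):
    """Finest decimal place at which the last few values all agree."""
    last = values[-tail:]
    for places in range(max_places, -1, -1):
        if len({round(v, places) for v in last}) == 1:
            return places
    return 0

series = [3.43, 1.02, 2.48, 1.67, 2.24, 1.83, 2.12, 2.04, 2.09, 2.08, 2.11]
p = converged_places(series)
print(p, round(series[-1], p))   # 1 2.1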

Ramble over. To sum up, this paper and the accompanying comments deserve to be read by everyone in the field, and I expect some sort of conference presentation's going to come out of it. I'd love to be around for the questions after that.

(Yes, I'm deliberately lumping "basis set" and "computational method" together here.)

Monday 26 May 2008

Meteor pistols! Sweet!

I can only apologise for my once-again-ruined update schedule. I've been preoccupied with research and personal matters. This story was too cool to overlook:

Science probe for 'space pistols'

Given pride of place in an unassuming museum on the East Coast of America is a pair of 200-year-old duelling pistols shrouded in mystery. The intricately decorated guns were said to have been forged from the iron of a fallen meteorite.


SILVER-CLAD SPACE PISTOLS! In order to figure out whether their metal really is distinctly meteoritey (honestly I'm not all that good with the solid state), they're subjecting the pistols to a tour-de-force of non-destructive chemical analysis. That's where it gets fun for me.

The big technique which the BBC is excited about is neutron diffraction at the ISIS facility. Imagine light shining through a glass crystal - it's refracted and scattered by the material. If you use shorter and shorter wavelengths, eventually the light (by now, X-rays) is scattered in a very organised manner by the atoms in the crystal. The pattern of atoms in that crystal can be described by a "repeating unit" which just repeats and stacks up through space, so light interacts with the whole crystal in a very well-defined way. Through some rather irksome maths, you can figure out where the atoms are in the repeating unit, and their size and therefore identity. Now you know the chemical structure of your crystal! This is "single-crystal X-ray diffraction".
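The relationship doing the work here is Bragg's law, nλ = 2d·sinθ, which ties the spacing d of the atomic planes to the angles θ at which the scattered waves reinforce. A toy calculation, with illustrative numbers that have nothing to do with the pistols:

import math

def bragg_angle_deg(d_spacing_m, wavelength_m, n=1):
    """Angle (degrees) of the n-th order diffraction peak."""
    return math.degrees(math.asin(n * wavelength_m / (2 * d_spacing_m)))

# 1.54 angstrom X-rays (a common lab source) off planes 2.0 angstroms apart
print(bragg_angle_deg(2.0e-10, 1.54e-10))   # about 22.6 degrees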

For various technical reasons, X-ray diffraction doesn't provide the most in-depth information in the world. X-rays let you see the atoms by scattering off their electrons, so the clarity drops off as you move to smaller and smaller atoms, and at wide angles. Neutrons are big, heavy particles which scatter off the atomic nuclei themselves, and if you can get a beam of those through the sample (and that's where the big cool stuff from the BBC article comes in) then you get much better results. In fact, you can even see hydrogen atoms - the smallest atoms of all.

This is all a gross simplification, so I'm sure someone in Central Facilities will pop along to point out how hilariously inaccurate it is - especially as I've described single-crystal rather than powder X-ray diffraction, and so on - but I digress.

The other technique they used was X-ray fluorescence. The everyday sort of fluorescence that I'm used to - like you'll see if you hold a banknote under UV light, for example - happens by moving electrons about. The electrons in an atom or a molecule settle into "levels", like steps on a staircase. The incoming light gives energy to an electron in the molecule, kicking it up to an unusually high level, like moving it up several steps. It then loses some of that energy, basically as heat, until the only way for it to drop any further in energy is by a really big step. That big step is achieved by giving out some light again, but because we've already lost some energy, the colour of the light is different (lower-energy light has a longer wavelength). That's why tonic water, for example, can take in high-energy, invisible UV light and give out a lower-energy, visible glow. The specific structure of the steps determines what wavelength of light the molecule absorbs, and what wavelength it gives back out, so by knowing the fluorescences, we can (if we're lucky) tell what molecule we're looking at.
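The energy bookkeeping here is just E = hc/λ per photon: losing energy between absorption and emission means a longer wavelength on the way out. A quick sketch, with wavelengths picked purely for illustration:

PLANCK = 6.626e-34       # J s
LIGHT_SPEED = 2.998e8    # m/s

def photon_energy(wavelength_m):
    """Energy in joules of one photon of the given wavelength."""
    return PLANCK * LIGHT_SPEED / wavelength_m

absorbed = photon_energy(365e-9)  # near-UV in
emitted = photon_energy(450e-9)   # visible glow out
print(absorbed > emitted)         # True: the difference was lost as heat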

That's everyday fluorescence. X-rays are still electromagnetic radiation, like light, but each packet of X-ray "light" has a really stupendous amount of energy. If you drop that energy onto one of the outer electrons, it's not just going up a few steps, it's going to be thrown right up the staircase and out the window! That's what we call ionising radiation - it can rip charges right out of atoms and molecules. However, the innermost electrons of an atom are bound far more tightly, with much bigger energy gaps between their steps. Knock one of those deep electrons out with a packet of X-rays, and an electron from higher up the staircase drops down into the hole, giving out an X-ray of its own in the process. Again, the wavelength of energy the atom takes in, and the different wavelength it gives out, is pretty specific to the atom. So we shoot different wavelengths of X-rays at something, and watch for fluorescences, and we can identify the materials in it.

These techniques are both really useful because they don't damage the guns! Usually to tell what's in something, chemically, you have to take a bit of it off, and dissolve it, and react it with stuff, or burn it, or shine some lasers through it. That's not really something you want to do to potential METEOR PISTOLS.

Anyway, getting back to the job in hand... it turns out they're not space pistols. In fact, the handles aren't even silver, they're an unusual kind of brass. Well, at least the analytical chemists had fun.

Tuesday 15 April 2008

"Grand challenges" report

(Via ACS Chemical and Engineering News): The US Department of Energy's report on "Directing Matter and Energy" is available online. It's huge (144 pages in umpteen megabytes) and slightly technical, but it's definitely worth digging through if you know the basic chemistry. It sums up some of the most interesting ideas and open problems in physical chemistry in a reasonably understandable manner. Superfluidity? Non-equilibrium dynamics? Nuclear-electronic coupling? It's all in there.

My favourite? Using cleverly arranged chemical systems to mimic the behaviour of subatomic particles like quarks, so we can study them without needing high-energy particle accelerators.

Wednesday 9 April 2008

Fuel cells and decentralising energy

One of my undergrad dissertations (I had two: one on my placement year, and one as a three-month project before my finals; this is the latter) was on the subject of hydrogen storage, which got me thinking about energy issues. We're currently highly dependent on hydrocarbons as fuels for electricity generation, cooking/heating, and vehicles, as well as, to a smaller degree, as chemical feedstocks for everything from drugs to polythene bags. These resources are finite, and the oil's largely locked up in a part of the world we're obsessed with destabilising, so it'd be good to have alternatives. The energy isn't going to be anywhere near as plentiful as energy from fossil fuels, at least at first (due not so much to scientific limitations as political heel-dragging), so there's the problem of belt-tightening and energy efficiency for a while.

Our energy generation is very centralised. It's easier to build one big nuclear (or coal, or gas) power plant than several small ones, after all, and electricity is fairly easy to distribute from such a central point. The losses involved in distributing electricity to the end users aren't huge. With the race towards electric vehicles, we're going to be drawing on the grid to fuel our cars and trucks too, so it's important to think about how we're making and using our energy. One of the biggest drawbacks with centralised power generation, as it turns out, is efficiency. Electricity generation systems based on heating up water (such as nuclear, gas and coal power plants) waste maybe two thirds of the heat they generate. You can claim back some of that energy, but due to pesky thermodynamics you can't get it all. The excess heat can't really be transported to where it'd be useful (our windswept, chilly lab for a start) so it's dumped straight into the environment, which kind of sucks for the climate locally and globally.

Earlier in the year we had a presentation from Prof. John Irvine of St Andrews University. He's working on fuel cells, and in particular solid oxide fuel cells. A fuel cell, in general, is a device which uses up fuel to generate electricity, and there's a lot of interest from industry. What's really striking is the sort of product Rolls-Royce are aiming at: a fuel cell which runs off our existing natural gas supply, generates electricity at the same or better efficiency than a gas-burning plant, and is the size of a domestic gas boiler. Here's the fun part: any inefficiency in generating the electricity just heats the owner's house! Although it's obviously not sustainable, it gives us a "stop-gap" that allows much more efficient use of our existing fossil fuel reserves. It's a nice idea, and it highlights a possible benefit of shifting electricity generation out to people's homes.

In the long term, energy generation may decentralise, or it may stay central. Many of the big green options (solar, wind, wave power) can be decentralised, and it's been suggested that we'll all set up our own little solar farms and whathaveyou. Again, this makes it easy to catch waste heat and use it more productively, and the losses incurred in transporting the electricity are reduced. Maybe neighbourhoods will gang together. Other systems, like geothermal power or nuclear fusion, are clean but still need to be centralised, and still have the dumping-lots-of-hot-water problem. It'll be interesting to see how everything's mixed up in 50 years time.

As far as vehicles go, in the short term everything's based on existing rechargeable battery technology. This means that it's largely dependent upon how the electricity is made, and the concerns above. In the long term it looks like we may wind up with a "hydrogen economy". Current batteries are pretty crappy in terms of energy density: you can fit a fair amount of energy into a particular volume (your fuel tank), but they're very heavy compared to petroleum, and the batteries have to be disposed of, recycled, whatever after a couple of years. Instead of charging batteries, the theory goes, we can use the electricity to make hydrogen gas from water. Hydrogen gas has a great energy per unit weight (it's the lightest gas, and it's got a literally explosive amount of energy crammed into it) but it takes up a lot of space compared to petroleum (something we're working on). You get the energy back out by running it through a fuel cell, getting your water and electricity back, which can then power your vehicle.

There are a lot of things to think about when it comes to the efficiency of a hydrogen-powered vehicle. As well as the efficiency of the electricity generation, you've got to worry about the efficiency of making the hydrogen using that electricity (there are some interesting ideas on that front, although of course nature got there first), and then you've got to store the hydrogen, and transport it! When you've got hydrogen-fuelled vehicles moving around hydrogen fuel, there's a lot of room for inefficiency in the system as a whole. Many of the ideas for storing hydrogen in smaller volumes look like they'd need large-scale chemical industry work, so it's not clear how we could decentralise this and thereby cut back the shipping costs (in terms of efficiency). We may be stuck with a situation like we've got with oil at the moment, moving tankerloads of mysterious pellets around the country to wherever they're needed. This is a big argument in favour of sticking with old-fashioned rechargeable batteries, if we can make them lighter, longer-lived, and easier to recycle.

In a lot of ways, thinking about energy efficiency is like thinking about the Drake Equation. You've got a lot of subtle variables in there, like the energy involved in making your energy-efficient gizmos and recycling them at the end of their life. These could counterbalance the benefits you get from your gizmo in the first place! So it's a complex problem, and a lot of good debate and argument's come out of it.

Sorry to my regular readers (are there any?) for the delay in updating. I've been a bit preoccupied. I'll do a theory update about intermolecular bonding (once atoms stick together to make molecules, how molecules stick to each other to make materials) on Sunday with any luck, and get back to a regular update schedule soon.

Tuesday 26 February 2008

Amazon Vine to reviewers: Please stop saying things are crap

Amazon recently launched their Vine program in the UK, and I was one of the lucky few thousand or so people to get in on it. For the uninitiated, Vine is a scheme whereby well-regarded reviewers on Amazon can get free stuff, in exchange for reviewing the product within a certain timeframe. I like free stuff, but alas most of it is either crap, obscure, or only available in such a small quantity as to be pointless.

Once upon a time I wrote reviews for a games website, which also meant free stuff, and I rapidly became aware of the problem in deciding how to grade something which I got free of charge, when everyone else would have to shell out for it. A mild irritation to a reviewer (who must of course get all the way through to write the review) might be enough to make someone give up in disgust at the product if they've paid good money for it. Well, this has been on my mind since I wrote my first, and so far only, review. Given the sheer number of reviews that Amazon get back for low-value items (there are only a half-dozen or so freebies to choose from, and you only get the chance once a month) and that there's no requirement that a review be good, the system's already going to skew review averages in a bad way, and completely bury genuine "I went out and had to pay for this" reviews.

And then, Amazon sends me this:

Dear Valued Vine Voices,

Thank you for your commitment to Amazon Vine. The opinions you provide are invaluable to our customers as they make their purchase decisions, and we are grateful for your participation in the process.

We want to remind you that several vendors submit unfinished versions of their products to the Vine programme in hopes that you will write pre-release reviews about them. Please be aware that these samples have not completed the manufacturing process and should not be held to the same standards as finished products. For example, publishers often submit unfinished works, called galleys, that are likely to include typos, repetitive content, errors in syntax, and may be missing glossaries, indices, table of contents, photos, etc. Please take this into consideration as you craft your reviews. We ask that you focus instead on the potential of the overall product. In the case of books, please write about the overall quality and context of the author's message as opposed to the editorial features of the book. If you have any questions about writing reviews, please contact us at vine-support@amazon.co.uk. Thank you again for your continued support of the programme.

Sincerely,
The Amazon Vine Team


It's not just books, either. The Vine forums (they're private) reveal that the copies of The Ferpect Crime and other DVDs are sent out on recordable DVDs with no menus or extras. Pre-release stuff is a perk of the program, of course (and of reviewing in general) but using unfinished pre-release stuff as warm-up reviews for the real thing doesn't strike me as ideally suited to Amazon's Customer Review pile, where the advantage was always that somebody had been playing around with the "real thing" for a while and could pick up on any little quirks or flaws.

I worry that non-members who are not aware of this policy of sending out preview items (that link in the opening paragraph is the only information you get on the program) might be misled by a whole lot of speculative "reviews" written on the basis of an incomplete preview product. In any case, we're limited to whatever content the publisher decides to send out. Sending out dozens of preview items with the explicit instruction that reviewers be generous and vague in describing them is verging on astroturfing, in my not-so-humble opinion, and certainly reduces the value of Vine reviews, and Amazon reviews in general.

(The Vine catalogue doesn't give any indication of which content is pre-release and which is identical to the finished product, so perhaps we're to write all our reviews under the assumption that what we're seeing isn't the real deal. I might be reading too much into the email, though.)

Incidentally, The Elements by Second Person, the album I reviewed and linked to above, comes from Sellaband, a rather novel little indie publisher. Sellaband, it transpires, arranged for "promotion to the 50 most active reviewers on The Vine" as part of its distribution deal with Amazon. I'm not sure what that promotion was (I'm not one of their most active reviewers, by a long shot) but I have to wonder how it's affected the review scores for that album. Anyone want to come forward?

(It's been suggested to me that it's a marketing misphrase, and that they simply mean 50 albums were sent out to any Vine reviewers, as we're all meant to be amongst Amazon's most active. That tallies with the number of Vine reviews that have appeared for the album.)

Saturday 23 February 2008

Real spam of genius

One of the fun things about a .ac.uk email address is that you get a subtly different kind of spam. Dodgy "tuition fee funding" emails from half-wits, say:

Tel: 0705-381-2013
Fax: 0705-381-2013

Ref:ESD/RGID-GIR-L0001/ENC7655

Dear sir/ma,

I am Dr Cheryl Mayfeild; Director ,Student Funding, Westfam Foundation. It has come to my notice that you may want an eligibility in our limited funding for this year as we have a targeted funding of 10,000 students and in which 7,500 have applied.

This may be only the first step in making up your mind whetherto take a free fund of 10,000GBP to augment or cover your tuition fees , living expenses,accomodation,etc. and it is important that you make the right decision for you.

The list of questionaire at the bas e of this letter is your application form which must be filled and emailed back to westfamreg@gmail.com , for official registration.

Further information is also available on our mini-site http://westfamfoundation .blogspot.com
Sincerely yours,
Dr Cheryl Mayfeild.

note!!!
fill this Application form:

student informmation

Course Name:
College/university Name:
Full Name:
Street:
Apt/Suite
City
State
Zip Code
Phone:
Personal Email:

Drivers License
State Issued:
License Number:

Parent or Guardian Data
Parents Names
Parents Phone
Parents E-mail Address
Father's Employer
Company
Address

Privacy Report:
This is to assure you that none of your information shall released to any third party (except on your permission). we run a private encryption database to keep your information safe and secure.
We store and process your personal information in our database, and we protect it by maintaining physical, electronic and procedural safeguards in compliance with applicable federal and state regulations. We use computer safeguards such as firewalls and data encryptionn to safe-guard your information.
Thanks for your understanding,
Westfam Foundation


I could point out all the missing apostrophes, too, but it's not worth the effort. Maybe they're a legitimate UK operation like their blog (check it out; Blogger doesn't want to delete it for some reason) suggests, and they just got confused about what country their prospective beneficiaries are in. Or not. Those phone numbers are personal-use call-forwarding numbers, by the way. I'd call them up to take the piss, but I suspect they're routing to a premium-rate phone scam number. Hey, it looks like teachers can benefit too!

You also get the latest breakthroughs, straight to your inbox:

Definitive Periodic Law is revealed in arrangement of new Periodic Table, repeating sequential numbers of protons discovered in Groups 1 through 18 in the elements of the new ENERGY WAVE of the Periodic Table. The elements in the new ENERGY WAVE of the Periodic Table are given in the ground state, which is one electron for each proton. This arrangement provided a unique opportunity to observe the nucleus of the elements. By incorporating the sequential numbers of protons underlying the Energy Levels K, L, M, N, O, P, & Q in shell blocks s, p, d, and f of the Group elements, it revealed what had been hidden and veiled in the complexity of electron configurations. Sequential numbers of protons are observed to repeat in the Group elements from period to period. This is the true revealed energy force creating the similar physical and chemical properties of Groups 1 through 18 from period to period in the Periodic Table. The ENERGY WAVE of the Periodic Table had revealed Definitive Periodic Law… “Definitive Periodic Law is the number of protons underlying the Energy Levels K, L, M, N, O, P, & Q, in the nucleus of the Elements. These sequential numbers of protons repeat in shell blocks s, p, d, and f, forming groups that have similar physical and chemical properties from period to period.” These sequential numbers of protons are the cornerstones of the nucleus and provide the atomic orbitals of the electrons the foundation for their spatial relationship to the nucleus as described by the azimuthal (angular) and magnetic numbers of quantum chemistry. These sequential numbers of protons are very important, as they reveal new explanation to chemical bond angles and the molecular geometry and structure of molecules.



ISBN-13: 978979623510

Publisher: Energy Spectrum Publishing

Publish Date: November 2007

80 pp 70 Color illustrations


Formatting as in the original. I imagine that if you took a high school chemistry student, kept him up all night before his exams, pumped him full of caffeine, and told him you were going to shoot him if he didn't get an A, that's the sort of thing you'd get on the paper. Either he's restating a lot of what we know about chemistry and nuclear physics in a completely obscure way, or it's word salad. The "periodic law" is the cycle of properties you get when you arrange elements by mass, and it does allow you to get some useful predictions out, as described in that post. Although it's a very good model, it's limited to an approximation of the real, deep quantum mechanical behaviour which gives rise to that cycle. You run into areas where that approximation doesn't hold, so you have to accept that the properties of elements aren't just a function of their mass and that there's something else going on. (One of the nice things about learning chemistry is that you gradually learn more and more complex models, but don't leave the old ones behind. It's like running through the scientific method in time-lapse, and it gives you a better feel for how science as a discipline works as a result.)

Anyway, I certainly can't make head nor tail of the email, even with the help of their website. Any takers? I'm tempted to blow the $25 on ordering a copy for entertainment value (I buy Nexus for the same reason, and I'd be doing them a favour), but quack physics has a tendency to be both bizarre and boring. (Okay, maybe real physics has that problem sometimes, but at least it's useful and does cool stuff).

Thursday 21 February 2008

Wii Bowling: Serious Business

It's probably a good time to introduce my particular research area, "computational theoretical chemistry". As I discussed in the last long post, the ability of atoms to stick together comes from electrons' tendency to pair up, and atoms' need to fill up "shells". I'll elaborate on this in more detail later, but the important point to take away is that chemistry (the science of how matter comes together) is all about electrons. They're tiny, tiny little particles. The lightest nucleus, that of hydrogen, is nearly two thousand times heavier than the electron it binds. That tiny mass means that the crazy rules of quantum mechanics dominate.

Now, the maths required to describe quantum mechanics is well known (Einstein was behind one of the seminal papers at the start of the 20th century, and it got him the Nobel Prize), and so we can use this maths to predict what a particular chemical system is like, but it's fairly torturous. It's possible to describe a hydrogen atom with a pen and paper, but as soon as you go above that (the simplest chemical entity, with just one electron to describe) you have to start making simplifications to be able to solve the problems even in principle. These approximations involve lots of repetitive methods. If you can remember doing long division, or finding square roots by Newton's method, then you've done a similar sort of thing. We say that these methods of solving the problem are numerical rather than analytical.
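Since I mentioned Newton's method for square roots, here's what that kind of numerical recipe looks like; a minimal sketch:

def newton_sqrt(a, tolerance=1e-12):
    """Approximate the square root of a by repeatedly refining a guess."""
    x = max(a, 1.0)                # any positive starting guess will do
    while abs(x * x - a) > tolerance:
        x = 0.5 * (x + a / x)      # average the guess with a/guess
    return x

print(newton_sqrt(2.0))            # 1.4142135623...

No clever algebra gives you the answer in one step; you just grind through the same simple operation until the answer stops changing, which is exactly the sort of drudgery computers excel at.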

Fortunately it wasn't too long before electronic computers (and theoreticians like John Pople who knew how to make them dance) came on the scene, so we could feed these long, boring problems into dumb but fast boxes and get the answers. This is computational, theoretical chemistry. These days there are lots of stupendously powerful computers to work with and handle all sorts of complicated problems. We work hand-in-hand with experimentalists, helping to figure out what could be going on in their experiments. In return, the experimentalists give us a way to test how accurate our predictions are.

As it happens, it's still very difficult to perform calculations on big systems. The way around this is to use simpler and simpler methods. Eventually you throw out quantum mechanics altogether and start describing molecules as little lumps (atoms) stuck together with springs (bonds). The springiness of the springs comes from experimental observations of similar bonds (a bond between a carbon atom and a hydrogen atom is always pretty much the same springiness, say). Then you solve the maths for that system, which is fortunately relatively simple. The theories behind weights and springs and so on are called "classical mechanics", and these were pretty well sussed by the end of the 19th century. Applying them to chemistry like this is called "molecular dynamics". It's got some limitations - you can't break the springs, usually, and some subtle effects can be missed - but it's still very powerful. The Folding@Home project uses this sort of method to study how proteins fold up, because they're absolutely huge.
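To make the balls-and-springs picture concrete, here's a minimal sketch of one bond treated as a harmonic spring and pushed through time with classical mechanics. All the constants are made up for illustration, not fitted to any real bond:

k = 500.0     # spring stiffness (arbitrary units)
r0 = 1.0      # natural bond length
mass = 1.0
dt = 0.001    # time step

r, v = 1.2, 0.0                # start with a stretched bond, at rest
for step in range(5000):
    force = -k * (r - r0)      # Hooke's law: the spring pulls back towards r0
    v += force / mass * dt     # semi-implicit Euler: update velocity first,
    r += v * dt                # then position with the new velocity

print(round(r, 3))             # the bond length oscillates around r0

Real molecular dynamics packages do essentially this, just with thousands of atoms, fancier integrators, and spring constants fitted to experiment.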

There's no point in having a breath-takingly fast computer if you can't have a bit of fun with it, mind you. The Pittsburgh Supercomputing Center decided to set up a simulation of a bunch of buckyballs - little spheres of carbon which are distant relations of the graphite in a pencil - and make a scientifically accurate game of microscopic Wii bowling. In fact they've hooked the remote into the spiffy molecular dynamics package NAMD and the spiffy visualisation tool VMD, to create something they've dubbed "WiiMD". Be sure to check out their YouTube videos. Many of these things are very difficult to look at by experiment. (You have to look at things sideways, and pick apart what's going on indirectly. I really do enjoy a good experimental method, mind you. I've kind of missed the detective work of decoding mysterious wiggly lines.)

Apologies for not updating sooner - I've been overrun in the lab (nothing like an upcoming presentation to convince you to get your work into order). I'll do a little video to discuss bonding in more detail, in particular how molecules interact with each other, and also a post about reactions, the changes which molecules go through. Then eventually some more quantum mechanics to put the stuff about valence bond theory into context. See you then!

Monday 18 February 2008

Academic misconduct and getting away with it

The American Chemical Society's C&EN is reporting on a truly spectacular piece of academic misconduct. They've actually got a blog for it, if you wish to comment. Go ahead and read it, I'll be here when you get back.

Pretty blatant, isn't it? If a book or an album were ripped off that way, it'd be pretty easy to spot. Unfortunately the sheer volume of scientific papers published, and the number of journals available, gives fraudulent research a lot of hiding places. For a dodgy paper to survive, it first has to dodge peer review, in which researchers in the field look it over and decide whether the research itself is legitimate and the paper is well put together, and then escape detection by the journal's readers themselves.

Peer review has plenty of problems. It often involves handing over your research to academic rivals, a topic which C&EN raised last week (they've moved this article to a blog as well). Scientific papers are often very specialised, so it must be tempting for a reviewer who works in the field, but doesn't know the particular specialism very well, to let something through because it "looks good", even though a specialist would immediately see that it's a load of garbage. Such a reviewer may not be familiar with the specialist literature either, and so may not notice that a paper is a duplicate. The readership has the same difficulty in uncovering a problem. If a paper's not in their specialism, readers may be reluctant to report that it looks like gibberish for fear that they may be wrong. And even a specialist can't be expected to read every paper in his or her field. There are simply too many journals, and dodgy papers tend to get published in the obscure ones - or at the very least, not in the journal they're ripping off!

I had this problem when I was marking lab reports recently - the answers to a few questions were copy-and-paste jobs from a report I had marked the week before, and only the odd wording and bass-ackwards chemistry in one particular sentence jogged my memory. In the end I had to come up with a ruse for recalling the previous week's reports to check them against each other. A little more effort, and it would've escaped unnoticed, especially if I'd had more than a half-dozen or so papers to mark. (One of the things that really annoys me about plagiarism is that I get suspicious and find myself going back and forth between papers following up spurious similarities, which means it takes longer to grade.)

What can we do? Electronic plagiarism-detection is a start but is useless in the face of many forms of misconduct and may lead to dubious, lazy marking practices. There's a lot of money to be made by selling automagic fixes for complex problems, and software is only as good as its database. Stay vigilant, I suppose. Don't take a paper at face value just because it's outside of your research comfort zone. Scepticism is healthy in science, after all. I'm pretty new at this, but it seems to me that the skills necessary to spot academic misconduct (inquisitiveness about new fields, reading journal articles properly and not just going over the abstract with a highlighter) are the same skills needed to be a good scientist.
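For the curious, the core trick behind a lot of text-matching detectors is embarrassingly simple - chop each document into overlapping runs of words and count how many runs are shared. Here's a cartoon version in Python (real tools are far cleverer, but the point about the database stands: you can't flag what you never indexed):

# A cartoon plagiarism detector: split texts into overlapping five-word
# runs ("shingles") and measure the fraction they share. The sample
# sentences below are invented for illustration.
def shingles(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(text_a, text_b, n=5):
    a, b = shingles(text_a, n), shingles(text_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))   # fraction of shared runs

report_1 = "the reaction was heated under reflux for two hours and then cooled"
report_2 = "the mixture was heated under reflux for two hours and then filtered"
print(f"overlap: {overlap(report_1, report_2):.0%}")   # about 62%

Swap a few words and reorder a couple of sentences, and the score plummets - which is exactly why this sort of thing can't replace an attentive marker.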

Further reading:
Pharyngula on a baffling failure of peer review.
Deja Vu, a duplicate-publication-finder with a very good database.
Wikipedia's article on scientific misconduct is a fun starting point for some lunchtime browsing.

Saturday 16 February 2008

Valency: sticking atoms together

To recap my last Long Post, everything in the world is made up of atoms. These (nearly) indivisible units combine in different ways to make different things. These things are called "compounds". What do they look like? How do they work? That's what I'll try to explain in this post.

Atoms and bonds

It's actually pretty easy to say what elements a compound is composed of, and in what proportion. Any volume of water, for example, contains twice as many hydrogen atoms as oxygen atoms. That gives us its chemical formula (or more specifically its empirical formula), H2O. Likewise, big bad carbon dioxide is CO2.

As it happens, the chemical formulas here also describe a single molecule of that substance. Pictures of molecules are pretty commonplace. They usually show a bunch of atoms represented by coloured balls, linked together by sticks which represent "chemical bonds", whatever those are. The number of bonds an atom will form is specific to that atom, and decides what sort of structures it can form. Here's an oxygen atom (red), and some hydrogen atoms (white), and there's only a couple of ways I can stick them together:



So, if we had a cloud of carbon dioxide, and we zoomed in far enough, eventually we'd see individual molecules, and closer still, we'd see that each molecule is made up of a carbon atom and two oxygen atoms stuck together. There's a different way of building things up, though. Salt, for example, has the chemical formula NaCl, but usually we don't get a single unit "NaCl" - instead we get a big collection of Na (sodium) and Cl (chlorine) stuck together in a ratio of one-to-one, like Na800Cl800.

What decides how many bonds an atom can make? One of the best pictures for this - which works on many different conceptual levels - is valence bonding.

Valence bonding

There are around 100 different elements to choose from in making molecules. The selection box is called the Periodic Table. Here's the "main group":



They're arranged in order of the weight of individual atoms, from the top left to the bottom right. Oddly, we have a cycle in the elements (or "periodicity", hence the name of the table): if you start at a particular atom and go along eight steps, you find something similar. Dmitri Mendeleev spotted this when he created the table, which is why the main-group table has eight columns, which we call chemical groups. Mendeleev had to leave gaps, supposing there were missing, unknown elements with properties similar to their neighbours above and below, to keep the pattern going. This was a great intuition - although the reason for the cycle wasn't understood at the time - and it worked remarkably well.

Starting from the top right, we have helium, He. This is really chemically inert - an atom of helium doesn't do much, preferring to sit around on its own. Going down one row (eight steps along), an atom of neon (Ne) is also inert, but a bit heavier. Then there's argon (Ar), which is heavier again, and so on. These are called the noble gases, or inert gases. They're the last column, so that's group eight (VIII).

This is interesting, but because these elements don't really do anything, it doesn't give us any insight into chemistry, so let's go along to group four (IV). Here we have carbon, silicon, and germanium. Carbon can form methane, CH4. Silicon can form silane, SiH4. Germanium can form germane, GeH4. We could reasonably suggest that carbon and its relatives can stick to any four other atoms. We would say they all have a valency of four.



This idea, called valence bond theory, was an early breakthrough in chemistry. By comparing different compounds, we can come up with the valencies of the different elements. Valency increases from 1 in group I to 4 in group IV, then declines again, from 3 in group V down to 0 in group VIII. By satisfying these valencies, we can stick atoms together to make compounds. Oxygen has a valency of 2, so you need two of them to satisfy carbon's valency of 4 and make CO2. You may notice that this means there are two bonds going between the carbon and each oxygen. These "double bonds" really exist, and are much stronger than single bonds.
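In fact, predicting the simplest compound of two elements is mechanical enough to write down as a few lines of Python. This is just the "cross the valencies over and cancel" idea, with a handful of main-group valencies typed in by hand:

# Cross two elements' valencies over, cancel the common factor, and you
# get the simplest formula. The valency table is just the main-group
# pattern described in the text.
from math import gcd

valency = {"H": 1, "Li": 1, "C": 4, "N": 3, "O": 2, "F": 1, "Si": 4}

def simplest_formula(a, b):
    va, vb = valency[a], valency[b]
    g = gcd(va, vb)
    na, nb = vb // g, va // g   # each element takes the other's valency
    return f"{a}{na if na > 1 else ''}{b}{nb if nb > 1 else ''}"

print(simplest_formula("H", "O"))    # H2O
print(simplest_formula("C", "O"))    # CO2
print(simplest_formula("Si", "H"))   # SiH4

(It's only a rule of thumb, mind - plenty of real compounds, like carbon monoxide, refuse to play along.)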

Shared electrons and where valency comes from

If you're an inquisitive sort, you'll be wondering where this valency thing comes from, and why it's associated with this number eight. To think about that, we have to look at the structure of the atom. From a chemist's point of view, there's a tiny little, heavy indestructible lump called a nucleus with a particular positive electrical charge, +1 for example. Around this nucleus in a very big, very loose cloud are light particles called electrons, each of which has a charge of -1. Opposites attract, so the +1 charge of our hypothetical atom can attract and hold 1 electron.

Let's start with some of the oddities of the main group, the first two elements. An atom with a +1 nucleus and 1 electron is called hydrogen. If you want to give the nucleus a bigger charge, you have to stick some more stuff in there, so the next atom, with a +2 nucleus and 2 electrons, is heavier. It's called helium. As it happens, if an atom has just two electrons, they're very happy together and the atom doesn't need to do anything else to be stable. This is a "closed shell":



Atoms want to have closed shells where possible, so a hydrogen atom is on the lookout for one electron. It can get to this by sharing an electron with another hydrogen atom. They then each have two electrons, and the molecule as a whole has a closed shell. Having one electron to share gives hydrogen its valency of 1:



Pairing up valence electrons by sharing like this is called covalent bonding.

Let's go up to lithium: +3, with 3 electrons around it. The first two electrons form a closed s-shell on their own. We call this the "core", and those two electrons are core electrons. We're left with one electron to deal with, giving the atom a valency of 1. For this reason, we call this electron a "valence" electron.

As it happens, lithium's really bad at holding onto this odd electron. In fact, everything in its group is pretty easy to take an electron from, if another atom really wants it. This is fine, though - by giving up that electron, lithium gets down to the same closed shell as helium. It winds up with only two electrons for its +3 nucleus, leaving a +1 charge overall. As it has a charge, we call it an ion. Whatever snagged lithium's electron has picked up an extra one, so it now has a charge of -1 (it is also an ion). Opposite charges attract, so the two ions will stick together:



Reaching a closed shell by giving up electrons like this is an example of an ionic bond. Real bonds can lie in between the two situations - the shared electrons can sit a bit off-centre rather than in the middle (we'd call this a polarised covalent bond) or way off-centre (completely ionic). Either way, lithium can only pair up with one negatively-charged ion, so that's still a valency of 1.

So, atoms like to pair up their electrons, and they particularly like to get to a closed shell by doing this. If you get a closed shell, the valence count "resets", at least when we look at going from helium to lithium. This is an interesting diversion, but the original question is: why this cycle every 8 atoms? We're getting there.

Boron is in group III: +5, with 5 electrons. The first two electrons form a closed s-shell core, as in lithium. That leaves 3 valence electrons, which boron can pair up with electrons from other atoms to form 3 bonds. The same holds for carbon - group IV, 6 electrons, 2 in the core and 4 valence. So it can share those 4 to form 4 bonds, like we saw earlier.

Now, when you get up to nitrogen, group V (+7, 7 electrons), you might reasonably assume that because it has 5 valence electrons (remember, 2 electrons go into the core) it will be able to form 5 bonds by pairing those up. You'd be wrong, though! The closed shell for the main-group elements (except hydrogen and helium) actually holds eight electrons: a 2-electron s-shell plus an extra 6-electron thing called a p-shell. Nitrogen has 5 valence electrons already, so it only needs 3 more and it's in business. So, it pairs up 3 of its valence electrons with electrons from other atoms. The other 2 electrons pair up with each other:



Nitrogen has 5 valence electrons, but only a valency of 3. So, that explains why the valency starts going back down once we get past carbon: the need to form a closed shell takes over, and atoms start pairing their own electrons off with each other. We can see the same thing with oxygen, which has 8 electrons (6 of them valence) and a valency of 2 - it only forms 2 bonds with hydrogen atoms to make water:



Next up is fluorine. It has 9 electrons, of which 7 are valence. Fluorine only needs 1 electron to reach the closed shell, then. That means it's actually pretty tenacious when it comes to grabbing electrons. In fact it's just the sort of thing which will steal an electron from lithium as mentioned before:



Last, but not least, we have neon. It has 10 electrons, of which 8 are valence. That's the closed shell, right? It doesn't have any need for more electrons, so it hardly interacts with other things at all. And once we have a closed shell of 8 electrons, we have to start building up electrons around it again, using everything underneath as a core. So, that's where the cycle of 8 comes from. Elements with the same number of valence electrons have the same sort of needs when it comes to grabbing other atoms' valence electrons.
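If you like, the whole argument compresses into a one-line rule: an atom either shares all of its valence electrons, or shares just enough to top itself up to eight, whichever number is smaller. A quick sketch (main group only - hydrogen and helium work to a 2-electron shell, and the transition metals play by entirely different rules):

# The octet rule in one line. Only sensible for main-group elements;
# H, He and the transition metals are deliberately left out.
def valency(valence_electrons):
    return min(valence_electrons, 8 - valence_electrons)

for element, n in [("Li", 1), ("Be", 2), ("B", 3), ("C", 4),
                   ("N", 5), ("O", 6), ("F", 7), ("Ne", 8)]:
    print(f"{element}: {n} valence electrons -> valency {valency(n)}")

Run that and the 1, 2, 3, 4, 3, 2, 1, 0 pattern from earlier drops straight out.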

Summing up

Most of this post has been about satisfying atoms' need to link to other things, without discussing what sort of structures they make. I've tried to keep it fairly simple, to get across the idea that an atom of a particular element can only stick to a specific number of other atoms, determined by this property we call valency. This comes from the number of electrons on the outside of an atom, and the need to pair those electrons up and/or form a closed shell around the atom. There are two "characters" a bond can have - it can be about sharing electrons (covalent) or transferring them completely (ionic), although bonds can often be found which lie in between.

Next, I'll be writing about how molecules stick to each other to form larger structures - and with that covered, I should be in a position to explain what my supervisor's paper is about. Eventually, the limitations of valence bond theory will become apparent, but it's a good starting point.

Friday 15 February 2008

What we need more of is (being on the cover of) Science



For my PhD, I'm looking at chemical processes which happen when an electron is put into (or perhaps, taken out of) a system. As it happens my supervisor and his collaborators were finishing up a paper on this when I started work, and it just got published in Science. They even got the cover picture, up top. We're all really pleased about this: it's a neat bit of research on a familiar molecule, coming at it from both the experimental and theoretical sides, and having it published in a high-profile journal should lead to some interesting discussions. (If I'd started my PhD a couple of months earlier, I'd probably be a coauthor on this paper, which would've been great for my first year report. On the other hand, it's good to have an established bit of research to work outwards from.)

In a sense, all chemistry is about electrons - they're the enigmatic little particles whose interactions stick atoms together, to create molecules. That'll be the subject of my long post tomorrow. On Sunday, I'll also write a little about this paper, and the sort of stuff I'm going to be working on for the next few years.

Thursday 7 February 2008

It's usually a good idea to read the bottle

Orac reports on a spa which mistakenly used hydrogen peroxide, instead of water, to give enemas. Their excuse is that they looked the same. Well, when I was doing high school chemistry, we were taught a rhyme on the danger of making that assumption:

Jenny was a schoolgirl,
Now Jenny is no more
For what she thought was H2O
Was H2SO4

(I kept it in my head mostly as a way of remembering the chemical formula for sulphuric acid.)

It's a pretty fundamental rule of working in a chemistry lab that you assume everything is out to get you: unless labelled otherwise, all clear liquids are deadly poisons which cause cancer on sight, never mind on consumption. Likewise, colourless solids which resemble nothing so much as table salt or caster sugar or sherbet are assumed to be pure, crystallised death unless the bottle says otherwise. (Yes, even chemistry researchers can be paranoid about "chemicals" like everyone else. If there's hydrofluoric acid sitting around the lab, I'd like some warning before I pour it into a glass beaker and start wandering over to the waste bottle with it.)

I really have to wonder what this person was like in their high school chemistry class. I should be a little grateful, though - the enema story gives that old rhyme an exciting new context:
Jenny was an enemist
Now Jenny has no backside
For what she thought was H2O
Was hydrogen peroxide.

That quote I mentioned yesterday

... A MOLECULAR SYSTEM ... (PASSES) ... FROM ONE STATE OF EQUILIBRIUM TO ANOTHER ... BY MEANS OF ALL POSSIBLE INTERMEDIATE PATHS, BUT THE PATH MOST ECONOMICAL OF ENERGY WILL BE THE MORE OFTEN TRAVELED.
-- HENRY EYRING, 1945


From one of my (many, many) Gaussian03 log files.

The quantum mechanics of Mario World

NB: The links in this post are to flash video players which may fire up on their own. If you're watching at work or school, don't get yourself in bother.

Two posts in and already I'm going a little off topic, so I'm going to have to come up with some justification here. My own research is in the field of computational chemistry, which is a fancy way of saying I simulate the sorts of things that go on in real laboratories. To simulate something, you need a deep, nuts-and-bolts understanding of it, which in chemistry often means quantum mechanics. Quantum mechanics is punishingly unintuitive, but if you're willing to sit back and accept its results without an explanation of where they come from, some of them are fun.

One of these fun ideas is the "many worlds" interpretation, the idea that in random events (on a small scale!) every possible scenario plays out somewhere. These different scenarios run along together until some event forces a choice of one scenario over another. This is brilliantly illustrated in this video, where a very difficult customised level of Mario World has been attempted many dozens of times and the runs overlaid on each other. Every time Mario runs into an obstacle, he'll either get past it or fail, which gradually whittles things down. For a similar idea in a racing game, see The 1K Project. For a retro shooter version, there's Averaging Gradius.

The Mario video is actually a great lesson in a chemical principle, too. For non-chemists this will become clearer when I do my chat about reactions, but to cut a long story short, in chemistry the same reaction will be going off in a million slightly different ways at once, and usually dozens or hundreds of different reactions are happening in the same beaker. You invariably have the same reaction going backwards and forwards, too. What's important is the average effect - even if only a tenth of the Marios make it to the end gate on each run, over time, practically all the Marios are going to make it (there's a toy simulation of this at the end of the post). There's a great quote about this which I've misplaced; I'll hopefully add it tomorrow. If you've been watching UK TV lately, particularly Channel 4 last Friday, you'll probably see another similarity too, but I don't want to spoil things for anyone who's not seen the show in question.
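If you fancy convincing yourself of that averaging argument, it fits in a few lines of Python (a toy, obviously - the probabilities are plucked out of the air):

# A toy ensemble of Marios: each run clears ten obstacles, each with an
# 80% success chance, and a failed Mario simply tries again. The numbers
# are invented for illustration.
import random

random.seed(1)
p_obstacle, obstacles, marios = 0.8, 10, 10000

finished = 0
for _ in range(marios):
    for attempt in range(50):   # each Mario gets up to 50 tries
        if all(random.random() < p_obstacle for _ in range(obstacles)):
            finished += 1
            break

print(f"chance of surviving one run: {p_obstacle ** obstacles:.0%}")   # ~11%
print(f"made it within 50 tries:     {finished / marios:.1%}")         # ~99.6%

A roughly one-in-nine chance per run, yet practically every Mario gets there in the end. Molecules, likewise, only need a rare lucky wobble to hop over a barrier eventually.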

Sunday 3 February 2008

Elements and reactions

Chemistry doesn't have a great public profile. Biology's pretty straightforward - it's the wet, squishy science of life, so it's easy to see what it ties into. Everything else is pretty much either physics (big cool space things) or maths (the equation for the perfect cheese sandwich and such). Chemistry only really comes up as "chemicals" of one sort or another, be it drugs or poisons or preservatives or other artificial things that you don't want to spend too much time around. So with this in mind I decided to start off this blog with an explanation of what chemistry is, so that non-scientists can follow what I'm saying. Hopefully. Enjoy!

Chemistry is the science of the structure of matter, and how substances can be changed from one to another. Of course this is a very rough definition. You can change the structure of a Lego house by taking it to bits and reassembling it as a car, or turn a pile of uranium into a pile of plutonium by bombarding it with whooshy Science Particles, but those aren't chemistry. Chemistry deals with figuring out, and messing with, the structure at a particular scale, the scale which makes life work day-to-day.

It was alchemy which gave birth to chemistry - a pursuit which, amongst other things, dabbled in trying to turn boring old lead into retirement-tastic gold. The alchemists were doing what we'd call chemical reactions, in a very controlled setting: put aqua regia (a nasty brew of scary acids) and some gold into a pot, heat it a little, and see what happens (the gold dissolves). Alas, they didn't know that turning metals into one another falls outside the realm of chemistry, so they were doomed from the outset. The problem lies with the notion of chemical elements.

Elements are a very old idea, and a fairly intuitive one. Everything in the world is a mixture of elements, the theory says - earth, fire, wind and water, for example. The way they combine makes the difference between a cat and a tree and a rock. When you set wood alight, the fire gets out and what's left turns to ash, say. The elements themselves are basic and universal, like Lego bricks. Alchemy, like the chemistry that followed it, broke things down and put them together in order to find out what the real building blocks were (clue: not Lego) and how you could put them together. As it turns out, gold is a chemical element - as are most metals, for that matter. No amount of messing about with ever-hotter fires will break gold down into something else.1

It turns out that matter has the same sort of granularity as Lego, too. Imagine that you've got a big block made from yellow Lego bricks. You can break it down into individual yellow bricks. Likewise, you can chemically break down a lump of gold into individual grains of gold, called atoms.

With this atomic theory, everything starts to come together. You can't break atoms down - each atom is an indivisible unit of a particular chemical element, be it gold or carbon or lead or oxygen. Those atoms combine in various ways to give the materials around us. When they come apart from each other and recombine in a different way, you have a chemical reaction, which turns one material into another - and this happens under the sort of conditions you see in the everyday world. As an example, your body takes in food, rich in carbon atoms, as well as air, rich in oxygen. You breathe out carbon dioxide gas, which is the carbon and oxygen combined.

How, exactly, do these elements combine together to give the substances we know? Why does the body bother changing food and oxygen into a pile of troublesome carbon dioxide gas? I'll discuss those in the next couple of introductory posts, on molecules and energy.

1Actually, you can break elements down and turn them into each other, but it's very difficult. It wasn't really possible until the 20th century, and it's part of the field of nuclear physics. I'll bring this up when I write about the structure of atoms themselves next time.

Wednesday 23 January 2008

Introduction

Hello out there! My name's Alex W., and I'm a theoretical chemistry PhD student. I'm starting this blog as a way to talk about interesting things going on in chemistry, and also to try to explain some of the ideas of chemistry to non-experts. I'll try to have a reasonably equal mixture of explanations of chemical concepts, interesting new research, and the kind of work I'm doing myself. Enjoy!