Sometimes being interested in lots of different things can be a curse, and at the moment, attending a scientific conference, I’m being reminded of that very strongly. At a large conference there are almost always too many presentations for them all to be held one after the other, so what usually happens is this: talks on roughly the same subject are grouped together, and the different groups run in different lecture theatres at the same time, as so-called parallel sessions.
This is all very well, except that those of us interested in a broad range of (in this case) evolutionary biology have to keep running between sessions to catch different talks, and we inevitably end up missing some great presentations on fascinating subjects. I’ve definitely had one of those days.
Of course, I went to some very interesting, well-presented talks as well. Mohamed Noor from Duke University in North Carolina, for example, spoke on the genomics of speciation and presented data on the possible causes of divergence between two species of the fruit fly genus Drosophila.
Amongst the other differences between the two closely related species, there are large pieces of DNA whose order in the chromosome has been completely switched around in one species relative to the other; these are known (logically enough) as chromosomal inversions.
If these inversions were involved in generating the split between the two species, we would expect them to date to the time when the species are thought to have split. However, the data show that the split likely pre-dates the inversions, so the inversions probably weren’t involved in the initial split, although his data show that they’ve played an important role in keeping the two species separate since. Simple, logical and neat.
It’s a sad fact that well-executed pieces of science that address interesting and important questions but nevertheless find a negative result, like the one Mohamed Noor reported, rarely get the attention they deserve. Finding that something doesn’t happen isn’t going to get you on the BBC science news page. But, as every science undergraduate is told repeatedly, it is an essential part of the scientific process.
When presented with a number of different explanations for something, you need to carry out experiments to see which is most likely to be right. Good evidence that one of those explanations cannot be the one you’re after means it can be crossed off the list. This is important: it stops you wasting your time chasing the wrong explanations, and it can lead you to explanations that are not only more accurate but often a lot more interesting.
Part of the problem is the way such results are phrased. All too often you read that an experiment “…failed to find a result”. But there are two ways this can happen: either the experiment was done badly and really was a failure, or it was done well and has shown that a possible explanation doesn’t hold water. The second one sounds an awful lot like a success to me.