Breaking Science: Laser hearing, chatty females, clean energy...

Featuring: Audio

Shedding light on deafness, an ocean of clean energy, is Jurassic Park a reality, and do cats always land on their feet?

By: The Naked Scientists (Guest)

  • Duration 30 mins
  • Updated Friday 21st November 2008
  • Introductory level
  • Posted under Radio, Breaking Science

The team explores a cochlear implant that uses infrared laser light for more detailed hearing, clean energy from ocean thermal energy conversion (OTEC), how woolly mammoth DNA preserved in permafrost is a mammoth step towards recreating extinct species, why social vocalisation may have replaced grooming as societies grew larger, how early photon tomography (EPT) provides a safer way to image biological tissues, and evidence that the way the brain categorises colours changes when we learn the words for them.

Plus, in 'Stuff and Non-Science', do cats always land on their feet?

Listen

Copyright BBC

Read

Chris Smith: Hello, welcome to the Naked Scientists: Up All Night which is produced in association with the Open University. I'm Chris Smith.

In this week’s show, how scientists have developed a powerful new way to see through tissue but without having to use harmful X-rays.

Mark Niedre: Our approach was the idea of taking an extremely high speed pulsed laser, shooting that into biological tissue and then, on the other side of the tissue, using a camera that's also very high speed to catch the photons that come out of the tissue first. The idea is that the ones that came through first are the ones that had to have taken the shortest path through the tissue and therefore should contain more spatial information.

Chris Smith: Mark Niedre who’ll be revealing how that works in just a moment. Also, on the way, how scientists have found that when you learn the name of a colour, you can also alter where in your brain that colour gets processed.

Anna Franklin: So, in a nutshell, what we're finding is that toddlers who don't yet know the words for blue and green, their colour category effect is right hemisphere-based, whereas toddlers who have learned the words for blue and green, their colour category effect is stronger in the left hemisphere, as it is in adults who also know the words for blue and green. So we're finding a right to left hemisphere switch that occurs when the word for colour is learned.

Chris Smith: And is it true that cats always land on their feet?

Sorrel Langley-Hobbs: One paper did suggest that if they fall from more than five or six storeys, they actually have fewer injuries than when falling from lower storeys, and there have been examples of cats that have fallen from 32 storeys in New York that actually survived with little more than a chipped tooth and a bit of air in the lung.

Chris Smith: That’s the feat of falling cats. And it's all coming up on this week’s Naked Scientists: Up All Night. But, first, let’s take a look at some of the hottest breakthroughs from the world of research this week, and here’s this week’s science sleuth, and that’s Ben Valsler. First up Ben, scientists have found a way to use lasers to improve certain types of hearing aid. Tell us more.

Ben Valsler: Well this is the discovery that laser light interacts with nerve cells in such a way that we can actually use it to greatly improve cochlear implants - these are implants inside the inner ear. Now a healthy inner ear will contain many thousands of hair-like cells which detect sound, and they pass that sound signal on to nerve cells. The nerve cells transport the signal to the brain where it can be decoded and you know what you’ve heard.

If these cells get damaged, as a result of illness, accidents or something like a genetic defect, you wind up being deaf. A cochlear implant takes the place of these hair cells. It communicates directly with the nerve cells so that someone who was formerly deaf can hear. And at the moment cochlear implants are actually very effective. Deaf children with implants can develop speech that is very close to that of hearing children.

Chris Smith: But how do they actually work? How are they getting the signal back into the nerve cells?

Ben Valsler: Well, at the moment, they use about 20 electrodes to connect to the nerve, so they directly electrically plug into those nerve cells. Now that's very few connections compared with the 3,000 or more hair cells that you find in a healthy ear. So that means that although an implant will certainly improve hearing, the hearing will still be quite poor: many people with implants don't enjoy music, they find it very difficult to converse in noisy environments and they find it quite hard to understand tonal languages like Mandarin or Thai. Now due to the way that tissue conducts electricity, you find that you just can't increase the number of electrodes beyond about 20.

Chris Smith: Is that because you get a sort of spill-over from one stimulated area to an adjacent one if you put more electrodes in, and that means the signal blurs rather than getting more precise?

Ben Valsler: Exactly, yes. The fact that the tissue will actually send some of the electricity through to the neighbouring electrodes puts a limit on the density of electrodes that you can put into that tissue, and that in turn puts a limit on the quality of cochlear implants. But now a team from Northwestern University in Chicago led by Claus-Peter Richter have announced at the Medical Bionics Conference in Victoria, Australia that they have found a way to shine some light upon the problem - boom boom - using infrared laser light to be precise.

Now, for reasons that are not yet fully understood, laser light will stimulate neurons even though they do not contain light-sensitive proteins. Richter's team decided to see if this would be a better way of communicating with the neurons in the inner ear: because the laser does not spread through the tissue the way an electrical pulse does, you don't get that spill-over, so lasers could be a way to achieve much higher definition for implants.

They tested this out by shining infrared laser light onto the neurons in the inner ear of deaf guinea pigs, and they monitored the electrical activity in the region of the brain responsible for passing information from the ears to the cortex - that's called the inferior colliculus. Not only did the laser stimulation register a signal there, which shows that the deaf guinea pigs could definitely hear it, so it's working just as a cochlear implant does, but the brain activity actually looked incredibly similar to that seen in guinea pigs that can hear.

So not only does it show that it works but actually it shows that you may be able to almost restore hearing completely, using lasers.

Chris Smith: Amazing to think that you can use light to hear a sound. If you'd said you could do that a few years back people would have said you were mad, for various reasons, but it's very encouraging work, announced in Australia, the home of course of the cochlear implant, where it was all invented. Now tell us about this though; this is very exciting for people who are eager to do their bit for the environment, a cleaner way of getting energy.

Ben Valsler: Yes, well recent very high oil prices have forced people to look into all sorts of different ways of finding energy, and they've been looking back at some older methods. This one in particular was first tested in the 70s, so over 30 years ago. It's called ocean thermal energy conversion, or OTEC, and it relies on the temperature difference between water at the surface of the sea and water deep down in the depths to drive a turbine and generate electricity.

Chris Smith: And talk us through the nuts and bolts, how does it actually do that?

Ben Valsler: Well it's actually incredibly simple. The warm water at the surface heats up what we call a working fluid. Now this is a fluid that has quite a low boiling point, something like ammonia. When the working fluid boils, the gas coming off produces enough pressure to drive a turbine, which produces power just like steam would in a coal power plant. This gas is then cooled and condensed back into liquid by cold water that's brought up from the depths. Then the condensed working fluid can be boiled again and the cold water simply returns to the deep ocean.

Chris Smith: How do you get that cold water up from the depths in the first place?

Ben Valsler: Well that's one of the technical challenges, actually, because to get decent amounts of electricity, 100 to 500 megawatts, you need designs with a pipe a thousand metres long and 27 metres in diameter collecting water, sucking it up at an enormous rate of 1,000 tons per second.

Chris Smith: But how do you pump that much water? That’s going to take energy isn't it?

Ben Valsler: Well the pumping does take energy, yes, but luckily it takes slightly less energy than you get back from the working fluid, from the pressure of the gas coming off when it boils. So the real issue is actually the physical mechanics of getting all that water up there. The Indian Government has tried to use an 800 metre pipe for a plant generating just one megawatt, and they've met with failure: they tried it twice and both times it didn't work. Now if we can overcome all these challenges, OTEC could represent an enormous source of clean energy, and the US Navy, several different governments and obviously lots of companies looking to make money out of it are all trying it out. According to Bill Taylor, the Director of the US Navy's Shore Energy Office, this has the potential to become the biggest source of renewable energy in the world.
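As an aside, a rough back-of-envelope calculation shows why such colossal flow rates are needed. The sketch below is illustrative only: the water temperatures, the heat extracted and the net efficiency are assumptions, and only the 1,000 tons per second figure comes from the interview.

```python
# Back-of-envelope OTEC sketch; all numbers except the flow rate are assumptions.
T_warm = 298.0   # K, assumed tropical surface water (~25 C)
T_cold = 278.0   # K, assumed deep water at ~1,000 m (~5 C)

eta_carnot = 1.0 - T_cold / T_warm   # theoretical Carnot limit, roughly 7%
eta_net = 0.025                      # assumed net efficiency after pumping losses

mass_flow = 1.0e6    # kg/s, the 1,000 tons of water per second quoted in the programme
c_p = 4000.0         # J/(kg K), approximate specific heat of seawater
delta_T = 5.0        # K, assumed temperature change extracted from the water

thermal_power = mass_flow * c_p * delta_T     # heat moved per second, ~2e10 W
electric_power = thermal_power * eta_net      # ~5e8 W, i.e. roughly 500 MW

print(f"Carnot limit: {eta_carnot:.1%}")
print(f"Heat moved: {thermal_power / 1e9:.0f} GW")
print(f"Electrical output: {electric_power / 1e6:.0f} MW")
```

With only about 20 degrees between surface and deep water, even the theoretical limit is around 7 per cent and real plants manage a few per cent at best, which is why an output in the 100 to 500 megawatt range demands such an enormous pipe.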

Chris Smith: Certainly a mammoth engineering project though, and talking about mammoths, the woolly mammoth genome got sequenced this week.

Ben Valsler: Exactly, yes, researchers took a step towards bringing extinct species back to life this week by decoding the genome of the woolly mammoth. Writing in Nature, Stephan Schuster and colleagues have now sequenced about 80 per cent of the mammoth genome, despite the fact that DNA like this is very prone to damage and contamination. Schuster's team collected samples of hair from several different mammoths to give them as many pieces of this huge genetic jigsaw as possible.

Chris Smith: So where did the mammoth tissue come from?

Ben Valsler: Well this is mammoth tissue stuck to the base of hairs that have been frozen in permafrost, which is really lucky: it's like having a massive fridge that keeps this DNA fresh. And already several genes have been identified that are shared with the elephant, which helps to shed some light on elephant evolution. It also supports the idea that we might be able to use an elephant as a surrogate mother, should we ever try to clone a mammoth.

So should we expect to see mammoths in the zoo soon? No, not really, and Jurassic Park is still a very long way off yet. There are so many problems to overcome such as completing the genome itself by filling in that last 20 per cent, and it's very difficult to extract eggs from elephants, and we don’t really know if an elephant will make a very good surrogate mother.

On top of all of this, we now know the code, but we still have to work out how to assemble the DNA into chromosomes, because if we don't get that right then we won't get a viable egg. So we're still a very long way off, but the mammoth genome represents a huge, mammoth-sized step towards being able to bring the species back from extinction.

Chris Smith: Well I'm sure there'll be lots of clues in there as to where these animals came from and perhaps even what happened to them ultimately.

To finish off this week, they say that women are the chatty ones. It's said of humans, and it looks like it's certainly also true in animals.

Ben Valsler: Well it is a bit of a cliché, but we do have solid evidence now that, in macaques at least, females definitely chat a lot more. Writing in the journal Evolution and Human Behavior, Roehampton University researchers Nathalie Greeno and Stuart Semple wanted to test the hypothesis that language developed to replace grooming as social networks became larger and more complex. If the hypothesis is correct then they would expect to see lots of chat between the female macaques while the males kept relatively quiet.

Chris Smith: So when I have a chat to someone, this is the conversational equivalent of someone picking fleas off me?

Ben Valsler: Well so it would seem, yes. Picking fleas off one another and cleaning each other's fur was how we bonded together as a society. In particular, in macaque society the troop usually consists of a core of female monkeys, while the males move between different troops throughout their lives. This means that it's the females who must maintain the social bonds that hold the troop together. Social contact through grooming works well, but it's very time-consuming, and so it limits the time available for gathering food: the bigger your social group gets, the more time you spend grooming, until you wind up not getting any food at all. So a simple language could fit the bill, helping to maintain relationships without spending all that time cleaning each other's fur.

So to test it out, they observed a group of 16 females and eight males in Puerto Rico and recorded the number of grunts, coos and girneys - a girney being a sort of nasal whine often exchanged between a mother and an infant. They also discounted any sounds made in direct response to food or a threat, so they could concentrate purely on social vocalisations. They found that females not only made significantly more of these noises but were also strongly biased towards communicating with other females. The males were much quieter, said much less and were just as likely to communicate with females as with other males.

So what can we take from this? Well we shouldn’t really be surprised. We already knew that in female bonded primate species like the macaques, females invest far more time in social interactions than males do. This finding is important though because it supports the hypothesis that language developed as a means to bond societies together. And so indeed having a chat may now fill the role that picking fleas out did. So next time we sit down in the pub for a chat, I’ll let you know if there’s any fleas in my fur.

Chris Smith: Thanks Ben. That was Ben Valsler with a roundup of some of this week’s top science news stories. And if you’d like to follow up on any of those items, the details are all on the Open University’s website which is at open2.net/nakedscientists.

In a moment, we'll be finding out how learning the words for things can alter where in the brain you process them. But first, to a new way to see inside the body in detail without needing to resort to X-rays, which can be harmful and even cause cancer. A US-based scientist, Mark Niedre, is investigating a new technique called early photon tomography, or EPT. He sends very short but powerful pulses of harmless, near-infrared laser light through tissue and then collects the first light particles, called photons, that come out the other side. The idea is that the first photons to emerge must have made the shortest journey through the tissue without being bounced about on the way, and therefore they can give you the most detailed picture of what the tissue looks like on the inside.

Mark Niedre: So what we're really talking about here is imaging in biological tissues, i.e. small animals and hopefully in people, with light. And, as I'm sure you're aware, the issue with imaging with light in biological tissues is the fact that light scatters like crazy in tissue. So if you think of taking a laser pointer and shining it at your finger, you see that the light really diffuses and scatters a lot. And the problem with imaging in biological tissue is that this scattering actually obscures the features that you might be interested in.

Chris Smith: So, in other words, where your whole finger lights up with the laser pointer rather than getting a clear image of the tissues in the finger, you’re seeing just a blob. And you’re saying we need to try and resolve this so we can get more detail?

Mark Niedre: Exactly, so we want to think more like, for example, an X-ray which essentially goes straight through biological tissue. It either goes straight through or it’s absorbed so you get a much sharper image as opposed to light.

Chris Smith: And what are you trying to do to solve that?

Mark Niedre: So our approach was the idea of taking an extremely high speed pulsed laser - this is a femtosecond laser, so a 10 to the minus 15 second laser pulse - shooting that into biological tissue and then, on the other side of the tissue, using a camera that's also very high speed, a gated camera taking images on the picosecond scale, so 10 to the minus 12 seconds. And the idea is to catch the photons that come out of the other side of the tissue first, because the ones that came through first are the ones that had to have taken the shortest path through the tissue, and have therefore travelled straighter, and therefore should contain more spatial information than the more diffusive, later-arriving photons.
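To put those timescales in perspective, here is a small, illustrative sketch; the tissue thickness, refractive index and path lengths are assumptions chosen for the example, not values from the study.

```python
# Illustrative sketch of why picosecond gating separates the "early" photons;
# the numbers below are assumptions, not values from the study.
c = 3.0e8          # speed of light in vacuum, m/s
n_tissue = 1.4     # approximate refractive index of soft tissue
thickness = 0.02   # m, roughly 2 cm of tissue

# A near-ballistic photon crosses the tissue in an almost straight line...
t_ballistic = thickness * n_tissue / c            # ~93 picoseconds

# ...while a heavily scattered photon random-walks a much longer path.
scattered_path = 5 * thickness                    # assumed 5x longer path
t_scattered = scattered_path * n_tissue / c       # ~470 picoseconds

print(f"Straight-through transit: {t_ballistic * 1e12:.0f} ps")
print(f"Scattered transit:        {t_scattered * 1e12:.0f} ps")
# A camera gated on the first few tens of picoseconds after the earliest
# arrivals keeps mostly the straight-through photons, which carry the
# sharpest spatial information about what is inside the tissue.
```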

Chris Smith: And how does the laser light when it goes through the tissue actually discriminate different features and structures that you then pick up with the detector?

Mark Niedre: So that's a very good point. In terms of what we're actually imaging here, there are two approaches. One is to use just the light and the native contrast of the tissue, so the scattering and absorption of the tissue picks out different features, for example bones and different tissues in the animal. In this case, what we're actually looking at is fluorescently labelled targets.

Chris Smith: Oh I get it, so what you’re saying is you can target some kind of signal molecule to a specific tissue or structure. It locks on to that selectively and the laser then shows you where that is, and this means you get very good quality resolution of those structures?

Mark Niedre: It's a good way to put it, yes.

Chris Smith: And how did you prove that your technique actually works? What have you done in terms of imaging real live, living tissue to show that this is feasible?

Mark Niedre: Right, so the work we did was actually in mice with a lung tumour model, and our choice was driven largely by the fact that lung scatters light a lot, even by the standards of biological tissue. So we did a number of studies. One was just to image with our system, and then we did correlative imaging, for example with X-ray CT - X-ray computed tomography - which is a more conventional, high resolution approach where you can see the tumour.

Interestingly, what we found is that using our technique the tumour itself was fluorescent, but in addition the lung tissue and adjacent lobes of the lung were fluorescent as well. That's something we didn't quite expect, and it's one of the big results of the work: the idea that we're actually imaging biochemical changes in the adjacent lung that are associated with the presence of the tumour, but that aren't directly visible on something that's more structural as opposed to biochemical, like an X-ray CT.

Chris Smith: And when you use this technique to image tissues, what sort of level of detail can you get with this? One would assume that because you can focus those reporter molecules very tightly on certain tissue types or certain chemicals, you could get really quite high levels of detail?

Mark Niedre: That's correct. Now we have to be a little careful, because this technique allows us to do much better than more conventional optical techniques that use what you would call a continuous wave, or constantly-on, laser, as opposed to the pulsed lasers we're using here. So we can see resolutions down to the millimetre and possibly sub-millimetre scale. The critical point is that you can use targeted fluorescent probes - molecular probes that show very specific molecular information - and that's what's really exciting about the combination of the two techniques: one is the development of targeted fluorescent probes and the other is advanced optical imaging.

Chris Smith: Now one of the criticisms levelled at existing imaging techniques, like X-rays, is that they can damage tissues. Now you’re not cooking your mice with these lasers presumably?

Mark Niedre: That is correct. No, this is a very safe level of optical exposure, and in fact that’s one of the big advantages of using optical techniques is that optical radiation is non-ionising and it's extremely safe, provided it's below a certain threshold, where you start to have heating effects, and we’re well below that, so it's a very safe technique.

Chris Smith: Mark Niedre of Northeastern University in Boston shedding some light on a new way to see inside the body.

Now from laser imaging to colour imaging because scientists have found that when a toddler learns the name of a colour, it alters where in the brain the colour is processed in future. Anna Franklin.

Anna Franklin: One of the effects that we've recently been studying is something called a colour category effect. Now this is where people are better at discriminating between two colours if they belong to a different colour category, such as blue and green, than if they belonged to the same colour category such as two greens. Okay, now we’re interested in the nature and the origin of this effect and how it changes across development and what the role of language in this effect is.

Chris Smith: Why should there be a role of language at all, Anna?

Anna Franklin: Okay, so some people argue that the effect is created by language: because we use different terms for green and blue, that makes them look more dissimilar than two colours that we would call by the same term.

Chris Smith: So if you had two greens, because you're using the word green, or a bit green, in both cases, they're sort of tethered together in the brain, whereas blue and green, having different terms, are distinct in the brain and therefore the processing is different?

Anna Franklin: Yes, that's exactly one argument. Another argument is that the distinction between blue and green, those two categories, is actually fundamentally there without language. So we've shown that infants, when they're categorising colour, will use the right hemisphere, the right side of their brain, whereas adults will use the left side of their brain.

Chris Smith: And I guess that’s important because, of course, language is processed on the left side of the brain which lends credence to this idea that language plays a role in distinguishing these things?

Anna Franklin: Yes, exactly. So the left hemisphere is dominant for most language functions and so the argument is that the category effect is stronger in the left hemisphere in adults because the language is reinforcing the distinction between the green and blue colours.

Chris Smith: So how did you set out to test this in your toddlers?

Anna Franklin: Okay, so we tested toddlers because we were interested in when the right to left hemisphere switch occurs, and we had a hypothesis that it would occur when toddlers learned the words for blue and green. So what we did was we tested toddlers between the ages of two and five, and we had one group of toddlers who were still learning the words for blue and green, so they were a bit confused about how to label colours, and then we had another group of toddlers who had actually accurately learned what the blue and green colours were called.

Chris Smith: And how were you doing that? How could you see which hemisphere was becoming active?

Anna Franklin: Yes. So we use a trick. Basically anything shown to the left side of your visual field will initially be processed by your right hemisphere, so it crosses over. And so, for example, if you're faster at detecting something that's shown to your left visual field, then it means that your right hemisphere is more efficient at processing it. And this allows us to make a judgment as to which hemisphere the category effect is occurring in.
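To make the logic of that visual-field trick concrete, here is a minimal sketch with made-up reaction times; it is not the study's analysis code, just an illustration of how a category effect can be compared across the two visual fields.

```python
# Minimal illustration of the visual-field logic, using made-up reaction times.
from statistics import mean

# Hypothetical mean reaction times (ms) for spotting a target that is either a
# different colour category from the background ("between", e.g. green on blue)
# or the same category ("within", e.g. one green on another green).
rt = {
    ("left",  "between"): [515, 530, 525],   # left visual field -> right hemisphere
    ("left",  "within"):  [600, 610, 595],
    ("right", "between"): [560, 555, 565],   # right visual field -> left hemisphere
    ("right", "within"):  [590, 585, 600],
}

def category_effect(field):
    """Within-category minus between-category RT; larger = stronger effect."""
    return mean(rt[(field, "within")]) - mean(rt[(field, "between")])

for field in ("left", "right"):
    print(f"{field} visual field: category effect = {category_effect(field):.0f} ms")

# A larger effect for targets in the left visual field points to right-hemisphere
# processing; a larger effect in the right visual field points to the left, which
# is the switch reported once toddlers learn the colour words.
```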

Chris Smith: And what do you find?

Anna Franklin: So what we find is that for toddlers who are still learning their colour words, the category effect is stronger when the target's in the left visual field, so they're faster at detecting, say, a green on a blue background when that colour is in the left visual field.

Chris Smith: And once they learn the words?

Anna Franklin: Once they learn the words, then for the toddlers who have the words for blue and green, the category effect is stronger for targets presented to the right visual field, which initially projects to the left hemisphere, so it's showing a left hemisphere bias. So, in a nutshell, what we're finding is that toddlers who don't yet know the words for blue and green, their colour category effect is right hemisphere-based, whereas toddlers who have learned the words for blue and green, their colour category effect is stronger in the left hemisphere, as it is in adults who also know the words for blue and green. So we're finding a right to left hemisphere switch that occurs when the word for colour is learned.

Chris Smith: That’s absolutely fascinating. Are there any other aspects of brain processing other than just colour distinguishing which could be tied up with language in this way?

Anna Franklin: Yes, so we’re quite excited about the finding because there’s a lot of scope to test whether it generalises to other domains, so categorisation is a really pervasive aspect of our mental life so we’re categorising all the time. So if I could see your face, I would be able to see whether you’re happy or sad, whether you’re interested or bored, and it's possible that those kinds of categories as well also show this right to left hemisphere switch according to whether those categories have been lexicalised or not.

Chris Smith: It's amazing to think that learning the words for something can reconfigure how the brain processes those things in future. That was Anna Franklin. She’s at the University of Surrey and she’s published that study in this week’s PNAS.

This is the Naked Scientists: Up All Night, with me Chris Smith, and time now for this week's Stuff and Non-Science, where we massacre myths and bash bad science, hopefully without dropping too many cats out of the window. Here's Diana O'Carroll.

Diana O’Carroll: This week’s Stuff and Non-Science is all about dropping cats. Apparently a cat always lands on its feet but there’s only one way to find out, so here’s Sorrel Langley-Hobbs.

Sorrel Langley-Hobbs: It is true most of the time that they do have a very good righting reflex, so they generally will land on their feet, but it does depend on the weight of the cat, the height that they're falling or jumping from, whether there are any obstacles on the way down and the landing surface. When they fall from heights of more than five storeys, cats do reach their terminal velocity, which is about 60 miles an hour. At this velocity, their vestibular system is no longer stimulated, so rather than extending their legs they hold them horizontal and fall in a splayed position like mini parachutes, and they actually tend to land on their chest and their chin.

One paper did suggest that if they fall from more than five or six storeys, they actually have fewer injuries than when falling from lower storeys, and there have been examples of cats that have fallen from 32 storeys in New York that actually survived with little more than a chipped tooth and a bit of air in the lung.

Diana O'Carroll: Sorrel Langley-Hobbs from the Cambridge Veterinary School. However, it's not advisable to go dropping your cat to test this out.
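As a rough sanity check on that 60 mile an hour figure, here is a simple terminal-velocity estimate; the cat's mass, frontal area and drag coefficient are assumed values chosen purely for illustration.

```python
# Rough terminal-velocity estimate for a falling cat; the mass, drag
# coefficient and frontal area are assumed values, not measured ones.
from math import sqrt

m = 4.0      # kg, a typical domestic cat (assumed)
g = 9.81     # m/s^2, gravitational acceleration
rho = 1.2    # kg/m^3, air density near sea level
c_d = 1.2    # drag coefficient for a splayed, parachute-like posture (assumed)
area = 0.06  # m^2, frontal area with legs spread (assumed)

# At terminal velocity the drag force balances the weight:
#   m * g = 0.5 * rho * c_d * area * v**2
v_terminal = sqrt(2 * m * g / (rho * c_d * area))

print(f"Terminal velocity: {v_terminal:.0f} m/s (about {v_terminal * 2.237:.0f} mph)")
# With these assumptions the answer is roughly 30 m/s, around 65 mph,
# in the same ballpark as the figure quoted in the interview.
```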

So that’s it for this week’s Stuff and Non-Science but if you have a bit of science knowledge you don’t believe in then send it to me, diana@thenakedscientists.com.

Chris Smith: Although I wouldn’t recommend using a cat for a parachute - thank you Diana. That’s Diana O’Carroll with this week’s Stuff and Non-Science.

Well that’s it for this time. We’re back next week with another roundup of the latest findings from the world of science. The Naked Scientists: Up All Night is produced in association with the Open University and you can follow up on any of the items in the programme via the OU’s website which is at open2.net/nakedscientists. Or, alternatively, you can follow the links on the Five Live Up All Night website. Production this week was by Diana O’Carroll from the nakedscientists.com and I'm Chris Smith. Until next time, goodbye!

Does it sound good? Why not get the Breaking Science podcast, and receive new episodes as they're released?


Background

These are the sources used by the team in making the show:

In the news

'Sequencing the nuclear genome of the extinct woolly mammoth'
by Webb Miller, et al
in Nature

'Sex differences in vocal communication among adult rhesus macaques'
by Nathalie C. Greeno, Stuart Semple
in Evolution and Human Behavior

'Light opens up a world of sound for the deaf'
by Rachel Nowak
in New Scientist

'Plumbing the oceans could bring limitless clean energy'
by Phil McKenna
in New Scientist

Interviews

Mark Niedre, "Early photon tomography allows fluorescence detection of lung carcinomas and disease progression in mice in vivo", by Mark J. Niedre, Ruben H. de Kleine, Elena Aikawa, David G. Kirsch, Ralph Weissleder, and Vasilis Ntziachristos in PNAS

Anna Franklin, "Lateralization of categorical perception of color changes with color term acquisition," by A. Franklin, G.V. Drivonikou, A. Clifford, P. Kay, T. Regier, and I.R.L. Davies in PNAS

Sorrel Langley-Hobbs for 'Stuff and Non-Science'
