The team explores how computers can now interpret what someone is seeing just from their brain activity – could we "see" dreams in the future? Oil extracted from used coffee grounds can be used to make a better-smelling biodiesel; chimps recognise faces in the same way we do; how proteins locate specific DNA sequences so efficiently; and the ethical issues surrounding the use of robots for care and war.

Plus, in 'Stuff and Non-Science', are meteorites hot when they hit the ground?


Chris Smith: In this week’s show, how Up All Night listeners’ favourite source of caffeine could be used to power cars.

Ben Valsler: Researchers have found that oil extracted from used coffee grounds makes very viable biodiesel. And currently with the amount of coffee we drink we could supply up to 350 million gallons of biofuel. And interestingly the fuel itself actually smells of coffee - which might be quite a lot nicer coming out of your exhaust than chip shop smells.

Chris Smith: Ben Valsler, who will be percolating his way through this week’s science news. Also on the way we take an in-depth look at the rising use of robots for just about everything.

Noel Sharkey: Service robots are the kind of robots that do domestic work or work in your farms, pump petrol, go out to war and fight for us. There are 5.5 million of those now on the planet since the turn of the century, and the World Statistics Authority say there will be 11 million by 2011. So with this number of robots we really need to look at what they’re going to do now and not be overtaken by them.

Chris Smith: A scary thought. That’s Noel Sharkey who’ll be talking about the prospect of hiring a robot to take care of your children or even your elderly relatives - but is it safe or ethical?

Plus fiery objects from space, just how hot is that meteor that’s come blazing through the atmosphere and landed in your garden? We'll find out in this week’s Stuff and Non-Science.

Hello, I'm Chris Smith and this is the Naked Scientists: Up All Night which is produced in association with the Open University.

First, let’s take a look at some of this week’s top science news stories from around the world with our science correspondent Ben Valsler.

In just a moment, scientists have found a new source of environmentally friendly fuel, and that's coffee grounds, so maybe we can look forward to a richer, smoother blend of biodiesel in future. First, though, scientists in Japan have developed a way to read people's minds, Ben.

Ben Valsler: Well, yes, scientists in Kyoto have reproduced an image based on how the brain responds to what you're looking at, effectively reading an image straight out of the brain. Yoichi Miyawaki and colleagues at the ATR Computational Neuroscience Laboratories have published a report in the journal Neuron where they've used functional magnetic resonance imaging, that's fMRI, to observe the changes in brain activity when a subject is looking at an image.

Chris Smith: So just talk us through what they do here and how the study works in order to read what the individual’s brain is doing.

Ben Valsler: Well, the new bit here is that they use an incredibly complicated computer model which they train to recognise what happens in the brain in response to certain images, and the system can then accurately reproduce images it was never trained on. The images they showed were very simple, high-contrast pictures made up of random patterns of patches, which they call contrast maps.

Now in the training stage they showed the subject 440 random images for about six seconds each, and while they did this they got their computer model to observe the fMRI results. The computer model then went to work by analysing small overlapping portions of the fMRI data in three dimensions and then putting together the data from each portion to reconstruct how the brain responded to each image.

Chris Smith: I get it. So what they're basically doing is showing people pictures and working out how the brain's visual area responds to them, so that you can pick out which patterns of brain activity correspond to which shapes, and begin to work out how the brain is reacting to any shape you might present to it.

Ben Valsler: Exactly that, yes, and once they had trained it on all the random contrast maps, which are just random blocks of black, grey and white, they then moved on to showing the subjects more defined images such as letters and shapes, and from the fMRI data the computer model was able to reconstruct the images as seen by the subject. What they did with this was actually spell out the word 'neuron'. So it was able to pick out the shape of the 'n', the 'e', the 'u' and so on and spell out the word based solely on what was going on in the brain.
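[For readers who want to see the general idea in code: the sketch below is a minimal toy version of this kind of decoding, not the method from the Miyawaki paper. It invents a linear "voxel" forward model, trains one ridge-regression decoder per pixel of a 10x10 contrast map, and then reconstructs a letter-like image it was never trained on. Every number and name in it is an illustrative assumption; the real study combines multiscale local image decoders over overlapping patches.]

# Toy sketch: "reading out" a simple contrast map from simulated voxel data.
import numpy as np

rng = np.random.default_rng(0)

IMG = 10                      # 10 x 10 pixel contrast maps (assumed size)
N_PIXELS = IMG * IMG
N_VOXELS = 300                # simulated fMRI voxels (assumed)
N_TRAIN = 440                 # matches the 440 random training images mentioned above

# Invented forward model: each voxel is a noisy linear mixture of the pixels.
mixing = rng.normal(size=(N_PIXELS, N_VOXELS))

def brain_response(images):
    """Simulate voxel activity for a batch of flattened images."""
    return images @ mixing + 0.5 * rng.normal(size=(images.shape[0], N_VOXELS))

# Training stage: random high-contrast images and their simulated responses.
train_imgs = rng.integers(0, 2, size=(N_TRAIN, N_PIXELS)).astype(float)
train_vox = brain_response(train_imgs)

# Fit a ridge-regression decoder mapping voxel patterns back to pixel values.
lam = 10.0
A = train_vox.T @ train_vox + lam * np.eye(N_VOXELS)
W = np.linalg.solve(A, train_vox.T @ train_imgs)     # shape (N_VOXELS, N_PIXELS)

# Test stage: a letter-like image the decoder was never trained on.
test_img = np.zeros((IMG, IMG))
test_img[2:8, 3] = 1.0                               # left stroke of a rough 'N'
test_img[2:8, 6] = 1.0                               # right stroke
test_img[3, 4] = test_img[4, 4] = test_img[5, 5] = test_img[6, 5] = 1.0  # diagonal

test_vox = brain_response(test_img.reshape(1, -1))
recon = (test_vox @ W).reshape(IMG, IMG)

# The thresholded reconstruction should recover the rough shape of the letter.
print("pixelwise agreement:", np.mean((recon > 0.5) == (test_img > 0.5)))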

Chris Smith: So having shown the person all these different visual stimuli and worked out how the brain reacted, they can then present 'neuron' to the person visually, record what the brain does, and by piecing together just the brain activity work out what the person must have been looking at - which is pretty impressive.

Ben Valsler: It's very impressive stuff. At the moment it only works in black and white, but they think that improving the measurement accuracy should enable it to work in colour, and they're very optimistic, thinking that one day we may be able to produce images from dreams or even a direct readout of someone's feelings.

Chris Smith: It also sounds a bit scary and spooky because one wonders where this is going to go next. Will we be interrogating criminals or suspects in a brain scanner and then seeing what their brain's doing in order to find out if they're lying? But I guess we won't find out for the time being. Now, from brain scans to biofuels: tell us how coffee, said to power many scientists and mathematicians, could also power cars.

Ben Valsler: I love this story. Researchers have found that oil extracted from used coffee grounds makes very viable biodiesel, and currently with the amount of coffee we drink we could supply up to 350 million gallons of biofuel, and they’ve put this report in the Journal of Agricultural and Food Chemistry. You can make biodiesel from many different plant oils such as soya bean or palm oil and even cooking oil recycled from restaurants which gives your exhaust a very distinctive chip shop smell. However, with the world’s population rising, we must strike a balance between the land we use to grow food and the land set aside to grow crops to meet the demand for biodiesel.

Chris Smith: How do we actually do it though? How does one get all the used coffee grounds and turn that into something that you can put into your car?

Ben Valsler: Well, Mano Misra and colleagues at the University of Nevada observed that spent coffee grounds contain between 11 and 20 per cent oil by weight, which is actually a similar amount to more traditional biodiesel crops like rapeseed. So they managed to grab lots and lots of used grounds from a multinational chain of coffee shops, grounds which would otherwise have just been sold off for compost.

So what they did was dry out the grounds and mix them up with solvents like ether and hexane; the oils dissolve into the solvents, and then they filter out all the solid stuff, which can still go on to be compost. The solvents are then evaporated off, leaving just the oil, which has to go through a stage called transesterification, where the oil is reacted with an alcohol to convert it into biodiesel. They kept it going until there was no trace of oil left, which showed the whole lot had been converted to biodiesel. And interestingly the fuel itself actually smells of coffee - which might be quite a lot nicer coming out of your exhaust than chip shop smells.
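[A rough back-of-envelope check of the "up to 350 million gallons" figure, for readers who like the arithmetic. The inputs below - roughly 8 million tonnes of spent grounds a year, the middle of the 11-20 per cent oil range, a typical biodiesel density - are assumptions for illustration, not values taken from the paper.]

# Rough back-of-envelope check of the biodiesel-from-coffee figure.
# All inputs are assumptions for illustration, not values from the paper.
SPENT_GROUNDS_KG_PER_YEAR = 8.0e9   # ~8 million tonnes of spent grounds worldwide (assumed)
OIL_FRACTION_BY_WEIGHT = 0.15       # middle of the 11-20% range mentioned above
BIODIESEL_DENSITY_KG_PER_L = 0.88   # typical biodiesel density (assumed)
LITRES_PER_US_GALLON = 3.785

oil_kg = SPENT_GROUNDS_KG_PER_YEAR * OIL_FRACTION_BY_WEIGHT
# Treat transesterification as converting essentially all the oil mass to fuel.
biodiesel_litres = oil_kg / BIODIESEL_DENSITY_KG_PER_L
biodiesel_gallons = biodiesel_litres / LITRES_PER_US_GALLON

print(f"roughly {biodiesel_gallons / 1e6:.0f} million gallons a year")
# With these assumptions the estimate comes out around 360 million gallons,
# the same ballpark as the figure quoted in the programme.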

Chris Smith: It would certainly give any caffeine addicts a hankering for a cup of coffee, wouldn't it? But is this actually beneficial to the environment? Because with all those steps and all those other chemicals being slung into the equation, does it still make economic and scientific sense to do this?

Ben Valsler: Well, simply put, we're producing lots of biodiesel anyway, so the chemicals that go into the process are going to be used as production expands whatever the feedstock. By using something like spent coffee grounds, we're taking a source that's already being produced and is currently going to waste, and when they tested it, they found it's better quality and more cost-effective than other waste sources of oil such as used chip shop oil. So it does look like it may be a very good source of biodiesel with a slightly lower environmental impact than some of the others.

Chris Smith: We'll just have to wait and see, I think. Now back to the brain, and there are interesting insights from chimpanzees' brains as to how they decode faces.

Ben Valsler: Yes, chimpanzees are known to recognise other chimps just by their faces, but now researchers have found the part of the chimpanzee brain that's actually responsible, and it's basically the same bit of the brain that does the job in humans.

As they are our closest relatives, the differences between us and chimps are really the differences that make us human. Now, publishing in Current Biology, John Votaw and his colleagues from Emory University studied chimpanzee brain activity after getting the chimps to match up pictures of fellow chimps’ faces.

Chris Smith: How do they know that the chimps were actually recognising the other animals and therefore matching them up on those grounds?

Ben Valsler: Well, to check that what they were seeing was genuinely face-related, they made sure that the chimps performed two matching tasks. In the first, they were shown three pictures of chimp faces, two identical and one different; in the second, they did the same thing with ordinary clip art, so again two identical pictures and one different, and in each case the chimps had to match up the two identical pictures. And with the face task they saw significant brain activity in the regions known to be responsible for face recognition in humans.

Now this included parts called the superior temporal sulcus and the orbitofrontal cortex, which sits at the front of the brain just behind your eyes, and they also recorded face-selective activity in a brain region called the fusiform gyrus, which lights up in response to faces in humans. Now faces are a very special case: humans have a tiny region of the brain called the fusiform face area, or FFA. This study didn't actually find that area in chimps, but as the chimpanzee brain is only about one-third the volume of the human brain, it may just be hiding.

Now previous research has shown a very different pattern of brain activity in response to faces in monkeys, in particular macaques, so it could be that chimps and us share a novel way of recognising and responding to faces.

Chris Smith: One could argue that because we're all very social animals that rely on group activity for success, this is a very important thing for them to be able to do: to know who's a friend and who might be an enemy.

Ben Valsler: Indeed, and who stole food from them last time, who they owe food to, all of these things that really keep a society cohesive.

Chris Smith: Well, talking about our nearest relatives and the fact that we share lots of DNA with them, what are people finding out this week about how DNA interacts with proteins, the little tiny machines that copy it?

Ben Valsler: Well, in almost every cell in your body proteins are seeking out the bit of DNA that they need in order to keep the cell running. Now this means that finding the exact DNA sequence among thousands is vitally important.

Several mechanisms for how the proteins do this have been put forward, and until now the best explanation has been something called direct readout. This is where the protein binds very closely to the DNA strand, which has the effect of distorting the DNA, stretching it a bit out of shape - rather like giving it a really big, strong bear hug or a painfully tight handshake. It's also slow, because binding to the DNA, distorting it and then detaching again all takes time, and the protein may have to do this many times over before it hits the right sequence.

But now Nancy Horton and her colleagues at the University of Arizona have discovered that proteins may only need to lightly brush the DNA, giving it no more than a gentle squeeze, to find their target site, which speeds up the process without losing any accuracy. Publishing in the journal Structure, they examined a mechanism called indirect readout, where the protein doesn't read the DNA bases directly yet is still able to locate its binding site. Rather than clamping on and distorting the DNA in the full bear hug, it just gives the molecule a gentle squeeze - rather like trying to work out what your Christmas presents are without actually undoing the sellotape.
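[A toy illustration of why a quick, light check can speed up the search: if a full "bear hug" interrogation is slow at every site, while a cheap "gentle squeeze" pre-check reserves the slow step for the few sites that look promising, the overall search gets much faster. This is not the model from the Structure paper; all the timings and rates below are made up for illustration.]

# Toy model: direct readout at every site versus a light indirect pre-check.
import random

random.seed(1)

N_SITES = 100_000        # non-target sites scanned before the target turns up (assumed)
T_FULL = 1.0             # time to fully bind, distort and release one site (assumed)
T_LIGHT = 0.05           # time for a light, indirect-readout check (assumed)
FALSE_POSITIVE = 0.01    # fraction of sites that pass the quick check anyway (assumed)

# Strategy 1: full interrogation of every site.
direct_time = N_SITES * T_FULL

# Strategy 2: quick squeeze at every site, full interrogation only of the
# few sites that pass the pre-check.
false_hits = sum(1 for _ in range(N_SITES) if random.random() < FALSE_POSITIVE)
indirect_time = N_SITES * T_LIGHT + false_hits * T_FULL

print(f"direct readout only:     {direct_time:,.0f} time units")
print(f"with indirect pre-check: {indirect_time:,.0f} time units")
# With these made-up numbers the pre-check speeds the search up by more than
# an order of magnitude without skipping any candidate site.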

Chris Smith: And why do the researchers think that this is useful to know? Tell us what the actual impact of this will be.

Ben Valsler: Well, people are actually developing DNA-binding proteins to turn genes on and off, says Horton, and this is really the basis of gene therapy: it could allow us to cut out bad genes and replace them with good ones. And they've found that indirect readout is vitally important not just for finding the right sequence but for finding it quickly.

Chris Smith: So that’s how the machinery that reads the recipes in your DNA code does it so quickly. Thank you, Ben. That was Ben Valsler from the Naked Scientists with a roundup of some of this week’s top science news stories. If you’d like to follow up on any of those items, the details and the references are all on the Open University’s website which is at open2.net/nakedscientists.

In just a moment: how hot is a meteorite from outer space when it lands on Earth? That's coming up shortly. But first to the new breed of robots that are increasingly doing the boring and repetitive jobs that we don't like, like pumping petrol or spraying cars. Well, now the industry has taken something of a more sinister turn, because developers are producing robots that can replace a parent or a carer for an elderly person, and Noel Sharkey, who is Professor of Robotics at Sheffield University, is concerned about the possible impacts.

Noel Sharkey: Recently there’s been a great surge in robotics. It's something we've been waiting for since the 50s really, and what’s happening now is that technology has got to the point - sensor technology, electronic technology - where it's just a matter of creativity as to what kind of robot you produce. But if you look at the number of service robots, and I’ll explain what service robots are, service robots are the kind of robots that do domestic work or work in your farms, pump petrol, go out to war and fight for us, there are 5.5 million of those now on the planet since the turn of this century, and the World Statistics Authority say there will be 11 million by 2011.

So with this number of robots we really need to look at what they're going to do now and not be overtaken by them. Bill Gates from Microsoft, for instance, has recently said that he expects robotics to take on the same role as the PC; in other words, it will become as pervasive as the PC is now.

Chris Smith: Now Bill Gates is also the person who said famously a year or so ago that he thought that the problem of spam would be solved within twelve months, and we've still got that.

Noel Sharkey: Yes, that’s true.

Chris Smith: But the issue with robots is that whilst they’re very good mechanically they’re not good cognitively, they don’t have a human brain, and this means that people tend to view them as potentially having one, but they don’t, and therein lies a problem, doesn’t it?

Noel Sharkey: It's a very serious problem in fact because I talk to the military quite a lot who are the biggest developers of robots at the moment. I mean there are four thousand robots in Iraq at the moment on the ground. And they’re remote controlled and they’re for a very good purpose, they’re for disposing of bombs, although there are some armed ones as well. But I have to talk to the military because they have a sort of science fictionish notion that they get from researchers.

I mean the American military are spending $4 billion over the next two years on robots, and so they go to your lab and they say hey I’ve got $2 million to spend here, and what researcher’s going to say well look you’ve got the wrong idea about this - but people generally don’t, so they have an illusion about the smartness of these robots.

Chris Smith: I think probably the worry is that if you have a robot doing something terrifically useful, such as disarming a mine, which means a person is removed from danger, that's a very different situation from an armed robot operating autonomously, where you're saying: go out onto the battlefield and run a computer program that picks the target and attacks it.

Noel Sharkey: That's correct, that's the real difficulty, and that's what's happening. Aerial robots as well - BAE Systems tested a team of autonomous aerial robots. That's like fighter planes, essentially, and they select their own targets and communicate among themselves. And they haven't been deployed yet because of legal problems, but I think they'll overcome these legal problems soon. But the big problem there is that according to humanitarian law, the laws of war, the Geneva Convention, the prime thing that a military must be able to do is to discriminate between a combatant and an innocent civilian.

Chris Smith: It's not just the military that are interested in robots, though, is it? Because they're becoming quite pervasive in the domestic environment: people are talking about robots in healthcare. We've had doctors who are now doing their ward rounds in some countries via a robot which wheels its way around the ward. They're making arguments on infection control grounds as to why this is a good idea, but patients don't necessarily like it. You've also got people suggesting that you could have childcare robots, even domestic care robots for old people.

Noel Sharkey: That's correct. Now, as far as the medical robots are concerned - and I know you're a doctor, Chris, so I've had words, you know, I've had a bit of a public debate with Robert Winston about this - as far as I'm concerned the medical robots are good. They're very good. I mean, the robot surgeons at the moment are not any better than a normal surgeon - there's a normal surgeon working them - but my point to him is that when these become portable, which they will do if you keep using them, you'd be able to rush them to a roadside accident on the motorway and not have to bring the patient back to the hospital. Or if there's a national emergency or disaster, you could have assistants on the site and one single good surgeon operating remotely on several people at once with their help.

So that kind of thing I think is quite good. I mean there are limits of course and the idea of a mobile doctor on the ward looks very lazy, but if you happen to be living in India in a little village where there are no doctors, the idea of having a remote doctor in Delhi looking at you and examining you is quite a good thing, you’ll have to admit. But let’s get on to some of the other issues. One of the other issues of course is elderly people, care of the elderly. Now this is a wonderful thing as well if taken in the right context. There are many, many robotic aids now being developed in Japan, including skeletal suits so you can stand up with a robot suit on and walk.

But my worry is that although they will keep old people out of hospital for longer, if we rely on them too much the old people won't get any human contact. Because what could happen is that they end up being cared for exclusively by robots, and normally in this country the Meals on Wheels staff, or the people who come in to wash you, might be the only human contact you have when you're 80 or 85 years old and a bit frail. So being completely cared for by robots would not be a good thing.

Chris Smith: And looking at the opposite end of the age spectrum, there was quite a famous episode of Star Trek: The Next Generation about 20 years ago which shows how important this is, where a young boy becomes very attached to an android, Data, on the Enterprise. Now I think that if we had childcare under the control of robots there's probably quite a danger that people could lose the ability to interact with human beings. We already worry about children losing that ability because they spend their lives immersed in the internet as it is. I think this is a very real worry.

Noel Sharkey: Yes, it is. I mean if I had Commander Data in my household, as he stands as an actor on television, I’d be quite happy for him to look after my kids because he’s super smart. But the kinds of robots we’re talking about, there are 14 companies in Japan and South Korea that have developed these, and there’s quite a lot of experimentation going on in the United States with it, and they’ve even found that children after a short exposure to a robot prefer the robot to their teddy bear. So they will say which one do you want us to shred, your teddy bear or the robot, and they’ll choose the teddy bear for shredding.

Chris Smith: I suppose there just aren't any trials, are there, on what actually happens when you develop a relationship with a machine from a young age. But we do know what happens if you do this in animals: animals like ducks, which imprint on objects and treat them as their parent, don't do very well if you do this with a machine. So what do you think the impact on a human could be?

Noel Sharkey: I think there could be very serious psychological impacts. But, of course, we can't do the proper experiments to find out, because it wouldn't be fair, would it. But we can look at other psychological findings. I mean, there were some old studies by Harry Harlow - admittedly with rhesus macaque monkeys - where the monkeys had surrogate mothers, metal mothers that fed them through a little teat, so they were brought up by a surrogate metal mother, and they preferred a metal mother on a swing that moved a bit more. But these monkeys were completely dysfunctional; they couldn't even mate in later life. So we don't want the same thing happening with our children. And there's no way of finding out without doing the experiments, or letting it happen, and we don't want to let it happen.

Now at the moment I'm reading on the internet people writing things like: I bought this - I won't mention the maker's name, but it's a very cheap childminding robot that just sits there - and this woman's writing how wonderful it is because now when she's working late at night she doesn't have to listen to the lonely cries of her child, and that's really sad to me.

Chris Smith: It is sad. It is also a big worry to think that people are replacing parenting with an electronic machine. But at the same time what are you advocating that we do about it, because it's one thing to identify this as an issue but the numbers speak for themselves. These robots are becoming more common, they’re becoming more likely to be found in the domestic environment, what do you propose that we actually set in place to make sure that there isn't a problem?

Noel Sharkey: Well, you're pointing out one of the difficulties there: at the moment these childminding robots cost something like $60,000, but the prices are crashing, and that's the problem. I would suggest that we don't use them for childminding. I mean, one of the things I'd like to get done - I'm not a legislator, but what I've been trying to do is get an international discussion going. Because I've read through all the laws, and the only law you can get someone on for leaving their child with a robot is negligence, and that hasn't been tested in court. So the problem with these robots - in fact you might think this is an odd problem - is that they're actually very, very safe.

You can watch your child all the time through the robot's eyes with a little window coming up on your computer screen, or you can talk to them through the robot using your mobile phone. The child wears a radio frequency tag so the robot knows where it is at all times. So they're extremely safe. I mean, you might leave your child in front of the television, which I used to do for an hour here and there when I was writing a paper, but you have to keep popping in to see them. In this case, you don't have to pop in and see them at all. But the fact that they're safe means that the parent might not be found to be negligent.

Well, who knows what a future generation will say. So that's the kind of problem. And with elder care, really what you need to do is talk to Help the Aged and organisations of that sort, and have people sit down together at an international level to discuss what kinds of codes of ethics we'll put in place for the care of the elderly using these robots, so that we draw a line. With the military, as far as autonomous killing machines are concerned, I would like to see an outright ban until robots can pass a discrimination test.

Chris Smith: So Star Trek was on the right track all along, even 20 years ago. That was Professor Noel Sharkey making the case for a code of robot ethics. He's published a review on the subject in this week's edition of the journal Science. You're listening to the Naked Scientists: Up All Night with me, Chris Smith, and it's time now for this week's Stuff and Non-Science, where we murder urban legends. Applying her thermometer to bits of rock, here's Diana O'Carroll.

Diana O'Carroll: This week's Stuff and Non-Science is the burning fireball of a meteorite you're supposed to find sizzling away in a hole in the ground after it's landed. Leaving no stone unturned, here's Vic Pearson.

Vic Pearson: The myth that meteorites are hot when they land generally stems from Hollywood movies, although throughout history fireballs have been seen entering the Earth's atmosphere, which of course suggests to people that these objects are going to be hot when they land. But in actual fact they're cold when they land. When the meteor is travelling through space it's obviously very, very cold. It enters the atmosphere at quite a speed, somewhere between 12 and 70 kilometres a second - far faster than the speed of sound - and the outside of the rock heats up: in the same way as rubbing your hands together causes frictional heating, the atmosphere heats up the surface of the meteor, and this also causes it to break up.

The heating actually strips the outer edges of the meteorite away, so it strips material away and takes the heat with it. The heating only lasts a few seconds, and as the meteor falls through the Earth's atmosphere it slows down; once it's no longer being heated by the atmosphere it cools down, so by the time it lands on the surface of the Earth it's cold.

Now we know that the atmosphere doesn't heat these up beyond the outer surface because one of the things we look for when we're hunting for meteorites - a meteorite being a meteor once it's landed - is called a fusion crust. A fusion crust forms through the action of frictional heat on the surface of the rock, and it's only a few millimetres thick. In the centre of the meteorite there's been no effect from the heating - in fact we still find some very light elements and molecules that have been retained, so there's definitely been no heating there - and so meteorites are cold when they land.
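[A quick back-of-envelope estimate in support of that few-millimetre figure: heat conducts into rock a characteristic distance of roughly the square root of (thermal diffusivity x heating time). The diffusivity and heating time below are typical assumed values, not measurements of any particular meteorite.]

# How far does heat soak into the rock during a few seconds of ablation heating?
import math

KAPPA = 1e-6          # thermal diffusivity of rock, m^2 per second (assumed)
HEATING_TIME = 10.0   # seconds of significant atmospheric heating (assumed)

depth_m = math.sqrt(KAPPA * HEATING_TIME)
print(f"heat penetrates roughly {depth_m * 1000:.1f} mm")
# About 3 mm - the same order as the few-millimetre fusion crust, which is why
# the interior of a freshly landed meteorite stays cold.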

Diana O’Carroll: Vic Pearson from the Open University. So a landed meteorite will probably be cold, but I wouldn’t advise using it in your drinks. If you have any non-science, then send it to me for stuffing, diana@thenakedscientists.com.

Chris Smith: So no burning craters after this month’s meteor shower. Thank you, Diana. That’s Diana O’Carroll with this week’s Stuff and Non-Science. Well that’s it for this time and in fact this year. We'll be back in 2009 with more of the latest findings from the world of science. The Naked Scientists: Up All Night is produced in association with the Open University, and you can follow up on any of the items included in the programme via the OU’s website, and that’s at open2.net/nakedscientists. Alternatively, you can follow the links to get there from the BBC Radio Five Live Up All Night website.

Production this week was by Diana O’Carroll from thenakedscientists.com, and I'm Chris Smith. Until next time, have a very merry Christmas and a happy New Year and goodbye.



Background

These are the sources used by the team in making the show:

In the news

‘Spent Coffee Grounds as a Versatile Source of Green Energy’
by Narasimharao Kondamudi, Susanta K. Mohapatra and Mano Misra
in Journal of Agricultural and Food Chemistry

‘Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders’
by Miyawaki et al
in Neuron

‘Face Processing in the Chimpanzee Brain’
by Votaw et al
in Current Biology

‘Early Interrogation and Recognition of DNA Sequence by Indirect Readout’
by Elizabeth J. Little, Andrea C. Babic and Nancy C. Horton
in Structure

Interviews

‘The Ethical Frontiers of Robotics’ by Noel Sharkey, in Science

Victoria Pearson for 'Stuff and Non-Science'