Blackhat, dark night: Could hackers really cause a power outage?

Updated Wednesday, 25th February 2015

Although the nuclear meltdown depicted in Blackhat is fiction, Mike Richards warns there are other ways those with bad intentions could switch our lights off...


[Image: Chris Hemsworth, star of Blackhat. (The other option was a photo of a North Korean nuclear plant.) Photo: Eva Rinaldi, under a CC-BY-SA licence.]

Every few years Hollywood decides to make a movie about computer hacking. A few are good (Wargames and Sneakers are both terrific films); some, such as Hackers and Swordfish, are best forgotten. The latest is Michael Mann’s Blackhat, supposedly inspired by a cyberattack on Iranian nuclear facilities, which is – well, I’ll let you make your own mind up about that.

The movie starts (and this is not a spoiler) with a computer attack on a Chinese nuclear power plant, resulting in a reactor meltdown and loss of life. So let’s use the movie as a jumping-off point and try to answer a question: are nuclear power stations vulnerable to computer attacks?

The answer is – sort of – yes. Several nuclear power stations have fallen victim to malicious software (often called malware). In January 2003, a computer worm called Slammer disabled safety monitoring software at the Davis-Besse nuclear plant in Ohio and left its computers vulnerable for more than five hours. In 2014, an employee at Japan’s decommissioned Monju nuclear reactor updated a free software package without realising that it contained malware, which allowed attackers to steal personal information about workers for more than five days. The same year, Korea Hydro and Nuclear Power found malware on computers at more than one site across the country. There have also been reports from Russia that at least one nuclear facility has been badly affected by malicious software. Clearly, power stations are vulnerable.

But before getting too worried, it is worth explaining that none of these incidents affected the operation of the reactors or compromised their security. The malware only affected non-critical computers that are meant to be attached to the network and are used for the same sort of purposes as in any large business: sending emails, creating and distributing documents, keeping employee records and the like. Although they contained important data, none of these computers had anything to do with running the reactors themselves.

Nuclear reactors are computer controlled; under instruction from the plant operators, the computers monitor and adjust tens of thousands of networked components, including sensors, motors, valves and displays, to keep the reactor safe and meeting power demands. Crucially, the reactor network never connects to an outside network, and reactor computers do not come with standard interfaces such as USB ports that would allow an attacker to install software from a memory stick. These protections mean that an attack cannot pass from the outside world to the reactor computers (or vice versa). This deliberate break is called an air gap, and it is what stops Hollywood movie plots happening in the real world.

Reactor computers are designed specifically to control nuclear power stations – and nothing else – and their programs are amongst the most tested ever written. Multiple computers are used to control a single reactor, with each computer monitoring the same data and coming to independent decisions about what to do. At tiny, regular intervals, the computers share their decisions and a vote is taken about how the reactor’s operation should be adjusted. Faulty computers are outvoted and disconnected to protect the reactor.
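The voting idea is simple enough to sketch in a few lines of Python. This is purely illustrative – the channel decisions and actions below are invented, and real reactor protection systems run on certified special-purpose hardware, not scripts – but it shows how a faulty channel gets outvoted:

```python
# A minimal sketch of majority voting between redundant controllers.
# The decisions and action names are invented for illustration.
from collections import Counter

def vote(decisions):
    """Return the majority decision and the indices of outvoted channels."""
    winner, _ = Counter(decisions).most_common(1)[0]
    faulty = [i for i, d in enumerate(decisions) if d != winner]
    return winner, faulty

# Three channels read the same sensors and decide independently;
# here channel 2 has gone wrong and disagrees with the others.
decisions = ["hold", "hold", "insert_control_rods"]

action, faulty_channels = vote(decisions)
print(action)           # "hold" - the majority view is acted on
print(faulty_channels)  # [2] - the dissenting channel is flagged and isolated
```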

At the same time, each computer is constantly checking that the program it is running is the same program that was originally installed and that it has not been corrupted or modified since. Any change in the program results in an immediate shutdown of the computer, and quite possibly the reactor itself.
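The self-check amounts to comparing the running program against a fingerprint recorded when the software was installed. Here is a minimal sketch of the idea in Python; the known-good digest below is a placeholder, so the demo will always fail the check and shut down, which is exactly the fail-safe behaviour described above:

```python
# A minimal sketch of a program integrity self-check. The known-good
# SHA-256 digest would be recorded at installation time; the value here
# is a placeholder, so this demo always fails the check.
import hashlib
import sys

KNOWN_GOOD_DIGEST = "0" * 64  # placeholder for the digest recorded at install

def current_digest(path):
    """Hash the program image on disk."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def integrity_check(path):
    """Fail safe: any mismatch is treated as corruption or tampering."""
    if current_digest(path) != KNOWN_GOOD_DIGEST:
        sys.exit("Integrity check failed - shutting down")

# Check this very script, as a stand-in for the controller's program image.
integrity_check(__file__)
print("Program verified - continuing operation")
```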

Some of the latest reactor designs go even further and require the computers and software to come from more than one manufacturer, so although the computers all perform the same task, they do it in different ways on different hardware. In this way, even if a program has a bug, or the hardware has a fault, at least one computer will remain in operation long enough to bring the power station to a safe shutdown.

Three Mile Island and computer design

The result of all this investment in computer design is that computers have not been responsible for any serious accidents at nuclear power plants. The real concern is that humans are not necessarily very good at interpreting information from those computers. In the early hours of 28th March 1979, an automatic pressure relief valve stuck open in one of the two pressurised water reactors at Pennsylvania’s Three Mile Island (TMI) power plant. A stuck valve was not, in itself, especially serious, but the fault quickly escalated into the most serious civil nuclear accident before Chernobyl. A single, poorly designed indicator light led the plant’s operators to believe the valve had closed properly, when in fact huge amounts of radioactive cooling water were draining from the reactor.

Literally hundreds of warning lights and audible alarms went off in the first few minutes of the accident. The operators, unused to anything other than routine operation, were simply overwhelmed by a barrage of information far outside their experience. They did not have enough information to prioritise the tasks needed to bring the reactor back under control. In fact, it was nearly three hours before the true scale of the problem became clear, and by then the reactor had partially melted and the interior of the reactor building was seriously contaminated with radioactivity. Although little radiation escaped from the plant and there were no observable health effects from the accident, TMI-2 was a total write-off at a cost of $1 billion, and the American civil nuclear programme was put on hold for almost three decades.

Part of the investigation into TMI-2 concentrated on the design of nuclear reactor control rooms and how information is best presented to operators. As well as improving the instrumentation so that operators received more, and crucially more accurate, information, control rooms were redesigned to help operators when things go badly wrong. New control room displays group related information together, rather than forcing operators to hunt for it amongst other readouts. Alerts are prioritised so that operators can concentrate on the most urgent tasks. Finally, staff are now routinely trained on simulators capable of replicating the most severe of accidents.
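That prioritisation of alerts is easy to illustrate. The sketch below uses a priority queue so that the most urgent alarm surfaces first, rather than a flood of hundreds at once; the alarm messages and priority levels are invented for the example:

```python
# A minimal sketch of alarm prioritisation using a priority queue.
# Alarm names and priority levels are invented for illustration.
import heapq

alarms = []  # a min-heap: lower number = more urgent

def raise_alarm(priority, message):
    heapq.heappush(alarms, (priority, message))

raise_alarm(3, "Control room printer out of paper")
raise_alarm(1, "Reactor coolant pressure falling")
raise_alarm(2, "Feedwater pump vibration high")

# Operators see the most urgent alarm first, not a wall of lights.
while alarms:
    priority, message = heapq.heappop(alarms)
    print(priority, message)
```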

So there we have it: the computers running nuclear power stations are safely isolated from the outside world, they are designed to resist corruption, and the plants are run from modern, well-designed control rooms by highly trained staff. There’s nothing to worry about – is there?

The computers that keep the lights on

Well yes, actually. Our modern, highly connected world is terribly vulnerable to malicious attack, but the problem has nothing to do with nuclear power plants; it lies with computers called Programmable Logic Controllers (PLCs). A large part of our society would stop working without PLCs. All around the world, hundreds of millions of them are quietly working away: controlling assembly lines; moving gas, oil, water and sewage through pipelines; switching electrical networks on and off; carrying people in lifts and escalators; monitoring growing conditions in greenhouses; even making rollercoasters work better. PLCs were invented in the 1960s to reduce the time it took factories to adjust the machinery on their shop floors.

Before the PLC, every machine had to be hand-adjusted or rewired by trained staff – something that could take days or even weeks in a major factory. Once a PLC was fitted, the same machinery could be adjusted in a few seconds simply by loading new software. At first, someone still had to walk the factory floor and update each PLC individually, but as time went by more and more PLCs were connected to wired or wireless networks; now an entire factory floor, or an oil pump in the middle of the prairies, can be updated at the push of a button. The task was made even easier as businesses replaced creaky old control computers running bespoke software with modern PCs running the same Windows and UNIX operating systems found everywhere else, communicating with the PLCs using a standard set of commands – and then connected these control computers to the Internet. What could possibly go wrong?
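The convenience is easy to picture. In the sketch below the PLC addresses and the send_program helper are invented stand-ins for a vendor’s real update protocol, but the shape is right: one loop replaces days of walking the factory floor.

```python
# A toy sketch of a networked PLC update. The addresses and the
# send_program helper are invented; real updates use vendor protocols.

PLC_ADDRESSES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def send_program(address, program):
    # Stand-in for a vendor protocol; here we just report what would happen.
    print(f"Uploading {len(program)} bytes to PLC at {address}")

new_program = b"\x01\x02\x03"  # the new control logic, as a binary image

# One loop replaces days of hand-adjusting each machine...
for address in PLC_ADDRESSES:
    send_program(address, new_program)

# ...and note what is missing: no password, no signature check, nothing
# to stop anyone else on the network from doing exactly the same thing.
```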

Quite a lot, actually. Connecting PLCs to an external network makes them vulnerable to attack from outside – something made worse by the fact that many PLCs have no real security: their built-in programs can easily be overwritten by malicious software. The first PLCs sat on nicely isolated networks (if they were networked at all) and relied on custom software, incompatible with anything else, that needed highly trained specialist engineers. Nowadays everything is standardised, and almost everything is accessible with a bit of effort. A 2012 study found more than 10,000 PLC systems in the UK, including some responsible for water and power plants, connected to the internet. Some had almost no protection against attackers.
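Surveys like that 2012 study work by probing internet addresses for the ports that industrial protocols listen on; port 502, for example, is the standard port for Modbus/TCP, a protocol spoken by many PLCs. Here is a minimal sketch of the idea, using reserved documentation addresses – probing machines you do not own is, of course, illegal:

```python
# A minimal sketch of scanning for internet-reachable industrial devices.
# The addresses are from the reserved documentation range and will not
# respond; probing systems you do not own is illegal.
import socket

MODBUS_PORT = 502  # the standard Modbus/TCP port used by many PLCs

def responds(address, port, timeout=1.0):
    """Return True if something accepts a TCP connection on this port."""
    try:
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except OSError:
        return False

for address in ["192.0.2.10", "192.0.2.11"]:
    if responds(address, MODBUS_PORT):
        print(f"{address} exposes a Modbus service to the internet")
```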

You might ask, quite reasonably, why PLC manufacturers don’t make their computers more secure. The warnings go back a surprisingly long way. In the wake of the 1995 Oklahoma City bombing, which destroyed a government building and badly damaged key infrastructure, the United States government convened the Marsh Commission, whose report pointed out the risk of an attack on energy infrastructure from the internet:

The capability to do harm… is growing at an alarming rate; and we have little defense against it. … We should attend to our critical foundations before we are confronted with a crisis, not after.

A second report from the same period specifically cited the possibility of an attack on a PLC in the electrical grid.

The Year 2000 Millennium Bug, which threatened many older PLCs, came and went; then the first attack on PLCs, at an Australian sewage treatment plant in 2000, caused serious environmental damage. Yet years later, security experts were still routinely reporting that businesses had not installed protective firewalls around their internal networks to stop attacks from outside; many did not even bother to check the log files kept by their computers to see if they had been attacked; and PLC manufacturers were still not implementing basic security features in their devices.

Attacks on PLCs are increasing. In 2011, security researchers showed how easy it would be to take over the computers responsible for adding purification chemicals to drinking water in Southern California. Later that year, a hacker accessed the controls of a water plant in South Houston to demonstrate that the system was vulnerable. No harm was done – this time.

The following year, Telvent, a Canadian company that makes control software for devices connected to oil and gas pipelines, was the target of a concerted cyberattack by hackers linked to the Chinese military. But the threat had already made headlines in 2010, when American and Israeli intelligence attacked PLCs at Iran’s Natanz uranium enrichment plant.

Natanz separates different isotopes of uranium, increasing the proportion of fissile material. Low levels of enrichment produce fuel for nuclear reactors, but further enrichment can produce weapons-grade uranium. Although Iran has signed the Nuclear Non-Proliferation Treaty, which prohibits it from making nuclear weapons, the Americans and Israelis have long suspected it of running a covert programme to become a nuclear weapons state. Rather than mount a military strike, the two countries developed a computer worm, called Stuxnet, to target the PLCs at Natanz and render the delicate and extraordinarily expensive plant useless.

Stuxnet was a qualified success: it certainly slowed Iran’s enrichment programme, but it may have unleashed an even worse threat on all of us, because it showed not only how industrial plants could be attacked, but that it had actually been done. Any reluctance to use malware as an instrument of war or terror has been weakened by Stuxnet, and there is now a race between military powers, intelligence services and criminals to develop ever more powerful malware aimed directly at the heart of our civilisation. In late 2014 we may have seen a second such attack, when Germany’s Federal Office for Information Security reported that a malware attack aimed at a steelworks had resulted in massive damage to a furnace. Little more detail was given, but it is clear that someone, somewhere attempted to cause harm – and succeeded.

No one knows where the next Stuxnet will come from or who it will be aimed at, but the next time your lights go out, it might not be a blown fuse.
