
The thorny issue of ‘driverless’ cars

Updated Tuesday, 8 May 2018
How will the law accommodate changing technology for driverless cars? Who is responsible if the vehicle crashes?


Google Driverless Car

Oh dear! The Google car has crashed and Google has admitted that it may be its fault. This was in California, but such vehicles could be on the roads of England and Wales in the next few years. Should we be worried? We don’t think so. According to The Highway Code, one must “always give way if it can help to avoid an incident” and, perhaps more importantly, one must “exercise full control of [one’s] vehicle at all times”. This raises an interesting philosophical debate about who – or what – is actually in control of an autonomous vehicle.

The legal position on driverless cars in England and Wales has yet to be determined, but it could be covered by an Act of Parliament, the common law, or both.

The most relevant Act is the Consumer Protection Act 1987, which provides that the producer and/or supplier of a “defective” product – that is, a product that is not as safe as “persons generally are entitled to expect” (section 3) – is liable to anyone who, because of the product, is injured or killed, or suffers at least £275-worth of property damage (section 2). In this case, Lexus was the producer of the physical car, but Google was the producer of the algorithms that enabled it to manoeuvre on roads without a driver. To the extent that this accident was the car’s “fault”, the defect must be in the algorithms, and Google clearly accepts this. Section 4 includes the “development defence”, which means that the producer will not be liable if the state of scientific knowledge at the time was not such that the producer could reasonably have discovered the defect. To its credit, Google does not seem to be arguing that it could not have discovered, before releasing the car onto the road, that other vehicles would not necessarily slow down for it.

The common law includes the principle of negligence, under which people and companies are liable to anyone to whom death, injury or loss is foreseeably caused because they failed to take as much care as the “reasonable” person or company would have done in the circumstances. The law recognises that perfection is unattainable, and so the courts have to decide in each case what would constitute reasonable care. The provision of a test driver could be seen as fulfilling Google’s duty of care to compensate for possible shortcomings in the car’s algorithms, and the company would be vicariously liable if the test driver did not perform this role to the reasonable standard of care.

In this case, the test driver’s assumption that the bus would slow down to allow the car to pull out in front of it seems to be a reasonable one: all road users have a duty of care to all other road users, and the timely pressing of vehicles’ brake pedals prevents innumerable accidents every day. A human driver of the car would probably have made the same initial assumption as the Google car: that the bus would respond to the car’s indicator by slowing down to allow it to pull out. The reasonable driver would then have checked – whether by eye contact, observation of an encouraging gesture, or an assessment of the bus’s speed – that the bus was actually decelerating before he or she completed the manoeuvre. An autonomous car, presumably, has only the third of these options open to it, so it needs a rapid-cancellation feature if it is not clear that the pulling-out would be safe.
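For readers curious about what that “check, then commit or cancel” logic might look like in software, here is a purely illustrative sketch in Python. It is not Google’s actual code: the vehicle class, the function name and the braking threshold are all invented for the example, which simply shows a car that completes the lane change only if the bus is demonstrably decelerating and the gap remains safe.

```python
# Hypothetical illustration only - not Google's software. A car that has
# signalled a lane change keeps checking whether the approaching bus is
# actually slowing down, and cancels the manoeuvre if it is not.

from dataclasses import dataclass


@dataclass
class TrackedVehicle:
    speed_mps: float          # current speed in metres per second
    acceleration_mps2: float  # negative values mean the vehicle is braking


def should_continue_merge(bus: TrackedVehicle,
                          gap_metres: float,
                          min_safe_gap_metres: float = 10.0) -> bool:
    """Return True only if the bus is clearly yielding and the gap is still safe."""
    bus_is_yielding = bus.acceleration_mps2 < -0.5   # assumed braking threshold
    gap_is_safe = gap_metres >= min_safe_gap_metres
    return bus_is_yielding and gap_is_safe


# Example: the bus is holding its speed, so the car should abort the merge.
bus = TrackedVehicle(speed_mps=6.7, acceleration_mps2=0.0)
if not should_continue_merge(bus, gap_metres=8.0):
    print("Cancel the lane change and stay in lane")
```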

An interesting feature of this story is that the California Department of Motor Vehicles has published the accident report on the internet, and it very clearly sets the scene. It is not clear why the bus driver did not have complete control of the bus, or did not anticipate the car’s movement. What is clear is that the car was moving very slowly relative to the bus while it was changing lanes, and that this difference in speed contributed to the collision. It is not clear whether the Google car was truly “at fault”, as there is the complicating circumstance of obstructions – sandbags – in the lane, and Google has not yet commented on how the algorithms that control the vehicle should have responded to these. Obstructions are, of course, a foreseeable feature of motoring, so autonomous cars must be able to react appropriately and safely to them.

However, putting all this aside, it is good to see both California and Google trying to be “open” about incidents, and this openness should be strongly encouraged. One way forward with new technologies that stretch the boundaries of our current legal systems is to allow companies and users to experiment and see what emerges.

Because Google cars autonomously make “reasoned decisions” in specified sets of circumstances, they test the limits and scope of the existing law. Parliament and the courts will have to decide important questions, such as whether the law of negligence will have to adapt to include “the reasonable robot” – and, if it does, who should pay for the damage caused when the robot’s conduct falls below the legally acceptable standard. We live in interesting technological and legal times.

 
