A 2013 report to the UN Human Rights Council calling for a moratorium on the development of lethal autonomous robots got me pondering the state of "robot law"; that is, laws that regulate the operation of autonomous robots. While we are probably still years away from having to ask the question "can a robot commit murder?", an act that requires an intention to kill somebody else, laws are starting to emerge that regulate the use of autonomous vehicles on land and in the air.
Regulating autonomous vehicles
As described in Robot cars, part 1: Parking the future for now and Robot cars, part 2: Convoys of the near future, and as the OU/BBC World Service co-produced Click radio programme discovered, driverless cars are increasingly taking to the roads. OU emeritus professor John Naughton has further observed that while the cost of producing a fully autonomous car may be prohibitively expensive, limiting widespread adoption, robotic technologies that allow cars to operate in auto-pilot mode in certain conditions are likely to become increasingly commonplace (Do autonomous cars need to cost so much?). So how is the law responding?
As far as state law goes in California, home to Silicon Valley and the many technology startups associated with it, there is an attempt to distinguish between "auto-pilot" driver support applications and fully autonomous operation:
An autonomous vehicle does not include a vehicle that is equipped with one or more collision avoidance systems, including, but not limited to, electronic blind spot assistance, automated emergency braking systems, park assist, adaptive cruise control, lane keep assist, lane departure warning, traffic jam and queuing assist, or other similar systems that enhance safety or provide driver assistance, but are not capable, collectively or singularly, of driving the vehicle without the active control or monitoring of a human operator.
In contrast, an "autonomous technology" means technology that has the capability to drive a vehicle without the active physical control or monitoring by a human operator and "autonomous vehicle" means any vehicle equipped with autonomous technology that has been integrated into that vehicle.
Are we seeing signs here of "human law" versus "robot law", with distinctions made between humans and other autonomous (that is, self-regulating and independent decision-making) entities?!
District of Columbia law takes a similar tack, whereby “autonomous vehicle” means a vehicle capable of navigating district roadways and interpreting traffic-control devices without a driver actively operating any of the vehicle’s control systems, along with similar disclaimers about what does not constitute an autonomous vehicle. In addition, it defines a "driver" as "a human operator of a motor vehicle with a valid driver's licence."
So - autonomy kicks in when the human driver cedes continuous and active operation of the vehicle to the vehicle itself.
On the other hand, District of Columbia law also requires that the vehicle:
- "has a manual override feature that allows a driver to assume control of the autonomous vehicle at any time"
- "has a driver seated in the control seat of the vehicle while in operation who is prepared to take control of the autonomous vehicle at any moment";
- and has a driver who "is capable of operating in compliance with the District’s applicable traffic laws and motor vehicle laws and traffic control devices."
Florida state law considers the role of the operator in the following terms:
a person shall be deemed to be the operator of an autonomous vehicle operating in autonomous mode when the person causes the vehicle's autonomous technology to engage, regardless of whether the person is physically present in the vehicle while the vehicle is operating in autonomous mode.
The Florida law also prescribes that autonomous vehicles may, for the time being, only operate on public roads for testing purposes, and that "a human operator shall be present in the autonomous vehicle such that he or she has the ability to monitor the vehicle’s performance and intervene". However, it is notable that the definition of an operator suggests they need not be "physically present in the vehicle while the vehicle is operating in autonomous mode"; a similar statement also appears in the aforementioned California state law:
An "operator" of an autonomous vehicle is the person who is seated in the driver's seat, or if there is no person in the driver's seat, causes the autonomous technology to engage.
Regulating drones
As well as concerning themselves with autonomous vehicles, US state legislatures have also been looking at regulations around autonomous and remotely controlled drones. Generally, legislation is being used to curb the use of drones for surveillance purposes. A recent Tennessee Senate bill, for example, declares that "[n]otwithstanding any law to the contrary, no law enforcement agency shall use a drone to gather evidence or other information", although exemptions are included relating to terrorism, searches effected under a warrant, or if "swift action is needed to prevent imminent danger to life".
While many legislatures are taking similar steps to limit the use of drones for surveillance, notwithstanding exclusions similar to the ones adopted by Tennessee, some are also pre-emptively restricting the "weaponisation" of drones, or unmanned aerial vehicles (UAVs) as they are also known (which is to say, "aircraft that are operated without the possibility of direct human intervention from within or on the aircraft"). So, for example, a recent Massachusetts bill states that "Any use of an unmanned aerial vehicle shall fully comply with all Federal Aviation Administration requirements and guidelines. Unmanned aerial vehicles may not be equipped with weapons" [my emphasis]; Oklahoma requires that "No person shall operate an unmanned aircraft system that contains, mounts, or possesses any lethal or nonlethal weapon or weapons system of any kind"; and North Dakota declares that "[a] state agency may not authorize the use, including grant a permit to use, of an unmanned aircraft while armed with any lethal or nonlethal weapons, including firearms, pepper spray, bean bag guns, mace, and sound-based weapons" [again, my emphasis].
If you would like to learn more about drone legislation being explored in the US, try the (admittedly partisan) Drone Legislation: What’s Being Proposed in the States?. As well as US state legislation, there are developments at the federal level too, through the US Federal Aviation Administration's Unmanned Aircraft Systems (UAS) Initiative. In the UK, the Civil Aviation Authority publishes guidance on Unmanned Aircraft and Aircraft Systems, while at the European level, work is underway on a European strategy for the civil applications of Remotely Piloted Aircraft Systems (RPAS).
However, not all states are following suit. If we look again at the Tennessee bill, we see that a drone is defined as a powered, aerial vehicle that: does not carry a human operator; uses aerodynamic forces to provide vehicle lift; can fly autonomously or be piloted remotely; can be expendable or recoverable; and can carry a lethal or nonlethal payload [my emphasis].
Regulating lethal autonomous robots
While the question of whether or not the weaponisation of drones should be outlawed appears to have a mixed response across US states, none of them (as yet) appear to have started to address the question of lethal autonomous robots; that is, robots whose autonomy extends to the "right" to make a kill decision (a determination to use lethal force, or more generally, a decision to use a weapon system against an identified target).
According to Christof Heyns in his May 2013 report to the UN Human Rights Council, Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, the use of autonomous robot warriors may have grave implications for our humanity:
Lethal autonomous robotics (LARs) are weapon systems that, once activated, can select and engage targets without further human intervention. ... [T]heir deployment may be unacceptable because no adequate system of legal accountability can be devised, and because robots should not have the power of life and death over human beings. ... [Considerations made on the basis of International Humanitarian Law derive from] the belief that a human being somewhere has to take the decision to initiate lethal force and as a result internalize (or assume responsibility for) the cost of each life lost in hostilities, as part of a deliberative process of human interaction. This applies even in armed conflict. Delegating this process dehumanizes armed conflict even further and precludes a moment of deliberation in those cases where it may be feasible. Machines lack morality and mortality, and should as a result not have life and death powers over humans. This is among the reasons landmines were banned.
For aficionados of dystopian near-future science fiction, elements of the report seem as if they might come straight from the pages of their latest read:
"For societies with access to it, modern technology allows increasing distance to be put between weapons users and the lethal force they project. ... Lethal autonomous robotics (LARs), if added to the arsenals of States, would add a new dimension to this distancing, in that targeting decisions could be taken by the robots themselves. With the contemplation of LARs, the distinction between weapons and warriors risks becoming blurred, as the former would take autonomous decisions about their own use.
Or how about this? "LARs present the ultimate asymmetrical situation, where deadly robots may in some cases be pitted against people on foot".
Thinking back to the case of civilian autonomous vehicles, where the responsibilities associated with driving a vehicle seem generally to lie with the operator, who is the responsible party when it comes to LARs making their own lethal force decisions? The report suggests that "Command responsibility should be considered as a possible solution for accountability for LAR violations. Since a commander can be held accountable for an autonomous human subordinate, holding a commander accountable for an autonomous robot subordinate may appear analogous..." However:
traditional command responsibility is only implicated when the commander "knew or should have known that the individual planned to commit a crime yet he or she failed to take action to prevent it or did not punish the perpetrator after the fact." It will be important to establish, inter alia, whether military commanders will be in a position to understand the complex programming of LARs sufficiently well to warrant criminal liability.
Accepting that a lack of technical understanding absolves responsibility seems to me a very dangerous route to take. That said:
It is an underlying assumption of most legal, moral and other codes that when the decision to take life or to subject people to other grave consequences is at stake, the decision-making power should be exercised by humans. The Hague Convention (IV) requires any combatant “to be commanded by a person”.
Once again we see the invocation of "humans" and "person[s]", with the implication that a person is also "a human". But do we need to start making explicit mention of the involvement of "a human" party in the creation of legislation?!
While the report cautions against the use of phrases such as "killer robots", a term that appeared in the subtitle of the report LOSING HUMANITY: The Case against Killer Robots from campaigners Human Rights Watch and the International Human Rights Clinic in November 2012, there is no doubt that the report expresses a cautionary message:
There is clearly a strong case for approaching the possible introduction of LARs with great caution. If used, they could have far-reaching effects on societal values... there is widespread concern that allowing LARs to kill people may denigrate the value of life itself. ... If left too long to its own devices, the matter will, quite literally, be taken out of human hands.
At this point, it is maybe also worth noting the report's comments about the legality of lethal autonomous robots and concerns about "the ability of the international legal system to preserve a minimum world order". This tension "come[s] on the heels of the problematic use and contested justifications for drones and targeted killing", as described, for example, by another UN Human Rights Council special rapporteur report on targeted killings and an April 2013 US Senate Judiciary Committee Hearing on Drone Wars: The Constitutional and Counterterrorism Implications of Targeted Killing. (In the UK, an April 2013 YouGov survey on British attitudes to drones suggests that British sentiment towards drone attacks is mixed, although attitudes towards LARs were not covered.) When addressing the ethics and legality of LARs, we need to distinguish clearly between the issues pertaining to their operation as autonomous entities in and of themselves, and a projection onto them of our own opinions relating to current policy surrounding the use of drones for targeted killing, notwithstanding the fact that LARs may also be used for targeted killings.
As far as Special Rapporteur Heyns' report goes, the recommendations call for "national moratoria on at least the testing, production, assembly, transfer, acquisition, deployment and use of LARs until such time as an internationally agreed upon framework on the future of LARs has been established".
In addition, the report suggests that a panel of experts from different disciplines "[p]ropose a framework to enable the international community to address effectively the legal and policy issues arising in relation to LARs, and make concrete substantive and procedural recommendations in that regard"; this expert panel should also provide an "[a]ssessment of the adequacy or shortcomings of existing international and domestic legal frameworks governing LARs".
If the recommendation to the UN Human Rights Council is followed and the panel convenes, it will be interesting to see whether there is any interplay between the various forms of "robot law" emerging around autonomous vehicles and civilian drones and the report of the expert panel; or, conversely, the extent to which recommendations from the UN Human Rights Council feed into policy development and legislation at the national and local level.
What do you think about the way in which the law should treat robots? Should a human operator always be responsible for a robot's actions, or should robots be responsible for their own actions? In the latter case, if a robot acts irresponsibly, how should we treat it? And if robots are to have responsibilities, should they also have rights? Feel free to join in this very current debate in the comments below.
If you would like to learn more about robotics, you may consider studying the first-level course Technologies in practice from The Open University. If you are interested in designing and engineering the future, you could look at the Open University's Engineering courses or try a free taster course on Introducing Engineering. If you are interested in learning more about the role of ethics in computing and ICT, why not try out this free OpenLearn unit on Introducing ethics in Information and Computer Sciences?
Comments
Carolyn Djebbari - 21 July 2013 8:28pm
The questions raised are important but how are we supposed to answer correctly when the general public are so ill informed on matters concerning AI. How far has technology got? How will it impact upon society? Why are we not discussing it now rather than when the decisions have been made without adequate public consultation? After reading this article I wrote a blog in regard to these issues and others to stimulate discussion http://clamorbox.blogspot.co.uk/2013/07/blog-post.html#.UewzpDu1GSo
Tony Hirst - 24 July 2013 11:24am
You are quite right that public perceptions of artificial intelligence and the limits of robotics are maybe at odds with what is currently possible and pragmatically achievable. Perceptions are as likely to be coloured by science fiction movies as they are by recent news stories, although news stories also tend to favour the quirky or extreme example.
Every so often the research councils support public engagement activities around robotics, so the next time one comes around, I'd encourage you to get involved:-)
As to what the public currently does think, a recent Sciencewise report [ http://www.sciencewise-erc.org.uk/cms/assets/Uploads/Robotics-ReportFINAL02-07-2013.pdf ] suggests that the UK public are happy for robots to play a role in the workplace and supporting the military, but they are less comfortable with the idea of domestic robots. Assuming you don't class your washing machine as a robot of course!