17/04/2018
In our last article (Part 1), we looked at some of the theoretical guidelines that have been proposed to answer the question of who is legally liable if AI kills or injures a human. We concluded that one of the key concerns about regulating AI is the risk of stifling innovation. However, if human history is anything to go by, rampant, unchecked development of AI is guaranteed to lead to problems. This is why today, Monday 16th April 2018, governments will meet in Geneva to discuss whether and how to regulate lethal autonomous weapons systems (LAWS). Also known as killer robots, these AI-powered ships, tanks, planes and guns could fight the wars of the future without any human intervention[1]. Although there are arguments that LAWS could reduce the casualties of warfare (for example, by accurately distinguishing between civilian and military targets), there is understandable nervousness that a global arms race for AI weapons could develop, allowing terrorist groups and rogue states to use such weapons for mass murder. Some states are calling for new international laws to regulate, or even completely ban, the development of LAWS[2].
A push for an international ethical and legal framework
In his book, Life 3.0, Max Tegmark makes clear that, as a race, humans have control over how AI is developed, regulated and used. He argues that we must not see ourselves as passive bystanders to the new technology, but as being in full control of how we want the future to be shaped.
One of the first things that must be established before determining liability when a robot’s actions kill or injure a person is this: what is a robot? The EU RoboLaw report defined a robot as follows:[3]
According to the most widespread understanding, a robot is an autonomous machine able to perform human actions. Three complementary attributes emerge from such a definition of robot: They concern: 1) physical nature: it is believed that a robot is unique since it can displace itself in the environment and carry out actions in the physical world. Such a distinctive capability is based on the assumption that a robot must possess a physical body. Indeed, robots are usually referred to as machines; 2) autonomy: in robotics it means the capability of carrying out an action on its own, namely, without human intervention. Autonomy is usually assumed to be a key factor in qualifying a thing as a “robot” or as “robotic”. In fact, in almost all dictionary definitions, including authoritative sources such as the International Standard Organisation (ISO 13482), there is always a reference to autonomy. Finally, 3) human likeness: the similarity to human beings. The idea that a robot should be humanoid in its appearance and behaviour is deeply rooted in the imaginary of people as a result of the effects of popular culture and our tendency to anthropomorphism. However,…[although] likeness is still pursued by many roboticists (e.g. Honda ASIMO), the number of robots which do not have a human-like design, such as drones or surgical robots, is increasing.
Can robots be independently liable?
Under existing law, AI machines cannot themselves be liable for negligent acts or omissions that cause damage to third parties.
However, manufacturers, owners, or users of robotic technologies may be held responsible for damage caused by robots if the robot’s behaviour can be traced back to them, and if they could have foreseen and avoided that behaviour under the rules of fault liability. Moreover, they can be held strictly liable for the acts or omissions of the robot, for example if the robot can be qualified as a dangerous object, or if it falls under product liability rules.
So far, so easy.
But what happens when the goal of building AI with human-level intelligence is reached? Holding the manufacturers, owners or users of such a machine liable for its actions could be seen as the equivalent of holding a parent responsible for the negligent or deliberate actions of their 25-year-old child. What’s more, once we reach the stage of creating AI with human-level intelligence, it will be the AI that develops and creates the robots, not humans. So who is liable then? And let’s not even get started on trying to figure out how to regulate AI capable of ‘superintelligence’, with intelligence far beyond that of a human being.
Before dismissing such possibilities as ‘Terminator’ movie-style fantasy, think for a moment about how AI already outperforms humans. It can translate a passage into any one of 300 languages in a fraction of a second, and instantly work out the optimal driving route to avoid all traffic[4]. And remember, Artificial General Intelligence, whereby AI mimics a human’s ability to learn purely by observing and interacting with the world, is the ultimate goal of AI researchers. When asked when Artificial General Intelligence could successfully be developed, experts’ estimates range from 26–50 years from now (11% of respondents) to more than 50 years (41%)[5].
A global solution
On 9th March 2018, the European Group on Ethics in Science and New Technologies issued a Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems.
The Statement issued a warning that existing efforts to develop solutions to the ethical, societal and legal challenges AI presents are a “patchwork of disparate initiatives”.
It added that "uncoordinated, unbalanced approaches in the regulation of AI" risked "ethics shopping", resulting in the "relocation of AI development and use to regions with lower ethical standards".
The group believes it is better to chart a course that will “pave the way towards a common, internationally recognized ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics, and ‘autonomous’ systems”.[6]
The group has called for experts to form a body charged with developing a set of ethical guidelines for AI based on EU law[7].
Research commissioner Carlos Moedas commented in The Register:[8]
“Artificial intelligence has developed rapidly from a digital technology for insiders to a very dynamic key enabling technology with market creating potential.
“And yet, how do we back these technological changes with a firm ethical position? It boils down to the question: what society do we want to live in?”
Final words
What sort of society do we want to live in? That is the ultimate question we must all ask ourselves before the development of AI runs away from us. When it comes to legal liability, governments will have to work a lot faster than they have in the past to ensure that regulation and protection for humans progress at the same rate as technology. A glaring example of the consequences of failing to do this can be seen in the arena of data collection and the Cambridge Analytica/Facebook scandal. But the incoming GDPR, which will replace the 20-year-old Data Protection Act 1998, will change the way companies such as Google and Facebook can collect and use personal data to target advertisements. And there have been calls for Facebook to make the GDPR the “baseline standard for all Facebook services”[9].
Will we have to wait for a similar wake-up call when it comes to AI and legal liability? Will people and corporations be held accountable for a robot’s actions that were unforeseeable? Such questions must be answered now, before the technology develops and runs away from us. Nothing is pre-destined; at the moment, we humans are in control.
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.” —Elon Musk warned at MIT’s AeroAstro Centennial Symposium
Fisher Scoggins Waters is a London based law firm specialising in construction, manufacturing, and engineering law. Please phone us on 0207 993 6960 for legal advice and representation in these areas or an emergency response.
[1] https://www.theguardian.com/technology/2018/apr/09/killer-robots-pressure-builds-for-ban-as-governments-meet
[2] https://www.unog.ch/80256EDD006B8954/(httpAssets)/E9BBB3F7ACBE8790C125825F004AA329/$file/CCW_GGE_1_2018_WP.1.pdf
[3] http://www.robolaw.eu/RoboLaw_files/documents/robolaw_d6.2_guidelinesregulatingrobotics_20140922.pdf
[4] https://venturebeat.com/2017/11/21/ai-wont-peak-at-human-intelligence/
[5] https://nickbostrom.com/papers/survey.pdf
[6] http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf
[7] http://europa.eu/rapid/press-release_IP-18-1381_en.htm
[8] https://www.theregister.co.uk/2018/03/09/european_lawmakers_experts_ethics_ai/
[9] https://techcrunch.com/2018/04/09/facebook-urged-to-make-gdpr-its-baseline-standard-globally/