06/08/2018
In early July, an owner/operator of an SME was jailed for eight months for breaching Section 3(1) of the Health and Safety at Work Act, a breach which resulted in the horrific deaths of two employees, who also happened to be brothers.
Simon Thomerson, who ran Clearview Design and Construction, had been contracted by a company to refurbish several units in Hertfordshire. During the course of the work, he supplied the brothers and another man with highly flammable ‘thinners’, which were poured onto the floor to remove adhesive from carpet tiles. Somehow, the thinners ignited; the brothers suffered almost 100% burns and died within 12 hours.
The third worker also suffered burns but survived.
According to Health and Safety at Work Magazine, HSE inspector Paul Hoskins said:
“This tragic incident led to the wholly avoidable death of two brothers, Ardian and Jashar, destroying the lives of their young families.
“The risks of using highly flammable liquids are well known, and employers should make sure they properly assess the risks from such substances, and use safer alternatives where possible. Where the use of flammable solvents is unavoidable, then the method and environment must be strictly controlled to prevent any ignition.”
Mr Thomerson was held liable for the health and safety breach which occurred at his business because, either deliberately or through ignorance, he failed to put in place proper risk management procedures and to protect the safety of his employees and the public. He is a conscious being who, because he can empathise, could foresee the pain his acts or omissions might cause others. He also possesses free will, meaning he could (or should have been able to) foresee the consequences of his actions or inaction and choose between them.

Health and Safety and Artificial Intelligence
In 2018, because artificial intelligence (AI) does not experience consciousness, the courts do not have to concern themselves with the question of whether a machine can be in breach of health and safety law, or any other type of law.
But as AI continues to advance, the issue of consciousness will become a legal question to be debated both in Parliament and in the courts. As discussed in our previous blog on AI and legal liability[PM1], as AI grows more sophisticated it is almost inconceivable that a machine will not eventually be held accountable for its actions. But can you impose legal liability on a non-conscious being? And what standard will we use for declaring AI ‘conscious’?

What is consciousness?
As humans, we take consciousness for granted. Most of us don't trouble ourselves thinking about the fact that we are experiencing our lives and surroundings; we just know we are.
The issue of consciousness is one of the thorniest of all philosophical questions, and one that, as mere lawyers, we do not feel qualified to answer. Instead, we quote a few people who are.
John Locke gave us the modern concept of consciousness in his 1690 Essay Concerning Human Understanding, in which he defined consciousness as "the perception of what passes in a man's own mind". More recently, the Routledge Encyclopaedia of Philosophy defined consciousness as follows:
Consciousness—Philosophers have used the term 'consciousness' for four main topics: knowledge in general, intentionality, introspection (and the knowledge it specifically generates), and phenomenal experience... Something within one's mind is 'introspectively conscious' just in case one introspects it (or is poised to do so). Introspection is often thought to deliver one's primary knowledge of one's mental life. An experience or other mental entity is 'phenomenally conscious' just in case there is 'something it is like' for one to have it. The clearest examples are: perceptual experience, such as tastings and seeings; bodily-sensational experiences, such as those of pains, tickles and itches; imaginative experiences, such as those of one's own actions or perceptions; and streams of thought, as in the experience of thinking 'in words' or 'in images'. Introspection and phenomenality seem independent, or dissociable, although this is controversial.
Max Tegmark, author of Life 3.0: Being Human in the Age of Artificial Intelligence, defines consciousness simply as “subjective experience”.
These definitions provide an understanding of consciousness with regard to humans, but how do we define consciousness as it might apply to a machine? Dr Soumya Banerjee, a researcher at the University of Oxford, tackles this problem in his paper, A framework for designing compassionate and ethical artificial intelligence and artificial consciousness[PM2]. He tentatively defines a conscious machine as:
“A computing unit that can process information and has feedback into itself.”
Dr Banerjee believes a computer can be made to recognise itself as a computer.
“We can show computer images of other computers to help it recognize itself (using deep learning-based image recognition algorithms). We can also, for example, show the machine images of a smartphone, birds and buildings to reinforce the concept that it is not any of these things (non-self). Finally, we can design an algorithm to select out all images of non-self; all that remains is self. This kind of an algorithm can be used to design a sense of self in machines. Such a supervised learning approach is similar to negative selection in biology where the immune system learns to discriminate between all cells in the body (self), versus all that is foreign and potentially pathogenic (non-self)”.
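As a rough illustration of this supervised ‘self versus non-self’ idea, a minimal sketch (our own illustration, not code from Dr Banerjee's paper) might train a small binary image classifier to label pictures of computers as ‘self’ and pictures of anything else (smartphones, birds, buildings) as ‘non-self’. The Python/PyTorch snippet below uses a hypothetical toy model and random stand-in data in place of real photographs:

# Minimal sketch (our illustration, not Dr Banerjee's code) of the supervised
# "self vs non-self" idea: a binary classifier learns to label images of
# computers as "self" (1) and images of anything else (phones, birds,
# buildings) as "non-self" (0). Random tensors stand in for real image data.
import torch
import torch.nn as nn

class SelfRecogniser(nn.Module):
    """Small CNN that scores how likely an image is to depict 'self' (a computer)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 1)  # 64x64 input -> 16x16 feature map

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))  # raw logit: > 0 means "self"

# Stand-in training batch: 32 RGB images of 64x64 pixels, half labelled "self",
# half "non-self". In practice these would be photographs of computers versus
# smartphones, birds and buildings, as in the paper's example.
images = torch.randn(32, 3, 64, 64)
labels = torch.cat([torch.ones(16), torch.zeros(16)])

model = SelfRecogniser()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):  # toy training loop
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# "Negative selection": anything the trained model scores as non-self is
# filtered out; whatever remains is treated as self.
with torch.no_grad():
    is_self = torch.sigmoid(model(images).squeeze(1)) > 0.5

Anything the trained classifier scores as non-self is discarded, and whatever remains is treated as ‘self’, mirroring the negative-selection analogy Dr Banerjee draws with the immune system.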
Max Tegmark identified four conditions which a system's information processing must meet for consciousness to be achieved; these are:
1. It must have substantial information-storage capacity,
2. It must have substantial information-processing capacity,
3. It must be fairly independent from the rest of the world, otherwise it could not subjectively feel it had any independent existence, and
4. It must be integrated as a unified whole.
Mr Tegmark writes:
“The first three principles imply autonomy: that the system is able to retain and process information without much outside interference, hence, determining its own future. All four principles together mean that the system is autonomous, but its parts aren’t”.
The questions of empathy and free will
As well as consciousness, it can be argued that to assign legal liability to a machine, that machine must be capable of empathy and free will.
Empathy
For a person to be liable in negligence, three things must be present:
a) The defendant owed a duty of care to the claimant,
b) That duty was breached, and
c) The breach of duty caused the claimant to suffer damage.
It is difficult to see how the foreseeability element of proving negligence (c) can be achieved without the defendant, whether a person or an organisation (run by people), having empathy. It is empathy which allows us to feel another's pain and distress and to appreciate the consequences of economic loss. Therefore, if an independent AI machine cannot feel empathy, how can it foresee the damage which may be caused by breaching its duty?
Dr Banerjee links the ability to have empathy with a sense of self. And many believe that ensuring AI has the capacity for empathy is not only possible but vital. AI theorist Eliezer Yudkowsky once wrote[PM3]: "The AI neither hates you nor loves you, but you are made out of atoms that it can use for something else."
Robots that can recognise and respond to human emotions are already available; for example, Nao[PM4], which is used in classrooms to assist autistic children. But empathy goes much further than merely being able to understand human emotions; it would require AI to have a range of experiences, including joy, anger, envy, success and failure, grief, and even love[PM5].
And for AI to develop true empathy, consciousness and independent thought are necessary.
Free will
There is no question at present as to who is legally liable when a computer makes a mistake and injures a human being: it is the person who created, programmed or is operating the machine. The human had the ability to choose between the various options presented, known as ‘free will’.
Can a machine develop free will? If machines acquire consciousness, there is little reason to believe such a possibility is out of the question, for example through the interaction of a storytelling algorithm with one that responds to the environment[PM6]. But the ethical questions presented by such a development are mind-boggling. If a machine is conscious and has free will, then for a human to control it could be considered little more than slavery. What are the consequences of a machine having consciousness and free will, but little or no empathy? And could humans accept that they may not, in fact, be unique, and that something other than us can acquire true free will?
The issue of consciousness and AI brings forth more questions than can possibly be answered at this time. But for the law, especially health and safety law, to develop in sync with what the greatest minds believe is the inevitable future of humanity, these uncertainties must be carefully considered, so that we can shape the world we want to live in rather than reactively deal with one created for us.
Fisher Scoggins Waters is a London-based law firm specialising in construction, manufacturing, and engineering law. Please phone us on 0207 993 6960 for legal advice and representation in these areas, or for an emergency response.
[PM1]http://www.fisherscogginswaters.co.uk/blog/article/304/who-is-legally-liable-when-ai-kills-or-injures
[PM2]https://peerj.com/preprints/3502/
[PM3]https://bigthink.com/philip-perry/can-ai-develop-empathy
[PM4]https://www.softbankrobotics.com/emea/en/robots/nao/find-out-more-about-nao
[PM5]https://www.thedailybeast.com/can-robots-fall-in-love-and-why-would-they
[PM6]http://serendipstudio.org/sci_cult/evolit/s05/web2/lpaterek.html