12/12/2017
Last month, the government released the Industrial Strategy White Paper for the UK. Labelled as “ambitious”, the white paper contains the Prime Minister’s plans to “boost the country’s economy, build on its strengths and embrace the opportunities of technological change.”
The government also announced it would be investing another £725 million, on top of the £1 billion previously committed for the first wave of challenge fund projects, including £246 million for next-generation battery technology and £86 million for robotics hubs across the UK.
The white paper makes clear that the UK intends to be at the forefront of the artificial intelligence (AI) and data revolution.
AI and the construction and manufacturing industries
AI will massively affect the construction sector, as it will many other industries, and the legal challenges this will present are yet to be determined. The only thing we can be relatively sure about is that nothing will happen quickly. Look at data protection, for example. The General Data Protection Regulation (GDPR) will come into force in May 2018. It will replace the Data Protection Act 1998, which was passed before phones were ‘smart’, social media was part of everyday life, and people did their shopping online. No wonder so many businesses, local bodies and charitable organisations are panicking because they do not feel they will be GDPR compliant in time.

Many believe the construction sector is ripe for disruption with AI and other technology changing the way work is performed. Examples listed in a recent article include: “Real-time collaboration, building information modelling in the cloud, 3D printing, augmented reality, survey drones, big data and the internet of things, as well as wearable technologies, gaming and digital workflows[1].”
AI could also be the answer to the UK’s manufacturing productivity woes. Recent reports showed that productivity in Britain is no higher now than it was just before the 2008 financial crisis, in stark contrast to the average annual growth of 2.1% recorded during the decade before the crash. Had the pre-crisis trend persisted, productivity would now be 20% higher[2]. One of the reasons highlighted for the low growth in productivity is businesses’ reluctance to invest in machines and new technology. This must change, however, if Britain wants to be a viable exporting economy post-Brexit. In the first quarter of 2017, North American manufacturers spent $516 million on industrial robots, a 32% jump from the same period a year earlier[3]. Japan, a world leader in utilising AI along with China, is now engaging robots to work in care homes for the elderly[4]. British industries must keep up, which is clearly what the white paper advocates.

How will construction and health and safety laws adapt to AI?
The million-dollar question (aside from what will happen to the millions of displaced workers) is: how will AI be regulated?
Regulators everywhere are struggling with how to apply the law to AI. The construction and manufacturing industries are only a small part of the challenge, but many of the fundamentals of how AI will be regulated are the same across all sectors.
For example, who is responsible if AI breaches health and safety law and causes injury or death? There is no doubt a robot can do a risk assessment better than a human. A machine can analyse thousands of possible scenarios in a matter of minutes and put together a strategy to minimise or eliminate danger completely. And before you argue that a robot could not take into account human fear and emotion in the face of danger, robots that can recognise and feel human emotions have already been developed[5]. But what if the machine makes a mistake on a risk assessment, either by accident or, as many fear could happen, deliberately? Or suppose a building constructed by AI turns out to have serious flaws? Who is liable? The owner of the machine? The manufacturer? In his new book, Life 3.0: Being Human in the Age of Artificial Intelligence[6], Max Tegmark references legal scholar David Vladeck, who suggests it could be the machine itself.
Here is how such an option would play out. Professor Vladeck proposes that machines, such as autonomous cars, be allowed and required to hold insurance. There is no reason why this concept could not be transferred to AI in almost any sector. Models with an excellent safety record would be granted lower premiums, probably lower than those granted to humans. Machines made by ‘sloppy’ manufacturers would be charged a higher premium, making them prohibitively expensive to own[7].

This, of course, brings up a whole series of other questions. Max Tegmark asks: if machines can hold insurance policies, can they own property? And if so, should they be allowed to vote? The conundrums thrown up by AI are endless.
The government is being proactive in looking into the questions of how AI will impact UK laws and ethics. In October 2017, a Lords Select Committee considered two weeks of public evidence from experts on questions such as[8]:
- “Has AI given rise to new and distinctive ethical issues?
- Who should be ethically accountable for the decisions made by AI systems?
- Will an AI system itself ever be held to account for its own decisions?
- How can an AI system be developed so that it is not discriminatory or unfair in its decision making?
- Does the ethical development and use of AI require regulation?
- What are the biggest opportunities and risks for the law in the UK over the coming decade in relation to the development and use of AI?
- If new legislation was to be introduced to deal with the issues presented by AI, should the UK Government go it alone, or work with other governments to create international frameworks for legislation?
- As AI systems become increasingly autonomous in practice, will the legal system need to change in order to reflect and accommodate this autonomy?
- Does the creation of some form of electronic personhood need to be considered in the UK?”

These questions have the potential to affect every industry and every person. As with all new technology, the law is likely to be forever playing catch-up to developments. There is nothing new in this. After all, child labour powered the industrial revolution for 50-100 years before the Factory Act (1833) and the Mines Act (1842) came into force, prohibiting the employment of children aged under ten years. The internet has been part of our lives since 1991, but most of it remains completely unregulated in many parts of the world. Therefore, it is doubtful that the laws relating to AI will keep pace with its rapid development.
We will endeavour to keep you updated as matters progress and advance over the next few years.
Fisher Scoggins Waters is a London based law firm specialising in construction, manufacturing, and engineering law. Please phone us on 0207 993 6960 for legal advice and representation in these areas or an emergency response.
[1] https://www.raconteur.net/business/construction-an-industry-ripe-for-tech-disruption
[2] https://www.ft.com/content/b6513260-b5b2-11e7-a398-73d59db9e399
[3] https://www.bloomberg.com/news/features/2017-10-18/this-company-s-robots-are-making-everything-and-reshaping-the-world
[4] http://www.scmp.com/week-asia/business/article/2104809/why-japan-will-profit-most-artificial-intelligence
[5] https://qz.com/838420/scientists-built-a-robot-that-feels-emotion-and-can-understand-if-you-love-it-or-not/
[6] ISBN 978-0-241-23719-9
[8] https://www.parliament.uk/business/committees/committees-a-z/lords-select/ai-committee/news-parliament-2017/ethics-and-law-evidence-session/