Driverless car makers could face jail if AI causes harm

Makers of driverless vehicles and other artificial intelligence systems could face jail and multi-million-pound fines if their creations harm workers, according to the Department for Work and Pensions.

Responding to a written parliamentary question, government spokesperson Baroness Buscombe confirmed that existing health and safety law "applies to artificial intelligence and machine learning software".

This clarifies one aspect of the law around AI, a subject of considerable debate in academic, legal and governmental circles.

Under the Health and Safety at Work Act 1974, directors found guilty of "consent or connivance" or neglect can face up to two years in prison.

This provision of the Act is "hard to prosecute," said Michael Appleby, a health and safety lawyer at Fisher Scoggins Waters, "because directors have to have their hands on the system."

However, when AI systems are built by startups, it might be easier to establish a clear link between the director and the software product.

Companies can also be prosecuted under the Act, with fines scaled to the firm's turnover. For companies with a turnover greater than £50 million, fines can be unlimited.

The Act has never been applied to artificial intelligence or machine learning software, so these provisions will need to be tested in court.

Image: Bosses at AI firms could face prosecution if their technology causes harm

In practice, said Chris Jackson, a partner in the health and safety team at law firm Burges Salmon, the law could apply to a wide range of activities.

"The general duties… deliberately apply a wide obligation to manage all risks generated by a work activity," he said. This could include harm to members of the public caused by unsafe AI.

Perhaps the true significance of the announcement lies in the responsibilities the ruling gives the Health and Safety Executive (HSE).

The HSE now becomes one of numerous regulators of AI, a group that includes the Information Commissioner's Office and the recently opened Centre for Data Ethics and Innovation.

For some, the ruling suggests that existing legal frameworks are well equipped to cope with new technology.

"There is nothing magical about AI or machine learning, and someone building or deploying it needs to comply with the relevant regulatory framework," said Neil Brown, director of legal technology firm decoded:Legal.

However, others questioned the HSE's ability to understand complex technology which, under the current regime, companies are left to test themselves.

"I'm sceptical both that industry's own tests will be deep and comprehensive enough to catch important issues, and that the regulator is expert enough to meaningfully scrutinise them for rigour," said Michael Veale, researcher in responsible public sector machine learning at University College London.

"While killer robots might be the first thing that comes to mind here," Mr Veale added, "less flashy systems designed to manage workers, such as to track them around the factory or warehouse floor, set and supervise their tasks, and monitor their activities in detail, can have complex mental and physical effects that health and safety regulators need to grapple with."

Sky News has contacted the HSE for comment.
