Love, Cybersecurity & Hacked Robots - Can Robot Manufacturers Be Held Liable for Murder Perpetrated by Hacked Sexbots?

“Mark my words - A.I. is far more dangerous than nukes” - Elon Musk

Introduction

Whilst the idea of a human apocalypse brought about by artificial intelligence (“A.I.”) has featured in Hollywood films for decades, cybersecurity experts have warned that it is not just military A.I. that poses a threat to humanity: unassuming sex robots can be equally dangerous.

“She understands COVID-19…” - Brick Dollbanger, Robot Beta Tester

The COVID-19 crisis has accelerated the adoption of technology across every aspect of human life. With physical human interaction turning “deadly”, it is no surprise that the sex robot industry has experienced a boom. Yet, as with all infant technology, cybersecurity experts have warned that these new robots can pose a grave threat to their users.

Warning!

“Hacking into many modern-day robots, including sexbots, would be a piece of cake compared to more sophisticated gadgets like cellphones and computers…

Hackers can hack into a robot or a robotic device and have full control of the connections, arms, legs and other attached tools like in some cases knives or welding devices…

Once hacked, they could absolutely be used to perform physical actions for an advantageous scenario or to cause damage…” - Dr Nick Patterson

The recent decision in HKSAR v. Mak Wan-ling [2020] HKCFI 3069 highlighted the increasing number of medical manslaughter cases in Hong Kong, raising awareness within the medical community about criminal liability for medical negligence. However, such retrospective self-reflection is, for all intents and purposes, too little too late.

The same can be said of the tech industry. Coders should, one hopes, pay closer attention to vulnerabilities in their platforms and remain mindful during development that their inventions can cause danger, rather than reflecting only after the fact.

Cybersecurity vulnerabilities resulting in death have already been documented. On 11 September 2020, a patient died after hackers disabled the computer systems at Düsseldorf University Hospital. What had begun as a routine transfer turned deadly when inter-hospital logistics were crippled by the cybercrime. The attack triggered Germany’s first cybersecurity manslaughter investigation (as distinct from medical manslaughter).

Manslaughter by Cybersecurity Negligence

As reaffirmed in Mak Wan-ling, the chief elements of manslaughter by gross negligence include:

1. The defendant owed an existing duty of care to the deceased;
2. The defendant negligently breached that duty of care;
3. It was reasonably foreseeable that the breach of that duty gave rise to a serious and obvious risk of death;
4. The breach of that duty caused the death; and
5. The circumstances of the breach were truly exceptionally bad and so reprehensible as to justify the conclusion that it amounted to gross negligence and required criminal sanction.

In the present scenario, physical interaction with a robot may entail certain health risks (e.g. heart attacks, muscle strains, etc.). Any glitch in a robot’s operating system may cause serious harm to an end user.

Furthermore, any operating system can be compromised. Where the program is interactive, personal data of the end user will be processed. It is therefore crucial for manufacturers to put appropriate safeguards in place.

As such, the risks of using a robot platform are foreseeable, and any lapse in ensuring its security may amount to negligence. Where the harm is both foreseeable and unmitigated, the manufacturer may be at risk of being found grossly negligent and held liable for the consequential harm that a user may suffer.

Pre-Emptive Mitigation

Whilst liability for manslaughter by cybersecurity negligence is a danger for tech developers, they can pre-emptively protect themselves. The most traditional approach is the use of a risk disclaimer: robot developers can require prospective users to confirm their understanding of the risks associated with the use of the product before its activation.

That said, the best protection will still be to ensure the delivery of a quality product. For example, in Mak Wan-ling, the patient was duly advised of the risks associated with the procedure. What ultimately doomed the defendant’s practice was that the quality of care delivered was so sub-standard that any reasonable practitioner would have found it indefensible.

Accordingly, developers ought to make certain that their platforms are better protected than devices such as cell phones. Unfortunately, that may not yet be the case among existing sex robot developers.

Conclusion

In an age where A.I. has the ability to physically manipulate its surroundings, developers should remember:

Perfect your product! The premature launch of a product with a backdoor may attract serious liability. Where a life is involved, great care must be taken.
Ensure that the A.I. will do no harm to other sentient beings. The ability to wield tools means great power, and with great power comes great responsibility.
Ensure that users are aware of the risks of the product! Going back to basics, make sure your product is legally certified before launch. Retrospective reflection will be too late.

Solicitor, ONC Lawyers

Joshua Chu is a Litigation Solicitor qualified to practice in Hong Kong. Before becoming a lawyer, Joshua worked in the healthcare industry serving as the IT department head at a private hospital as well as overseeing their procurement operations.

Since embarking upon his legal career, his past legal experience includes representing the successful party in one of Hong Kong’s first cryptocurrency litigation cases as well as appearing before the Review Body on Bid Challenges under the World Trade Organization Government Procurement Agreement concerning a health care industry related tender.

Today, Joshua’s practice is mainly focused in the field of dispute resolution and technology law.

Aside from his legal practice, Joshua is currently a Senior Consultant with a regulatory consulting firm founded by ex-SFC regulators, as well as a management consultant for the Korean Blockchain Centre.

Partner, ONC Lawyers

Michael Szeto is a litigation partner of ONC Lawyers and heads the firm’s Employment practice. Prior to joining the firm, he had practised with various prominent law firms in Hong Kong. He has many years of experience in handling complex commercial dispute resolution, shareholders’ and joint venture disputes, bankruptcy and insolvency matters, debt recovery and mortgagee actions. He also routinely deals with regulatory actions and compliance matters under the Securities and Futures Ordinance, the Hong Kong listing rules and anti-money laundering laws and guidelines.

On the employment side, Michael often advises on various contentious and non-contentious employment matters, covering contract reviews, termination disputes, injunctive relief, discrimination and harassment claims, data privacy matters, as well as advice on matters relating to team moves, remuneration packages and employee incentive schemes. He is a frequent author of employment articles in industry publications and presenter to legal and human resource professionals.

Michael’s broad clientele includes listed companies, directors, shareholders, local and overseas banks, financial institutions, local and international corporations as well as statutory bodies.