The most widely read magazine for Canadian lawyers
Issue link: https://digital.canadianlawyermag.com/i/1477874
in a period of transition, whether the current legislative framework is sufficient."

Studies show AI can perform as well as or better than humans at several "key healthcare tasks," according to a 2019 article in the UK's Royal College of Physicians' Future Healthcare Journal. Doctors use machine learning to predict treatment protocols and detect cancer in radiology images. They use natural language processing to create, analyze, and classify clinical documentation and published research. AI-embedded surgical robots help surgeons with vision, executing precise incisions, and stitching wounds. And AI is being developed for diagnosis and disease treatment.

According to Paul Harte, a medical malpractice trial lawyer, AI represents an "obvious opportunity" for tasks that humans do not do well. He says that tort law may soon incorporate a requirement to use certain AI tools into a doctor's standard of care. "At some point, you're going to see physicians and hospitals being held liable for not using AI."

"The progress of AI should not be halted as a result of a fear of liability"
Paul Harte, Harte Law

In terms of liability, current legal structures will adequately accommodate the increased use of AI in health care, says Harte. But this technology introduces a third player into the process: the developer of the AI system. While there are well-known class actions involving failed hip implants and defective hernia meshes, product liability is "relatively uncommon" in medical malpractice. AI will introduce new subject matter and multiple parties with overlapping liability. Harte says that these new elements will make an already complex area of litigation even more complicated.

Increased AI use will also affect informed consent discussions with patients, says Harte. Patients will likely be entitled to know the extent to which a doctor relies on AI.
And if the doctor is using a reputable, high-quality diagnostics tool but opts to deviate from its diagnosis, the doctor will probably be required to justify that decision and to allow patients to make an informed decision as to whether they accept the doctor's rationale, he says.

Harte adds that regulation is a critical component. In the research paper "Regulating the Safety of Health-Related Artificial Intelligence," authors Michael Da Silva, Colleen Flood, Anna Goldenberg, and Devin Singh examine Health Canada's approach to regulating AI in health care and identify three safety-related concerns: general safety, algorithmic bias, and privacy/security. How the government classifies a type of AI technology as a medical device is one potential regulatory gap, says Da Silva, a senior fellow in AI and health care at the University of Ottawa's AI + Society Initiative and permanent lecturer at the University of Southampton School of Law.