LEGAL INNOVATION NOW

ETHICS BEFORE ALGORITHMS

By Kate Simpson

There has been much written recently on the topic of ethics in artificial intelligence and innovation. It seems we're moving in the right direction in having an open discussion about the role of ethics and privacy in the algorithms and systems we rely on and use. From what I've read so far: (1) today's AI and innovation development efforts are still in their teenage years; and (2) trying to simplify complex human interactions and behaviours down into a bunch of mathematical equations with this early-generation technology is going to be messy and flawed.

According to one revered AI computer scientist: "We have more to fear from dumb systems that people think are smart than from intelligent systems that know their limits." And as Meredith Broussard's book Artificial Unintelligence: How Computers Misunderstand the World has it: "Just because you can imagine something doesn't mean that it's true, and just because you can imagine a future doesn't mean that it will come to be." (Does that mean no jet-packs, then?) Her book deep-dives into several real-world examples to show the risks of too much techno-chauvinism: the belief that technology is always the solution.

The stories of people blindly following Google Maps into lakes and down one-way streets have been replaced by stories of professionals taking the outputs of sentencing and parole algorithms as fact. But there is no magic in those algorithms; there is no ghost-in-the-system coming up with a perfect answer free from human bias. Human bias is built into the very foundation of these systems. The data used to train the algorithms is socially constructed and therefore flawed, and it is this bias in the data that points to a gap in how we teach the robots about the complexity of our world. (A short sketch below shows just how directly that bias flows through.)

In their 2018 book The Book of Why: The New Science of Cause and Effect, Judea Pearl and Dana Mackenzie offer a fascinating discussion of the predictive abilities of current AI, which models the high-frequency, habitual patterns found in Big Data. The limits of this narrow AI mean it cannot yet handle domains "governed by rich webs of causal forces." Predicting how a judge might rule based on prior common fact scenarios is a good example of narrow AI working well, particularly when it rests on the clear "human-in-the-loop" algorithm training that companies such as Blue J Legal adopt. But predicting recidivism risk for convicted criminals is different: it needs a way to program in the complexities of our world, and there is not yet an established way to remove the existing inequalities that our current biased data sets amplify. Data-driven decisions will never be 100-per-cent correct because they cannot yet take account of the randomness and nuance of real life.

So, before we get too carried away with building algorithms that make decisions for us, we should be giving voice to concerns about machine bias and about the lack of any ethics framework or consideration of the socio-economic factors involved. The classic ethical dilemma of the "trolley problem," and how it applies to the engineers designing and building self-driving cars, is a case in point. Should a car swerve to hit one person rather than five or, indeed, sacrifice its own driver and passengers?
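To make the data-bias point concrete, here is a minimal sketch in Python. It is entirely hypothetical: the groups, policing rates and reoffending rate below are invented for illustration and are not drawn from any real sentencing or parole tool. Two groups reoffend at exactly the same true rate, but one is policed twice as heavily, so its reoffending is recorded more often; a naive model trained on those records then scores that group as roughly twice as risky.

```python
# Hypothetical sketch: how bias in training data becomes bias in a "risk score".
# All groups, rates and numbers are invented for illustration only.
import random

random.seed(42)

TRUE_REOFFEND_RATE = 0.30                # identical for both groups by construction
POLICING_RATE = {"A": 0.80, "B": 0.40}   # group A is watched twice as closely

def historical_record(group):
    """A reoffence only enters the data set if police also observe it."""
    reoffended = random.random() < TRUE_REOFFEND_RATE
    recorded = reoffended and random.random() < POLICING_RATE[group]
    return group, recorded

# "Training data": the label measures police attention, not behaviour.
data = [historical_record(g) for g in "AB" * 5000]

# A naive "model": the recorded reoffence rate per group becomes its risk score.
for group in ("A", "B"):
    outcomes = [recorded for g, recorded in data if g == group]
    score = sum(outcomes) / len(outcomes)
    print(f"group {group}: predicted risk = {score:.2f}")

# Both groups truly reoffend at the same 30-per-cent rate, yet group A scores
# about 0.24 and group B about 0.12: the algorithm faithfully learned the bias.
```

No ghost-in-the-system corrects for this. The model does exactly what it was asked to do with the data it was given, which is precisely the worry.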
Broader society needs to take a more active part in the design and build of new technology that will forever change our lives. Dr. Ann Cavouki-

"Technology needs to be used ethically and sometimes not at all."