Canadian Lawyer

July 2021



you are using automated decision-making systems to make predictions, recommendations or decisions, that people are aware of that and that they have the right to request an explanation of how their personal information was used in that process."

Under Quebec's Bill 64, which would amend several statutes in that province, companies must provide "more specific information at the time that the decision-making system is being used," says Mee. "The federal bill looks to be proposing disclosures about the use of AI in the company's privacy policies, whereas the Quebec bill is proposing notice at or before the time of processing [information], which in my mind suggests even greater transparency because no one reads the privacy policy," she says.

One challenge of Bill C-11, if implemented as proposed, will be compliance across the board, says Aaron Baer, a Toronto partner in Renno & Co., a Montreal-based law firm focusing on startup and emerging-technology law. "If I'm running any [kind] of platform that can be used by anyone in the world, I may be collecting data from people all over the place, and each of these places has different legislation," he says, noting the various provincial regimes and the patchwork of laws across the U.S. as well.

Bill C-11's definition of an automated decision system is also expansive, Baer says. In essence, it includes "any technology that assists or replaces the judgment of human decision-makers." The definition could cover something as simple as a digital questionnaire that poses a few questions to a consumer and then issues a quick answer on whether he or she is eligible to receive a service or product, such as a mortgage.

Currently at second reading, "it doesn't look like it will fly through and get passed" as proposed, says Mee. The delay could also be because of the significantly higher fines the CPPA would impose.
Europe's GDPR and the new AI framework

Under the GDPR, there were already rules around automated decision-making, says Mee. "It's similar to what's been proposed in Canada in terms of transparency, but it goes much further and gives individuals the right not to be subject to a decision based solely on automated processing if it has a significant impact on the individual." The GDPR "is more robust than what's proposed in Canada," she says, with more of the accountability requirements that the federal privacy commissioner has recommended.

On April 21, the European Commission published its Regulatory framework proposal on Artificial Intelligence, which aims to protect individual rights. Under the proposal, one category of AI tools would be prohibited outright. For example, so-called "manipulative AI" is used to manipulate human behaviour through "social scoring," a process that involves creating a score based on an individual's behaviour that can affect their ability to access services.

Another category in the new framework is high-risk AI, "which is not outright prohibited but subject to heightened requirements to make sure that the appropriate checks and balances are in place," Mee says. The AI systems identified as high-risk include technology used in:

• Critical infrastructures (e.g. transport) that could put the life and health of citizens at risk;
• Educational or vocational training that may determine access to education and the professional course of someone's life (e.g. scoring of exams);
• Safety components of products (e.g. AI application in robot-assisted surgery);
• Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
• Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
• Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence);
• Migration, asylum and border control.

"It's not just about updating your privacy policy, … it goes back to privacy by design and really understanding your business's data flow."
Aaron Baer, Renno & Co.

EU'S FIRST LEGAL FRAMEWORK ON AI
• address risks specifically created by AI applications
• propose a list of high-risk applications
• set requirements for AI systems for high-risk applications
• define obligations for AI users and providers of high-risk applications
• propose a conformity assessment before the AI system is put into service or on the market
• propose enforcement after such an AI system is in the market
• propose a governance structure at European and national levels

Source: European Commission Regulatory framework proposal on Artificial Intelligence
