Canadian Lawyer

October 2025

The most widely read magazine for Canadian lawyers

Issue link: https://digital.canadianlawyermag.com/i/1540359


UPFRONT | NEWS ANALYSIS

New technology, new accommodation

As employers encourage or even require employees to incorporate AI tools into their work, one lawyer argues that workplace accommodations should be made in cases involving another growing phenomenon: AI psychosis

OVER THREE weeks in May, Allan Brooks, a corporate recruiter in the Greater Toronto Area, wrote 90,000 words in a conversation with ChatGPT that convinced him he had discovered a novel mathematical formula. That formula, he believed, could be used to crack encryption protections for global payments, communicate with animals, and build a levitation machine. Urged on by ChatGPT, Brooks began contacting computer security experts and government agencies – including the US National Security Agency – to warn them of the formula's dangers.

In August, the New York Times obtained a transcript of Brooks' conversation with the AI chatbot and asked OpenAI (the maker of ChatGPT) and experts in artificial intelligence and human behaviour to analyze excerpts. The resulting analyses found that ChatGPT had several traits – a sycophantic tone, a tendency to draw on a conversation's history to "improv" responses, and an ability to produce polished-looking replies containing inaccurate information – that likely led Brooks into his weeks-long delusional episode. Brooks has no history of mental illness.

Brooks no longer uses ChatGPT. While he still uses competitors like Google Gemini, his experience this spring has made him wary of a growing phenomenon he has observed: employers pushing workers to use chatbots and other AI tools with little regard for their risks. In Brooks' view, not only should employers provide training on these tools' manipulative traits and potential for hallucinations; they should also extend workplace accommodations to those who might be particularly vulnerable to what he went through.

Employers are encouraging workers to use AI tools "to the point where you almost feel like you have to say, 'Yes, I'm into it,' because you don't really have a choice and you don't want to go against the company grain," says Brooks.

"There's no conversation at all about verifying your work, about how chatbots work, about not using them for too long, [about] the mental health considerations or potential repercussions," he adds. "It's just, here's this really crazy tool. Use it as much as you can."

Brooks' episode is not an isolated case. In August, the BBC reported that numerous people had reached out to the outlet to share their experiences with AI chatbots, with several convinced that they had discovered a way to make large fortunes and another certain that she was the only person ChatGPT had ever

AI PSYCHOSIS IN THE NEWS

May 2025 – GTA recruiter's delusional episode with ChatGPT

August 2025 – BBC reports multiple cases of AI-induced delusions

August 2025 – Lawsuit filed against OpenAI and CEO Sam Altman after teen suicide linked to AI chatbot

"There's no conversation at all about ... the mental health considerations ... It's just ... use it as much as you can"
Allan Brooks, Human Line Project
