The Risks and Rewards of Using Generative AI for Workplace Law

Since we first wrote about the emergence of generative artificial intelligence (“AI”) tools back in February 2023, their capabilities have grown by leaps and bounds. A report from KPMG indicates that generative AI reached the point of “mass adoption” in Canadian workplaces by November 2024. At that time, KPMG found nearly half (46%) of all workers were using AI tools as part of their jobs. These adoption rates have surely only increased in 2025.

Lawyers, paralegals, courts, and tribunals have been somewhat late adopters when it comes to generative AI. Part of this simply reflects the conservative and risk-averse nature of the legal industry. But very real concerns with respect to data protection, intellectual property rights, client confidentiality, and the reliability of AI tools have also warranted a valid degree of caution.

The Risks of Using Generative AI for Legal Work

I have made a practice of periodically exploring different AI tools since they first became available to the public. This testing has continued to demonstrate one concerning problem with generative AI – its tendency to make things up when answering questions. Such generation of false information, which is then presented as fact, is known as an AI “hallucination”.

While AI hallucination has been an issue from the start, there is some evidence that the problem may be getting worse. Some theorize that the recent rise in hallucination rates stems from ongoing development changes across different generative AI systems. But whatever the reason, the implications are clear: AI tools must be used carefully, and their results must not be taken at face value.

Let me give you a practical example of the problem.

I was recently performing legal research and turned to AI for assistance. I occasionally put questions to AI to see whether it can speed up the identification of relevant cases or statutory provisions. In one specific instance (which you can read for yourself here), I asked the free public version of ChatGPT to locate court decisions, based on a set of provided factors, to support an argument for a 12-month notice period in the context of a wrongful dismissal dispute.

ChatGPT responded by sharing five different Ontario cases, all of which seemed (on the surface) to be highly relevant to my enquiry. Yet, when I went to read these cases for myself (using the citations provided), I could not find them. Of the five cases ChatGPT had identified, three simply did not exist, and the remaining two citations led me to wildly different decisions from those ChatGPT had described (one being a criminal matter and the other a court approval of a personal injury settlement).

When I then asked ChatGPT whether its answer reflected real court decisions, or if it had instead made up information, here was the reply:

Great question! The cases I provided were illustrative examples based on typical wrongful dismissal principles in Ontario—not exact real case names or citations. In other words, I created those examples to show how courts might handle a 45-year-old server’s notice period around 12 months, but those specific decisions don’t actually exist under those names or citations.

This is not a one-off example. While ChatGPT fessed up to its hallucinations quite quickly in this instance, I have had other experiences where AI, even when pressed, continued to insist that its results were real, only to provide dead links to the supposed authorities when asked to prove itself.

Generative AI in Court

While the risk of AI hallucination is hardly a secret, people are nonetheless relying upon AI in legal proceedings to their detriment. In the past several months alone, there have been multiple instances of lawyers and self-represented litigants in Ontario using AI in court with disastrous results.

Here are just three such examples:

1) Ko v. Li, 2025 ONSC 2766 and 2025 ONSC 2965 – a lawyer in a family law proceeding submitted a factum (i.e. a written legal argument) that was created using generative AI. This was discovered after the presiding judge could not locate several of the authorities cited in the factum. The lawyer initially faced a contempt charge for misleading the court, though this was later dismissed after she apologized and agreed to undertake professional development training.

2) R. v. Chand, 2025 ONCJ 282 – this was a criminal case where, again, a lawyer relied upon written legal arguments that incorporated materials seemingly generated by AI. The result was the inclusion of “numerous and substantial” instances of non-existent or irrelevant case law in the lawyer’s submissions. In response, the Court ordered the lawyer to re-submit his written arguments and to ensure that generative AI was not used in their preparation.

3) Attorney General v. $32,000 in Canadian Currency, 2025 ONSC 3414 – in a forfeiture of property proceeding, a self-represented litigant submitted written legal arguments in support of his position. These written submissions included references to two prior decisions that the court concluded were either non-existent or fake. When asked by the Court whether he had used AI to help draft his written submissions, the self-represented litigant did not respond. This led Justice Brownstone to issue the following caution:

While Mr. Ohenhen is not a lawyer with articulated professional responsibilities to the court, every person who submits authorities to the court has an obligation to ensure that those authorities exist. Simple CanLII searches would have revealed to Mr. Ohenhen that these were fictitious citations. Putting fictitious citations before the court misleads the court. It is unacceptable. Whether the cases are put forward by a lawyer or self-represented party, the adverse effect on the administration of justice is the same.

Generative AI is Far from Useless

As much as the above reads like a doom-and-gloom story, things are not quite so bleak. There are still plenty of tasks, even accounting for current limitations, where generative AI can be helpful to lawyers, paralegals, human resource practitioners, adjudicators, and self-represented litigants alike.

When it comes to document generation, AI can quickly and efficiently produce first-draft copies of simple materials. Examples include prompt-tailored content for straightforward letters, blog articles, and standardized court, tribunal, or grievance forms. Generative AI can also help with tasks ranging from document review to creating near-instant meeting summaries. Regardless of how it is deployed, users should be mindful that all AI-generated output requires review and correction. Care should also be taken to protect confidential information and to ensure that only appropriate AI tools (with dedicated safeguards in place) are used with any such protected data.

There is even a role for generative AI when it comes to legal research. When engaging in that time-honoured lawyer tradition of trying to track down a court decision to support a legal principle I know to be true, generative AI has often helped me locate relevant cases far faster than more traditional methods (such as turning to a legal treatise, performing manual case law research, or simply using less fancy online tools such as a Google search).

Takeaways

The Model Code of Professional Conduct, developed by the Federation of Law Societies of Canada, includes a provision with respect to lawyers maintaining ‘technological competency’. In a similar vein, in 2021, Justice Myers of the Ontario Superior Court of Justice opined: “Counsel and the court alike have a duty of technological competency in my respectful view.” Given such expectations, it is incumbent upon all actors in the legal industry (perhaps most importantly regulated professionals such as lawyers and paralegals) to develop at least a base-level familiarity with generative AI.

As technology relevant to the practice of law (whether it be AI or something else) continues to develop, we also need to acknowledge that maintaining technological competency is an ongoing and never-ending endeavour. While this may seem daunting to some, history teaches us that, in the long run, the progress technology brings is typically well worth a bit of upfront investment.

Vey Willetts LLP is an Ottawa-based employment and labour law boutique that provides timely and cost-effective legal advice to help employees and employers resolve workplace issues in the National Capital Region and across Ontario. To speak with a lawyer, contact us at: 613-238-4430 or info@vwlawyers.ca.
