Friend or foe? Generative AI tools promise to transform much of lawyers’ day-to-day work, but worries persist about data leaks and erosion of differentiation © Getty Images

August is traditionally a quiet month for announcements but, this year, it was awash with news from law firms, big-tech businesses and service providers about innovations that promise to transform legal work.

All intend to harness generative artificial intelligence and machine learning, applying those technologies through so-called large language models (LLMs): systems that can analyse and produce plausible text, images, or code in response to plain-language prompts.

LexisNexis unveiled plans to integrate AI-powered solutions with Microsoft’s 365 software, and Meta, Facebook’s parent company, released Code Llama — a code-generating model, available for commercial use, built on the Llama LLMs it had already unveiled.

Law firms also got in on the action, reporting a range of new developments.

Silicon Valley-based Gunderson Dettmer introduced a “homegrown” proprietary tool that its lawyers use to provide legal agreements — or other relevant source material — as context for queries.

New York-based Sullivan & Cromwell promoted tools that it is developing and plans to sell to other law firms, including versions to help comprehensively review documents and conduct depositions.

At the same time, OpenAI, the company that started the generative-AI revolution with the release of ChatGPT in November 2022, launched an upgraded LLM offering for corporate customers that addresses lawyers’ fears about leakage of client data.

And Thomson Reuters, the legal data and media company, closed on its $650mn acquisition of Casetext, known for its AI-driven tools.

All the new tools promise to tackle some of lawyers’ most laborious tasks with greater ease and speed. For example, they can compare and analyse contracts for key clauses, summarise compliance rules, and rewrite complex regulations in layperson’s vernacular. Their time-saving potential has raised expectations that they will transform — and also, perhaps, take away — much of lawyers’ day-to-day work.

For Thomson Reuters, the aim is to deliver a legal drafting product that, like LexisNexis’s, is ready to plug into Microsoft’s Copilot AI assistant and could go on sale later this year.

No exact date is set yet, says Jake Heller, Casetext’s chief executive officer and co-founder. “This is one of those ‘better together’ stories,” he says about his team’s decision to sell up to Thomson Reuters.

LexisNexis also has no exact date for the formal launch of generative AI upgrades to its most-used software and research tools. It specifies only “late summer” as the timing. “It’s important to get it right,” explains Jamie Buckley, chief product officer for LexisNexis’s legal and professional unit. The system is “never ever always going to be accurate every single time, but we’re trying to get as close as possible to perfection”.

Concerns over privacy and loss of distinctiveness

However, amid this flurry of system launches and upgrades, many law firms remain hesitant about how and when to jump on the AI bandwagon.

When market pioneer ChatGPT first became available, many prospective users of legal generative AI tech — law firms and in-house legal departments — suffered from “Fomo”, or fear of missing out, according to Dan Katz, a professor at Chicago Kent College of Law, who leads the school’s legal tech centre. But that early enthusiasm has given way to “Fud” — fear, uncertainty and doubt — he says.

Clients’ concerns over data leaks have made them “skittish”, Katz says, noting that law firms do not want even their prompts — the plain-language requests they submit for platforms to answer or act on — captured by outsiders. They also express concerns about losing their competitive distinctiveness: “[If] they buy an off-the-shelf solution alone, where is their differentiation?” Katz points out.

Also, the initial exclusivity of the invitations to participate in big-tech groups’ LLM pilots, and the high retail prices that followed, have deterred some lawyers from becoming early adopters.

Microsoft charged more than $50,000 to each of the 600 enterprises it invited to join trial runs for its Copilot AI assistant, which will retail to businesses as a $30 per user per month subscription upgrade. LexisNexis and Thomson Reuters will require law firms to pay additional, and so far undisclosed, subscriptions to access their Copilot legal plug-ins.

The extra costs of the computer hardware needed to operate these AI systems could also be a short-term financial deterrent.

Natasha Blycha, a managing director at Stirling Rose in Sydney, Australia, says her firm is “not a purchaser” of the latest wave of commercialised LLM software systems.

Instead, Stirling Rose — which specialises in advising about AI, digital assets and other emerging technology — plans to build its own tools, using freely available foundational LLMs and in-house experts and engineers, she explains.

At global management consultancy McKinsey, the in-house legal department will make a choice between “build, buy or partner” to develop generative AI tools only after its leaders have evaluated their most effective commercial uses, says Ilona Logvinova, associate general counsel.

“From there, it’s exploring what tech vendors offer,” she adds.

But Thomas Pfennig, global head of compliance data and privacy at Bayer, the Germany-based multinational life sciences company, has already deployed generative AI to automate repetitive, low-value legal tasks to reduce labour costs.

“One step that we have taken is preparing the organisation for a significant operational change, moving away from human-based to more technology-based interactions,” he says.

Pfennig is working with Prime Legal AI, based in Hanover, Germany, to develop a law-specific LLM. He shares the wider concern that information fed into such systems could leak into the big tech players’ models.

“Data privacy compliance is usually not built into LLMs,” he notes. “While they are trained on billions of parameters, they are not necessarily legal compliance or data privacy parameters.”

Copyright The Financial Times Limited 2023. All rights reserved.