The legal profession and AI

Would you trust a lawyer who was prone to hallucinations?
12 September 2023

The legal profession and AI – can it work?

• Like all kinds of business, the legal profession has been keen to adopt generative AI.
• But some AI hallucination and some human carelessness have combined to prove the technology is potentially dangerous in a legal setting.
• What can you do to make sure generative AI can be safely used by the legal profession?

Generative AI has seen one of the fastest and most diverse technological take-up rates since the wheel. In fact, on points, it probably has the wheel beaten.

It’s in an enormous number and variety of businesses, less than a year since ChatGPT exploded onto the scene. It’s both overt and invisible, doing everything from summarizing meetings to writing sonnets.

Not very good sonnets – at least not yet – but the point is that even as we sleep, it learns, it’s being trained, and it’s getting better.

The missing half-worm.

But it’s valid to ask whether the technology is yet good enough for some of the things it’s being tasked to do. There are several professions that depend absolutely on correctness, and using AI in them can show us, by example, the boundaries of the developing technology.

Professions where things have to be right, or there are profound consequences.

One of the most important of those areas is the legal profession, and the potentially negative influence of generative AI on it is already starting to be felt. New York law firm Levidow, Levidow & Oberman, PC was fined $5,000 after one of its lawyers, Steven Schwartz, fed prompts into ChatGPT to gather citations for a personal injury case.

The world-changing generative AI returned references to cases that didn’t exist, and the legal case proceeded with the falsehoods intact until they were discovered in court – and the law firm bore the brunt of the fine for allowing the fake references into its affirmation.

The similarity between legal and AI consequences.

Who’s responsible for what in cases of legal and ethical uncertainty with AI?

While the Levidow, Levidow & Oberman case is, as far as anyone knows, an isolated one, the fact that it has definitely happened once and been discovered suggests it could have happened far more often without ever coming to light.

To use an old adage, it’s not the half-a-worm you find wriggling in your apple that should concern you – it’s the half that’s nowhere to be seen.

We spoke to former litigator and now CPO at NetDocuments, Dan Hauck, to go hunting for the generative AI worm in the current legal profession.

AI as a legal tool.


There are several industries which live and die on the basis of their exactness. Architecture has to be right, or your buildings fall down. Politics has to be right, or your constitution falls down. Medicine has to be right, or people die. And the law is right up there with them, isn’t it? Do you think there’s sufficient understanding in the legal profession yet of the tendency of generative AI to hallucinate based on its training data?


Yeah, it’s important to think about generative AI as a tool within the legal profession, rather than a replacement for counsel – or for legal diligence. Just like any other tool that legal professionals use, AI has to be used in the context of their own ethical obligations.

What are their professional responsibilities? We’ve seen this happen with previous technologies – new tech comes out and you have to make a judgment about the impact of the technology in terms of the client file.

Email, for example – that entailed looking at what constituted a privileged communication. The same went for earlier generations of AI that helped analyze contracts, or helped with discovery by identifying relevant documents that needed to be produced. Where does the line lie between useful technology and the lawyer’s ethical and professional responsibilities?

That’s the key thing here – none of those technologies relieve a legal professional from their obligations, and generative AI doesn’t either.

What we’ll see is legal professionals adopting and implementing it – hopefully in a responsible way. One of the things we’re trying to do is be part of the value delivery chain that comes with implementing AI in the delivery of legal services, while allowing the professionals to do it responsibly. We want to put guardrails around the process, and that can be done.

The potential of AI and the legal profession.


So generative AI is just another technology as far as the legal profession’s concerned? Or are there particular things about this technology that the profession needs to be aware of and work around?


Certainly, generative AI has a lot of potential in the legal profession. What you want to be able to do is take the work product that’s been done in the past, and incorporate it into a generative AI solution so that it can understand how the law and the lawyers dealt with particular situations previously – including prior contacts with particular clients.

If you can use that sort of training data to inform your AI, you should be able to get better results in a live legal setting, with more relevance and less likelihood of hallucination.

Also, of course, this is a technology where you need to be as sure as you can that you’re thinking about where the data is going and how you’re protecting it, because AI, if badly handled, can accidentally expose that data – which could have catastrophic legal implications for both your client and your firm.

Your data, in a legal case, is often either the secret sauce you think will win you the argument, or it’s privileged, confidential information from your client. So you want to understand where that data goes at every point during your use of generative AI. Those are the kind of next-level considerations that legal professionals and their technology providers, like us, are trying to solve proactively.

Can AI take the monotony out of legal work?


So we’re essentially teaching the generative AI to be… lawyers from previous cases?

Sheer volume of work suggests a connection between legal and AI remits.


Ha. A twist on that would be to say that, for instance, we’re a document management provider, and one of the things that a lot of legal professionals do on our platform every day is research prior work product that they’ve done. It’s very rare that somebody starts drafting a contract or a brief from scratch. So what they’re doing is going in, finding something, and that can then help inform the project that they’re working on today.

Generative AI has really valuable capabilities for that kind of work. Not only can it search for conceptually similar content, making searches more accurate and pulling the right content forward; it can also produce an initial draft in the way a previous instance was done, with the suggestion that the similarities between then and now make it worth considering as an approach today.

So there’s definite day-to-day value for the legal profession in generative AI. But still, that (human!) lawyer is reviewing the work – just as they would for something they got from an assistant or a junior associate – and making sure that it’s of the quality that the client needs.
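The precedent-search workflow Hauck describes – ranking prior work product by similarity to a new matter so the closest match can seed a draft – can be sketched in miniature. This is a hypothetical illustration only: the documents are invented, and a real product would use semantic (embedding-based) search rather than simple word overlap.

```python
# Toy sketch of "searching prior work product": rank a firm's past documents
# by word overlap with a new matter description, so the most relevant
# precedent can be pulled forward as a starting point for drafting.
# Bag-of-words Jaccard similarity stands in for real semantic search.

def tokenize(text):
    return set(text.lower().split())

def rank_precedents(query, documents):
    """Return documents sorted by Jaccard similarity to the query."""
    q = tokenize(query)
    def score(doc):
        d = tokenize(doc)
        return len(q & d) / len(q | d) if q | d else 0.0
    return sorted(documents, key=score, reverse=True)

# Invented examples of prior work product in a document management system.
prior_work = [
    "motion to dismiss for lack of personal jurisdiction",
    "software licensing agreement with indemnification clause",
    "personal injury complaint arising from airline negligence",
]

query = "drafting a personal injury claim against an airline"
best = rank_precedents(query, prior_work)[0]
print(best)  # the airline negligence complaint ranks first
```

The retrieved precedent would then be handed to the model as context for a first draft – with, as Hauck stresses, a human lawyer reviewing the result.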

The legal profession and data security in AI.


That raises a couple of points. Firstly, about where the data goes, because that’s clearly a big issue with generative AI (here’s lookin’ at you, Samsung). People fed proprietary or privileged information into the generative AI, which then took it outside the safety of the privileged system and into the wider world. What sort of checks and balances need to be in place for generative AI to function in the extremely useful way you’ve described, while lawyers remain secure in the safety of their data?


From our perspective, what we’ve done is partnered with Microsoft, and identified some of these issues upfront. As part of that, we’ve gotten certain agreements and exemptions in place to handle these kinds of concerns.

So for instance, if you’re putting your data into a prompt through our AI platform, that does not get stored and fed back into the large language model.

As you understand, that’s a really important consideration. Because again, that could be secret sauce data, that could be your own client information. So we’ve gotten those exemptions, so that none of it is stored. It’s immediately expunged – both what you put in, and what it gives you back.

Important exemptions could seal the deal between the legal profession and AI data safety.

The other important element we flagged was that often, lawyers have to engage with obscene content, or profanity, or things like that.

Traditionally, that might be flagged for human review. And again, that’s not something our customers were comfortable with – the idea that it would be retained for a certain amount of time, and then a third party would look at it. So again, we got an exemption on that.

Ultimately, the dangers you raise are ones we’ve raised too, and we’ve secured exemptions from the way generative AI usually works, because we’re dealing with this kind of highly sensitive or privileged data. Those are the types of things you need to be thinking about when you bring generative AI into your legal practice.
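The zero-retention arrangement described above – prompts and completions passing through the platform without being stored or fed back into the model – can be sketched as follows. This is a hypothetical wrapper for illustration, not NetDocuments’ or Microsoft’s actual implementation; the model call is a stub.

```python
# Minimal sketch of a zero-retention AI client: the wrapper forwards prompts
# and returns completions, but by design never writes either side of the
# exchange to any store or log. ZeroRetentionClient is an invented name.

class ZeroRetentionClient:
    def __init__(self, model_fn):
        self._model_fn = model_fn
        self.stored_requests = []  # stays empty by design

    def complete(self, prompt):
        # Forward the prompt and return the completion without retaining
        # the prompt, the completion, or any metadata about either.
        return self._model_fn(prompt)

def stub_model(prompt):
    # Stand-in for the real large language model call.
    return f"[draft based on {len(prompt)} chars of input]"

client = ZeroRetentionClient(stub_model)
reply = client.complete("Summarize the attached memo")
print(reply)
print(client.stored_requests)  # [] -- neither prompt nor reply was kept
```

In a real deployment, the equivalent guarantee comes contractually and architecturally from the provider – the point is that retention is off by default, not something the lawyer has to remember to disable.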

Would AI-tticus Finch have secured Tom Robinson’s freedom?


In Part 2 of this article, we’ll delve deeper into the legal and data ramifications of generative AI in the legal profession.