[Image: a black street sign with the word PRIVATE in white text.]

Rethinking eDiscovery in the Age of AI

Two posts in the last week have made me wonder how much we’ll have to change the eDiscovery process.

The first one was from a talk at LegalWeek and reported by Doug Austin:

One of the topics brought up by Olga Friedman of Latham & Watkins was the potential for receiving parties to load their clients’ produced documents into a public GenAI tool, and why they now need to pursue protective orders to prevent that from happening.

From a legal perspective, the court should allow that protection, and the receiving party should do the ethical thing.

I place some emphasis on the word “should” here. There are plenty of things lawyers should not do that we know happen anyway. Not often, but often enough that I would worry about the risk. After all, if your client’s private data gets uploaded to a free AI tool that uses uploads to train its public model, that information is in the model, and from what I’ve seen from OpenAI and others when talking about copyright, there’s no way to get it back out. The damage is done.

This makes me wonder whether eDiscovery productions shouldn’t be done through a solution like SharePoint, with permissions that let the opposing side read documents online but not download copies. They would then have nothing to upload to a GenAI platform. Yes, I can already hear the wailing from attorneys used to getting copies and printing them. How will they present them in court? I’d argue that none of that is impossible. The same platform where you read the producing party’s documents can be shared in court and referenced in any court filings; the documents would remain in that same shared repository for the life of the case.
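As a rough sketch of what creating that kind of view-only production link could look like programmatically: Microsoft Graph exposes a createLink action on drive items that returns a sharing link. The “blocksDownload” link type used below is, to my knowledge, a Graph beta feature, so treat it, along with the placeholder drive ID, item ID, and access token, as assumptions to verify against current documentation.

```python
# Sketch: share a produced document as a view-only, download-blocked
# SharePoint link via Microsoft Graph. The "blocksDownload" link type is
# a beta feature at the time of writing -- verify against current docs.
# DRIVE_ID, ITEM_ID, and ACCESS_TOKEN are placeholders for your tenant.
import requests

GRAPH_BETA = "https://graph.microsoft.com/beta"
DRIVE_ID = "<production-site-drive-id>"       # placeholder
ITEM_ID = "<produced-document-item-id>"       # placeholder
ACCESS_TOKEN = "<token-from-your-auth-flow>"  # placeholder


def create_view_only_link(drive_id: str, item_id: str) -> str:
    """Create a sharing link that lets the receiving party read the
    document in the browser but blocks downloading a copy."""
    resp = requests.post(
        f"{GRAPH_BETA}/drives/{drive_id}/items/{item_id}/createLink",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={
            "type": "blocksDownload",  # view-only; download disabled (beta)
            "scope": "organization",   # outside counsel would need guest access
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["link"]["webUrl"]


if __name__ == "__main__":
    print(create_view_only_link(DRIVE_ID, ITEM_ID))
```

Even then, blocking downloads is an access control, not true DRM; a determined reader can still screenshot pages, so the protective order would still matter.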

We’d have to adjust many expectations and processes, but we’d significantly reduce the risk of private data leaking into a public LLM.

The other challenge is a bit more complicated. With the recent upgrade to ChatGPT’s AI image generation, you can now create images that include accurate text, which was shortly followed by this:

AI Receipts and the End of Trust

Because it can accurately insert user-generated text into images and follow a sequence of prompts while preserving continuity, people have started creating fake receipts with it.

Luiza shared an AI-generated restaurant receipt. It was a little simple and maybe wouldn’t pass on an expense report, but it also took her almost no time, and it wouldn’t take much to make it very realistic. I’ve seen more than one example on LinkedIn that looks like the meal receipts I’ve submitted over the years.

So, we have to worry both about where private data winds up and about how we verify information. Expense receipts, IDs, financial statements, and the like are now easily faked. Knowing that, what do we do when we need to verify something? How do we know you actually spent the money you’re requesting reimbursement for? How do we know you have the cash your financial statements say you have?

I’m sorry, but your expense report will become much more complicated. In litigation, we’ll see much more arguing about the veracity of evidence.

As well we should. We can no longer afford to believe our eyes. Rather than simplifying our lives, this is an excellent example of AI making them much more complicated.

Where else are you seeing complications with AI and eDiscovery?
