ChatGPT betrays the lawyers
What you can do to protect against the risks of being an early adopter of generative AI
My original plan for this week was to write about something completely different. I had a whole draft written up, but then this crazy lawyer-uses-ChatGPT-improperly “scandal” popped off late Friday night and I just HAD to share my thoughts about it. For those of you who subscribed to this newsletter to read my thoughts on supply & demand in the legal industry—worry not! It’s on deck for next weekend.
So what’s this scandal all about, anyway? Well, if you were on Twitter or LinkedIn this weekend, you probably read about it. (If you haven’t, all you need to do is take a look at this tweet.) The story even made its way to the New York Times, in an article called Here’s What Happens When Your Lawyer Uses ChatGPT.1 From the NYT:
The lawsuit began like so many others: A man named Roberto Mata sued the airline Avianca, saying he was injured when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York.
When Avianca asked a Manhattan federal judge to toss out the case, Mr. Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions…
…There was just one hitch: No one — not the airline’s lawyers, not even the judge himself — could find the decisions or the quotations cited and summarized in the brief.
That was because ChatGPT had invented everything.
In the 24 hours that followed, I got to work doing what I do best—creating a whole bunch of jokes and memes about it on Twitter.