

Sanctionable conduct
Some unspoken truths about the legal profession revealed by that ChatGPT case
Remember those lawyers who had to explain to a Southern District of New York judge why they relied on ChatGPT to find case law without double-checking to see if they were real?
Well, last week, Judge Castel published an order sanctioning the lawyers, which included the following punishments:

Mailing copies of the fake cases, the hearing transcript, the sanctions order, and supporting documents to their client and to each of the judges mentioned in the fake cases; and

Paying a penalty of $5,000.
Most commentators were immediately struck by how mild this punishment was. I generally agree, although the overall punishment was pretty severe if you take into account how much publicity and negative attention the lawyers received. It seems like that's what Judge Castel was thinking, too.
Anyways. The punishment is probably the least interesting thing about the sanctions order. To me, there were at least three other things this case revealed: (1) Misleading the court was the real problem; (2) Cutting corners is a reality of law practice; and (3) The law is inaccessible, even to lawyers.
I’ll explain below.
1. Misleading the court was the real problem
The first major problem in the ChatGPT case was that the lawyers—Schwartz and LoDuca—failed to verify that the cases were real. As Bob Ambrogi explained shortly after the hearing took place:
Had this lawyer never learned the cardinal rule that you never cite a case you have not read? I don’t care if the case comes from ChatGPT or a learned treatise — read it before you rely on it
But there was more to it than that. After all, citing to cases you haven’t read probably wouldn’t earn you sanctions. No, the actual issue here was when the lawyers realized their mistake and then doubled down on it.
As Judge Castel pointed out, Schwartz and LoDuca probably first realized something was wrong with their cases when opposing counsel pointed out that they couldn't find them.
That was the perfect time to double check and come clean to say, hey we used this new AI tool that we shouldn’t have, and it generated these fake cases. Doing that probably would’ve helped them avoid sanctions and all this negative publicity. Instead, the lawyers came up with “shifting and contradictory explanations” (the judge’s words, not mine) to try to cover up their mistake.
As a result, the lawyers ended up making things far worse than they needed to be.
2. Cutting corners is a reality of law practice
The sanctions order also showed us how often lawyers cut corners. In addition to Schwartz’s ChatGPT extravaganza, LoDuca also admitted that he lied to the court to get a deadline extension, and signed off on an affidavit he didn’t fully review.
Unfortunately, cutting corners is pretty common in law practice, and it goes beyond white lies and neglected paperwork. Lawyers are frequently unprepared for their cases, outsource significant legal work to paralegals, and inflate hours billed to clients. No one will admit to doing any of this (obviously), but I'd be willing to bet that every single lawyer has a story about someone else who does.
Why do lawyers cut corners? Part of it is immense workload and stress, combined with the fact that the odds of getting caught are low. Part of it is also that some of the rules seem silly, and breaking them feels harmless. You can imagine a hypothetical lawyer's thought process:
I mean, so what if the paralegal handled everything in the case? They probably know more about the law than most of the junior lawyers at the firm. And so what if we inflated our hours to the client? As long as they’re satisfied with the work and overall cost, does it really matter?
At the end of the day, cutting corners is just part of the reality of law firm practice. There’s just too much work to be done. So as a business owner, you find ways to be efficient, which sometimes involves breaking certain rules. And yet lawyers cannot admit to this publicly.
All this is super interesting in light of the AI revolution. Because the biggest objection to generative AI, and technology generally, is that it’ll make mistakes that human lawyers won’t. It’s not just hallucinations, it’s also privacy and security. At its core, the objection is that technology will be careless—but the humans will be diligent and careful.
Is that actually true though? Are we actually more careful, or is that based on lies and fictions we tell ourselves?
3. The law is inaccessible, even to lawyers
Part of this inaccessibility is related to cost, since Westlaw—the popular legal research database that litigators consider the most comprehensive—has opaque and expensive pricing. That being said, the specific database Schwartz used—Fastcase—had free and low-cost plans with transparent pricing, as my friend Ed Walters, the founder and CEO of Fastcase, pointed out on LinkedIn.
So theoretically, cost shouldn’t have been a problem at all!
But cost is only part of the law’s inaccessibility. There’s also the question of how “unpublished” cases are used. According to Wikipedia:
In the system of common law, each judicial decision becomes part of the body of law used in future decisions. However, some courts reserve certain decisions, leaving them "unpublished", and thus not available for citation in future cases … Conversely, studies have shown how non-publication can distort the law.
In other words, there's a set of secret, hard-to-find cases that you can't use to prove your argument. Except sometimes, I guess. Because lawyers constantly cite to unpublished cases, especially when the published cases aren't helpful to their argument.
Pretty confusing if you ask me.
Finally, there's the issue of readability. In the ChatGPT case, Judge Castel said that it should've been obvious to the lawyers that one of the fake cases—Varghese v. China Southern Airlines—wasn't real. According to Judge Castel, the opinion was flawed and its legal analysis was filled with "gibberish."
I mean, he’s not wrong. I read the Varghese case several times, and it felt impossible to understand what was happening. But imagine you’re a lawyer who only litigates in state court where you’re used to reading incoherent and confusing opinions.
You might not realize that incoherent writing is extremely unusual when you're reading something from the U.S. Court of Appeals. Maybe in your state court practice, you've learned to skip over everything and simply find quotes to use to support your argument. (Another example of a common shortcut many lawyers take.) And then you stumble into this section:
Is it so obvious that this is all gibberish? Or does it look like something you might be able to use to advance your legal argument in your own case? And what if you weren’t a litigator, or a lawyer, even? What if you were someone with no legal training, trying to understand what the law is?
Conclusion
There's no question in my mind that the two sanctioned lawyers in the ChatGPT case deserved the punishment they received. But the case does raise some unspoken truths about the problems that exist both in our profession and in how accessible the law is to lawyers and the general public. My guess is that over the next few years, as AI makes its way through the legal industry, we'll start to learn a lot more—maybe more than we want—about the reality of the practice of law.
If you enjoyed the article, please forward it on to someone who might find it interesting or helpful. If you’re new around here, and don’t know what my newsletter is all about, check out Welcome New Readers. Feel free to also show your support for this newsletter by posting this article on social media!
Latest News
Super interesting article about how technology is playing a role in all of these Biglaw layoffs. I thought it was a great read, but as I shared on LinkedIn, I'd love to hear more about how technology and AI impact client demand, as opposed to how they impact law firm staffing models.
One law firm is publicly sharing how they’re evaluating AI vendors. I think this is super cool, especially given my previous skepticism about law firms taking AI (and technology, generally) seriously.
Law firm leaders seem to be coming around to the fact that there are real reasons why young lawyers are disillusioned with Biglaw, as opposed to the idea that “kids these days are just lazy.”
This is a free weekly email for Off The Record subscribers. If you’re interested in learning more about sales, marketing, and business development, consider upgrading to a paid subscription, which gives you full access to premium sales content.
I wrote about the hearing in this case a few weeks ago in ChatGPT betrays the lawyers.
From the founder of Fastcase, in a comment responding to my LinkedIn post on this point: "This lawyer had NY state cases for free through the NYSBA. Solving #3 is our mission, and why virtually every state bar association offers free access to legal research from Fastcase to their members. (That's why the real cases he included are from Fastcase.) Upgrading to the full national plan is $195 per *year* and takes about 60 seconds online."
The problem is, litigators aren't comfortable relying solely on non-Westlaw databases. You can argue that's irrational, but there are understandable reasons why they feel that way. As law students, we are given free Westlaw accounts and develop familiarity with how it works, leading to a lower (perceived) risk of user error. Westlaw is also given out for free to federal law clerks, who often end up being the most senior litigators or most capable appellate lawyers at their firms, and they bring a strong preference for Westlaw with them.
How do I know all this? Because that's exactly what happened to me. As a law student, I found Lexis and Westlaw equally foreign, but over time I gravitated to Westlaw. By the time I was a litigation associate, I couldn't not use Westlaw. The same was true for my peers. In fact, there was a running joke among many of us that any associate who relied on Lexis wasn't a serious litigator.
At the end of the day, only a full Westlaw subscription gives litigators the peace of mind that they've run a comprehensive search for cases. So even if there are alternatives, they're not exactly the same thing, even if that belief is based on vibes and irrationality. That's why cost still matters.
My sincerest apologies to all my state court judge friends for this generalization.
As a former state court litigator (prosecutor), we had an entire online library on our intranet of motions and other sample responses to pull from when creating our own documents to submit to the court. The purpose was not to waste time reinventing the wheel. I'm now horrified by the thought that something might end up in an online repository like that without being properly vetted. Like I have won arguments before a judge just by having a longer submission than opposing counsel. I've watched it happen in real time, where the judge started flipping through the pages and barely skimming before making a decision. I can imagine a situation in which not every case, but maybe one or two, was fabricated and the overworked defense attorney and judge didn't catch it. The winning motion gets shared with the rest of the firm as a sample/example and the mistake/misinformation compounds as others start to use it.
So these lawyers couldn't pony up $195/yr for a level of Fastcase that would have given them all the case access they needed? That's a horrible way to run a law firm. I'm a solo and I pay for Lexis; it is money well spent.
Oh, and the reason I have Lexis instead of Westlaw is that the Westlaw rep didn't return my calls or emails for four weeks. I suspect Westlaw doesn't want to work with solos.