

ChatGPT betrays the lawyers
What you can do to protect against the risks of being an early adopter of generative AI
My original plan for this week was to write about something completely different. I had a whole draft written up, but then this crazy lawyer-uses-ChatGPT-improperly “scandal” popped off late Friday night, and I just HAD to share my thoughts about it. For those of you who subscribed to this newsletter to read about my thoughts on supply & demand in the legal industry—worry not! It’s on deck for next weekend.
So what’s this scandal all about anyways? Well, if you were on Twitter or LinkedIn this weekend, you probably read about it. (If you haven’t, all you need to do is take a look at this tweet.) The story even made its way to the New York Times, in an article called Here’s What Happens When Your Lawyer Uses ChatGPT.
From the NYT:

The lawsuit began like so many others: A man named Roberto Mata sued the airline Avianca, saying he was injured when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York.
When Avianca asked a Manhattan federal judge to toss out the case, Mr. Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions…
…There was just one hitch: No one — not the airline’s lawyers, not even the judge himself — could find the decisions or the quotations cited and summarized in the brief.
That was because ChatGPT had invented everything.
In the 24 hours that followed, I got to work doing what I do best—creating a whole bunch of jokes and memes about it on Twitter.
After the dust settled, I started to process the whole thing. I mean, this is a pretty crazy story because I figured every lawyer who has ever even heard of ChatGPT understands that the chatbot sometimes makes up cases. There’s even a cute name for it—“hallucinations.” In fact, I was recently quoted in a Law360 article titled Fears Aside, Generative AI May Benefit Small Firms Most about this very topic:
Su said that lawyers, especially small-firm lawyers who are staking all of their reputation and authority on the advice they give out to clients, are generally pretty cautious about using ChatGPT for any kind of real legal work. They know to double check everything.
But I was wrong. Apparently, some lawyers had no idea that this free chatbot could be unreliable, and used it without reviewing its work.
This is why technological competence matters so much in the law. If you don’t know that ChatGPT could make up cases, you might not know other, less obvious but much scarier things about it—like the fact that anything you share with ChatGPT will be used for training purposes unless you remember to opt out. This isn’t just a cautionary tale about reviewing the work AI does for you. It’s also a cautionary tale about understanding the tools and technologies that you use for your practice.
To me, the most damaging part of this story is that the one thing—maybe the only thing—most lawyers will take away from this cautionary tale is that they should never trust AI.
On that note, I want to emphatically share a few reminders with everyone:

1. ChatGPT is not synonymous with AI, and must be used carefully
It is *not* the same as other AI-powered legal tech tools, especially those from companies with a history of making legal customers successful. That’s why I keep referring to ChatGPT as a free chatbot.
Calling it that makes it more obvious to people what it really is—a turbocharged, 2023 version of SmarterChild. Calling the chatbot by its formal name, ChatGPT, makes it sound too legit. Anyways. Lawyers know this better than most—there’s a reason why some things are more expensive than others. You get what you pay for. So if a chatbot is free to use, it’s probably not a great idea to rely on it completely to do your work.

2. However, ChatGPT’s LLM can be harnessed by others to produce more reliable AI
Third-party vendors have very strong incentives to use large language models (LLMs) to make their AI products far more accurate for certain use cases. That’s why so many of them announced integrations with GPT-4, the most advanced of the LLMs behind ChatGPT. If a vendor messes up, e.g., hallucinates, the consequences are far-reaching. The damage to the vendor’s brand will be enormous. Now that doesn’t mean that their generative AI products will be 100% reliable, of course. But vendors will be incentivized to warn users and speak candidly about their accuracy rates, which should far exceed ChatGPT’s—at least for law-related use cases.
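To make that concrete, here is a minimal sketch of one common way vendors constrain an LLM: retrieval-augmented generation, where the model is instructed to answer only from vetted documents that were actually retrieved. Everything in the sketch (the corpus, the scoring, the prompt) is an illustrative stand-in, not any specific vendor's implementation.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# All names and data here are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

# In a real legal product this would be a vetted database of actual
# cases and statutes, not an in-memory list of placeholders.
CORPUS = [
    Document("doc-1", "Placeholder text for a vetted source document."),
    Document("doc-2", "Placeholder text for another vetted source document."),
]

def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Naive keyword-overlap scoring; real systems use search indexes or embeddings."""
    def score(doc: Document) -> int:
        return len(set(query.lower().split()) & set(doc.text.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(query: str, docs: list[Document]) -> str:
    """Force the model to ground its answer in the retrieved sources."""
    sources = "\n".join(f"[{d.doc_id}] {d.text}" for d in docs)
    return (
        "Answer ONLY from the sources below, and cite a source id for every claim. "
        "If the sources do not answer the question, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

query = "What do the source documents say?"
print(build_prompt(query, retrieve(query, CORPUS)))
```

The point is the prompt, not the plumbing: instead of asking a bare chatbot an open-ended question, the vendor's system hands the model real source material and tells it to cite or stay silent. That discipline is where the accuracy gap between a free chatbot and a purpose-built product comes from.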
3. Always have someone review the AI’s work, especially if it’s ChatGPT
Look, AI has been around for a while, and its value is always the same: It’s incredible as a first pass or first draft tool. That’s where the efficiency gains come from.
But you absolutely have to have someone review its work to make sure there aren’t any mistakes. Because these tools always make mistakes. That’s something many people seem to have forgotten because of how good ChatGPT is.

LLMs are advancing at such a rapid pace, and ChatGPT is getting so much mainstream attention (rightfully so, in my opinion) for its capabilities—that people are wildly overestimating what it can do. Just look at the lawyers in that case. They thought ChatGPT could do all of their work for them! No wonder the media keeps believing that AI can replace lawyers.
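What does "review the work" look like in practice? For the exact failure in this story, a first pass could be as simple as checking that every citation in a draft actually exists before a human reads further. The sketch below is a toy: the citation pattern is simplified, and the "known citations" set is a hypothetical stand-in for a real case-law database query.

```python
# Toy first-pass citation check. The regex is simplified and the
# KNOWN_CITATIONS set is a hypothetical stand-in for querying a real
# case-law database; the entries below are placeholders, not real cases.

import re

KNOWN_CITATIONS = {"123 U.S. 456"}  # placeholder entries only

CITATION_PATTERN = re.compile(r"\d{1,4} (?:U\.S\.|S\. Ct\.|F\.[23]d) \d{1,4}")

def flag_unverified_citations(draft: str) -> list[str]:
    """Return every citation in the draft that could not be verified."""
    return [c for c in CITATION_PATTERN.findall(draft)
            if c not in KNOWN_CITATIONS]

draft = "As held in 123 U.S. 456 and 789 F.3d 101, the motion should be denied."
print(flag_unverified_citations(draft))  # ['789 F.3d 101'] -> send to a human
```

A real version of this check, pointed at an actual case-law database, would have flagged the made-up cases in the Avianca brief. But note the direction of the workflow: the script flags, a human verifies. Automation narrows the review; it doesn't replace it.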
In this rapidly changing environment, what can we do? Well, as lawyers, I believe we should adopt a strong learning mindset. Don’t trust the technology completely, but don’t sit on the sidelines either. We should investigate the risks involved and proceed with caution. This is true of any kind of legal tech, by the way, not just AI. Use it on a small set of data before relying on it for your most important work. You can even take advantage of free trials or sandboxes.
Zooming out, I believe we’re still very early in the hype cycle. As we progress, more risks will emerge. From the perspective of the vendors, capturing attention isn’t hard these days—just say the words “generative AI” and you’ll get media coverage. I bet that, at this rate, everyone will get super excited, lots of legal professionals will start using free AI technology incorrectly, and they’ll inevitably get disappointed when it messes up.
That’s when they’ll start pointing fingers at all of the tech providers.
So if there’s one thing I want to say to the lawyers in the midst of all this AI drama and excitement it’s this: Treat everything as a learning exercise. Don’t get caught up in the hype. Don’t be afraid to try out the technology and incorporate it into your practice—but always, always make sure to review its work. Especially if the AI is free or cheap.
Best of luck, my friends.
This is a free weekly email for Off The Record subscribers. If you’re interested in learning more about sales, marketing, and business development, consider upgrading to a paid subscription, which gives you full access to premium sales content. And if you’re here to read about supply & demand dynamics in the legal industry—stay tuned for more on that next weekend!
Latest News
Legal AI startup Casetext is rumored to be in acquisition talks with some unknown BigCo, possibly Thomson Reuters (which happens to own Casetext’s biggest competitor, Westlaw). In the past few weeks, it seems like established legal tech providers are starting to jump into the generative AI fray.
Magic Circle firm Allen & Overy announced a merger with U.S.-based Shearman & Sterling, creating an even more massive firm. How does this benefit clients? I’m not sure. I guess access to more lawyers who use Harvey AI?
The trend of Biglaw making everyone return to the office continues, with Skadden announcing a 4-days-a-week-in-the-office policy. I’ve heard hints of this beyond Biglaw too—for example, one legal tech company I’m familiar with is requiring 3 days a week in the office.
What do you think?
If you enjoyed the article, please forward it on to someone who might find it interesting or helpful. You can also show your support for this newsletter by posting this article on social media! If you’re new around here, and don’t know what my newsletter is all about, check out Welcome New Readers.
Another article from The Verge on the scandal for those who, like me, are too stubborn to pay for a subscription to the New York Times.
For example, when hyped legal startup Atrium tried to launch a new tech-enabled business model but ended up failing, lawyers and legal tech commentators reacted with glee, sharing post-mortems that essentially boiled down to “I told you so.” I am fairly certain it scared off many lawyers who wanted to build something similar. Any entrepreneur who wants to start a legal startup these days has to overcome the “what makes you different from Atrium?” objection—even if their proposed business has nothing in common with it.
I guess there’s also a $20/month version of ChatGPT you can use that’s more advanced. But my point still stands.
It’s worth noting that the standard here is not that the vendor, with AI or not, should produce flawless work. No one can promise you that. What they can promise is that if their technology makes up cases or laws out of thin air—they are accountable for those results. The vendor will lose business and take a hit on their financials. That’s ultimately why, at this point in time, you want to leverage AI from a trusted brand that’s been around for a while. They have far more to lose by bullshitting you than a new startup does.
If having someone check over the AI’s work would destroy most or all of the efficiency gains, I would reconsider whether that use case is the right one for AI. At least for the time being.
A related point: AI can still be super valuable even if it makes mistakes. Back when I was in enterprise sales, I had a buyer who wanted to run a human vs. AI test for a pretty unusual use case. The results showed that while the AI made quite a few mistakes, it also flagged items that the human completely missed. We decided that combining both—humans and AI—would ensure the highest possible accuracy rate. Tech doesn’t need to be infallible to create huge value.
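To put rough numbers on that intuition (the rates below are made up purely for illustration): if the human's misses and the AI's misses are roughly independent, the combined process only misses what both miss.

```python
# Illustrative numbers only: combined coverage when human and AI
# reviews miss items roughly independently of each other.
human_recall = 0.90  # hypothetical: human catches 90% of issues
ai_recall = 0.80     # hypothetical: AI catches 80% of issues

combined_miss = (1 - human_recall) * (1 - ai_recall)  # an issue slips past both
print(f"Combined recall: {1 - combined_miss:.0%}")    # -> Combined recall: 98%
```

Even a mediocre AI pass layered on a good human pass can meaningfully shrink the miss rate, which is exactly what that buyer's test showed.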
As an observer of the space for over a decade, I will say that there are some types of use cases that are more “ripe” for generative AI than others. For example, I have long believed that contract data is more structured than other types of legal data (e.g., e-discovery data, dockets). That’s why we’re seeing an explosion of startups addressing contracting problems.
These days, even legal content providers are launching their own contract technology (e.g., CLM) products. This rush into the CLM space (see my commentary about CLOC from last week) isn’t irrational from the vendor side, since there’s so much money to be made in this market. But as a buyer or user, you have to be cautious about partnering with one of these new kids on the block.
Relatedly, that’s why Ironclad invests so much in our community function—we *want* customers to meet prospects to discuss our product. This is not true of all CLM vendors. Some of them would be horrified at the prospect of customers talking to each other.