5 Comments
Outdoorluvr:

Recently dealt with a large law firm that uses AI to find previous court cases to cite in their responses to various arguments. More than once, an actual reading of a case they cited proved that their use of AI had led them to the wrong conclusion: the decision was not based on the reasoning they claimed in their petition and was easily refuted by defense attorneys. Rather embarrassing for them.

Alex Su:

Out of curiosity, was this a hallucination situation? Or did the human lawyers fail to understand the opinion? I ask because if it's an understanding issue, this phenomenon isn't new. It's why senior lawyers train juniors to avoid relying on West headnotes or summaries; even human-generated interpretations are fallible.

Outdoorluvr:

I'm not really sure, Alex. They were senior associates in the firm. It was as if they had read the "CliffsNotes" version of the decision and formulated their arguments based on the exact opposite of its reasoning. If a layman like me could see how wrong they were (based on logic alone), I'd have to guess they didn't read much, if anything, of the actual cases they cited. They were clearly losing their client's case, though, and pretty much throwing spaghetti at the wall to see if anything would stick.

Nate Kostelnik:

"Moral crumple zone" is an interesting concept, and it's magnified 100x in legal because of professional and ethical responsibilities.

I don't know who said it, but someone argued that every firm and legal tech company needs an AI Futurist. That seems like a helpful way of thinking about mitigating the crumple zone.

Alex Su:

I'm not sure what the best solution is. But I do think it's important to point out why a robot that churns out memos can't simply replace the value a lawyer brings to the table.