this post was submitted on 25 Jun 2023
206 points (98.1% liked)


The judge scolded the lawyers for doubling down on their fake citations.

top 28 comments
[–] DeriHunter 44 points 1 year ago* (last edited 1 year ago) (1 children)

So, they used ChatGPT to do their work, didn't validate it, used made-up cases to support theirs, and when they got caught, they lied, and they got only a $5K fine? Wtf?

[–] Rooki 11 points 1 year ago (1 children)

Depending on the case, they should get jailed for that.

[–] Gradually_Adjusting 8 points 1 year ago (1 children)

It should be several thousand per false citation, and disbarment for any repeat offense.

[–] Rooki 4 points 1 year ago (2 children)

But what about, for example, a case where the other party could have ended up in jail because of the false citations? I would not accept "yeah, you could have sent someone to jail with these fake citations" being answered with just a pat on the hand and some cash.

[–] Gradually_Adjusting 2 points 1 year ago

Maybe you're right. If we're trying to be thorough, I'd probably go as far as offering an incentive for firms to have a dedicated paralegal as an "AI reviewer".

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

In this case, the worst that could have happened (if no one had checked whether the cited cases were real and relevant, AND the judge had somehow decided that New York law stood above federal law and international treaties) would have been an aviation company paying someone compensation for allegedly hurting a passenger and causing a bad knee.

[–] ulu_mulu 31 points 1 year ago (1 children)

which I falsely assumed was, like, a super search engine

A "super search engine" is still a search engine. If you're incapable of validating the results, or if you don't know that you should, you shouldn't be a lawyer at all.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

I mean, the dude apparently didn't even know how to read a case reference and guessed that F.3d stood for Federal District 3 (spoiler: it's the Federal Reporter, Third Series). Not that I would have guessed any differently, but I never had any kind of legal education, let alone being a lawyer, who supposedly deals with case references all the time.

[–] Candelestine 23 points 1 year ago (1 children)

The LegalEagle breakdown was thoroughly entertaining.

[–] [email protected] 17 points 1 year ago (1 children)

Link for those curious. Agreed, the breakdown does far more 'justice' to this story.

[–] [email protected] 2 points 1 year ago

The best bit about the LegalEagle breakdown is the revelation that ChatGPT itself told them it shouldn't be used this way multiple times.

[–] itsnotlupus 22 points 1 year ago

Court documents are at https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/

The transcript of the hearing where the judge grilled the lawyers won't be available to the public for another 2 weeks.

I feel like the lawyers are getting off really easy, considering.

They just have to pay $5k each and notify their client and every judge they "cited" in their made-up cases that they did an oopsy.

Oh and they lost the case, but it seems like that was foreshadowed long before the lawyers decided that ChatGPT was a court docket search engine.

[–] DannyMac 13 points 1 year ago (1 children)

Everyone wonders why ChatGPT is highly censored; this is a good example of why. However, maybe instead of "As an AI language model" it should say something like, "Large language models like me tend to hallucinate, making things up and conveying them confidently in my responses. I will leave it up to you to validate what I say." The ultimate problem is that the general public treats LLMs like super sci-fi AI, when they are basically fantastic autocomplete.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Super autocomplete doesn't sound as appealing as "AI". The general public needs to know that, though, so they can adjust their expectations. For example, it doesn't make sense to expect an autocomplete system to solve complex math problems (something people unexpectedly use ChatGPT for).
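The "fantastic autocomplete" framing above can be sketched with a toy bigram model: it continues text by sampling words that followed the current word in its training data, with no notion of truth, only of plausible continuation. The miniature corpus and function names below are invented for illustration; real LLMs are vastly larger but share the next-token-prediction core.

```python
import random
from collections import defaultdict

# Train a tiny bigram "autocomplete": for each word, record which
# words followed it in the training text, then sample from those.
corpus = (
    "the court cited the case and the court dismissed the case "
    "with prejudice and the lawyers paid the fine"
).split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def complete(prompt_word, length=5, seed=0):
    """Continue from prompt_word by sampling observed successor words."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # word never seen mid-sentence: nothing to predict
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(complete("the"))
```

Every continuation it produces is locally plausible (each word really did follow the previous one somewhere), which is exactly why the output can read as fluent while being factually empty.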

[–] [email protected] 9 points 1 year ago (1 children)

Even if you thought it was just a search engine, it's hard to imagine citing a case without independently validating it first.

[–] [email protected] 9 points 1 year ago

Here's the thing, even if you had zero intention of actually reading a case, there are STILL next steps once you get a cite. There is an entire "skill" you're taught in law school called Shepardizing (based on an older set of books that helped with this task) where you have to see if your case has been treated as binding precedent, had distinctions drawn to limit its applicability, or was maybe even overturned. Back when I was learning, the online citators would put up handy-dandy green, yellow, and red icons next to a case, and even the laziest law student would at least make sure everything was green before moving on in a Shepardizing quiz without looking deeper. And even THAT was just for a 1-credit legal research class.

These guys were lazy, cheap (they used "Fast Case" initially when they thought they had a chance in state court; it's a third-rate database that you get for free from your state bar and is indeed often limited to state law), and stupid. They didn't even commit malpractice with due diligence. I can only assume that they were "playing out the string" and extracting money from their client until the Federal case was dismissed with prejudice, but they played stupid games and won stupid prizes.
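The green/yellow/red citator check described above can be sketched as a lookup with one crucial failure mode: an unknown citation. The case names and statuses below are invented for illustration; real citators (Shepard's, KeyCite) are commercial services with far richer treatment data.

```python
# Hypothetical citation-treatment table, mimicking the green/yellow/red
# icons a citator shows. All case names here are made up.
TREATMENT = {
    "Smith v. Jones, 123 F.3d 456": "green",   # followed as binding precedent
    "Doe v. Roe, 789 F.2d 12": "yellow",       # distinguished / limited
    "Old v. Case, 1 F. Supp. 1": "red",        # overturned
}

def shepardize(citation):
    """Return the treatment flag for a citation, or raise if it is
    unknown -- the failure mode in this story, where the cited cases
    did not exist at all."""
    try:
        return TREATMENT[citation]
    except KeyError:
        raise LookupError(f"no such case found: {citation!r}")

assert shepardize("Smith v. Jones, 123 F.3d 456") == "green"
```

Even this lazy-law-student version of the check would have caught the fabricated citations, since a nonexistent case can't return any flag at all.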

[–] [email protected] 7 points 1 year ago

Surprised it was only $5,000

[–] [email protected] 6 points 1 year ago

Paywall.

Here's the article on archive.is: https://archive.is/aQhso

[–] MargotRobbie 6 points 1 year ago

Herein lies the problem: ChatGPT is not a search engine. Instead, you can think of it as a compressed JPEG of the Internet (credit to Ted Chiang). It can produce things that LOOK right if you squint a bit, but you can never be sure you aren't just looking at compression artifacts.

The problem is that OpenAI is hyping ChatGPT up as something that it is not.

[–] kredditacc 6 points 1 year ago (1 children)

Hallucination is also why I don't use AI to write code for me. I either have to check for hallucinations and fix them, or accept wrong results.
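One cheap guard against the hallucination problem described above: before trusting code that calls `module.function`, check that the attribute actually exists. This is a minimal sketch; the fake API name in the example is invented to demonstrate a miss.

```python
import importlib

def api_exists(module_name, attr):
    """Return True if module_name can be imported and exposes attr.
    A quick sanity check for APIs suggested by an LLM."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

assert api_exists("json", "dumps")            # real API
assert not api_exists("json", "deserialize")  # plausible-sounding, but fake
```

It won't catch subtle logic bugs, but it filters out the most common class of hallucination: confidently calling a function that was never there.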

[–] Rooki 3 points 1 year ago (1 children)

I won't let ChatGPT write 100 lines of code, but GitHub Copilot is somewhat good occasionally.

[–] kredditacc 2 points 1 year ago

You still have to check whether it falls into the "somewhat good occasionally" category.

[–] [email protected] 5 points 1 year ago

Should be more

[–] paperlama 2 points 1 year ago (2 children)

I wonder if something like this might get overlooked one of these days.

[–] cedarmesa 4 points 1 year ago* (last edited 1 year ago)
[–] wmassingham 1 points 1 year ago (1 children)

What makes you think it isn't already?

[–] ech0 2 points 1 year ago

I mean I think the above article is a pretty good indicator...

[–] [email protected] 1 points 1 year ago

Again?!
They need to stop doing that.
