this post was submitted on 11 Feb 2024
15 points (80.0% liked)

Futurology

[–] [email protected] 4 points 4 months ago (2 children)

Is there no risk of the LLM hallucinating cases or laws that don't exist?

[–] RedditWanderer 6 points 4 months ago

How to use ChatGPT to ruin your legal career.

AI does help with discovery, so firms no longer need to spend eight days scanning emails before trial, but they'll still need lawyers and junior lawyers.
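
For what it's worth, the "scanning emails" part is essentially bulk relevance classification. Here's a minimal sketch of what that triage could look like with the OpenAI Python client (openai>=1.0); the model name, prompt, and the `triage_email` helper are just placeholders for illustration, and anything it flags would still need a lawyer's review:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def triage_email(email_text: str, issue: str) -> str:
    """Ask the model whether an email looks relevant to a discovery issue."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        temperature=0,  # keep the classification as deterministic as possible
        messages=[
            {
                "role": "system",
                "content": (
                    "You assist with litigation discovery. Reply with RELEVANT or "
                    "NOT RELEVANT, then one sentence explaining why."
                ),
            },
            {
                "role": "user",
                "content": f"Discovery issue: {issue}\n\nEmail:\n{email_text}",
            },
        ],
    )
    return response.choices[0].message.content


# The model's answer is a first-pass filter for human review, not a verdict.
for email in ["Re: Q3 shipment delays piling up at the port...", "Lunch on Friday?"]:
    print(triage_email(email, "knowledge of shipment delays in Q3 2023"))
```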

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago)

GPT-4 is dramatically less likely to hallucinate than GPT-3.5, and we're still early on the exponential growth curve.

Is there a risk? Yes. But humans make things up too, if you think about it, and all AI has to do is be better than humans, a milestone that's already within sight.