cross-posted from: https://lemmy.sdf.org/post/407454

There is huge excitement about ChatGPT and other large generative language models that produce fluent, human-like texts in English and other human languages. But these models have a serious drawback: their texts can be factually incorrect (hallucination) and can leave out key information (omission).

In our chapter for The Oxford Handbook of Lying, we look at hallucinations, omissions, and other aspects of “lying” in computer-generated texts. We conclude that these problems are probably inevitable.
