
Some interesting quotes:

  1. LLMs do both of the things that their promoters and detractors say they do.
  2. They do both of these at the same time on the same prompt.
  3. It is very difficult from the outside to tell which they are doing.
  4. Both of them are useful.

When a search engine is able to do this, it is able to compensate for a limited index size with intelligence. By making reasonable inferences about what page text is likely to satisfy what query text, it can satisfy more intents with fewer documents.
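To make that concrete, here is a minimal sketch of inference-based retrieval, assuming the sentence-transformers library and its pretrained all-MiniLM-L6-v2 model (both are illustrative choices, not anything from the quoted article). A semantic index can rank a page highly for a query that shares no keywords with it, which is how a small index can cover many intents:

```python
# Minimal sketch: rank pages by semantic similarity to the query
# rather than by exact keyword overlap. Library and model choice are
# assumptions for illustration, not from the quoted article.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# A tiny "index" of page texts.
docs = [
    "How to replace a flat bicycle tire at home",
    "Quick weeknight pasta recipes for busy cooks",
    "Troubleshooting a laptop that will not power on",
]
doc_embeddings = model.encode(docs, convert_to_tensor=True)

# The query shares essentially no keywords with the relevant page.
query = "my computer won't turn on"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity stands in for the "reasonable inference" about
# which page text is likely to satisfy which query text.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
print(docs[int(scores.argmax())])  # expected: the laptop troubleshooting page
```

A pure keyword index would miss this match entirely; the embedding model's generalization is what substitutes for a larger index.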

LLMs are not like this. The reasoning they do is inscrutable and massive. They do not explain their reasoning in a way we can trust to be their actual reasoning, rather than merely a textual description of what such reasoning might hypothetically be.

There is a discussion on Hacker News, but feel free to comment here as well.

