[–] [email protected] 1 points 2 days ago* (last edited 2 days ago) (1 children)

> you could enter a question and then it tells you which part of the source text statistically correlates the most with the words you typed, instead of trying to generate new text. That way, in a worst-case scenario, it just points you to a part of the source text that's irrelevant instead of giving you answers that are subtly wrong or misleading.

Isn’t this what the best search engines were doing before the AI summaries?

The main problem now is the proliferation of AI “sources” that are really just keyword-stuffed junk websites taking over the first page of search results. And that’s apparently a difficult, or unprofitable, problem for the search algorithms to solve.

[–] [email protected] 1 points 2 days ago* (last edited 2 days ago)

That's what Google was trying to do, yeah, but IMO they weren't doing a very good job of it. Really old Google search was good if you knew how to structure your queries, but then they tried to make it so you could ask plain-English questions instead of having to think about what keywords you were using, and that ruined it. And you also weren't able to run it against your own documents.

LLMs, on the other hand, are so good at statistical correlation that they're able to pass the Turing test. They know what words mean in context (inasmuch as they "know" anything) instead of just matching keywords and a short list of synonyms. So there's reason to believe that if you could see which parts of the source text the LLM considers most similar to a query, the results could be pretty good.
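
Something like this, as a rough sketch (the embedding model, the paragraph chunking, and the file name are all illustrative assumptions, not anything specific):

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Sketch of the idea: return the most similar chunk of the source text
# instead of generating new text. Model name and paragraph-based chunking
# are assumptions for illustration.
model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

def best_passage(query: str, passages: list[str]) -> str:
    # Embed the query and every passage, then pick the passage whose
    # embedding has the highest cosine similarity to the query's.
    embeddings = model.encode(passages, normalize_embeddings=True)
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = embeddings @ q  # cosine similarity (vectors are unit length)
    return passages[int(np.argmax(scores))]

# Split a document into paragraph chunks and find the best match.
passages = [p for p in open("notes.txt").read().split("\n\n") if p.strip()]
print(best_passage("how do I renew my ssl cert?", passages))
```

Worst case it hands you an irrelevant paragraph, exactly as the quote above describes, rather than a confident-sounding fabrication.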

There is also the possibility of running one locally to search your own notes and documents. But like I said, I'm not sure I want to max out my GPU just to do a document search.
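
For what it's worth, small embedding models like the one above run fine on a CPU, and you can embed your notes once and cache the result so each search is just a cheap vector comparison. A sketch of that idea (the cache path, chunking, and model name are again assumptions):

```python
import os
import numpy as np
from sentence_transformers import SentenceTransformer

# Sketch: embed the notes once, cache to disk, and reuse for every search.
# Note: the cache goes stale if notes.txt changes; handling that is omitted.
CACHE = "note_embeddings.npy"
model = SentenceTransformer("all-MiniLM-L6-v2")

passages = [p for p in open("notes.txt").read().split("\n\n") if p.strip()]

if os.path.exists(CACHE):
    embeddings = np.load(CACHE)  # reuse the cached embeddings
else:
    embeddings = model.encode(passages, normalize_embeddings=True)
    np.save(CACHE, embeddings)   # embed once, search many times

def search(query: str, top_k: int = 3) -> list[str]:
    # Only the query gets embedded at search time; the rest is a dot product.
    q = model.encode([query], normalize_embeddings=True)[0]
    best = np.argsort(embeddings @ q)[::-1][:top_k]
    return [passages[i] for i in best]

for hit in search("backup strategy"):
    print(hit, "\n---")
```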