this post was submitted on 05 Jul 2023
24 points (92.9% liked)

Technology
The top quartile of funds selected by the AI model generated 2.1x the original investment versus an industry average of 1.85x.

[–] Olap 2 points 1 year ago (2 children)

For now. Companies are quickly going to have their prospectuses, quarterly reports, and annual statements sanitised after this.

And that's what AI is going to really do. Make the world more grey. Lemmy instance owners: ban LLMs now!

[–] [email protected] 2 points 1 year ago

I think there are other risks as well.

Exploiting AI fragility is a serious concern. A given "AI" today is vastly simpler than a human. It may operate well under certain assumptions, such as that statements are written to influence human investors rather than AI models. What happens if I craft information and inject it specifically to swing AI-driven investments?

I remember going through an article a while back on AI for serious applications, like military use, and one point was that unless you are rigorously careful about where you pull your training data from -- and a lot of people training models are not -- an adversary may be able to attack an AI-driven system by poisoning its training data.
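To make the poisoning idea concrete, here is a deliberately tiny, hypothetical sketch: a toy keyword-counting sentiment scorer of the kind an AI-driven trading signal might use on headlines. Everything here (the data, the labels, the `train`/`score` functions) is made up for illustration; real models are far more complex, but the failure mode is the same: if an attacker can flood the training scrape with junk pairing a negative word with a positive label, the model's read of genuine news flips.

```python
# Toy illustration (hypothetical): poisoning the training data of a
# trivial word-count sentiment scorer. All names and data are invented.
from collections import Counter

def train(headlines):
    """Count word occurrences in 'bullish' vs 'bearish' labeled headlines."""
    counts = {"bullish": Counter(), "bearish": Counter()}
    for label, text in headlines:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Positive score = model reads the headline as bullish."""
    return sum(counts["bullish"][w] - counts["bearish"][w]
               for w in text.lower().split())

clean = [
    ("bullish", "record profits beat expectations"),
    ("bearish", "ceo resigns amid fraud probe"),
]

# Attacker floods the scraped training set with junk that pairs a
# normally negative word ("fraud") with the bullish label.
poison = [("bullish", "fraud fraud fraud")] * 5

target = "regulator opens fraud investigation"

honest = train(clean)
poisoned = train(clean + poison)

print(score(honest, target))    # negative: correctly read as bad news
print(score(poisoned, target))  # positive: the attacker flipped the signal
```

The point of the sketch is that the attacker never touches the model itself, only the data it ingests, which is exactly why provenance of training data matters.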