Ask Lemmy


Did they determine this by comparing what DNA fragments they've managed to recover, or by physical skeletal structure similarities, or what?

I'm no expert in the field, but I just don't see it.

[–] [email protected] 8 points 5 months ago* (last edited 5 months ago) (2 children)

This exact scenario scares me, because what we know about current LLMs is not that they are good discoverers of things, but that they are very convincing liars.

[–] [email protected] 3 points 5 months ago (1 children)

That's because most of what we hear about "AI" revolves around content "creation" controversies, but these models are also successfully used to analyze wide data sets for scientific purposes, like finding new protein foldings, diagnosing cancer, and reading ancient burned scrolls via X-rays, etc.

[–] [email protected] 2 points 5 months ago

And all of those things are then analyzed and verified before anything is done with them. No reputable scientist is taking those results and dumping them straight into a paper; the deep-learning engines are pointing scientists in the right direction; they're taking the haystack and making it a handful. Protein folding is a little different because the results can be directly verified programmatically, as sketched below (I think; I'm not an organic chemist, or a biologist, or whoever is doing this research).
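
(Aside: "verified programmatically" for protein structures usually means scoring a predicted model against an experimentally solved one, e.g. with RMSD over matched atom coordinates. A toy sketch with made-up coordinates, not any lab's actual pipeline:)

```python
# Toy sketch: score a predicted structure against an experimental one
# using RMSD (root-mean-square deviation) over matched atom coordinates.
# The coordinates below are invented for illustration; real pipelines
# parse PDB/mmCIF files and superpose the structures first.
import numpy as np

def rmsd(predicted: np.ndarray, experimental: np.ndarray) -> float:
    """RMSD between two (N, 3) arrays of atom coordinates."""
    diff = predicted - experimental
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

pred = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.2, 0.1]])
expt = np.array([[0.1, 0.0, 0.0], [1.4, 0.1, 0.0], [3.1, 0.0, 0.0]])
print(f"RMSD: {rmsd(pred, expt):.3f}")  # small value => close match
```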

The output of LLMs can be great outlines. They can also be wildly, and confidently, wrong.

[–] [email protected] 1 points 5 months ago (2 children)

The tech is in its infancy. Don't discount what it's capable of based on its current iteration. Science is a progression.

[–] over_clox 4 points 5 months ago* (last edited 5 months ago)

Yeah, about that...

Different AI models are developing in different ways. Some are learning from legit, curated sources from reputable scholars and professors and such.

But other AI models are learning from less than reputable sites, such as Reddit...

Google is learning from Reddit. This tech journey is gonna be fun...

[–] [email protected] 3 points 5 months ago (1 children)

Oh, believe me, I don't. At all. I've been working in the software engineering sector since the mid-'90s; I'm quite aware of the rapid pace of change in this sector. I was briefly considering a focus on AI when getting my degree, back in the early '90s.

But this specifically mentions LLMs, and the fundamental way LLMs function is not going to lead to self-aware AI, or any sort of system that can self-evaluate for accuracy or "truthiness." It's going to take an advance in neural-net science, maybe in combination with LLMs - but LLMs by themselves will only ever be dumb machines that generate predictive text based on - I don't know, Bayesian probabilities, or whatever.
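
(To make "dumb predictive text" concrete: a language model only ever samples the next token from a probability distribution over its vocabulary; nothing in that loop checks whether the output is true. A toy sketch, with an invented lookup table standing in for the neural network:)

```python
# Toy sketch of autoregressive next-token sampling: an LLM repeatedly
# samples the next token from a probability distribution conditioned on
# the tokens so far. The probabilities below are made up for illustration;
# a real model computes them with a neural network over a huge vocabulary.
import random

toy_model = {
    ("the",): {"cat": 0.5, "dog": 0.4, "moon": 0.1},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "dog"): {"ran": 0.6, "sat": 0.4},
}

def next_token(context: tuple) -> str:
    """Sample one token from the model's distribution for this context."""
    dist = toy_model.get(context, {"<eos>": 1.0})
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs, k=1)[0]

tokens = ("the",)
while tokens[-1] != "<eos>" and len(tokens) < 5:
    tokens = tokens + (next_token(tokens),)
print(" ".join(tokens))  # e.g. "the cat sat <eos>" -- fluent, never fact-checked
```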

[–] [email protected] 4 points 5 months ago* (last edited 5 months ago)

Ha, I never meant full-on general AI or the singularity. I just meant a visual model good enough to classify what it sees in a very specific context. I never mentioned or meant to refer to 'AI'.