this post was submitted on 25 Dec 2023
1946 points (97.9% liked)

People Twitter

[–] [email protected] 139 points 1 year ago (9 children)

We really need to stop calling things "AI" like it's an algorithm. There's image recognition, collective intelligence, neural networks, path finding, and pattern recognition, sure, and they've all been called AI, but functionally they have almost nothing to do with each other.

For computer scientists this year has been a sonofabitch to communicate through.

[–] CeeBee 61 points 1 year ago* (last edited 1 year ago) (38 children)

But "AI" is the umbrella term for all of them. What you said is the equivalent of saying:

we really need to stop calling things "vehicles". There's cars, trucks, airplanes, submarines, and space shuttles and they've all been called vehicles, but functionally they have almost nothing to do with each other

All of the things you've mentioned are correctly referred to as AI, and since most people do not understand the nuances of neural networks vs hard-coded algorithms (and anything in between), AI is an acceptable term for something that demonstrates results that come about from a computer "thinking" and making ~~shaved~~ intelligent decisions.

Btw, just about every image recognition system out there is a neural network itself or has a neural network in the processing chain.

Edit: fixed an autocorrect typo
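
For illustration, a minimal sketch of what "image recognition with a neural network in the chain" typically looks like in practice - assuming a pretrained torchvision classifier (the model choice and file name are just placeholders, not any specific product):

```python
# A minimal sketch of the point above: a typical off-the-shelf image
# recognition pipeline is a pretrained neural network plus preprocessing.
# Assumes torch/torchvision are installed; model and image are illustrative.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("some_photo.jpg")  # hypothetical input image
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))

top = logits.argmax(dim=1).item()
print(weights.meta["categories"][top])  # predicted class label
```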

[–] [email protected] 34 points 1 year ago (2 children)

I think you're fighting a losing battle.

[–] Sterile_Technique 14 points 1 year ago (4 children)

You're right, but so is the previous poster. Actual AI doesn't exist yet, and when/if it does, it's going to confuse the hell out of people who won't get the hype over something we've supposedly had for years.

But calling things like machine learning algorithms "AI" definitely isn't going away... we'll probably just end up making a new term for it when it actually becomes a thing... "Digital Intelligence" or something. /shrug.

[–] [email protected] 10 points 1 year ago (1 children)

It isn't human-level, but you could argue it's still intelligence of a sort, just ersatz.

[–] [email protected] 3 points 1 year ago (1 children)

I dunno... I've heard that argument, but when something gives you >1000 answers, among which the correct answer might be buried somewhere, and a human is paid to dig through them and return something that looks vaguely presentable, is that really "intelligence" of any sort?

E.g. 1 + 1 = 13, which is a real result that AI can offer and almost certainly has offered recently.

People are right to be excited about the potential that generative AI offers in the future, but we are far from that atm. Also it is vulnerable to misinformation presented in the training data - though some say that that process might even affect humans too (I know, you are shocked, right? well, hopefully not that shocked:-P).

Oh wait, never mind, I take it all back: I forgot that Steven Huffman / Elon Musk / etc. exist, and if that is considered intelligence, then AI has definitely passed that level of Turing equivalence, so you're absolutely right, ersatz it is, apparently!?

[–] [email protected] 1 points 1 year ago (1 children)

What's the human digging through answers thing? I haven't heard anything about that.

[–] [email protected] 1 points 1 year ago (2 children)

ChatGPT was caught, and I think later admitted to, not actually using fully automated processes to determine those answers, iirc. Instead, a real human would curate the answers before they went out. That human might reject answers to a question like "Computer: what is 1+1?" ten times before finally accepting one of the given answers ("you're mother", hehe, with improper apostrophe intact:-P). So really, when you were asking for an "AI answer", what you were asking was another human on the other end of that conversation!!!

Then again, I think that was a feature of an earlier version of the program that might no longer be necessary? On the other hand, if they SAY that they aren't using human curation, but that is also what they said earlier before they admitted they had lied, do we really believe it? Watch any video of these tech bros and it's obvious in less than a minute - these people are slimy.

And to some extent it doesn't matter bc you can download some open source AI programs and run them yourself, but in general from what I understand, when people say things nowadays like "this was made from an AI", it seems like it is always a hand-picked item from among the set of answers returned. So like, "oooh" and "aaaahhhhh" and all that, that such a thing could come from AI, but it's not quite the same thing as simply asking a computer for an answer and it returning the correct answer right away! "1+1=?" giving the correct answer of 13 is MUCH less impressive when you find that out of a thousand attempts at asking, it was only returned a couple times. And the situation gets even worse(-r) when you find out that ChatGPT has been getting stupider(-est?) for awhile now - https://www.defenseone.com/technology/2023/07/ai-supposed-become-smarter-over-time-chatgpt-can-become-dumber/388826/.

[–] [email protected] 1 points 1 year ago (1 children)

So, reading through your post and the article, I think you're a bit confused about the "curated response" thing. I believe what they're referring to is the users' ability to flag an answer as a "good answer" or "bad answer", which would then later be used for retraining. This could also explain the AI's drop in quality, if enough people are upvoting bad answers or downvoting good ones.

The article also describes "commanders" reviewing output and the code team being responsive in changing the algorithm. Again, this isn't picking responses for the AI. Instead, it's reviewing responses it has given, deciding if they're good or bad, and making changes to the algorithm to get more accurate answers in the future.
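
In code terms, the rating-and-retraining loop described above might look roughly like this sketch (purely illustrative - the data structures and names are hypothetical, not OpenAI's actual pipeline):

```python
# A rough, purely illustrative sketch of the feedback loop described above:
# users flag responses as good or bad, and only well-rated exchanges survive
# as candidate data for a later fine-tuning pass. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class RatedExchange:
    prompt: str
    response: str
    rating: int  # +1 = flagged "good answer", -1 = flagged "bad answer"

feedback_log = [
    RatedExchange("what is 1+1?", "2", +1),
    RatedExchange("what is 1+1?", "13", -1),
    RatedExchange("what is 1+1?", "13", +1),  # a bad answer that got upvoted
]

# Keep only positively rated exchanges as data for the next retraining round.
retraining_set = [ex for ex in feedback_log if ex.rating > 0]

# If enough people upvote bad answers (or downvote good ones), junk like
# "1 + 1 = 13" ends up in the retraining set - one way quality could drop.
print([ex.response for ex in retraining_set])
```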

I have not heard anything like what you're describing, with real people generating the responses in real time for GPT users. I'm open to being wrong, though, if you have another article.

[–] [email protected] 1 points 1 year ago

I might be guilty of misinformation here - perhaps it was a forerunner to ChatGPT, or even a different (competing) chatbot entirely, where they would read an answer from the machine before deciding whether to send it on to the end user, and the novelty of ChatGPT was in throwing off the shackles present in that older incarnation? I do recall a story along the lines of what I mentioned, but I cannot find it now, so that lends some credence to the thought. In any case it would have been multiple generations behind the modern ones, so you are correct that it is not so relevant anymore.

[–] [email protected] 1 points 1 year ago (1 children)

There's no way that's the case now; the answers are generated way too quickly for a human to formulate them. I can certainly believe it did happen at one point.

[–] [email protected] 4 points 1 year ago

This problem was kinda solved by adding the term AGI, meaning "AI, but not what is currently called AI - what we imagined AI to be".

Not going to say that this helps with the confusion much 😅 and to be fair, stuff like autocomplete in office software was called AI a long time ago, but it was far from the LLMs of now.

[–] [email protected] 21 points 1 year ago

AI = "magic", or like "synergy" and other buzzwords that will soon become bereft of all meaning as a result of people abusing it.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (1 children)

Computer vision is AI. If they literally want a robot eye to scan their cluttered pantry and figure out what is there, that'll require some hefty neural net.

Edit: seeing these downvotes, I'm surprised at the tech illiteracy on Lemmy. I thought this was a better-informed community. Look at the computer vision papers in CVPR, IJCNN, and AAAI and try to tell me that being able to understand the 3D world isn't AI.
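
As a concrete version of the pantry example, the "figure out what is there" step would typically be a pretrained convolutional object detector - a minimal sketch, assuming torchvision is installed (the model choice and image path are just placeholders):

```python
# A minimal sketch of the pantry example: point a pretrained convolutional
# object detector at a photo and list what it finds. Assumes torch/torchvision
# are installed; the image path is hypothetical.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

img = read_image("cluttered_pantry.jpg")  # hypothetical input photo
batch = [weights.transforms()(img)]

with torch.no_grad():
    detections = model(batch)[0]

# Print each detected object category with its confidence score.
for label, score in zip(detections["labels"], detections["scores"]):
    print(weights.meta["categories"][int(label)], round(float(score), 2))
```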

[–] [email protected] 4 points 1 year ago (1 children)

You're very wrong.

Computer vision is scanning the differentials of an image and determining the statistical likelihood of two three-dimensional objects being the same base mesh from a different angle, then making a boolean decision on it. It requires a database, not a neural net, though sometimes they are used.

A neural net is a tool used to compare an input sequence to previously reinforced sequences and determine a likely ideal output sequence based on its training. It can be applied, carefully, to computer vision. It usually actually isn't to any significant extent; we were identifying faces from camera footage back in the 90s with no such element in sight. Computer vision is about differential geometry.
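
For reference, the kind of non-neural, hand-engineered pipeline this comment describes looks roughly like classical keypoint matching - a minimal OpenCV sketch (the image paths and match-count threshold are invented for illustration):

```python
# Classical, non-neural computer vision in the sense described above:
# hand-crafted keypoint features matched between two views, no neural net.
# Assumes opencv-python is installed; paths and threshold are made up.
import cv2

img1 = cv2.imread("object_view_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("object_view_b.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and compute binary descriptors for each image.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming-distance matching between the two descriptor sets.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# A crude boolean "same object from another angle?" decision on match count.
print("likely the same object" if len(matches) > 50 else "probably different")
```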

[–] danielbln 5 points 1 year ago (2 children)

Computer vision deals with how computers can gain high-level understanding of images and videos. It involves much more than just object reconstruction. More importantly, neural networks have been a core component of just about every computer vision application since deep learning took off in the 2010s. Most computer vision is powered by some convolutional neural network or another.

Your comment contains several misconceptions and overlooks the critical role of neural networks, particularly CNNs, which are fundamental to most contemporary computer vision applications.

[–] [email protected] 3 points 1 year ago

Thanks, you saved me the trouble of writing out a rant. I wonder if the other guy is actually a computer scientist or just a programmer who got a CS degree. Imagine attending a CV track at AAAI, or the whole of CVPR, and then saying CV isn't a subfield of AI.

[–] CobblerScholar 4 points 1 year ago

There are whole countries that refer to the entire internet itself as Facebook; once something takes root, it ain't going anywhere.

[–] schmidtster 1 points 1 year ago (2 children)

Shouldn’t there be a catch-all term to cover the broader scope of the specifics?

Science is a broad term for multiple different fields of study; vehicle is a broad term for cars and trucks.

[–] [email protected] 4 points 1 year ago (2 children)
[–] [email protected] 3 points 1 year ago (1 children)

Glorified chatbots, tops. But definitely not something with any kind of intelligence.

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago)

Yesterday I prompted GPT-4 to convert a PowerShell script to Haskell. It did it in one shot. This happens more and more frequently for me.

I don't want to oversell LLMs, but you are definitely underselling them.

[–] schmidtster 2 points 1 year ago (1 children)

Is that not a type of AI already?

[–] [email protected] 2 points 1 year ago (1 children)

Well, there's an argument over not calling machine learning AI in this very thread, so… ¯\_(ツ)_/¯

[–] schmidtster 3 points 1 year ago

So why suggest it as the catch-all term for AI when it’s only one portion of the argument itself? Such a strange suggestion.

[–] MotoAsh 2 points 1 year ago (1 children)
[–] schmidtster 2 points 1 year ago (2 children)

So people think of programming instead?

[–] danielbln 1 points 1 year ago (1 children)

Language is fluid, and there is plenty of terminology that is dumb or imprecise to someone in the field but A-OK to the wider populace. The "cloud" is not actually a formation of water droplets but someone else's datacenter, yet to some people the cloud is everything from Gmail to AWS.

If I say AI today and most people associate the same thing with it (these days that usually means generative AI, i.e. mostly diffusion or transformer models), then that's fine by me. Call it Plumbus for all I care.
