this post was submitted on 17 Sep 2023

Futurology

[–] [email protected] 3 points 1 year ago (1 children)

Not that I disagree, but just to understand where you're coming from: what definition are you using for AI? And intelligence, for that matter?

Coming at this from a compsci/comp eng viewpoint, I think of it simply as "the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, ..."

By that definition it absolutely exists. On our smartphones, even.

Of course, in each of these areas it isn't on par with human intelligence by any stretch; often it's far more limited. But it can also be better at certain specific tasks. Most of my limited familiarity is with computer vision, and I think that illustrates how far off the mark it is from human intelligence. It is insanely difficult for machine vision to identify a thing. You can train it to identify one thing, or a few, or maybe a small set of things.

But it is easily confused by different ambient lighting intensity or hue, shadows, objects partially obscuring the thing, and myriad other conditions.

Meanwhile humans can identify an enormous number of objects in all sorts of conditions, easy-peasy. By a young age even. I hadn't fully appreciated how sophisticated our abilities were until I started looking at the artificial side of it.
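
To make that fragility concrete, here's a toy sketch (mine, not anything rigorous) using torchvision's pretrained ResNet-18: classify an image, darken it, classify again. The "cat.jpg" filename and the 0.3 brightness factor are just placeholders; the point is that the same object under different lighting can drop the model's confidence or flip its label entirely.

```python
# Toy illustration (hypothetical filename): how a simple lighting change
# can shake a pretrained image classifier.
import torch
from PIL import Image, ImageEnhance
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, normalize for this model

def top_label(img: Image.Image) -> tuple[str, float]:
    """Return the model's top ImageNet label and its confidence."""
    with torch.no_grad():
        probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)[0]
    idx = int(probs.argmax())
    return weights.meta["categories"][idx], float(probs[idx])

img = Image.open("cat.jpg").convert("RGB")  # placeholder image path
print("original:", top_label(img))

# Same object, 70% darker -- confidence often drops, or the label changes.
dark = ImageEnhance.Brightness(img).enhance(0.3)
print("darkened:", top_label(dark))
```

A human glancing at the darkened photo wouldn't even register it as a harder problem, which is kind of the whole point.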

Anyway, all that said, to me the real issue is what new developments in AI (as I defined it) mean to society at large. How do jobs change, how does it affect quality of life, quality of products and services, and does it change how we as a society value those things (art, writing) that can be partly replaced?

[–] [email protected] 2 points 1 year ago (1 children)

I like your definition of these AI tools. It feels broad enough to cover all of the recent accomplishments so many are praising.

Many people aren't able to recognize that the software is just a tool, and that will only get harder as it becomes more autonomous.

[–] [email protected] 1 points 1 year ago (1 children)

I think what gets lost in translation with LLMs (and machine vision and similar ML tech) is that it isn't magic and it isn't emergent behavior. It isn't truly intelligent.

LLMs do a good job of tricking us into thinking they are more than they are. They generate a seemingly appropriate response to input based on training, but it's nothing more than a statistical model of the most likely chain of words in response to another chain of words, based on questions and "good" human responses.

There is no understanding behind it. No higher cognitive process. Just "what words go next based on Q&A training data." Which is why we get well-written answers that are often total bullshit.
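
If it helps to see how literal that is, here's a minimal sketch (assuming the Hugging Face transformers library and the small GPT-2 model; not from anyone in this thread) that prints the model's top candidates for the next token. That distribution is the whole trick: generation just samples from it, appends, and repeats.

```python
# Minimal sketch: an LLM is a probability distribution over the next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]  # scores for the very next token

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {p:.3f}")  # e.g. ' Paris' near the top
```

Nothing in there checks whether ' Paris' is true; it's just the statistically likely continuation. Which is exactly why confident nonsense comes out the same pipeline as correct answers.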

Even so, the tech could easily upend many writing careers.

[–] [email protected] 2 points 1 year ago

I've had the GPT-3.5 model give me a made-up source for research. Either that, or it told me the source material was related to what I was researching when it wasn't. Regardless, it was one of those BS moments; it's called a hallucination, I think.