this post was submitted on 24 Jan 2024
7 points (54.1% liked)

submitted 11 months ago* (last edited 11 months ago) by kromem to c/technology
 

I've been saying this for about a year since seeing the Othello GPT research, but it's nice to see more minds changing as the research builds up.

Edit: Because people aren't actually reading and just commenting based on the headline, a relevant part of the article:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”
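A back-of-the-envelope way to see the intuition behind the excerpt (this is my own illustrative sketch, not code or numbers from Arora and Goyal's paper): if a model has solidly acquired k individual language skills, the number of distinct ways to combine t of them at once grows combinatorially, so most combinations cannot have appeared verbatim in any finite training corpus.

```python
from math import comb

def skill_combinations(k: int, t: int) -> int:
    """Number of distinct ways to combine t skills drawn from k mastered skills.

    Purely illustrative: k and t are hypothetical, not figures from the paper.
    """
    return comb(k, t)

# Even modest skill counts give combination counts that dwarf any corpus:
for k in (100, 1000, 10000):
    print(f"k={k:>5}, t=4 -> {skill_combinations(k, 4):,} combinations")
```

The point of the sketch is only that correct behavior on a randomly chosen skill combination is hard to explain by memorization alone once the combination space outgrows the training data.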

[–] Redacted 7 points 11 months ago (2 children)

This whole argument hinges on it being easier to produce consciousness than to fake intelligence to humans.

Humans already anthropomorphise everything, so I'm leaning towards the latter being easier.

[–] [email protected] 5 points 11 months ago (1 children)

I'd take a step farther back and say the argument hinges on whether "consciousness" is even really a thing, or if we're "faking" it to each other and to ourselves as well. We still don't have a particularly good way of measuring human consciousness, let alone determining whether AIs have it too.

[–] Redacted 1 points 11 months ago (1 children)

...or even if consciousness is an emergent property of interactions between certain arrangements of matter.

It's still a mystery which I don't think can be reduced to weighted values of a network.

[–] automattable 1 points 11 months ago (2 children)

This is a really interesting train of thought!

I don’t mean to belittle the actual, real questions here, but I can’t shake the hilarious image of 2 dudes sitting around in a basement, stoned out of their minds getting “deep.”

Bro! What if consciousness isn’t real, and we’re just faking it

brooooooo

[–] General_Effort 1 points 11 months ago (1 children)

Now I get it. That dude is explaining the Boltzmann brain.

[–] Redacted 1 points 11 months ago* (last edited 11 months ago)

Brah, if an AI was conscious, how would it know we are sentient?! Checkmate LLMs.

[–] Redacted 1 points 11 months ago

Bold of you to assume any philosophical debate doesn't boil down to just that.

[–] [email protected] -2 points 11 months ago (1 children)

Or maybe our current understanding of consciousness and intelligence is wrong and they are not related to each other. A non-conscious thing can produce what looks like advanced logic, like the geometrical patterns found within the overlapping orbits of planets, or the Fibonacci sequence being found just about everywhere. We also have yet to prove that individual blades of grass or rocks aren't fully conscious. There is so much we don't know for certain that it's perplexing how we believe we can just assume.

[–] Redacted 1 points 11 months ago* (last edited 11 months ago) (1 children)

Standard descent into semantics incoming...

We define concepts like consciousness and intelligence. They may be related or may not depending on your definitions, but the whole premise here is about experience regardless of the terms we use.

I wouldn't say Fibonacci being found everywhere is in any way related to either and is certainly not an expression of logic.

I suspect it's something like the simplest method nature has of controlling growth. Much like how hexagons are the sturdiest shape, so they appear in nature a lot.
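That "simple growth rule" framing can be made concrete with a toy sketch (my own illustration, not from the thread or any paper): Fibonacci numbers fall out of a trivial recurrence, yet the ratios of consecutive terms converge to the golden ratio, the constant behind efficient leaf-packing patterns like phyllotaxis. No "logic" is required for the pattern to appear.

```python
def fib_ratios(n: int) -> list[float]:
    """Ratios of consecutive Fibonacci numbers for n steps of the recurrence."""
    a, b = 1, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b  # the entire "growth rule": each term is the sum of the last two
        ratios.append(b / a)
    return ratios

# The ratios converge rapidly to the golden ratio (1 + sqrt(5)) / 2 ≈ 1.618,
# even though nothing in the rule "knows" about that constant.
print(fib_ratios(20)[-1])
```

Which is the point being made: the pattern is an emergent consequence of a dumb local rule, not an expression of reasoning.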

Grass/rocks being conscious is really out there! If that hypothesis were remotely feasible, we couldn't talk about things being either conscious or not; it would be a sliding scale with rocks way below grass. And it would really stretch most people's definition of consciousness.

[–] [email protected] 3 points 11 months ago* (last edited 11 months ago) (1 children)

I understand what you're saying but I disagree that there is any proper definition of the concept. The few scientists that attempt to study it can't even agree on what it is.

I agree that my examples were far out; they are supposed to be, to represent ideas outside the conventional box. I don't literally believe grass is conscious. I recognize that if I/we don't know, then I/we don't know. In the face of something whose nature, requirements, and purpose we don't know, I prefer to remain open to every option.

I know Wikipedia isn't a scientific research paper, but I expect that if there really were an agreed-upon scientific answer, it wouldn't read like it currently does:

"Consciousness, at its simplest, is awareness of internal and external existence. However, its nature has led to millennia of analyses, explanations and debate by philosophers, theologians, and all of science. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition. Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness either continuously changing or not. The disparate range of research, notions and speculations raises a curiosity about whether the right questions are being asked."

[–] Redacted 1 points 11 months ago (1 children)

I feel like an AI right now having predicted the descent into semantics.

[–] [email protected] 3 points 11 months ago

I fear it was inevitable; with no framework we can agree upon, semantics are all there is.

I truly wish humanity had more knowledge so we could have a proper discussion, but currently it seems unproductive, especially in the context of a faceless online forum debate between two strangers.

Thank you for your time, and input on this matter.