this post was submitted on 22 Mar 2024
22 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

[–] [email protected] 18 points 8 months ago* (last edited 8 months ago) (1 children)

Okay, let's see...

first paragraph: Unfolding as some cosmic story, the rapid advancements in artificial intelligence have given rise to large language models (LLMs) with unprecedented accuracy, fluency and utility. It's almost as if there's a cognitive big bang exploding into our human universe.

Nope, no, nonono, you're full of shit and I refuse to read the rest. Good day sir. I said good day!

[–] [email protected] 4 points 8 months ago

I was already saying no after reading this opening line:

Let's go on a journey—a journey of pure speculation. So, please give me a little cognitive latitude...

[–] [email protected] 16 points 8 months ago (1 children)

With each interaction, we feed more data into these systems and our unique perspectives and thought patterns become part of their ever-expanding neural networks.

Such a simple way to give away that you know absolutely fuck-all about actual Machine Learning as a science. Neural networks (ML term of art) don't fucking expand with data; they have a fixed size. A neural network is just a spicy matrix of real numbers.

This also seems to suggest the author thinks "bigger = better", which is just false. The whole field would probably be much easier if it were true, but you learn in ML 101 that just making the matrix bigger can reduce accuracy.

"Neural network" is such an unfortunate name. They kinda look like a layered network, and propagation/backpropagation kinda gives you "neurons firing" vibes, so scientists named them that; only for dipshit pundits to immediately go "neuron means brain means machine thinks!!!" years later...
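
If you want the fixed-size point in code form, here's a minimal sketch (plain NumPy, nothing from the article, and the layer sizes are made up for illustration): a feed-forward "neural network" is just a fixed stack of matrices, and pushing more data through it never grows those matrices; training would only change the numbers inside them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed architecture: 10 inputs -> 32 hidden units -> 1 output.
# These shapes are chosen arbitrarily for the example.
W1, b1 = rng.normal(size=(10, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def forward(x):
    """One forward pass: two matrix multiplies and a ReLU. That's the whole 'network'."""
    h = np.maximum(0, x @ W1 + b1)   # hidden layer
    return h @ W2 + b2               # output layer

n_params = W1.size + b1.size + W2.size + b2.size

for n_examples in (100, 10_000, 1_000_000):
    X = rng.normal(size=(n_examples, 10))
    _ = forward(X)
    # Parameter count is identical no matter how many examples pass through.
    print(f"{n_examples:>9} examples seen, parameters: {n_params}")
```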

[–] [email protected] 6 points 8 months ago

Yeah, I work in a field where machine learning has incredible applications, but we're also facing a deluge of trash. It's been clear for a while now that smaller, domain-specific models are where it's at.

[–] [email protected] 16 points 8 months ago (1 children)

People are increasingly turning to LLMs for a wide range of cognitive tasks, from creative writing and language translation to problem-solving and decision-making.

If this guy's circle of acquaintances includes an increasing number of people who rely on fancy autocomplete for decision-making and creative writing, I might have an idea why he thinks LLMs are super intelligent in comparison.

To achieve human escape velocity, we might need to leverage the very technologies that challenge our place in the cognitive hierarchy. By integrating AI tools into our educational systems, creative processes, and decision-making frameworks, we can amplify our natural abilities, expand our perspectives, and accelerate innovation in a way that is symbiotic rather than competitive.

Wait, let me get this straight. His solution to achieve human escape velocity, which means "outpac[ing] AI's influence and maintain human autonomy" (his words, not mine) is to increase AI's influence and remove human autonomy?

[–] [email protected] 14 points 8 months ago (1 children)

Wait, let me get this straight. His solution to achieve human escape velocity, which means “outpac[ing] AI’s influence and maintain human autonomy” (his words, not mine) is to increase AI’s influence and remove human autonomy?

Well how do YOU plan on shilling for the tech industry by scaring people about LLMs?

[–] [email protected] 9 points 8 months ago

Always love a good bit of critihype.

[–] [email protected] 16 points 8 months ago

Could we be witnessing the emergence of a new kind of singularity, where the boundaries between human and machine cognition start to blur and dissolve?

No.

[–] [email protected] 15 points 8 months ago* (last edited 8 months ago)

this is so stupid. this is what happens when you're so enamoured with your metaphor that you don't stop and think what it is exactly you're trying to describe.

which i suppose is exactly how an LLM would write so he might be right in his case.

[–] Al0neStar 15 points 8 months ago* (last edited 8 months ago)

There's no doubt that there's an attraction to LLMs.

I love how the "attraction" hyperlink leads to an article about mating.

[–] [email protected] 8 points 8 months ago (1 children)

My initial reaction was essentially a mix of “sorry to this man” and “well I don’t really expect psychology today to publish anything approaching meaningful on tech/ai” so I wanted to see if those reactions were justified. Well…

This makes PT look like a total rag. The author, John Nosta, bills himself as “The World’s Leading Innovation Theorist and Keynote Speaker” which really just sounds like “lecture circuit grifter” to me. Let’s take a look at the rest of his front page:

STRATEGIST

Driving change that is changing the world. John’s informed voice has become a beacon of insight to help dissect and define innovation in health, medicine, and technology.

INNOVATOR

Not just a simple observer, John is directly engaged with top companies, thinkers and initiatives. His perspective is from the inside out and provides an “insiders” view of a complex and changing world.

THOUGHT LEADER

John is consistently ranked among the top names in health technology and innovation. Beyond simply an influencer, he is also defined as “most admired” to “top disruptor” in technology, life sciences and medicine.

So yeah if you asked chatGPT to come up with the profile for an AI lecture circuit grifter, it’d probably look like the above.

Looking through his contributions to PT you might think he is just recycling armchair AI philosophy to make a quick buck and honestly I’m having a hard time thinking otherwise.

[–] [email protected] 10 points 8 months ago (1 children)

His LinkedIn has no background in tech beyond serving on the board of google health. He’s essentially just some guy!

[–] [email protected] 10 points 8 months ago (1 children)

You say that as if just anyone could end up serving on the board of Google health. I'm sure they have a rigorous vetting process to ensure that only thoughtful and experienced health and technology professionals are on tha-- naaah I'm just messing with you, I bet you just need the right connections.

[–] [email protected] 8 points 8 months ago

Oh yeah sorry, sometimes I forget that board members and execs are the core contributors to human innovation, and that Google spends all its money on hiring the best and brightest think tank people. It’s why they only make good and ethical decisions that no one gets mad at, ever. /s

To be completely fair, it appears he has experience in medical research, with some publications in the '80s. That being said, his LI plots his trajectory from just some guy to thinker tanker grifter guy.