this post was submitted on 22 Aug 2023
156 points (93.8% liked)

[–] [email protected] 69 points 1 year ago (1 children)

Come on now, next you’ll be saying the tech industry consistently overplays its incremental improvements as Earth-shattering paradigm shifts purely for the investment money!

This message posted from the metaverse

[–] [email protected] 16 points 1 year ago

Yup. As someone who works in tech, I was baffled by the number of people in my field who started freaking out about it. AI isn't some magic panacea; it's just another tool that needs to be designed for the task at hand. It's cool that ChatGPT can get 80% of the way there in so many fields, but that last 20% is where all the hard work is (see the Pareto principle).

[–] [email protected] 32 points 1 year ago (2 children)

3 months ago: Everyone’s going to lose their jobs!

Today: Generative AI’s dead!

More realistically: Generative AI is a tool that will gradually get better over time. It is not universally applicable, but it does have a lot of potential applications. It is not going to take over the world, nor will it just suddenly go away.

[–] bouh 9 points 1 year ago (1 children)

IMO it'll be more like the internet: society will take years to adapt to it and democratise its use. It took 30 years for the internet to bloom, and it is now an essential service in Europe. I'm pretty sure AI will take the same road.

[–] diffuselight 4 points 1 year ago (1 children)

Please, the internet was great 10 years ago.

[–] [email protected] 5 points 1 year ago

That's pretty much been my take from the beginning. My main concerns were and still are:

  • IP law, specifically copyright infringement
  • correctness - ChatGPT makes stuff up
  • detection - especially for schools

My main fear was that it would be more useful for scammers and fraudsters than legitimate uses because of the above issues. I still have those concerns.

With any new technology that people say will change the world overnight, take a step back and think it through. For example:

  • self-driving cars - we still have taxis, Uber, etc., so they haven't taken over despite being here for years
  • robotics in manufacturing - it's incredibly expensive to put together an end-to-end robotic factory, so there are still plenty of manufacturing jobs
  • automated fast food - again, the most I've seen is an increased number of kiosks, that's it

And so on. People freak out about new tech, then a couple of years later they realize that it's not "finished" and there will be plenty of time to adapt. Unless we recover an alien spaceship or something, that's just not how technology progresses. Eventually generative AI will radically change our society, but it'll take decades, so by the time your job is threatened, you'll be ready to retire.

[–] [email protected] 31 points 1 year ago (3 children)

"If hallucinations aren't fixable, generative AI probably isn't going to make a trillion dollars a year," he said. "And if it probably isn't going to make a trillion dollars a year, it probably isn't going to have the impact people seem to be expecting," he continued. "And if it isn't going to have that impact, maybe we should not be building our world around the premise that it is."

Well, he sure proves one does not need an AI to hallucinate...

[–] [email protected] 13 points 1 year ago (1 children)

Clearly nothing can change the status quo if it doesn’t also make trillions

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

The assertion that our Earth orbits the sun is as audacious as it is perplexing. We face not one, but a myriad of profound, unresolved questions with this idea. From its inability to explain the simplest of earthly phenomena, to the challenges it presents to our longstanding scientific findings, this theory is riddled with cracks!

And, let us be clear, mere optimism for this 'new knowledge' does not guarantee its truth or utility. With the heliocentric model, we risk destabilizing not just the Church's teachings, but also the broader societal fabric that relies on a stable cosmological understanding.

This new theory probably isn't going to bring in a trillion coins a year. And if it probably isn’t going to make a trillion coins a year, it probably isn’t going to have the impact people seem to be expecting. And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

Imagine if someone had said something like this about the first-generation iPhone... Oh wait, that did happen, and his name was Steve Ballmer.

[–] [email protected] 29 points 1 year ago (1 children)

In the early 1980s, a teacher refused to let me word-process my homework (my penmanship was shit) on the grounds that I shouldn't be able to produce a paper at the touch of a button.

Upper management looks at AI end results and imagines a similar scenario: they don't see the human effort behind the dumbwaiter, and imagine a clerk can just tell an LLM "make me a sequel to Dumbo" without getting very specific, and without then having a team of reviewers watch hundreds of terrible elephant films to curate the few good ones.

But what is telling is how our corporate bosses responded to the prospect of automated art. Much like the robot pizza company that didn't actually automate the process and pass the savings on to you (its offerings were typical pizza at typical prices, and it kept all the savings for itself), our senior execs imagine ways to replace workers with cheaper automation rather than producing better stuff or cheaper movie tickets for their customers.

So maybe we should growl at them and change the system before they figure out how to actually pay fewer people while keeping more profits.

[–] [email protected] 8 points 1 year ago (1 children)

Companies will always keep all the savings and pass on all the expenses. That's just how they operate. You're not going to be able to change that system short of a revolution.

[–] [email protected] 13 points 1 year ago

That's what "change the system" means.

[–] [email protected] 19 points 1 year ago (1 children)

I can't believe this tech bubble will burst. All the other ones have fared so well.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

Because they were far more useful to the average person than this glorified spam-making machine. Also, it's not like this is the first time something like this has happened...

EDIT: forgot to grammar

[–] j4k3 18 points 1 year ago (1 children)

Feels like a mind war waged by billionaires in this space when you're actually playing with this stuff. All this hype is a joke, as is the proprietary junk. Get a decent computer and try offline AI yourself to see what it can do. Try Llama 2 70B Q4 GGML: you need a machine with 10-12+ cores and at least 64GB of system memory, and it really helps to have an Nvidia GPU with 16GB+ of VRAM, though that isn't required. This model can write Python snippets like you're searching Stack Overflow, but an order of magnitude faster. If you know basic code elements, branching, and looping, this model can code, and it can resolve its own errors when it gets something wrong if you prompt it with the error message. A 30B like WizardLM or Vicuna is almost technically useful, but the 70B is a beast.
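
For anyone curious, here's a minimal sketch of what "try offline AI yourself" can look like, using the llama-cpp-python bindings. The model file name, thread count, and GPU offload value are illustrative assumptions for the hardware described above, not exact requirements.

```python
# Hedged sketch: load a local quantized Llama model and ask it for code.
# The model path is a placeholder; download a quantized model file separately.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-70b.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,        # context window size
    n_threads=12,      # CPU cores to dedicate
    n_gpu_layers=40,   # offload layers if a 16GB+ GPU is available, else 0
)

prompt = "Write a Python function that parses a CSV file into a list of dicts."
out = llm(prompt, max_tokens=512)
print(out["choices"][0]["text"])

# If the generated code raises an error, paste the traceback back in as the
# next prompt and let the model revise its own answer.
```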

[–] i_am_a_cardboard_box 5 points 1 year ago (2 children)

That doesn't sound simple by any measure.

[–] Zrybew 15 points 1 year ago* (last edited 1 year ago)

What they're trying to say is: very soon these models will run on your smartphone without an internet connection, and OpenAI will be no more.

[–] [email protected] 3 points 1 year ago (1 children)

The hardware is massively high-end, for sure. The question is: is it worth it?

[–] INeedMana 4 points 1 year ago

In 5-10 years' time, these will be the "recommended system requirements".

[–] [email protected] 15 points 1 year ago

It kinda is, sure. The problem is when you become overly reliant on the tech without it being reliable. It's also kinda bad when it causes you to lose the skills you need to maintain or advance it.

[–] [email protected] 11 points 1 year ago

Ultimately, generative AI models are tools, not magic. We're past the hype phase and at the leveling-out phase of the S-curve, as people realize that these things are limited.

I think ChatGPT is mostly going to be used as an automated copywriter for emails and resumes and such, whereas diffusion models will find their way into digital artists' workflow.

Life goes on.

[–] [email protected] 10 points 1 year ago (14 children)

Didn't ChatGPT launch less than 6 months ago or something...

[–] Carighan 9 points 1 year ago* (last edited 1 year ago) (3 children)

I wonder, could AI actually "collapse"? As in, once companies and people start leaving the AI hype space, could the external input become small enough that AI-to-AI input takes over to such a degree that all trained models become essentially useless?

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

I find that unlikely. AI is a subject much like space tech: it may not always be the giant it is now, but it's baseline research that countries will keep conducting, even if only as a means to defend themselves.

[–] [email protected] 2 points 1 year ago

No, you could just stop training on new input and revert to a previous state if that happened.

[–] [email protected] 9 points 1 year ago

AI doesn't seem to do well when it trains on its own data, so I do think there's a possibility it's a one-trick pony. Once there's too much AI content in the data it's trained on, it will devolve into nonsense.
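
As a toy illustration of that feedback loop (a sketch, not a claim about real LLM training): repeatedly fit a simple model to samples drawn from its own previous output and watch the distribution degrade.

```python
# Toy "model collapse" demo: each generation fits a Gaussian to samples
# produced by the previous generation's fit. With small sample sizes the
# estimated mean and spread drift, so the fitted model wanders away from
# the original "human" data it started from.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=100)  # generation 0: "human" data

for generation in range(20):
    mu, sigma = data.mean(), data.std()
    print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    # the next generation trains only on the previous generation's output
    data = rng.normal(mu, sigma, size=100)
```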

[–] [email protected] 9 points 1 year ago (5 children)

I’m curious about the development of artificial intelligence in the future, and I’m looking forward to seeing what GPT-5 can do. If it’s a huge leap forward, then I will agree that the future will be very different from what we have now. But if it’s only a slight improvement, like Llama 1 vs Llama 2, then large language models (LLMs) might face the same challenges as self-driving cars. They are somewhat functional, but not reliable enough to let you sleep on your commute, and they won’t be for a long time.
It might be impossible to eliminate all the hallucinations from LLMs, but if the next versions are incredibly useful, then we will learn to live with them. For example, around 30% of chips on a wafer currently fail, but we still produce CPUs and they are groundbreaking technology. But even GPT-4+ will have a significant impact on our future, especially in education. Every kid will have an AI in their phone that is ready to answer all their questions with minimal effort. This will greatly enhance the intelligence of future generations and make education accessible to almost everyone on earth at a similar high level. But this will not make us all lose our jobs in 10 years.

[–] [email protected] 4 points 1 year ago

This will greatly enhance the intelligence of future generations and make education accessible to almost everyone on earth at a similar high level.

I don't think that the accessibility of AI somehow correlates with the intelligence of the people using it. It can actually work the completely opposite way, where people blindly trust it, or get so used to it that they're unable to do anything without the technology's help. Like people who can't navigate two blocks from their house without Google Maps navigation, even though they take the same route every day.

[–] newDayRocks 2 points 1 year ago

Kids, like everyone else, already have AI on their phones.

We've had it for quite a while now. Even before ChatGPT, what question couldn't you find an answer to?

[–] [email protected] 4 points 1 year ago (2 children)

Genuine question: how hard is it to fix AI hallucinations?

[–] eating3645 15 points 1 year ago* (last edited 1 year ago) (1 children)

Very difficult; it's one of those "it's a feature, not a bug" things.

By design, our current LLMs hallucinate everything. The secret sauce these big companies add is getting them to hallucinate correct information.

When the models get it right, it's intelligence, when they get it wrong, it's a hallucination.

In order to fix the problem, someone needs to discover a whole new architecture, which is certainly conceivable, but the timing is unpredictable because it requires a fundamentally different approach.

[–] joe 3 points 1 year ago (2 children)

I have only a weak, high-level grasp of how LLMs work, but what you say in this comment doesn't seem correct. No one is really sure why LLMs sometimes make things up, and a corollary of that is that no one knows how difficult (up to impossible) it might be to fix it.

[–] eating3645 6 points 1 year ago* (last edited 1 year ago)

Let me expand a little bit.

Ultimately, the models come down to predicting the next token in a sequence. Tokens for a language model can be words, characters, or, more frequently, character combinations. For example, the word "Lemmy" might be split into "lem" + "my".
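
As a hedged aside, you can inspect real tokenization with OpenAI's tiktoken library; the actual split of "Lemmy" depends on the encoding and may differ from the "lem" + "my" example above.

```python
# Inspect how a byte-pair encoding splits text into tokens.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
for token_id in enc.encode("My favorite website is Lemmy"):
    # print each token id alongside the text fragment it stands for
    print(token_id, enc.decode_single_token_bytes(token_id))
```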

So let's give our model the prompt "my favorite website is"

It will then predict the most likely token and append it to the input, building up a cohesive answer step by step. This is where the transformer (the T in GPT) comes in: at each step it outputs a vector of probabilities over possible next tokens.

"My favorite website is"

"My favorite website is "

"My favorite website is lem"

"My favorite website is lemmy"

"My favorite website is lemmy."

"My favorite website is lemmy.org"

Whoa, what happened there? That's not (currently) a real website. Finding out exactly why the last token was "org", which resulted in hallucinating a fictitious website, is basically impossible. The model might not have been trained long enough, it might have been trained too long, there might be insufficient data in that particular token space, there might be polluted training data, etc. These models are massive, so determining why the output is incorrect in a case like this is tough.

But fundamentally, it made up the first half too; we just happen to like that output. Tomorrow someone might register lemmy.org, and then it's not a hallucination anymore.
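
To make the loop above concrete, here is a toy sketch of greedy next-token decoding; the probability table is hand-written for illustration and stands in for a real transformer's output distribution.

```python
# Toy greedy decoder: at each step, pick the most probable next token.
# The probabilities here are made up; a real model computes them.
NEXT = {
    "my favorite website is": {" lem": 0.6, " google": 0.3, " reddit": 0.1},
    " lem": {"my": 0.9, "on": 0.1},
    "my": {".": 0.7, "!": 0.3},
    ".": {"org": 0.5, "com": 0.4, "net": 0.1},
}

text = "my favorite website is"
key = text
for _ in range(4):
    probs = NEXT[key]
    token = max(probs, key=probs.get)  # greedy: highest-probability token
    text += token
    key = token

print(text)  # -> "my favorite website is lemmy.org" (a made-up URL)
```

Nothing in the table "knows" whether lemmy.org exists; the decoder just follows the highest probabilities, which is exactly why a confident-sounding output can still be fabricated.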

[–] BetaDoggo_ 3 points 1 year ago

LLMs only predict the next token. Sometimes those predictions are correct, sometimes they're incorrect. Larger models trained on a greater number of examples make better predictions, but they are always just predictions. This is why incorrect responses often sound plausible even if logically they don't make sense.

Fixing hallucinations is more about decreasing inaccuracies than fixing an actual problem with the model itself.

[–] [email protected] 3 points 1 year ago

Ooh, I really love it when I'm listening to music, click on an article, and it starts autoplaying -_-
