this post was submitted on 30 May 2024
43 points (95.7% liked)

AI

Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen.

top 28 comments
[–] [email protected] 22 points 6 months ago (1 children)

No fucking shit, garbage in garbage out. All the hype is an echo chamber of executives jerking each other off

[–] [email protected] 4 points 6 months ago (2 children)

And then there's Lemmy, which is an echo chamber of people jerking off in the other direction.

[–] [email protected] 8 points 6 months ago

I agree but only partially. Lemmy is a collection of echo chambers, everyone has the choice to jerk off in all directions.

[–] [email protected] -4 points 6 months ago (1 children)
[–] [email protected] 8 points 6 months ago

Being aware of being in an echo chamber isn't a bad thing

[–] [email protected] 9 points 6 months ago* (last edited 6 months ago) (6 children)

I'm still so lost on what the use case for ChatGPT is unless it's, like, learning a language (considering it's a language model, as I understand it).

It does not reliably source accurate information.

It does not create nuanced artistic writing.

It does not produce reliable code.

I'm certain 90% of its value is in everyone wanting very badly for it to be something that it simply isn't.

It's like if someone invented a claw hammer and people bought into it because "Oh wow, this could be used as a door stop! This could be used to cook my stir fry! This could be used to play a piano!" and yes, you could use it for those things, but really the thing was built for hammering nails, and that's about all it's actually good at.

This is why I think there is hype but little usage: no one wants to use it for what it might actually be good at, and they don't even market it as such, because it's more profitable to pretend it's an "everything" tool.

It's like going to a coffee shop, but for some reason there's pizza on the menu, and of course when you order it, the pizza is dog shit.

[–] [email protected] 10 points 6 months ago* (last edited 6 months ago) (3 children)

I use it almost daily.

It does produce good code. It does not reliably produce good code. I am a programmer; it makes my job 10x faster, and I usually just have to fix a few bugs in the code it generates. Over time, I learned what it is good at (UI code, converting things, boilerplate) and what it struggles with (anything involving newer tech, algorithmic understanding, etc.)

I often refer to it as my intern: it acts like an academically trained, not particularly competent, but very motivated, fast-typing intern.

But then, I also work in the field. Prompting it correctly is too often dismissed as a skill (I used to dismiss it too). It takes more understanding than people give it credit for.

I think that, like a lot of IT tech, it will gradually go from being a dev tool to an everyday tool.

All the pieces of the puzzle needed to control a computer by voice using only natural language are there. You don't realize how big that is. Companies haven't assembled it yet because it is actually harder to monetize than to build. I think Apple is probably in the best position for it. Microsoft is going to attempt it and will fail as usual, and Google will probably make a half-assed attempt. I'll personally go for the open source version of it.
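
To make that concrete, here is a rough sketch of the kind of plumbing I mean: Whisper for the ears, a local model behind ollama for the brain, and your script as the hands. The model name, the audio file, and the "turn it into a shell command" prompt are all placeholder assumptions for illustration, not anything a shipping product does today.

```python
# Hedged sketch of a voice-to-action pipeline.
# Assumes `pip install openai-whisper requests` and a local ollama server on its
# default port; the model name and file name below are placeholders.
import requests
import whisper

# 1. Speech to text: transcribe a recorded voice command.
stt = whisper.load_model("base")
command_text = stt.transcribe("voice_command.wav")["text"]

# 2. Natural language to intent: ask a local LLM what the user wants done.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",  # placeholder; any locally pulled model works
        "stream": False,
        "messages": [
            {"role": "system",
             "content": "Translate the user's request into a single shell command. "
                        "Reply with the command only."},
            {"role": "user", "content": command_text},
        ],
    },
    timeout=120,
)
proposed_command = resp.json()["message"]["content"].strip()

# 3. Action: print the proposed command rather than executing it blindly.
print(f"Heard: {command_text!r}")
print(f"Proposed: {proposed_command}")
```

The glue is trivial; the hard part is trusting it enough to actually run what comes back.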

[–] Ranta 3 points 6 months ago* (last edited 6 months ago) (2 children)

Yes, thank you, this.

All the criticism of artificial intelligence and deployments of it like ChatGPT right now comes down, as I see it, to people not being able to hold something in their hand. This is far more of an abstraction than a new phone, and when people can't grok it immediately, or they play with it for 5 minutes and dismiss it because it gave them a form-fill-looking answer to some para-literate 5-word question, then they're obviously going to be unimpressed and walk away.

If you spend any amount of time actually trying to figure out what to say to it in order to get it to produce actual information, it's one of the most compelling new ways to interface with a computer since the MOAD, and I would imagine it will ultimately be the most compelling.

Like, put it this way: I don't know if this will actually end up producing AGI, but... this thing is a 3-year-old.

And it's a 3-year-old that can write basic coding implementations and give you at least high-school-level comprehension (in some cases much better) of most of the written English world, and it's quickly building toward other languages.

This is the dumbest it will ever be...

[–] [email protected] 1 points 3 months ago

Now you've made me interested in learning how to prompt these things. From what I have tried, appending a few more descriptive sentences after the actual prompt usually helps a lot. But once you add too many sentences, the model tends to write way longer replies too. This obviously happens in real life too, so maybe that is just the natural way...
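
For what it's worth, here is roughly what that experiment looks like through an API (a minimal sketch using the OpenAI Python client; the model name and the prompts are just examples I made up). Adding the descriptive sentences as context, plus an explicit length cap, is usually enough to keep the reply from ballooning.

```python
# Hedged sketch: a bare prompt vs. the same prompt with descriptive context and a length limit.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment; the model name is an example.
from openai import OpenAI

client = OpenAI()

bare_prompt = "Explain list comprehensions."

described_prompt = (
    "Explain list comprehensions. "
    "I already know basic Python loops, so skip the beginner material. "
    "Answer in at most three sentences with one short example."
)

for label, prompt in [("bare", bare_prompt), ("described", described_prompt)]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = resp.choices[0].message.content
    print(f"--- {label} ({len(answer.split())} words) ---\n{answer}\n")
```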

[–] [email protected] 1 points 6 months ago (1 children)

Also, as a side effect, we just solved speech recognition. In a year or two, speaking to machines will be the default interface.

[–] Ranta 1 points 6 months ago* (last edited 6 months ago) (1 children)

StyleTTS2 for output. A local LLaMA at a high quant on 4x 4090s. A personal AI assistant running on your local homelab for under $30k.

I kinda see it as a home appliance or vehicle level purchase.

[–] [email protected] 1 points 6 months ago

I am pretty sure that there are ASICs being put into production as we speak with Whisper embedded. Expect a 4-dollar chip to add voice recognition and a basic LLM to any appliance.

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago)

Yea, having worked in the IT field and knowing a few languages myself, I think that as far as code goes, it can be ok for basically laying out the structure of what you are trying to do. It's typically the details that it misses in my experience. In that sense, it definitely can be used similarly to an IDE.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (1 children)

Hey I heard that intern metaphor before somewhere... No Boilerplate?

EDIT: Dumb me, I replied before reading the entire message. What you say is exactly how I feel; there are some real big possibilities here. Currently, the closest thing to using a computer with only your voice would be something like ollama combined with Open WebUI, their calling feature, and some tool functions.
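
Roughly, the "tool functions" part boils down to letting the model pick a function and having your script do the actual call. Here is a hand-rolled sketch against ollama's REST API; the tool names and the JSON convention are invented for illustration, and Open WebUI's own calling feature works differently under the hood.

```python
# Hedged sketch of the tool-function idea: the model names a tool, the script dispatches it.
# Assumes `pip install requests` and a local ollama server; tools and JSON format are made up here.
import json
from datetime import datetime

import requests

def get_time(_args):
    return datetime.now().isoformat(timespec="seconds")

def open_notes(_args):
    return "pretend we opened the notes app"

TOOLS = {"get_time": get_time, "open_notes": open_notes}

system_prompt = (
    "You can call these tools: get_time, open_notes. "
    'Reply ONLY with JSON like {"tool": "<name>", "args": {}}.'
)

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",  # placeholder
        "stream": False,
        "format": "json",   # ask ollama for JSON-only output
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "What time is it?"},
        ],
    },
    timeout=120,
)
call = json.loads(resp.json()["message"]["content"])
print(TOOLS[call["tool"]](call.get("args", {})))
```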

[–] [email protected] 2 points 3 months ago

Install text-generation-webui, check their "whisper stt" option, and you can talk with a computer. As a non-native speaker I prefer to read the English output rather than listen to it, but they do provide TTS as well.

[–] moonburster 3 points 6 months ago (1 children)

It is quite good for niche use cases. My gf feeds it her mails and then lets it create a new mail based on her style (roughly the few-shot pattern sketched below). This way she can send out a mail in a matter of minutes.

Is it wrong a bunch of the time? Yes.

Does it still save tons of time just to review a generated draft? Also yes.

I mostly just argue with gpt because it lies about basic stuff
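
The mail trick above is basically few-shot prompting: paste a couple of past mails as examples of the style, then ask for a new one. A hedged sketch (OpenAI client used only as an example; any chat model works the same way, and the mails below are obviously made up):

```python
# Hedged sketch: drafting a mail in someone's own style by showing the model past examples.
# Assumes `pip install openai` and OPENAI_API_KEY; model name and example mails are placeholders.
from openai import OpenAI

client = OpenAI()

past_mails = [
    "Hi Tom, quick one: the Q3 numbers land Friday, I'll forward them as soon as they're in. Best, L.",
    "Hi all, short notice, but can we push tomorrow's stand-up to 10:00? Thanks! L.",
]

examples = "\n\n".join(f"EXAMPLE MAIL:\n{mail}" for mail in past_mails)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system",
         "content": "Write emails matching the tone and length of the example mails provided."},
        {"role": "user",
         "content": f"{examples}\n\nNow draft a mail telling the team the release slips to next Tuesday."},
    ],
)
print(resp.choices[0].message.content)
```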

[–] [email protected] 1 points 6 months ago

Ahh yes, formatting/styling makes sense as a use case, that is pretty neat.

[–] tehmics 2 points 6 months ago (1 children)

I use it every day in the same way I used to use Google. It's great for quick syntax reminders or low-stakes "searches". If I want to know where to go in a video game, I'm always going to get an answer faster from ChatGPT than Google, and it's not the end of the world if it hallucinates, although that has never even been a problem for me yet. Meanwhile, Google just indexes "recipe blog" type articles where you have to scroll past dozens of ads and drivel before you get to the actual info you're looking for.

If you treat it like old Google, with a healthy dose of skepticism, it's a very powerful tool. My only complaint is that it doesn't source its info, so I can't follow up or dive deeper very easily.

[–] [email protected] 2 points 6 months ago

It would be great if it cited its information for sure.

[–] Gigasser 2 points 6 months ago

Ehh, it's okay for making certain templates, like for characters, especially if I'm too lazy to type it all out. Sometimes I use it to give me a word that I can't quite remember but can describe the use of. Sometimes I get it to "talk" like how a specific character may "talk" so that I can get an idea of how a character might "sound" if I'm writing a story. Otherwise its use for me is pretty sporadic.

[–] Anticorp 2 points 6 months ago

It's incredibly useful for programmers.

[–] [email protected] 1 points 6 months ago (1 children)

I mainly use it to create placeholder graphics. It's much better than looking around for open-source clipart. It's placeholder because most of the output is pretty plastic and unreal. When the time comes, I'll be hiring a real designer who can create genuinely original content that best fits a specific look and feel.
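
For anyone curious, generating that kind of throwaway placeholder is only a few lines against an image endpoint. A hedged sketch using OpenAI's images API as one example; the model name, prompt, and size are arbitrary:

```python
# Hedged sketch: generating a placeholder graphic from a text prompt.
# Assumes `pip install openai requests` and OPENAI_API_KEY; model, prompt, and size are examples.
import requests
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",  # example model name
    prompt="flat, minimal illustration of a person watering a potted plant, pastel colors",
    size="1024x1024",
    n=1,
)

image_url = result.data[0].url
with open("placeholder.png", "wb") as f:
    f.write(requests.get(image_url, timeout=60).content)
print("saved placeholder.png")
```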

It's not a reliable source for actual data or news, and it isn't even a good programming aide. Every single time I tried it, it confidently spat out incorrect stuff. We'll see how it does generating test cases for a server application project.

[–] [email protected] 1 points 6 months ago

hiring a real designer who can create actually original content

You're doing it right IMO, shaking out the idea with generative AI before hiring a designer probably saves the designer a lot of headaches.

[–] sudo42 7 points 6 months ago

From cryptocurrency, to NFTs, to this latest "Computer AI is about to change the world!!!" (5th time?), the hype-cycle boom-to-bust window is shortening quickly.

[–] [email protected] 3 points 6 months ago (1 children)

This is the best summary I could come up with:


Very few people are regularly using "much hyped" artificial intelligence (AI) products like ChatGPT, a survey suggests. Researchers surveyed 12,000 people in six countries, including the UK, with only 2% of British respondents saying they use such tools on a daily basis. But the study, from the Reuters Institute and Oxford University, says young people are bucking the trend, with 18 to 24-year-olds the most eager adopters of the tech. Dr Richard Fletcher, the report's lead author, told the BBC there was a "mismatch" between the "hype" around AI and the "public interest" in it. The study examined views on generative AI tools - the new generation of products that can respond to simple text prompts with human-sounding answers as well as images, audio and video. Generative AI burst into the public consciousness when ChatGPT was launched in November 2022. The attention OpenAI's chatbot attracted set off an almighty arms race among tech firms, who ever since have been pouring billions of dollars into developing their own generative AI features. However, this research indicates that, for all the money and attention lavished on generative AI, it is yet to become part of people's routine internet use.

"Large parts of the public are not particularly interested in generative AI, and 30% of people in the UK say they have not heard of any of the most prominent products, including ChatGPT," Dr Fletcher said.

The new generation of AI products has also sparked an intense public debate about whether they will have a positive or negative impact. Predicted outcomes have ranged, for the optimists, from a boost to economic growth to the discovery of new life-saving drugs. The pessimists, meanwhile, have gone so far as to suggest the tech is a threat to humanity itself. This research attempted to gauge what the public thinks, finding:

- The majority expect generative AI to have a large impact on society in the next five years, particularly for news, media and science
- Most said they think generative AI will make their own lives better
- When asked whether generative AI will make society as a whole better or worse, people were generally more pessimistic

"People's hopes and fears for generative AI vary a lot depending on the sector," Dr Fletcher told the BBC.

"People are generally optimistic and about the use of generative AI in science and healthcare, but more wary about it being used in news and journalism, and worried about the effect it might have on job security.

"He said the research showed it was important for everyone, including governments and regulators, to apply nuance to the debate around AI.The findings were based on responses to an online questionnaire fielded in six countries: Argentina, Denmark, France, Japan, the UK, and the USA.


The original article contains 442 words, the summary contains 445 words. Saved -1%. I'm a bot and I'm open source!

[–] Rolando 8 points 6 months ago (2 children)

The original article contains 442 words, the summary contains 445 words. Saved -1%.

I don't think you did a very good job there...

[–] tehmics 4 points 6 months ago

Saved me a click through, I'm not mad about it

[–] [email protected] 2 points 6 months ago

That's LLMs for ya