this post was submitted on 01 Dec 2024
193 points (92.1% liked)

Ask Lemmy


A Fediverse community for open-ended, thought-provoking questions


ChatGPT was released on 30 Nov 2022: https://openai.com/index/chatgpt/

top 50 comments
[–] jg1i 41 points 2 days ago (4 children)

I absolutely hate AI. I'm a teacher and it's been awful to see how AI has destroyed student learning. 99% of the class uses ChatGPT to cheat on homework. Some kids are subtle about it, others are extremely blatant about it. Most people don't bother to think critically about the answers the AI gives and just assume it's 100% correct. Even if sometimes the answer is technically correct, there is often a much simpler answer or explanation, so then I have to spend extra time un-teaching the dumb AI way.

People seem to think there's an "easy" way to learn with AI, that you don't have to put in the time and practice to learn stuff. News flash! You can't outsource creating neural pathways in your brain to some service. It's like expecting to get buff by asking your friend to lift weights for you. Not gonna happen.

Unsurprisingly, the kids who use ChatGPT the most are the ones failing my class, since I don't allow any electronic devices during exams.

[–] [email protected] 10 points 1 day ago

As a student I get annoyed the other way around. Just yesterday I had to tell my group for an assignment that we need to understand the system physically and code it ourselves in MATLAB, not copy-paste code from ChatGPT, because it's way too complex. I've seen people waste hours like that. It's insane.

[–] mrvictory1 3 points 1 day ago (1 children)

Are you teaching at a university? Also, you said "99% of the class uses ChatGPT" - are there really very few people who don't use AI?

[–] ComradeMiao 1 points 13 hours ago

In university classes I taught recently, fewer than 5% of papers were extremely obviously AI-assisted. The majority were too bad to even be AI, and around 10% were good to great papers.

[–] [email protected] 4 points 1 day ago

I'm generally ok with the concept of externalizing memory. You don't need to memorize something if you memorize where to get the info.

But you still need to learn how to use the data you look up, and determine if it's accurate and suitable for your needs. ChatGPT rarely is, and people's blind faith in it is frightening.

[–] [email protected] 15 points 1 day ago

I have a guy at work that keeps inserting obvious AI slop into my life and asking me to take it seriously. Usually it’s a meeting agenda that’s packed full of corpo-speak and doesn’t even make sense.

I’m a software dev and copilot is sorta ok sometimes, but also calls my code a hack every time I start a comment and that hurts my feelings.

[–] LovableSidekick 30 points 2 days ago* (last edited 2 days ago)

Never explored it at all until recently, I told it to generate a small country tavern full of NPCs for 1st edition AD&D. It responded with a picturesque description of the tavern and 8 or 9 NPCs, a few of whom had interrelated backgrounds and little plots going on between them. This is exactly the kind of time-consuming prep that always stresses me out as DM before a game night. Then I told it to describe what happens when a raging ogre bursts in through the door. Keeping the tavern context, it told a short but detailed story of basically one round of activity following the ogre's entrance, with the previously described characters reacting in their own ways.

I think that was all it let me do without a paid account, but I was impressed enough to save this content for a future game session and will be using it again to come up with similar content when I'm short on time.

My daughter, who works for a nonprofit, says she uses ChatGPT frequently to help write grant requests. In her prompts she even tells it to ask her questions about any details it needs to know, and she says it does, and incorporates the new info to generate its output. She thinks it's a super valuable tool.

[–] 2ugly2live 11 points 1 day ago (1 children)

I used it once to write a polite "fuck off" letter to an annoying customer, and tried to see how it would revise a short story. The first one was fine, but using it on the story just made it bland and simplified a lot of the vocabulary. I could see people using it as a starting point, but I can't imagine people just using whatever it spits out.

[–] [email protected] 10 points 1 day ago* (last edited 9 hours ago)

It's changed my job: I now have to develop stupid AI products.

It has changed my life: I now have to listen to stupid AI bros.

My outlook: it's for the worst; if the LLM suppliers can make good on the promises they make to their business customers, we're fucked. And if they can't then this was all a huge waste of time and energy.

Alternative outlook: if this was a tool given to the people to help their lives, then that'd be cool and even forgive some of the terrible parts of how the models were trained. But that's not how it's happening.

[–] [email protected] 9 points 1 day ago

It's my rubber duck/judgement-free space for homelab solutions. Have a problem? ChatGPT it, then Google its suggestions. Find a random command line? Ask ChatGPT what it does.

I understand that I don't understand it, so I sanity-check everything going into and coming out of it. Every identifying detail gets swapped for a placeholder, for security. Mostly, it's just a space to find out why my solutions don't work, find out what solutions might work, and a final check before implementation.
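That pre-implementation sanity check can itself be partly automated. A minimal sketch (the pattern list and function name are hypothetical, not a real security tool) that flags obviously destructive shell commands before you run something an LLM suggested:

```python
import re

# Naive red-flag patterns for commands pasted from a chatbot.
# This is illustrative only; passing the check proves nothing.
RISKY_PATTERNS = [
    r"\brm\s+-rf\s+/",        # recursive delete starting at root
    r"\bmkfs\b",              # reformatting a filesystem
    r"\bdd\b.*\bof=/dev/",    # writing raw bytes to a device
    r"curl[^|]*\|\s*(ba)?sh", # piping a download straight into a shell
]

def flag_risky(command: str) -> list[str]:
    """Return the patterns a command matches; empty means no obvious red flags."""
    return [p for p in RISKY_PATTERNS if re.search(p, command)]

print(flag_risky("curl https://example.com/install.sh | sh"))  # one match
print(flag_risky("ls -la /var/log"))                           # []
```

Anything flagged still needs a human read; anything not flagged does too.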

[–] [email protected] 7 points 1 day ago

Not much. Every single time I asked it for help, it either gave me a recursive answer (e.g. I ask "how do I change this setting?" and it answers: by changing that setting), or gave me a wrong answer. If I can't already find it on a search engine, then it's pretty useless to me.

[–] Caboose12000 16 points 2 days ago* (last edited 2 days ago)

I got into Linux right around when it was first happening, and I don't think I would've made it through my own noob phase if I didn't have a friendly robot to explain all the stupid mistakes I was making while re-training my brain to think in Linux.

Probably a very friendly expert or mentor, or even just a regular established Linux user, could've done a better job; the AI had me do weird things semi-often. But I didn't have anyone in my life who liked Linux, let alone had time to be my personal mentor in it, so the AI was a decent solution for me.

[–] [email protected] 12 points 2 days ago

After 2 years it's quite clear that LLMs still don't have any killer feature. The industry marketing was already talking about skyrocketing productivity, but in reality very few jobs have changed in any noticeable way, and LLMs are mostly used for boring or bureaucratic tasks, which usually makes them even more boring or useless.

Personally I have subscribed to Kagi Ultimate, which gives access to an assistant based on various LLMs, and I use it to generate snippets of code that I use for labs (training) - like AWS policies, or to build commands based on CLI flags, small things like that. For code it gets things wrong very quickly, and anyway I find it much harder to re-read and unpack verbose code generated by others compared to simply writing my own. I don't use it for anything that has to do with communication; I find it unnecessary and disrespectful, since it's quite clear when the output is from an LLM.
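The "build commands based on CLI flags" use case above is small enough to sketch without an LLM at all. A hypothetical helper (the function name and the example `aws` invocation are illustrative, not from the comment) that assembles a shell-safe command string from a flag mapping:

```python
import shlex

def build_command(program: str, flags: dict, *args: str) -> str:
    """Assemble a shell-safe command string.
    A value of True emits a bare --flag; anything else emits --flag value."""
    parts = [program]
    for flag, value in flags.items():
        if value is True:
            parts.append(f"--{flag}")
        else:
            parts.extend([f"--{flag}", str(value)])
    parts.extend(args)
    return " ".join(shlex.quote(p) for p in parts)

print(build_command("aws", {"profile": "lab", "region": "eu-west-1"}, "s3", "ls"))
# aws --profile lab --region eu-west-1 s3 ls
```

`shlex.quote` keeps any odd values from being interpreted by the shell, which is exactly the kind of detail an LLM-generated one-liner tends to skip.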

For these reasons, I generally think it's a potentially useful nice-to-have tool, nothing revolutionary at all. Considering the environmental harm it causes, I am really skeptical the value is worth the damage. I am categorically against those people in my company who want to introduce "AI" (currently banned) for anything other than documentation lookup and similar tasks. In particular, I really don't understand how obtuse people can be, thinking that email and presentations are good use cases for LLMs. The last thing we need is even longer useless communication, with LLMs on both sides producing or summarizing bullshit. I can totally see, though, that some people can more easily envision shortcutting bullshit processes via LLMs than simply changing or removing them.

[–] [email protected] 99 points 2 days ago (9 children)

Other than endless posts from the general public telling us how amazing it is, peppered with decision makers using it to replace staff and then the subsequent news reports how it told us that we should eat rocks, or some variation thereof, there's been no impact whatsoever in my personal life.

In my professional life as an ICT person with over 40 years' experience, it's helped me identify which people understand what it is and, more specifically, what it isn't (intelligent), and respond accordingly.

The sooner the AI bubble bursts, the better.

[–] Norin 62 points 2 days ago* (last edited 2 days ago) (2 children)

For work, I teach philosophy.

The impact there has been overwhelmingly negative. Plagiarism is more common, student writing is worse, and I need to continually explain to people that an AI essay just isn't their work.

Then there’s the way admin seem to be in love with it, since many of them are convinced that every student needs to use the LLMs in order to find a career after graduation. I also think some of the administrators I know have essentially automated their own jobs. Everything they write sounds like GPT.

As for my personal life, I don’t use AI for anything. It feels gross to give anything I’d use it for over to someone else’s computer.

[–] AFKBRBChocolate 27 points 2 days ago

My son is in a PhD program and is a TA for a geophysics class that's mostly online, so he does a lot of grading of assignments/tests. The number of things he gets that are obviously straight out of an LLM is really disgusting. Like sometimes they leave the prompt in. Sometimes they submit it when the LLM responds that it doesn't have enough data to give an answer and refers to ways the person could find out. It's honestly pretty sad.

[–] [email protected] 16 points 2 days ago

convinced that every student needs to use the LLMs in order to find a career after graduation.

Yes, of course, why are bakers learning to use ovens when they should just be training on app-enabled breadmakers and toasters using ready-made mixes?

After all, the bosses will find the automated machine product "good enough." It's "just a tool, you guys."

Sheesh. I hope these students aren't paying tuition, and even then, they're still getting ripped off by admin-brain.

I'm sorry you have to put up with that. Especially when philosophy is all about doing the mental weightlifting and exploration for oneself!

[–] [email protected] 6 points 1 day ago

Main effect is lots of whinging on Lemmy. Other than that, minimal impact.

[–] [email protected] 71 points 2 days ago (1 children)

As a software developer, the one usecase where it has been really useful for me is analyzing long and complex error logs and finding possible causes of the error. Getting it to write code sometimes works okay-ish, but more often than not it's pretty crap. I don't see any use for it in my personal life.

I think its influence is negative overall. Right now it might be useful for programming questions, but that's only the case because it's fed with human-generated content from sites like Stack Overflow. Now those sites are slowly dying out due to people using ChatGPT, and this will have the inverse effect: in the future, AI will have less useful training data, which means it'll become less useful for future problems, while having effectively killed those useful sites in the process.

Looking outside of my work bubble, its effect on academia and learning seems pretty devastating. People can now cheat themselves towards a diploma with ease. We might face a significant erosion of knowledge and talent with the next generation of scientists.

[–] Tyfud 12 points 2 days ago* (last edited 2 days ago)

I wish more people understood this. It's short term, mediocre gains, at the cost of a huge long term loss, like stack overflow.

[–] [email protected] 26 points 2 days ago

I have a gloriously reduced monthly subscription footprint and application footprint because of all the motherfuckers that tied ChatGPT or other AI into their garbage and updated their terms to say they were going to scan my private data with AI.

And, even if they pull it, I don't think I'll ever go back. No more cloud drives, no more 'apps'. Webpages and local files on a file share I own and host.

[–] [email protected] 52 points 2 days ago (4 children)

It cost me my job (partially). My old boss swallowed the AI pill hard and wanted everything we did to go through GPT. It was ridiculous and made it so things that would normally take me 30 seconds now took 5-10 minutes of "prompt engineering". I went along with it for a while but after a few weeks I gave up and stopped using it. When boss asked why I told her it was a waste of time and disingenuous to our customers to have GPT sanitize everything. I continued to refuse to use it (it was optional) and my work never suffered. In fact some of our customers specifically started going through me because they couldn't stand dealing with the obvious AI slop my manager was shoveling down their throat. This pissed off my manager hard core but she couldn't really say anything without admitting she may be wrong about GPT, so she just ostracized me and then fired me a few months later for "attitude problems".

[–] [email protected] 20 points 2 days ago

I'm sorry.

Managers tend to be useless fucking idiots.

[–] Nostalgia 56 points 2 days ago (2 children)

AI has completely killed my desire to teach writing at the community college level.

[–] [email protected] 5 points 1 day ago

my face hurts from all the extra facepalms

[–] AFKBRBChocolate 23 points 2 days ago (3 children)

I manage a software engineering group for an aerospace company, so early on I had to have a discussion with the team about acceptable and non-acceptable uses of an LLM. A lot of what we do is human rated (human lives depend on it), so we have to be careful. Also, it's a hard no on putting anything controlled or proprietary in a public LLM (the company now has one in-house).

You can't put trust into an LLM because they get things wrong. Anything that comes out of one has to be fully reviewed and understood. They can be useful for suggesting test cases or coming up with wording for things. I've had employees use it to come up with an algorithm or find an error, but I think it's risky to have one generate large pieces of code.

[–] [email protected] 21 points 2 days ago (2 children)

I work in an office providing customer support for a small pet food manufacturer. I assist customers over the phone, email, and a live chat function on our website. So many people assume I'm AI in chat, which makes sense. A surprising number think I'm a bot when they call in, because I guess my voice sounds like a recording.

Most of the time it's just a funny moment at the start of our interaction, but especially in chat, people can be downright nasty. I can't believe the abuse people hurl out when they assume it's not an actual human on the other end. When I reply in a way that is polite, but makes it clear a person is interacting with them, I have never gotten a response back.

It's not a huge deal, but it still sucks to read the nasty shit people say. I can also understand people's exhaustion with being forced to deal with robots from my own experiences when I've needed support as a customer. I also get feedback every day from people thankful to be able to call or write in and get an actual person listening to and helping them. If we want to continue having services like this, we need to make sure we're treating the people offering them decently so they want to continue offering that to us.

[–] [email protected] 41 points 2 days ago (1 children)

Impact?

My company sells services to companies trying to implement it. I have a job due to this.

Actual use of it? Just wasted time. The verifiable answers are wrong, the unverifiable answers don't get me anywhere on my projects.

[–] GreenKnight23 26 points 2 days ago

I worked for a company that did not govern AI use. It was used for a year before they were bought.

I stopped reading emails because they were absolute AI generated garbage.

Clients started to complain, and one even left because they felt they were no longer a priority for the company. They were our 5th largest client, with an MRR of $300k+.

They still did nothing to curb AI use.

They then reduced the workforce in the call center because they implemented an AI chat bot and began to funnel all incidents through it first before giving out a phone number to call.

The company was then acquired a year ago. The new administration banned all AI usage under security and compliance guidelines.

Today, the new company hired about 20 new call center support staff. Customers are now happy. I can read my emails again because they contain competent human thought with industry jargon and not some generated thesaurus.

Overall, I would say banning AI was the right choice.

IMO, AI is not being used in the most effective ways and causes too much chaos. Cryptobros are pushing AI to an early grave because all they want is a cash cow to replace crypto.

[–] IMNOTCRAZYINSTITUTION 34 points 2 days ago (2 children)

My last job was making training/reference manuals. Management started pushing ChatGPT as a way to increase our productivity and forced us all to incorporate AI tools. I immediately began to notice my coworkers' work decline in quality, with all sorts of bizarre phrasings and instructions that were outright wrong. They weren't even checking the output before sending it out. Part of my job was to review and critique their work, and I started having to send way more back than before. I tried it out myself but found that it took more time to fix all of its mistakes than to just write it myself, so I continued to work with my brain instead. The only thing I used AI for was when I had to make videos with narration. I have a bad stutter that made voiceover hard, so ElevenLabs voices ended up narrating my last few videos before I quit.

[–] MintyAnt 2 points 1 day ago

Luckily we don't need accurate info in training/reference manuals, it's not like safety is involved! ...oh wait

[–] frickineh 32 points 2 days ago (1 children)

I used it once to write a proclamation for work and what it spit out was mediocre. I ended up having to rewrite most of it. Now that I'm aware of how many resources AI uses, I refuse to use it, period. What it produces is in no way a good trade for what it costs.

[–] weeeeum 12 points 2 days ago (1 children)

Scam emails are a lot more coherent now

[–] frog_brawler 4 points 1 day ago* (last edited 1 day ago)

I get an email from corporate about once a week that mentions it in some way. It gets mentioned in just about every all hands meeting. I don’t ever use it. No one on my team uses it. It’s very clearly not something that’s going to benefit me or my peers in the current iteration, but damn… it’s clear as day that upper management wants to use it but they don’t know how to implement it.

[–] Sludgehammer 22 points 2 days ago (1 children)

Searching the internet for information about... well anything has become infuriating. I'm glad that most search engines have a time range setting.

[–] glitchdx 2 points 1 day ago

I have a book that I'm never going to write, but I'm still making notes and attempting to organize them into a wiki.

Using almost natural conversation, I can explain a topic to the GPT, make it ask me questions to get me to write more, then have it summarize everything back to me in a format suitable for the wiki. In longer conversations, it will also point out possible connections between unrelated topics. It does get things wrong sometimes, though, such as forgetting what faction a character belongs to.

I've noticed that GPT-4o is better for exploring new topics, as it has more creative freedom, and o1 is better for combining multiple fragmented summaries, as it usually doesn't make shit up.

[–] [email protected] 26 points 2 days ago

it works okay as a fuzzy search over documentation.
...as long as you're willing to wait.
...and the documentation is freely available.
...and doesn't contain any sensitive information.
...and you very specifically ask it for page references and ignore everything else it says.

so basically, it's worse than just searching for one word and pressing "next" over and over, unless you don't know what the word is.
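The "search for one word and press next" baseline the comment prefers is trivial to reproduce. A minimal sketch (the function, the fixed 2000-character "pages", and the toy document are all made up for illustration) that returns page references for a keyword, which is the one output the comment says is worth keeping:

```python
def find_pages(doc_text: str, word: str, page_chars: int = 2000) -> list[int]:
    """Return 1-based 'page' numbers containing the word, splitting the
    document into fixed-size chunks as a stand-in for real pagination."""
    pages = [doc_text[i:i + page_chars] for i in range(0, len(doc_text), page_chars)]
    return [n for n, page in enumerate(pages, start=1) if word.lower() in page.lower()]

# Toy document: 1800 chars of filler, then 3400 chars mentioning "timeout".
docs = ("intro " * 300) + ("the timeout flag controls retries " * 100)
print(find_pages(docs, "timeout"))  # [1, 2, 3]
```

No waiting, no hallucinated prose, and it only fails in the case the comment concedes: when you don't know which word to search for.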

[–] Skanky 7 points 2 days ago

It's made our marketing department even lazier than they already were
