Wow, shockingly, employing a virtual dumbass who is confidently wrong all the time doesn't help people finish their tasks.
It's like employing a perpetually high idiot, but more productive while also being less useful. Instead of slow medicine you get fast garbage!
They tried implementing AI in a few of our systems and the results were always fucking useless. What we call "AI" can be helpful in some ways, but I'd bet the vast majority of it is bullshit half-assed implementations so companies can claim they're using "AI".
The one thing "AI" has improved in my life has been a banking app search function being slightly better.
Oh, and a porn game did okay with it as an art generator, but the creator was still strangely lazy about it. You're telling me you can make infinite free pictures of big tittied goth girls and you only included a few?
Generating multiple pictures of the same character is actually pretty hard. For example, let's say you're making a visual novel with a bunch of anime girls. You spin up your generative AI, and it gives you a great picture of a girl with a good design in a neutral pose. We'll call her Alice. Well, now you need a happy Alice, a sad Alice, a horny Alice, an Alice with her face covered with cum, a nude Alice, and a hyper breast expansion Alice. Getting the AI to recreate Alice, who does not exist in the training data, is going to be very difficult even once.
And all of this is multiplied ten times over if you want granular changes to a character. Let's say you're making a fat fetish game and Alice is supposed to gain weight as the player feeds her. Now you need everything I described, at 10 different weights. You're going to need to be extremely specific with the AI and it's probably going to produce dozens of incorrect pictures for every time it gets it right. Getting it right might just plain be impossible if the AI doesn't understand the assignment well enough.
Large "language" models decreased my workload for translation. There's a catch though: I choose when to use it, instead of being required to use it even when it doesn't make sense and/or where I know that the output will be shitty.
And, if my guess is correct, those 77% are caused by overexcited decision makers in corporations trying to shove AI into every single step of production.
I've always said this in many forums, yet people can't accept that the best use case for LLMs is translation, even for languages such as Japanese. There is a limit for sure, but the same goes for human translation without adding a lot of extra text to explain the nuance. At that point an essay is needed to dissect the entire meaning of something, not just a translation.
The workload that's starting now, is spotting bad code written by colleagues using AI, and persuading them to re-write it.
"But it works!"
'It pulls in 15 libraries, 2 of which you need to manually install beforehand, to achieve something you can do in 5 lines using this default library'
I was trying to find out how to get human-readable timestamps from my shell history. The AI gave me this crazy script. It worked, but it was super slow. Later I learned you could just do history -i.
Turns out, a lot of the problems in nixland were solved 3 decades ago with a single flag on built-in utilities.
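For reference, a minimal sketch of both routes. The epoch value and GNU date's -d flag are just illustration; history -i is a zsh flag, while bash gets the same effect through the HISTTIMEFORMAT variable.

```shell
# What the generated script was roughly doing: converting each '#<epoch>'
# line from the history file by hand (GNU date; BSD date spells this -r):
date -u -d @1700000000 '+%F %T'

# The built-in route: in zsh, `history -i` prints ISO-8601 timestamps;
# in bash, set HISTTIMEFORMAT and plain `history` shows readable dates.
HISTTIMEFORMAT='%F %T '
history
```

No loops, no parsing, no script: the shells have done this for decades.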
TBH those same colleagues were probably just copy/pasting code from the first Google result or Stack Overflow answer, so arguably AI did make them more productive at what they do.
yay!! do more stupid shit faster and with more baseless confidence!
You mean the multi-billion dollar, souped-up autocorrect might not actually be able to replace the human workforce? I am shocked, shocked I say!
Do you think Sam Altman might have… gasp lied to his investors about its capabilities?
Me: no way, AI is very helpful, and if it isn't then don't use it
The article: "created challenges in achieving the expected productivity gains"
"achieving the expected productivity gains"
Me: oh, that explains the issue.
It's hilarious to watch it used well and then human nature just kick in
We started using some "smart tools" for scheduling manufacturing and it's honestly been really really great and highlighted some shortcomings that we could easily attack and get easy high reward/low risk CAPAs out of.
Company decided to continue using the scheduling setup but not invest in a single opportunity we discovered which includes simple people processes. Took exactly 0 wins. Fuckin amazing.
The trick is to be the one scamming your management with AI.
“The model is still training…”
“We will solve this with Machine Learning”
“The performance is great on my machine but we still need to optimize it for mobile devices”
Ever since my fortune 200 employer did a push for AI, I haven’t worked a day in a week.
Not working and getting paid? Sounds like you just became a high level manager
The study identifies a disconnect between the high expectations of managers and the actual experiences of employees
Did we really need a study for that?
The study identifies a disconnect between the high expectations of managers and the actual experiences of employees using AI.
because on top of your duties you now have to check whatever the AI is doing in place of the employee it has replaced
AI is used stupidly a lot, but this seems odd. For me, GitHub Copilot has sped up writing code. Hard to say by how much, but it definitely saves me seconds several times per day. It certainly hasn't made my workload more...
Probably because the vast majority of the workforce does not work in tech but has had these clunky, failure-prone tools foisted on them by tech. Companies are inserting AI into everything, so what used to be a problem that could be solved in 5 steps now takes 6 steps, with the new step being "figure out how to bypass the AI to get to the actual human who can fix my problem".
I've thought for a long time that there are a ton of legitimate business problems out there that could be solved with software. Not with AI. AI isn't necessary, or even helpful, in most of these situations. The problem is that creating meaningful solutions requires the people who write the checks to actually understand some of these problems. I can count on one hand the number of business executives I've met who were actually capable of that.
They've got a guy at work whose job title is basically AI Evangelist. This is terrifying in that it's a financial tech firm handling twelve figures a year of business, the last place where people will put up with "plausible bullshit" in their products.
I grudgingly installed the Copilot plugin, but I'm not sure what it can do for me better than a snippet library.
I asked it to generate a test suite for a function, as a rudimentary exercise, so it was able to identify "yes, there are n return values, so write n test cases" and "You're going to actually have to CALL the function under test", but was unable to figure out how to build the object being fed in to trigger any of those cases; to do so would require grokking much of the code base. I didn't need to burn half a barrel of oil for that.
I'd be hesitant to trust it with "summarize this obtuse spec document" when half the time said documents are self-contradictory or downright wrong. Again, plausible bullshit isn't suitable.
Maybe the problem is that I'm too close to the specific problem. AI tooling might be better for open-ended or free-association "why not try glue on pizza" type discussions, but when you already know "send exactly 4-7-Q-unicorn emoji in this field or the transaction is converted from USD to KPW" having to coax the machine to come to that conclusion 100% of the time is harder than just doing it yourself.
I can see the marketing and sales people love it, maybe customer service too, click one button and take one coherent "here's why it's broken" sentence and turn it into 500 words of flowery says-nothing prose, but I demand better from my machine overlords.
Tell me when Stable Diffusion figures out that "Carrying battleaxe" doesn't mean "katana randomly jutting out from forearms", maybe at that point AI will be good enough for code.
For anything more than basic autocomplete, Copilot has only given me broken code. Not even subtly broken, just stupidly wrong stuff.
The billionaire owner class continues to treat everyone like shit. They blame AI and the idiots eat it up.
Lmao, so instead of ai taking our jobs, it made us MORE jobs.
Thanks, "ai"!
Except it didn't make more jobs, it just made more work for the remaining employees who weren't laid off (because the boss thought the AI could let them have a smaller payroll)
I have the opposite problem. Gen A.I. has tripled my productivity, but the C-suite here is barely catching up to 2005.
If used correctly, AI can be helpful and can assist in easy and menial tasks
I mean if it's easy you can probably script it with some other tool.
"I have a list of IDs and need to make them links to our internal tool's pages" is easy and doesn't need AI. That's something a product guy was struggling with and I solved in like 30 seconds with a Google sheet and concatenation
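That kind of task really is plain concatenation. A sketch with a made-up base URL, since the actual internal tool isn't named in the thread:

```shell
# Hypothetical internal-tool URL; the point is that simple string
# concatenation turns each ID into a link, no AI required.
printf 'https://tool.example.internal/item/%s\n' A123 B456 C789
```

The spreadsheet version is the same idea: put `="https://tool.example.internal/item/"&A1` in a cell and fill it down the ID column.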
It also helps you get a starting point when you don't know how to ask a search engine the right question.
But people misinterpret its usefulness and think it can handle complex and context-heavy problems, which most of the time will result in hallucinated crap.
This study failed to take into consideration the need to feed information to AI. Companies now prioritize feeding information to AI over actually making it usable for humans. Who cares about analyzing the data? Just give it to AI to figure out. Now data cannot be analyzed by humans? Just ask AI. It can't figure out? Give it more so it can figure it out. Rinse, repeat. This is a race to the bottom where information is useless to humans.
Admittedly I only skimmed the article, but I think one of the major problems with a study like this is how broad "AI" really is. MS Copilot is just Bing search in a different form unless you have it hooked up to your organization's data stores, collaboration platforms, productivity applications, etc., and is not really helpful at all. Lots of companies I speak with are in a pilot phase of Copilot that doesn't really show much value, because giving it access to the organization's data is a big security challenge. On the other hand, a chat bot inside a specific product, trained on that product specifically and with access to the data it needs to return valuable answers to prompts it can even assist in writing, can be pretty powerful.
The other 23% were replaced by AI (actually, their workload was added to that of the 77%)