I don't know how you can say this when programming is one of the best uses for AI
Eh, Copilot is still more miss than hit whenever I use it. It's probably equally dogshit for other uses as well, unless your goal is to just generate bullshit and you need to hit a specific word count.
As a senior dev, I have no use for it in my workflow. The only purpose it would serve is to reduce the amount of typing I do, and I spend about 5-10% of my time actually writing code. The rest of my dev time is spent architecting, debugging, testing, or documenting, and LLMs aren't really good at most of those once you move past the most superficial levels of complexity.

Besides, I don't actually want something to reduce the amount I'm typing. If I'm typing so much that I'm getting annoyed, it's a sure sign that I've done something bad. If I'm writing boilerplate, it's time to write an abstraction that eliminates it. If I'm writing repetitive tests, it's a sign I need to move to a property-based testing framework like Hypothesis. If an LLM just spits all of this out for me, I end up with code that is harder to understand and maintain.
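To make the Hypothesis point concrete, here's the kind of minimal property-based test I mean (rle_encode/rle_decode are toy examples, not real code of mine):

```python
from hypothesis import given, strategies as st

def rle_encode(s):
    """Toy run-length encoder: 'aab' -> [('a', 2), ('b', 1)]."""
    pairs = []
    for ch in s:
        if pairs and pairs[-1][0] == ch:
            pairs[-1] = (ch, pairs[-1][1] + 1)
        else:
            pairs.append((ch, 1))
    return pairs

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

@given(st.text())
def test_round_trip(s):
    # One stated property; Hypothesis generates the inputs,
    # so there are no hand-written example cases to repeat.
    assert rle_decode(rle_encode(s)) == s
```

One property like that replaces pages of near-identical example-based tests, which is exactly the repetition an LLM would happily autocomplete instead.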
LLMs are fine for learning and junior positions where you'll have more experienced folks reviewing code, but it just is not that helpful past a certain point.
Also, this is probably a small thing, but I have yet to find an LLM that writes anything other than shitty, terrible shell scripts. Please, for the love of God, don't use an LLM to write shell scripts. If you must, then at least pass the results through shellcheck and fix every issue it flags.
I've seen it mainly used to assist with Python scripts, which works well. Not sure how well it does with shell scripts.
Python is my primary language. For the way I write code and solve problems, it's the language where I need the least help from an LLM. Python lets you write code that is incredibly concise while still being easy to read. There's more of a case to be made for something like Go, since it seems like every single god damned function call ends up being
```go
variable, err := someFuckingShit()
if err != nil {
    // deal with the error right here
}
```
with manual handling at every call site, instead of nice exception handling where an error propagates on its own until something catches it (see the Python sketch below). Even there, my IDE does that for me without requiring a computationally expensive LLM to do the work.

Like, some people have a more conversational development style, and I guess LLMs work well for them. I end up constantly context switching between code-review mode and code-writing mode, which is incredibly disruptive.
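For comparison, that Python sketch; the function is the same made-up toy as the Go one above:

```python
def some_fucking_shit():
    """Toy stand-in for any call that can fail."""
    raise OSError("something broke")

try:
    # Chain as many fallible calls as you like; no per-call err check.
    value = some_fucking_shit()
except OSError as exc:
    # One handler at the level where you can actually do something about it.
    print(f"handled once, here: {exc}")
```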
Heh, for me even the newest models like the new Claude are only really useful when I did the thinking and the initial code writing, and I ask it to simplify the result or make it use more efficient libraries/features. Because when I ask it to do my work it produces shit, and I'm very junior level.
yeah it's definitely an assistant not a cheap developer... or is it :O
Devin just came to take your software job… will code for $8/hr https://www.youtube.com/watch?v=GhIm-Dk1pzk /s
I'm learning JavaScript and love it. It's so much easier to query Mistral/Qwen/DeepSeek distilled models than to scroll through endless search results hoping someone ran into the same problem I did.
I also run the AI models in LM Studio on my own machine, so I'm happy with that as well; I try to self-host where I can.
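If anyone wants the same setup: LM Studio can expose a local OpenAI-compatible server (by default at http://localhost:1234/v1), so a small script talks to your own machine instead of a cloud API. The model name below is just a placeholder for whatever you've loaded locally:

```python
from openai import OpenAI  # pip install openai

# LM Studio's local server speaks the OpenAI API; the key is a placeholder.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="qwen2.5-coder-7b-instruct",  # substitute the model you run
    messages=[{"role": "user", "content": "Explain JS closures in 3 lines."}],
)
print(response.choices[0].message.content)
```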
I know it can't take my job, because I tried to make it do my job. Spoiler: it can't. That's because most jobs aren't doing things that have been done so often that Claude has an example in its training data set. If your job is that basic, then yes, an AI will take it from you. Most of the programming job is actually solving a problem within the context of the codebase, not the coding itself. I work with old and archaic technology from the '60s to the '90s, and let me tell you, the official docs are far more factual than any AI model, which starts spewing bullshit by the second prompt.
A lot of comments in that YouTube thread for Devin are not positive.
Every time I try to use it, it hallucinates bugs. It's a waste of time for me.
Sorry but no.
It's good when what you're trying to do has been done in the past by thousands of people (thanks to the free training data), but it's really bad for new use cases. After all, it's a glorified and expensive autocomplete tool trained on the code they scraped. It's not magic, it's math.
But you don't get intelligence or creativity from these tools. It's math! Math is the least creative domain on earth. Since when is being a programmer just typing out portions of code from boilerplate and examples off the internet?
It's the logical thinking: taking into account all the parameters and constraints, breaking the problem into pieces of code, checking it, testing it, deploying it, supporting it.
OK, the goal of programming is to solve a problem, but usually not all the parameters of the problem can be reduced to a mathematical form.
AIs are far from being able to do that, and the gain/cost ratio is not proven at all. These companies are so committed to AI (in terms of money invested) that THEY MUST make you use their AI products, whatever their quality. They even use a marketing term to hide their products' bad answers: hallucinations. "Hallucination" is just a fancy word to avoid saying: totally wrong.
Do you find it normal to buy a solution that never produces 100% good results (more like a 20% failure rate)?
In my industry, this AI trend (pushed mainly by managers who don't know what programming, or "AI", really is) generates a lot of bad-quality code from our junior devs, and that's not something I want to push to production.
In fact, a lot of PoCs around ML never make it from the R&D phase to actual production. It's too risky for the business (as human lives could be impacted).