It was impossible for a computer to be smart enough to beat grandmasters at chess, until it wasn't. It was impossible to beat Go masters at Go, until it wasn't.
No software engineering jobs are getting replaced this year or next year. But considering the rapid pace of AI development, and considering how much code development is just straight up redundant... looking 20 years out, it's not so bright.
It would be way better to start putting AI legislation in place this year. That, or it's time to start transitioning to UBI.
I am an actual (senior) software engineer, with a background in ML to boot.
I would start to worry if we were anywhere close to even dreaming of how AGI might actually work, but we're not. It's purely in the realm of science fiction. Until you meet the bar of AGI, there's absolutely no risk of software engineering jobs being replaced.
Go and chess are games with a fixed, simple ruleset, which makes them very well suited to what computers are really good at. Software engineering is the art of turning the ambiguous and ill-defined into something entirely unambiguous and precisely defined, and we are so far from achieving that in computers it's not even funny. ML is ultimately just applied statistics. It's not magic, and it's far from anything we would consider "intelligence".
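To be concrete about the "applied statistics" point: at its simplest, "learning" is just fitting parameters to data. A toy sketch, with numbers invented purely for illustration:

```python
import numpy as np

# Fitting a line by ordinary least squares -- the statistical core that
# fancier ML methods generalize. The data below is made up.
X = np.array([[50.0], [80.0], [120.0], [200.0]])  # inputs
y = np.array([150.0, 230.0, 340.0, 560.0])        # targets

# Append a bias column and solve for the best-fit weights.
# This is the entire "training" step: linear algebra, not magic.
A = np.hstack([X, np.ones((len(X), 1))])
weights, *_ = np.linalg.lstsq(A, y, rcond=None)
print(weights)  # fitted slope and intercept
```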
I do think we need legislation targeting ML, but not because of "omg our jobs". Rather, we need legislation to stop huge tech companies from vacuuming up any and all data on the general public and using that data to manipulate and control the public.
Also, LOL at "how much code development is straight up redundant". If you think development amounts to just writing a bunch of boilerplate as though we were some kind of assembly line putting together the same thing over and over again, you're sorely mistaken.
I think you overestimate what the average software developer is doing.
Do I think that in 10 years AI will be patching the Linux kernel or optimizing AWS scaling functions? No. Do I think it will be creating functional CRUD apps with Django or Ruby on Rails? Yes, and I think that's what a large share of software developers are doing. Even if it's not a majority, a lot of the more precarious developers without a CS degree will probably lose their jobs. Not every developer is a senior engineer working on ML.
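To illustrate how formulaic that kind of work is, here's a minimal sketch of a CRUD layer (plain Python with sqlite3 so it's self-contained; the table and fields are invented, and frameworks like Django or Rails generate essentially this from a model definition):

```python
import sqlite3

# A bare-bones create/read/update/delete layer over one invented table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")

def create(title, body):
    cur = conn.execute("INSERT INTO articles (title, body) VALUES (?, ?)", (title, body))
    conn.commit()
    return cur.lastrowid

def read(article_id):
    return conn.execute(
        "SELECT id, title, body FROM articles WHERE id = ?", (article_id,)
    ).fetchone()

def update(article_id, title, body):
    conn.execute(
        "UPDATE articles SET title = ?, body = ? WHERE id = ?", (title, body, article_id)
    )
    conn.commit()

def delete(article_id):
    conn.execute("DELETE FROM articles WHERE id = ?", (article_id,))
    conn.commit()

# The same four verbs, repeated for every table in every app.
article_id = create("Hello", "First post")
print(read(article_id))
```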
This isn't proof of anything; I would just like to point out that a lot of science fiction has become reality in the last few decades.
At the end of the day, what is a computer except a machine with a fixed and simple ruleset: logic gates?
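A toy demonstration of that point: every Boolean operation, and from there arithmetic, can be composed out of a single simple gate. The sketch below uses NAND, one of the classic universal gates:

```python
# All of Boolean logic built from NAND alone.
def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

# A half adder -- the first step toward arithmetic -- is just two gates deep.
def half_adder(a, b):
    return xor(a, b), and_(a, b)  # (sum bit, carry bit)

print(half_adder(True, True))  # (False, True): 1 + 1 = 10 in binary
```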
You don't need AI to write you perfect C or JavaScript or HTML. You just need it to create an interface that lets an end user make the computer do what they want. I predict the AI itself won't write in those languages; it will tend to replace them. That's many orders of magnitude more computationally expensive, but the hardware is quickly becoming cheaper to buy than software engineers are to pay.
Obviously not; that's why libraries, OOP, and frameworks exist. I'm aware of that, and I'm not pretending I have anything to teach you about it either.
And I'll take the L if you have insider knowledge that massive creativity is required behind the scenes for widespread, fundamental overhauls of the way software works. But afaik, the fundamentals of code haven't changed in decades, and the way users interact hasn't changed much since smartphones became standard. I don't see a capitalistic incentive to pay for lots of new creativity instead of just making usable products.
It was impossible for computers to beat chess and Go masters as long as they tried to play like humans, modeling high-level understanding of strategy and abstract values. The computers started winning when they got fast enough to brute-force the game: to search enormous numbers of possible continuations from every legal move and pick the one that scored best.
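For a feel of what that kind of search looks like, here's a toy version for a game far smaller than chess: players alternately take 1-3 stones from a pile, and whoever takes the last stone wins. The program "plays well" purely by exhausting the game tree, with zero understanding:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win in 1-3-stone Nim."""
    if stones == 0:
        return False  # no move left: the previous player took the last stone
    # Try every legal move; a position is winning if some move
    # leaves the opponent in a losing position.
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

print([n for n in range(1, 13) if not can_win(n)])  # losing piles: multiples of 4
```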
This is basically the same difference between LLMs and "true" general AI. LLMs are brute-forcing the next line of a screenplay, with no way to incorporate abstract concepts like truth or logic. If you mistake an LLM for a general AI, you're going to be disappointed in its performance. If you accept that an LLM is a way of averaging past communications, and accept that a lot of its training set was fiction, then it's an amazing tool for generating consensus text (given that the consensus includes fantasies and lies). It's not going to write new code, but it will give you an approximation of all the existing examples of some algorithm. An approximation that may introduce errors, like copy-pasting sequential lines from every Stack Exchange answer.
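To make the "averaging past communications" idea concrete, here's a crude bigram sketch: pick each next word purely from frequencies in an invented miniature training text. Real LLMs are vastly more sophisticated, but the objective is the same next-token prediction, with no notion of truth built in:

```python
import random
from collections import Counter, defaultdict

# Invented miniature "training set".
training_text = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which.
follower_counts = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follower_counts[current_word][next_word] += 1

def generate(start, length=6):
    """Extend `start` by sampling followers in proportion to their counts."""
    words = [start]
    for _ in range(length):
        counts = follower_counts.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # plausible-sounding, truth-free text
```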
Computer graphics and computer game opponents are still doing the same things they were doing decades ago; the improvements mostly come from doing it all faster. General AI needs to do something different from what LLMs and most other ML algorithms do.