this post was submitted on 15 Sep 2023
466 points (97.2% liked)
Technology
you are viewing a single comment's thread
Devil's advocate, though: with things like 4GLs, it was still all on the human to come up with the detailed spec. Best case, you worked very hard, wrote a lot of things down, generated the code, saw that it didn't work, and then ???. That "???" at the end was you as the programmer sitting alone in a room, trying to figure out what a non-responsive black box might have wanted you to say instead.
It's qualitatively different if you can just talk to the black box as though it were a programmer. It's less of a black box at that point. It understands your language, and it understands the code. So you can start with the spec, but when something inevitably doesn't work, the "???" step doesn't just fall back on you figuring out, with no help, what you did wrong. You can ask it questions and make suggestions. You can run experiments. Today's LLMs hit the wall pretty quickly there, and maybe they always will. There's certainly the viewpoint that "all they do is model text, and they can't really learn anything."
I think that's fundamentally wrong. I'm a pretty solid programmer. I have a PhD in Computer Science, and I've worked as a software engineer and an architect throughout a pretty long career. And everything I've ever learned has basically come through language: through reading, writing, speaking, and listening to English and a few other languages. To say that I can learn what I've learned, but that it's fundamentally impossible for a machine to learn it, is to resort to mysticism. At some point, we will have AIs that can do what I do today. I think that's inevitable.
Well, that particular conversation typically happens in relation to something like a business rules engine, or one of those drag-and-drop visual programming languages that everyone touts as letting you get rid of programmers (but that in reality just limit you to a really hard-to-work-with programming language). Still, there's a lot of overlap with the current LLM-based hype.
If we ever do get an actual AI, then yes, AI will probably end up writing most of the programs, although programmers may still exist in some capacity, maybe for the purpose of creating flow charts or something to hand to the AIs. But we're a long way off from a true AI, so everyone acting like it's going to happen any day now is as laughable as everyone promising that cold fusion was going to happen any day now back in the '80s. Ironically, I think we're more likely to see workable cold fusion before we see true AI; some of the hot fusion experiments happening lately are very promising.