this post was submitted on 22 Sep 2024
10 points (57.8% liked)
Futurology
1852 readers
founded 1 year ago
Oh no, they'll write really average essays! What ever shall we do!!!
Or maybe they'll produce janky videos that don't make any sense, so they have to be shorter than 10 seconds to cover up the jank!!!
Language models aren't intelligent. They have no will of their own, and they don't "understand" anything they write. There's no internal thought space for comprehension. They're not learning. They're "trained" to mimic statistically average results within a search space.
They're mimics, and can't grow beyond or outdo what they've been given to mimic. They can string lots of information together, but that doesn't mean they know what they're saying, or how to get anything done.
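To make the "statistical mimicry" point concrete: here's a toy sketch, nowhere near a real transformer, of a bigram model that can only reproduce word transitions it has already seen in its training text. The corpus and function names are made up for illustration.

```python
import random
from collections import defaultdict

# Toy "training" text -- the only thing the model will ever know.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which words were observed following each word.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit words by sampling only transitions seen in training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 6))
```

Every adjacent word pair it emits already occurred in the training text; it cannot invent a transition it never saw. Real LLMs generalize far better than this, but the basic regime, sampling continuations weighted by training statistics, is the same.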
Given that in the past 15 years we went from "solving regression problems a little bit better than linear models, some of the time" to what we have now, it's not unfounded to think that 15 years from now people could be giving LLMs access to code execution environments.
People have been doing that already; check out Devin.