this post was submitted on 31 Aug 2023
Science
This magazine is dedicated to discussions of scientific discoveries, research, and theories across fields including physics, chemistry, biology, astronomy, and more. Whether you are a scientist, a science enthusiast, or simply curious about the world around us, this is the place for you. Here you can share your knowledge, ask questions, and engage in discussions on a wide range of scientific topics, from the latest breakthroughs to historical discoveries and ongoing research.
you are viewing a single comment's thread
This is a pretty good article, though a little light on the problems of 'AI' not understanding the question you ask it or the answer it gives you, and making shit up. It is undoubtedly a powerful tool for some tasks; for others, correcting its output takes longer than starting from scratch. Vague acknowledgements that it is good at some things don't do much to help people work out what it is really, really bad at, or why.
Current LLMs are tools, and you have to understand how to operate a tool in order to use it effectively. Swinging a hammer at a screw doesn't work; we'll learn how to use these tools eventually.
We also currently don't give these LLMs much structure to work within. What you call "making shit up" I'd call imagination. Paired with other specialized systems, it will form an important part of the whole. Your brain makes shit up all the time; it's just that you have other specialized structures that take that imagined thought, process it, and put it to constructive ends. Every time you do a Google search to fact-check something, part of your mind has to imagine what you might be looking for so that you can then go find it. AI systems will eventually be able to do the same thing.
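To make that concrete, here's a toy sketch of the loop I mean. Everything in it is an invented stand-in, not any real API: one function fakes an LLM's plausible-but-unreliable "imagination", and another fakes a specialized verification system (search engine, database, etc.) that keeps only the claims that check out.

```python
# Toy sketch of the "imagine, then verify" loop described above.
# imagine_candidates stands in for an LLM; verify stands in for a
# specialized fact-checking system. Both are hypothetical placeholders.

TRUSTED_FACTS = {
    "water boils at 100 C at sea level",
}

def imagine_candidates(question: str) -> list[str]:
    """Stand-in for an LLM: plausible guesses, some right, some made up."""
    return [
        "water boils at 50 C at sea level",   # hallucination
        "water boils at 100 C at sea level",  # correct
    ]

def verify(claim: str) -> bool:
    """Stand-in for a specialized verifier (search, database, etc.)."""
    return claim in TRUSTED_FACTS

def answer(question: str) -> str | None:
    """Keep only the imagined claims that survive verification."""
    for claim in imagine_candidates(question):
        if verify(claim):
            return claim
    return None

print(answer("At what temperature does water boil at sea level?"))
# -> water boils at 100 C at sea level
```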
Oh stop it.
It's a high-tech magic 8-ball. It can only regurgitate plausible-sounding text based on what has been said before. It cannot create anything new. It doesn't understand anything. It's just a parlour trick.
I doubt you'd consider any particular clump of specialized neurons in your head a sentient being either, on its own. And yet when structured in particular ways, impressive things come out of the collective. I'm not talking about a single LLM achieving consciousness; that should have been obvious from my comment. But if you want to be a contrarian just for the sake of it, I can't stop you.
Wut?
I'm trying to make the point that these AI models are tools with specific purposes and functions, and that when combined and structured in novel ways, their "cognition" will improve and come to emulate our own. You are stuck on calling LLMs, as they exist today, parlor tricks.
There are excellent uses of 'AI'. They can be very good at doing a vast quantity of repetitive, deterministic tasks very fast. But they can't apply judgement, deal with nuance, or understand context. They're just never going to be able to do what you want them to do. The idea that they can is an illusion. An accidental illusion, for sure, but an illusion all the same.
If, when properly structured and interconnected, they're still an illusion, then so is human intelligence. There's no magic sauce or ghost in the shell behind human cognition, sorry.
And again, you ignore my argument and put another in my mouth. I'm talking about a sum of networks, not individual ones. You're arguing in bad faith, which is tiresome.