I suggest the tests themselves are lacking.
Given that “AI” is basically software that learns by example, it is fundamentally incapable of any true creativity - at least in its current form.
Humans, whether you want to acknowledge it or not, are also led by example. No act of artistic creation has ever been entirely original or without precedent.
Humans also learn by example.
I could come up with something new and "creative": have you ever heard of a Sxygun Gepyas?
But with zero context or reference to things you already know, it's meaningless. So creative ideas must at least partly build on existing things; otherwise the most creative ideas would come from a random generator.
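To put that concretely, here's a trivial Python sketch of such a random generator. Its output is maximally novel, having no precedent at all, and precisely because of that it means nothing:

    import random
    import string

    def random_word(length: int = 6) -> str:
        # Maximally "original": no precedent, no influence, no examples.
        # Also meaningless, because it connects to nothing the reader knows.
        return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

    print(random_word().capitalize(), random_word().capitalize())
    # prints two nonsense "words" in the spirit of "Sxygun Gepyas"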
An artificial neural network (such as the AI in question) is grounded in the same abstract mathematics that runs our brains, sans all the erratic behaviour caused by hormones and the slow ion channels that cap our thinking speed.
The only thing we're capable of that an ANN can't yet do is spontaneity, and I'm pretty sure this will be addressed in the future. That's if you even consider a spontaneous AI to be a good thing, which most AI researchers don't. Perhaps this lack of spontaneous behaviour (and consequently curiosity) is the reason AIs are still predictable/controllable.
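For what it's worth, the basic unit of that mathematics fits in a few lines. A minimal Python sketch of a single artificial neuron (illustrative only; the numbers are made up):

    import math

    def neuron(inputs, weights, bias):
        # Weighted sum of inputs, loosely analogous to a biological neuron
        # integrating incoming signals, followed by a sigmoid "firing"
        # nonlinearity.
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-z))

    print(neuron([0.5, 0.2], [1.5, -0.7], bias=0.1))  # activation between 0 and 1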
That's a misunderstanding of both what creativity is and what GPT-4 is doing.
If you show a machine how other people solved a test and then ask it to solve that test, it will solve it the way other people did. That's the opposite of creative thinking.
The value of creativity comes from the creator's perspective and context. I would never read a book written by an AI. It has nothing to say or contribute; it's just regurgitating phrases spat out by an algorithm and given false meaning.
That’s not how it works. It doesn’t just copy and paste.
It doesn't copy-paste. These larger models seem to have enough context to quote, but they still don't copy-paste.
It essentially reads incoming data and builds a huge multidimensional map of the words and the relationships between them. Then, when given an input, it traverses this map to figure out the most likely destination.
Depending on how you tune it, the outcome will differ from one occasion to the next.
It's a gross oversimplification, but a closer approximation than "copy-paste."
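As a rough illustration of that tuning knob, here's a toy Python sketch of temperature-scaled sampling. This is my own simplification, not GPT-4's actual code; the candidate tokens and scores are invented for the example:

    import math
    import random

    def sample_next_token(scores: dict, temperature: float = 1.0) -> str:
        # Divide raw scores by the temperature: low values sharpen the
        # distribution (near-deterministic), high values flatten it (varied).
        scaled = {tok: s / temperature for tok, s in scores.items()}
        peak = max(scaled.values())  # subtract the max for numerical stability
        weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
        total = sum(weights.values())
        probs = {tok: w / total for tok, w in weights.items()}
        # Weighted random draw: the "most likely destination" usually wins,
        # but not always, which is why output differs from run to run.
        return random.choices(list(probs), weights=list(probs.values()))[0]

    # Hypothetical scores for continuations of "The cat sat on the ...".
    candidates = {"mat": 4.0, "sofa": 2.5, "roof": 1.5, "protocol": -2.0}
    print(sample_next_token(candidates, temperature=0.7))  # almost always "mat"
    print(sample_next_token(candidates, temperature=2.0))  # noticeably more varied

The real thing operates over learned representations rather than a hand-written score table, but the sampling step works on this principle.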