Hot take: LLM technology is being purposefully framed as AI to avoid accountability
(self.technology)
Well, LLMs, and particularly GPT and its competitors, rely on Transformers, which is a relatively recent theoretical development in the machine learning field. Of course it's based on prior research, and maybe there's even prior art buried in some obscure paper or 404 link, but if that's your measure then there is no "novel theoretical approach" for anything, ever.
I mean, I'll grant that the available input data and compute for machine learning have increased exponentially, and that's certainly an obvious factor in the improved output quality. But that's not all there is to the current "AI" summer; general scientific progress played a non-minor part as well.
In summary, I disagree that data/compute scale is the deciding factor here; it's the deep learning architecture, IMHO. The former didn't change that much over the last half decade; the latter did.
Now as I stated in my first comment in these threads, I don’t know terribly much about the technical details behind current LLM’s and I’m basing my comments on my layman’s reading.
Could you elaborate on what you mean about the development of deep learning architecture in recent years? I'm curious; I'm not trying to be argumentative.
Transformers. Fun fact, the T in GPT and BERT stands for "transformer". They are a neural network architecture that was first proposed in 2017 (or 2014, depending on how you want to measure). Their key novelty is implementing an attention mechanism and a context window without recurrence, which was the method most earlier NNs (like LSTMs) used for that.
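If it helps make that concrete, here's a minimal NumPy sketch of the scaled dot-product attention step those models are built around. The function and variable names are my own, and the toy input is made up; the point is just that every token attends to every other token in one matrix operation, with no recurrent loop over time steps.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, computed over the whole
    sequence at once -- no recurrence over time steps."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of value rows

# toy example: 3 tokens, embedding dim 4, self-attention (Q = K = V)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4): one mixed vector per token
```

A recurrent net would have to walk through the tokens one at a time to build up context; here the whole context window is handled in a single matrix product, which is also why it parallelizes so well on GPUs.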
The wiki page I linked above is admittedly a bit technical; this article's explanation might be a bit more friendly to the layperson.
Thanks for the reading material. I'm really not familiar with Transformers beyond the most basic info. I'll give it a read when I get a break from work.