Not quite ELI5, but I'll try a "basic understanding of calculus" level.
The GPT model learns complex relationships between words (or tokens to be more specific, explained below) as probability scores ranging from 0 to 1. In very broad terms, you could think of these as the likelihood of one word appearing next to another in the massive amounts of text the model was trained on: the words "apple" and "pie" are often found together, so they might have a high-ish probability of 0.7, while the words "apple" and "chair" might have a lower score of just 0.2. Recent GPT models consist of several billion of these scores, known as the weights. Once their values have been established by feeding lots of text through the model's training process, they are all that's needed to generate more text.
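To make that concrete, here's a deliberately tiny Python sketch of the "weights as likelihood scores" idea, using the apple/pie numbers above. It's only an illustration of the analogy, not how a real GPT stores or uses its weights (those live in huge matrices, not a lookup table):

```python
import random

# Toy "weights": likelihood scores for word pairs, as in the apple/pie example.
weights = {
    ("apple", "pie"): 0.7,
    ("apple", "chair"): 0.2,
    ("apple", "juice"): 0.6,
}

def next_word(current):
    # Gather the scores of words that can follow the current word...
    candidates = {b: score for (a, b), score in weights.items() if a == current}
    words = list(candidates.keys())
    scores = list(candidates.values())
    # ...and pick one semi-randomly, favouring the higher-scoring ones.
    return random.choices(words, weights=scores, k=1)[0]

print(next_word("apple"))  # usually "pie" or "juice", occasionally "chair"
```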
When feeding some input text into a GPT model, it is first chopped up into tokens that are each given a number: for example, the OpenAI tokenizer translates "Hello world!" into the numbers [15496, 995, 0]. You can think of it as the A=1, B=2, C=3... cipher we all learnt as kids, but with numbers also assigned to common words, syllables and punctuation. These numbers are then fed into a massive system of multivariable equations, where they are multiplied with the billions of weights of the model in a specific manner. This produces a probability score for every token the model knows, and one of the highest-scoring tokens is chosen semi-randomly as the model's output. This cycle is then repeated over and over to generate text, one token at a time.
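If you want to poke at the tokenization step yourself, OpenAI publishes its tokenizer as the open-source tiktoken library (pip install tiktoken); the GPT-2 encoding reproduces the numbers above:

```python
import tiktoken

enc = tiktoken.get_encoding("gpt2")
tokens = enc.encode("Hello world!")
print(tokens)              # [15496, 995, 0]
print(enc.decode(tokens))  # Hello world!
```

And the generate-one-token-at-a-time loop boils down to something like this sketch, where `model` stands in for all those weighted multiplications and is assumed to return one probability score per token in its vocabulary:

```python
import random

def generate(model, tokens, n_new):
    """Generate n_new tokens, one at a time, by repeatedly sampling."""
    for _ in range(n_new):
        probs = model(tokens)              # one score per token the model knows
        ids = list(range(len(probs)))
        # Pick one of the likelier tokens semi-randomly...
        next_token = random.choices(ids, weights=probs, k=1)[0]
        tokens = tokens + [next_token]     # ...and feed it back in for the next step.
    return tokens

def dummy_model(tokens):
    # Pretend probabilities for a 4-token vocabulary, just so the loop runs.
    return [0.1, 0.2, 0.3, 0.4]

print(generate(dummy_model, [2], n_new=5))  # e.g. [2, 3, 1, 3, 3]
```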