this post was submitted on 09 Dec 2024
9 points (76.5% liked)

Free Open-Source Artificial Intelligence

3028 readers
4 users here now

Welcome to Free Open-Source Artificial Intelligence!

We are a community dedicated to forwarding the availability and access to:

Free Open Source Artificial Intelligence (F.O.S.A.I.)

More AI Communities

LLM Leaderboards

Developer Resources

GitHub Projects

FOSAI Time Capsule

founded 2 years ago
MODERATORS
 

I see ads for paid prompting courses a bunch. I recommend having a look at this guide page first. It also has some other info about LLMs.

top 5 comments
sorted by: hot top controversial new old
[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (1 children)

Is prompt engineering even a scientifically backed discipline? Or is this some esoterics like homeopathy? I mean sure, putting in the correct prompt is crucial in making LLMs perform well. But this highly depends on the exact model and how it was fine-tuned. And it takes quite some effort and a methodical approach to test these things. I wonder if these courses consist of anecdotal evidence. Or if people actually studied and tested their advice... Because a lot of what I read in internet forums and alike, is trial and error, everyone has their own truths and lore...

[–] [email protected] 2 points 1 month ago (1 children)

You are completely right and it is mostly about trial and error. I'd assume these courses mainlyl teach things you can do with the big bois, those being by the obvious big evil AI companies. It's very much an overblown topic and companies pretend it's actually a fancy thing to learn and be good at.

The linked guide just explains the basic concepts of few shot prompting, CoT and RAG and stuff. Even these terms I feel, make the topic more complicated than it is. Could literally be summarized to

  • Use examples of what you want
  • Use near-zero temperature for almost everything
  • For complex tasks, tell it to provide its internal thought proccess before providing the answer (or just use the QwQ model)
  • maybe SCREAM AT THE LLM IN ALLCAPS if something is really important
[–] [email protected] 2 points 1 month ago* (last edited 1 month ago) (1 children)

I skimmed the link you provided. Yes, that seems to include solid advice. Good for beginners, nothing new to me, since I (somewhat) followed the AI hobby enthusiast community since LLaMA1. But I have to look up what writing all caps does, I suppose that severely messes with the tokenizer?! But I've seen the big companies do this, too, in some of the leaked prompts.

And I guess with the "early" models from 2023 and before, it was much more important to get the prompts exactly right. Not confuse it etc. That got way better as models improved substancially, and now these models (at least) get what I want from them almost every time. But I think we picked the low hanging fruits and we can't expect the models itself to improve as fast as they did in the past. So it's down to prompting strategies and other methods to improve the performance of chatbots.

[–] [email protected] 2 points 1 month ago (1 children)

ooh, leaked prompts? which ones are you talking about?

[–] [email protected] 1 points 1 month ago

Seems I forgot to bookmark what I read... There are people telling the chatbots to repeat the previous text, so it outputs the prompt. It got way more complicated than that, but there are a lot of methods and leaked prompts.