this post was submitted on 11 Oct 2023
506 points (92.6% liked)

Technology

59196 readers
3776 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] FLX 0 points 1 year ago (8 children)

Indispensable, nothing less. lmao

Have fun when they decide to multiply the price by 10 and you're too dependent to have an alternative, or when it becomes stupid or malevolent 👍

[–] warbond 5 points 1 year ago (5 children)

Sorry, I'm not sure I understand how that makes it useless. I get the feeling that you just want to feel smug, so if it makes you feel better, go ahead, I guess.

[–] FLX 0 points 1 year ago (4 children)

Because it's too fragile and not ready to be used at scale without causing massive damage

Not useless for now (even if I'd like to know more about the domains where it's really "indispensable"), but as useless as a drill with a dead battery the day they decide to cut it off.

I don't find it future-proof, as impressive as some of the results are

[–] [email protected] 4 points 1 year ago (1 children)

Nowadays LLMs can be run on consumer hardware, so the "dead battery" analogy falls short here too.

[–] FLX 2 points 1 year ago (2 children)

With the same efficiency? I'm interested in an example.

Why is everyone using these crappy SaaS offerings then?

[–] AdrianTheFrog 3 points 1 year ago* (last edited 1 year ago)

Llama 2 and its derivatives, mostly. Simple local ui available here.

Not as good as ChatGPT 3.5 in my experience. It just kinda falls apart on anything too complex, and is a lot more likely to get things wrong.

I tried it out using the 'Open-Orca/OpenOrcaxOpenChat-Preview2-13B' 4-bit 32g model. It's surprisingly fast to generate; it seems significantly faster than ChatGPT on my 3060 (with ExLlama).

There are also some models tuned specifically to actually answer your requests instead of the 'As an AI language model' kind of stuff.

Edit: just tried a newer model and it's a lot better (dolphin-2.1-mistral-7b)
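[A rough back-of-the-envelope check of why a 13B model quantized to 4 bits fits on a 12 GB card like the 3060 mentioned above. This is illustrative arithmetic added for context, not a calculation from the thread, and it counts only the weights, ignoring KV cache and runtime overhead:]

```python
def weight_vram_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory needed for model weights alone, in decimal GB.

    Excludes KV cache, activations, and framework overhead, so real
    usage is somewhat higher.
    """
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 13B model at 4-bit quantization: ~6.5 GB of weights,
# which fits comfortably on a 12 GB RTX 3060.
print(round(weight_vram_gb(13, 4), 1))   # 6.5

# The same model at fp16 would need ~26 GB of weights alone.
print(round(weight_vram_gb(13, 16), 1))  # 26.0
```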

[–] [email protected] 1 points 1 year ago

For the same reason SaaS is popular in general: yes, you could get a VPS, install all the needed software on it, and keep it up to date, or you could pay a company to do all that for you.
