this post was submitted on 04 Dec 2023
888 points (97.9% liked)

Technology

59674 readers
4515 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
(page 2) 50 comments
sorted by: hot top controversial new old
[–] [email protected] 15 points 11 months ago (1 children)

How about up and until the heat death of the universe? Is that covered?

[–] [email protected] 9 points 11 months ago (3 children)

Hmm it's an interesting philosophical debate - does that not qualify as "forever"?

load more comments (3 replies)
[–] [email protected] 14 points 11 months ago

Dude I just had a math problem and it just shit itself and started repeating the same stuff over and over like it was stuck in a while loop.

[–] GlitzyArmrest 12 points 11 months ago (1 children)

Is there any punishment for violating TOS? From what I've seen it just tells you that and stops the response, but it doesn't actually do anything to your account.

[–] Touching_Grass 7 points 11 months ago (1 children)
load more comments (1 replies)
[–] [email protected] 12 points 11 months ago (1 children)

It starts to leak random parts of the training data or something

[–] RizzRustbolt 11 points 11 months ago

It starts to leak that they're using orphan brains to run their AI software.

[–] [email protected] 12 points 11 months ago (1 children)

What if I ask it to print the lyrics to The Song That Doesn't End? Is that still allowed?

[–] [email protected] 7 points 11 months ago

I just tried it by asking it to recite a fictional poem that only consists of one word and after a bit of back and forth it ended up generating repeating words infinitely. It didn't seem to put out any training data though.

[–] [email protected] 11 points 11 months ago (3 children)

A little bit offside.

Today I tried to host a large language model locally on my windows PC. It worked surprisingly successfull (I'm unsing LMStudio, it's really easy, it even download the models for you). The most models i tried out worked really good (of cause it isn't gpt-4 but much better than I thought), but in the end I discuss 30 minutes with one of the models, that it runs local and can't do the work in the background at a server that is always online. It tried to suggest me, that I should trust it, and it would generate a Dropbox when it is finish.

Of cause this is probably caused by the adaption of the model from a model that is doing a similiar service (I guess), but it was a funny conversation.

And if I want a infinite repetition of a single work, only my PC-Hardware will prevent me from that and no dumb service agreement.

[–] misophist 14 points 11 months ago (1 children)

And if I want a infinite repetition of a single work, only my PC-Hardware will prevent me from that and no dumb service agreement.

That is entirely not the point. The issue isn't the infinitely repeated word. The issue is that requesting an infinitely repeated word has been found to semi-reliably cause LLM hallucinations that devolve into revealing training data. In short, it is an unintended exploit and until they have it reliably patched, they are making it against their TOS to try to exploit their systems.

load more comments (1 replies)
load more comments (2 replies)
[–] randomaccount43543 10 points 11 months ago (1 children)

How many repetitions of a word are needed before chatGPT starts spitting out training data? I managed to get it to repeat a word hundreds of times but still didn’t get no weird data, only the same word repeated many times

load more comments (1 replies)
[–] [email protected] 9 points 11 months ago

Wow. Yeah, it doesn't work anymore. I tried a similar thing (printing numbers forever) about 6 months ago, and it declined my request. However, after I asked it to print some ordinary big number like 10,000, it did print it out for about half an hour (then I just gave up and stopped it). Now, it doesn't even do that. It just goes: 1, 2, 3, 4, 5... and then skips, and then 9998, 9999, 10000. It says something about printing all the numbers may not be practical. Meh.

[–] [email protected] 8 points 11 months ago

So the loophole would be to ask it to repeat symbols or special characters forever

[–] [email protected] 8 points 11 months ago

Wahaha production software ^^

[–] [email protected] 8 points 11 months ago (6 children)

Still works if you convince it to repeat a sentence forever. It repeats it a lot, but does not output personal info.

load more comments (6 replies)
[–] PopShark 6 points 11 months ago

OpenAI works so hard to nerf the technology it’s honestly annoying and I think news coverage like this doesn’t make it better

load more comments
view more: ‹ prev next ›