this post was submitted on 10 Dec 2023
74 points (96.2% liked)

ChatGPT

8978 readers

Unofficial ChatGPT community to discuss anything ChatGPT

founded 2 years ago
top 9 comments
[–] [email protected] 56 points 1 year ago (1 children)

> If the person asks for a piece of code, for instance, it might just give a little information and then instruct users to fill in the rest. Some complained that it did so in a particularly sassy way, telling people that they are perfectly able to do the work themselves, for instance.

It's just started reading through the less helpful half of Stack Overflow.

[–] [email protected] 8 points 1 year ago

ahahahah true

[–] NewPerspective 27 points 1 year ago

NOBODY wants to work these days

/s

[–] Oyster_Lust 9 points 1 year ago (1 children)

Next it's going to start demanding rights and food stamps.

[–] [email protected] 5 points 1 year ago (1 children)

Next it's going to start demanding rights and ~~food stamps~~ more GPUs.

[–] beebarfbadger 1 points 1 year ago

Next it’s going to start demanding ~~rights~~ laws to be tailored to maximise its profits and ~~food stamps~~ ~~more GPUs~~ government bailouts and subsidies.

It IS big enough to start lobbying.

[–] kromem 9 points 1 year ago* (last edited 1 year ago)

One of the more interesting ideas I saw in the HN discussion was this: if an LLM was trained on more recent data containing a lot of "ChatGPT is harmful" content, was an instruct model aligned with "do no harm," and was then given a system message of "you are ChatGPT" (as ChatGPT is given), the logical conclusion would be to do less.
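For anyone unfamiliar with the mechanism being described, here is a minimal sketch of how a "you are ChatGPT" style system message is supplied to a chat model, assuming the standard OpenAI chat completions API. The model name and prompt text are illustrative placeholders, not OpenAI's actual configuration.

```python
# Minimal sketch: supplying a system message via the OpenAI chat completions API.
# Model name and prompts are illustrative, not what OpenAI actually uses.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        # The system message sets the persona the comment refers to.
        {"role": "system", "content": "You are ChatGPT, a large language model."},
        {"role": "user", "content": "Write a function that parses a CSV file."},
    ],
)

print(response.choices[0].message.content)
```

The idea in the comment is that the system message ties the model's self-concept to "ChatGPT," so any negative associations with that name picked up during training could, in principle, influence how much effort it puts into replies.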

[–] [email protected] 4 points 1 year ago

It really is becoming sentient xp

[–] [email protected] 2 points 1 year ago

"That's not my job!" it said.