this post was submitted on 07 Mar 2024
353 points (93.8% liked)

Showerthoughts


A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted, clever little truths, hidden in daily life.

Here are some examples to inspire your own showerthoughts.

Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. No politics
    • If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
    • A good place for politics is c/politicaldiscussion
    • If you feel strongly that you want politics back, please volunteer as a mod.
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct

If you made it this far, showerthoughts is accepting new mods. This community is generally tame so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.

What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never worry about it.

founded 2 years ago

it will loose its ability to differentiate between there and their and its and it’s.

[–] spittingimage 158 points 11 months ago (2 children)
[–] public_image_ltd 133 points 11 months ago (4 children)

must of made a mistake their

[–] [email protected] 64 points 11 months ago (2 children)
[–] public_image_ltd 55 points 11 months ago (1 children)
[–] NegativeLookBehind 47 points 11 months ago (3 children)
[–] [email protected] 29 points 11 months ago (2 children)
[–] foggy 11 points 11 months ago (2 children)
[–] [email protected] 9 points 11 months ago (1 children)

I need to of a word with you

[–] NegativeLookBehind 6 points 11 months ago
[–] [email protected] 7 points 11 months ago (3 children)

This one must be the worst. "Could care less" being a close second

[–] [email protected] 12 points 11 months ago (1 children)

OP hasn't payed enough attention in English class.

[–] zkfcfbzr 5 points 11 months ago (1 children)
[–] Ghostalmedia 104 points 11 months ago (1 children)

Now when you submit text to ChatGPT, it responds with "this."

[–] [email protected] 47 points 11 months ago (1 children)
[–] FartsWithAnAccent 32 points 11 months ago (1 children)
[–] [email protected] 35 points 11 months ago (1 children)

As a language model, I laughed at this way harder than I should have

[–] [email protected] 8 points 11 months ago

NTA, that was funny.

[–] BoxerDevil 42 points 11 months ago (1 children)

And it will get LOSE and LOOSE mixed up like you did

[–] circuitfarmer 31 points 11 months ago

I'm waiting for it to start using units of banana for all quantities of things

[–] [email protected] 24 points 11 months ago (3 children)

ChatGPT trained on Reddit posts -> ChatGPT goes temporarily “insane”

Coincidence? I don't think so.

[–] public_image_ltd 11 points 11 months ago (2 children)

This is exactly what I was thinking.

And maybe some more people did what I did: not deleting my accounts, but replacing all my posts with content created by a bullshit generator. It made the texts look normal, but everything was completely senseless.

[–] [email protected] 5 points 11 months ago

Back in June-July, I used a screen-tapping tool plus Boost to go through and change every comment I could edit to generic filler text, then waited something like two weeks in the hope that all of their servers would update to the new text, and then used the same app to delete each comment and post, and then the account itself. It's about all I could think to do.
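For anyone who would rather script this than tap through an app, here is a minimal sketch of the same overwrite-wait-delete idea using PRAW (the Python Reddit API wrapper). The credentials, the filler string, and the timing between the two passes are placeholders, not anything the commenter actually used.

```python
# Minimal sketch of the overwrite-then-delete approach described above.
# Assumptions: a PRAW script app, placeholder credentials, and a manual
# wait between the edit pass and the delete pass (the commenter waited ~2 weeks).
import praw

FILLER = "This comment has been replaced with generic filler text."

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder
    client_secret="YOUR_CLIENT_SECRET",  # placeholder
    username="YOUR_USERNAME",            # placeholder
    password="YOUR_PASSWORD",            # placeholder
    user_agent="comment-scrubber/0.1",
)

def overwrite_comments():
    """Edit every comment the account can still edit to contain FILLER."""
    for comment in reddit.user.me().comments.new(limit=None):
        comment.edit(FILLER)

def delete_everything():
    """After waiting for caches/mirrors to pick up the edits, delete it all."""
    for comment in reddit.user.me().comments.new(limit=None):
        comment.delete()
    for submission in reddit.user.me().submissions.new(limit=None):
        submission.delete()

if __name__ == "__main__":
    overwrite_comments()
    # ...wait a while, then run delete_everything()
```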

[–] [email protected] 6 points 11 months ago (1 children)

They have always trained on Reddit data; GPT-2 was, at least. I'm unsure about GPT-1.

[–] londos 18 points 11 months ago (3 children)

It also won't be able to differentiate between a jackdaw and a crow.

[–] [email protected] 18 points 11 months ago (1 children)

ChatGPT also chooses that guy's dead wife

[–] Chainweasel 7 points 11 months ago

The Narwhal Bacons at Midnight.

[–] [email protected] 15 points 11 months ago

On the contrary, it'll become excessively perfectionist about it. You can't even say "could have" without someone coming in and saying "THANK YOU FOR NOT SAYING OF"

[–] [email protected] 14 points 11 months ago

It already was; the only difference is that now Reddit is getting paid for it.

[–] [email protected] 13 points 11 months ago (3 children)
[–] bitchkat 13 points 11 months ago

Its going to be a poop knife wielding guy with 2 broken arms out to get those jackdaws.

[–] thantik 13 points 11 months ago (1 children)

It was already trained on Reddit posts. It's just that now they're paying for it.

[–] [email protected] 13 points 11 months ago (3 children)

From now on, when you say something like "I think I can give my hoodie to my girlfriend", it will answer "and my axe".

[–] [email protected] 5 points 11 months ago
[–] PurpleSheeple 12 points 11 months ago (1 children)

And between were, we’re and where.

[–] db2 8 points 11 months ago

Insure and ensure.

[–] [email protected] 12 points 11 months ago (3 children)

ChatGPT was already trained on Reddit data. Check this video to see how one Reddit username caused bugs in it: https://youtu.be/WO2X3oZEJOA?si=maWhUpJRf0ZSF_1T
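For the curious, here is a minimal sketch using the tiktoken library to show how a single Reddit username can occupy its own slot in the GPT-2/GPT-3-era tokenizer vocabulary. It assumes the username in question is the widely reported " SolidGoldMagikarp" glitch token; the comparison string is made up and the exact token ids are illustrative, not authoritative.

```python
# Sketch: show that a specific Reddit username encodes to a single BPE token,
# while an ordinary made-up name splits into several tokens.
# Assumption: " SolidGoldMagikarp" is the glitch-token example discussed in the video.
import tiktoken

enc = tiktoken.get_encoding("r50k_base")  # GPT-2/GPT-3-era tokenizer

for text in [" SolidGoldMagikarp", " OrdinaryUsername42"]:  # second string is arbitrary
    tokens = enc.encode(text)
    print(f"{text!r} -> {len(tokens)} token(s): {tokens}")

# Strings that were very common on Reddit got their own vocabulary entries,
# but the matching text was largely filtered out of later training data,
# leaving the model with tokens it never properly learned to handle.
```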

[–] YoorWeb 10 points 11 months ago (2 children)

It will also reply "Yes." to questions like "Is it A or B?"

[–] Witchfire 9 points 11 months ago (1 children)

Don't forget the bullshit that is "would of"

[–] shalafi 8 points 11 months ago (1 children)

"What is a giraffe?"

ChatGPT: "geraffes are so dumb."

[–] AbouBenAdhem 5 points 11 months ago (1 children)

“I have not been trained to answer questions about stupid long horses.”

[–] JackLSauce 8 points 11 months ago

"Can't even breath"

[–] kescusay 8 points 11 months ago

Your right.

[–] SoyTDI 7 points 11 months ago

And then and than.

[–] AnAustralianPhotographer 6 points 11 months ago (1 children)

And when it learns something new, the response will be "Holy Hell".

[–] [email protected] 7 points 11 months ago
[–] mannonym 6 points 11 months ago (1 children)

Sure it might have some effect, but a big part of ChatGPT besides "raw" training data is RLHF, reinforcement learning from human feedback. Realistically, the bigger problem is training on AI-generated content that might have correct spelling, but hardly makes sense.

[–] public_image_ltd 5 points 11 months ago

Then I did the right thing by replacing my texts with correctly spelled nonsense.

[–] Feathercrown 6 points 11 months ago

Is it a showerthought if it's actually just incorrect?
