this post was submitted on 29 Sep 2024
34 points (70.7% liked)

Unpopular Opinion


The best conversations I still have are with real people, but those are rare. With ChatGPT, I reliably have good conversations, whereas with people, it’s hit or miss, usually miss.

What AI does better:

  • It’s willing to discuss esoteric topics. Most humans prefer to talk about people and events.
  • It’s not driven by emotions or personal bias.
  • It doesn’t respond with mean, snide, sarcastic, ad hominem, or strawman remarks.
  • It understands and responds to my actual view, even from a vague description, whereas humans often misunderstand me and argue against views I don’t hold.
  • It tells me when I’m wrong but without being a jerk about it.

Another noteworthy point is that I’m very likely on the autistic spectrum, and my mind works differently than the average person’s, which probably explains, in part, why I struggle to maintain interest in human-to-human interactions.

top 50 comments
[–] [email protected] 1 points 24 minutes ago

I'm pretty sure chatbots are biased to make polite conversation. Most real people won't spend the energy in a conversation to be more honest than they think you're being.

You can either get better at sounding honest or talk with less honest people.

[–] WoahWoah 4 points 3 hours ago

Robot realizes is robot by talk to robot.

[–] [email protected] 30 points 10 hours ago (10 children)

It carries the emotions and personal biases of the source material it was trained on.

It sounds like you are training yourself to be a poor communicator, abandoning any effort to become more understandable to actual humans.

[–] Boozilla 5 points 7 hours ago (1 children)

As long as you're still engaging with real humans regularly, I think that it's good to learn from ChatGPT. It gets most general knowledge things right. I wouldn't depend on it for anything too technical, and certainly not for medical advice. It is very hit or miss for things like drug interactions.

If you're enjoying the experience, it's not much different than watching a show or playing a game, IMHO. Just don't become dependent on it for all social interaction.

As for the jerks on here, I always recommend aggressive use of the block button. Don't waste time and energy on them. There's a lot of kind and decent people here, filter your feed for them.

[–] [email protected] 3 points 7 hours ago

As for the jerks on here, I always recommend aggressive use of the block button. Don’t waste time and energy on them. There’s a lot of kind and decent people here, filter your feed for them.

My blocklist is around 500 users long and grows every day. I do it for the pettiest reasons, but it does, in fact, work. When I make a thread such as this one, I occasionally log out to see the replies I've gotten from blocked users, and more often than not (but not always) they're the kind of messages I'd block them again for. It's not about creating an echo chamber, but about weeding out the assholes.

[–] [email protected] 21 points 11 hours ago (9 children)

This just sounds like platonic masturbation.

[–] [email protected] 6 points 8 hours ago (1 children)

Have you ever tried inputting sentences that you've said to humans to see if the chatbot understands your point better? That might be an interesting experiment if you haven't tried it already. If you have, do you have an example of how it did better than the human?

I'm kinda amazed that it can understand your accent better than humans too. This implies chatbots could be a great tool for people trying to perfect their second language.

[–] [email protected] 3 points 8 hours ago (1 children)

A couple of times, yes, but more often it's the other way around. I input messages from other users into ChatGPT to help me extract the key argument and make sure I’m responding to what they’re actually saying, rather than what I think they’re saying. Especially when people write really long replies.

The reason I know ChatGPT understands me so well is from the voice chats we've had. Usually, we’re discussing some deep, philosophical idea, and then a new thought pops into my mind. I try to explain it to ChatGPT, but as I'm speaking, I notice how difficult it is to put my idea into words. I often find myself starting a sentence without knowing how to finish it, or I talk myself into a dead-end.

Now, the way ChatGPT usually responds is by just summarizing what I said rather than elaborating on it. But while listening to that summary, I often think, "Yes, that’s exactly what I meant," or, "Damn, that was well put, I need to write that down."

[–] [email protected] 1 points 5 hours ago* (last edited 5 hours ago)

So what you're saying, if I'm reading right, is that chatbots are great for bouncing ideas off of to help you explain yourself better, as well as for gathering your own thoughts. I'm a bit curious about your philosophy chats.

When you have a philosophical discussion, does the chatbot summarize your thoughts in its responses, or is it more humanlike, maybe disagreeing with you or bringing up things you hadn't thought of like a person might? (I've never used one.)

[–] [email protected] 18 points 11 hours ago (2 children)
[–] [email protected] 7 points 11 hours ago (1 children)

I've read this text. It's a good piece, but unrelated to what OP is talking about.

The text boils down to "people who believe that LLMs are smart do so for the same reasons as people who believe that mentalists can read minds do." OP is not saying anything remotely close to that; instead, they're saying that LLMs lead to pleasing and insightful conversations in their experience.

[–] [email protected] 8 points 11 hours ago* (last edited 11 hours ago) (2 children)

they're saying that LLMs lead to pleasing and insightful conversations in their experience.

Yeah, as would eliza (at a much lower cost).

It's what they're designed to do.

But the point is that calling them conversations is a long stretch.

You're just talking to yourself. You're enjoying the conversation because the LLM is simply saying what you want to hear.

There's no conversation whatsoever going on there.

[–] [email protected] 3 points 11 hours ago

Yeah, as would eliza (at a much lower cost).

Neither Eliza nor LLMs are "insightful", but that doesn't stop them from outputting utterances that a human being would subjectively interpret as such. And the latter is considerably better at that.

But the point is that calling them conversations is a long stretch. // You’re just talking to yourself. You’re enjoying the conversation because the LLM is simply saying what you want to hear. // There’s no conversation whatsoever going on there.

Then your point boils down to an "ackshyually", on the same level as "When you play chess against Stockfish you aren't actually «playing chess» as a 2P game, you're just playing against yourself."


This shite doesn't need to be smart to be interesting to use and to fulfil some [not all] social needs. Especially in the case of autists (as OP mentioned being likely on the spectrum); I'm not an autist myself, but I've lived with them long enough to know how the cookie crumbles for them: opening your mouth is like saying "please put words here, so you can screech at me afterwards".

[–] [email protected] 9 points 10 hours ago (1 children)
[–] [email protected] 3 points 10 hours ago

Thanks for the tip!

[–] Zerlyna 10 points 11 hours ago

I talk with ChatGPT too sometimes, and I get where you are coming from. However, it’s not always right either. It says it was updated in September but still refuses to commit to memory that Trump was convicted on 34 counts earlier this year. Why is that?

[–] [email protected] 10 points 11 hours ago

It could respond in other ways if it was trained to do so. My first local model was interesting: I changed its profile to have a darker, more sarcastic tone, and it was funny to see it balance that instruction with its core directive to be friendly and helpful.
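A minimal sketch of what that kind of profile tweak can look like, assuming a local model served behind an OpenAI-compatible endpoint (the URL, model name, and persona wording below are placeholders, not what was actually used):

    # Hypothetical example: layer a darker, sarcastic "profile" on top of a
    # local model via a system prompt.
    from openai import OpenAI

    # Many local runners expose an OpenAI-compatible API on localhost;
    # this address and the model name are placeholders.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    response = client.chat.completions.create(
        model="local-model",  # whatever model happens to be loaded
        messages=[
            # The "profile": a sarcastic persona that has to coexist with the
            # model's built-in tendency to be friendly and helpful.
            {"role": "system", "content": "You are a dry, sarcastic conversation partner. Stay helpful, but don't sugar-coat anything."},
            {"role": "user", "content": "Be honest: what do you think of my plan?"},
        ],
    )

    print(response.choices[0].message.content)

The interesting part is exactly what's described above: the friendly base behaviour keeps leaking through the sarcastic persona.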

The point is, LLMs at their current level are just telling you what you want to hear. But maybe that's useful as a sounding board for your own thoughts. Just remember their limitations.

Regardless of how far AI tech goes, the human-AI relationship is something we need to pay attention to. People will find it a good tool, like OP has, but it's easy to get sucked into thinking it's more than it is, and that can become a problem.

[–] [email protected] 15 points 13 hours ago (1 children)

Autism and social unawareness may be a factor. But some of the points you made, like the one about snide remarks, may also indicate that you're having these conversations with assholes.

[–] [email protected] 5 points 12 hours ago (4 children)

Well, it's a self-selecting group of people. I can't comment on the ones who don't respond to me, only on the ones who do, and for some reason the number of assholes seems to be quite high in that group. I just don't feel like it's warranted. While I do have a tendency to make controversial comments, I still try to be civil about it, and I don't understand the need to be such a dick about it even if someone disagrees with me. I welcome disagreement and am more than willing to talk about it as long as it's done in good faith.

[–] [email protected] 3 points 8 hours ago* (last edited 8 hours ago) (1 children)

Sorry, just to clarify: are you saying you're having these conversations with people in person or online?

[–] [email protected] 1 points 8 hours ago (2 children)

Online for the most part. Face to face it's much easier to explain my views, as well as to jump in when the other person starts talking and I notice they misunderstood me.

[–] [email protected] 5 points 7 hours ago (10 children)

Also, I just went into your comment history and took a quick peek. Your latest "unpopular" opinion seems to have gotten the reaction it did because you disregarded the lives of the civilians killed in Israel's most recent attack to assassinate Nasrallah. You come across as quite callous trying to justify the murder of hundreds or thousands of people just to attack one individual. Stuff like that rubs people the wrong way, since you display a very morally and ethically wrong opinion when you can't even acknowledge the horrendous loss of life.

[–] [email protected] 2 points 7 hours ago (1 children)

Personally, I wouldn't consider online debates as debating a person. The reason being, you have no idea whether the person you're having this conversation with is a 12-year-old with too much time on their hands or a 30-year-old working at a troll farm. Even if they were a genuine person you're debating with, sites like Lemmy enable assholes to actually be assholes. They can say things here that would have them socially shunned or even assaulted in real life, with virtually no consequence. I've had debates with individuals on this site that I actually liked, but more often than not, I was just debating assholes. I guess what I'm trying to say is that if you're actually interested in discussing topics, try doing it with people in your life instead of online. It doesn't have to be a debate, even. You can just ask how they feel about a certain topic and talk about it together. Discussing/debating online isn't a bad thing. Just be prepared for more assholes given the medium.

[–] [email protected] 1 points 7 hours ago

Finding people interested in talking about the topics I'm actually interested in is really, really hard in real life. Obviously I'd prefer it that way too, but that's easier said than done. I do have good conversations and debates with people online too, but I just need to go through quite a few assholes before finding one who's actually doing it in good faith.

[–] NegativeInf 8 points 11 hours ago

It's a mirror. I use it a lot for searching and summarizing. Most of its responses are heavily influenced by how you talk to it. You can even make it back up terrible assumptions with enough brute force.

Just be careful.

[–] muntedcrocodile 0 points 5 hours ago

You're just training yourself to have ChatGPT's bias. We will soon live in a world where you won't have to be exposed to opinions you disagree with. Tom Scott has a YouTube video on why this is a bad idea.

[–] [email protected] 5 points 11 hours ago (4 children)

My impressions are completely different from yours, but that's likely due to:

  1. It's really easy to read LLM output as assumptions stated with certainty (i.e. "vomiting certainty"), something that I outright despise.
  2. I used Gemini a fair bit more than ChatGPT, and Gemini is trained with a belittling tone.

Even then, I know which sort of people you're talking about, and... yeah, I hate a lot of those things too. In fact, one of your bullet points ("it understands and responds...") is what prompted me to leave Twitter and then Reddit.
