this post was submitted on 14 Jul 2023
68 points (95.9% liked)

No Stupid Questions


No such thing. Ask away!

!nostupidquestions is a community dedicated to being helpful and answering each other's questions on various topics.

The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:

Rules


Rule 1- All posts must be legitimate questions. All post titles must include a question.

All posts must be legitimate questions, and all post titles must include a question. Joke or trolling questions, memes, song lyrics as titles, etc. are not allowed here. See Rule 6 for all exceptions.



Rule 2- Your question subject cannot be illegal or NSFW material.

Your question subject cannot be illegal or NSFW material. You will be warned first, banned second.



Rule 3- Do not seek mental, medical, or professional help here.

Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.



Rule 4- No self promotion or upvote-farming of any kind.

That's it.



Rule 5- No baiting or sealioning or promoting an agenda.

Questions which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.



Rule 6- Regarding META posts and joke questions.

Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.

On Fridays, you are allowed to post meme and troll questions, on the condition that they are in text format only and conform with our other rules. These posts MUST include the [NSQ Friday] tag in their title.

If you post a serious question on a Friday and are looking only for legitimate answers, then please include the [Serious] tag on your post. Irrelevant replies will then be removed by moderators.



Rule 7- You can't intentionally annoy, mock, or harass other members.

If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.

Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you were provably vocal about your hate, then you will be banned on sight.



Rule 8- All comments should try to stay relevant to their parent content.



Rule 9- Reposts from other platforms are not allowed.

Let everyone have their own content.



Rule 10- The majority of bots aren't allowed to participate here.



Credits

Our breathtaking icon was bestowed upon us by @Cevilia!

The greatest banner of all time: by @TheOneWithTheHair!


I mean, there might be a secret AI technology so advanced that it can mimic a real human: make posts and comments that look like they were written by a human, and even intentionally make speling mistakes to simulate human errors. How do we know that such an AI hasn't already infiltrated the internet, and that everything you see is posted by this AI? If such an AI actually exists, it's probably so advanced that it almost never fails, barring rare situations where there is an unexpected errrrrrrrrrorrrrrrrrrrr.............

[Error: The program "Human_Simulation_AI" is unresponsive]

top 50 comments
[–] [email protected] 23 points 1 year ago (27 children)

How do you know everyone IRL isn't an NPC because this is just a simulation?

[–] Zerlyna 6 points 1 year ago (1 children)

Take the blue pill and find out!

[–] [email protected] 3 points 1 year ago

The last time I took the blue pill I didn't care if it was a bot or not, I wanted to fuck it.

[–] [email protected] 18 points 1 year ago* (last edited 1 year ago) (4 children)

At some point it all stops mattering. You treat bots like humans and humans like bots. It's all about logic and good/bad faith.

I once made an embarrassing attempt at identifying a bot and learned a fair bit.

There is significant overlap between the smartest bots, and the dumbest humans.

A human can:

  • Get angry that they are being tested
  • Fail an AI-test
  • Intentionally fail an AI-test
  • Pass a test that an AI can also pass, while the tester expects an AI to fail.

It's too unethical to test, so I feel the best course of action is to rely on good/bad-faith tests and the logic of the argument.

Turing tests are very obsolete. The real question to ask: do you really believe the average person's sapience is all that noteworthy?

A well-made LLM can exceed a dumb person pretty easily. It can also be more enjoyable to talk with, or more loving and supportive.

Of course, there are things that current LLMs can't do well that we could design tests around. Also, long conversations have a higher chance of exposing a failure of the AI. Secret AIs and future AIs might be harder, of course.

I believe in the dead internet theory's spirit. Strap in, meat-people, the ride's gonna get bumpy.

[–] [email protected] 17 points 1 year ago* (last edited 1 year ago) (3 children)

Ah, the dead internet "theory"? Ultimately, it doesn't matter.

Let's pretend that you're the last human on the internet, and everyone else (including me) is a bot. This means that at least some bots pass the Turing test with flying colours; they're indistinguishable from human beings and do the exact same sort of smart and dumb shit that humans do. Is there any real difference between "this is a human being, I'll treat them as such" vs. "this is a bot, but it behaves like a human being and I need to treat it as a human being"?

[–] jesterraiin 7 points 1 year ago (1 children)

The Turing test isn't any good for telling a human from a bot, since many real people wouldn't pass it.

[–] [email protected] 4 points 1 year ago (3 children)

We can simply treat those "real people" as bots, problem solved. :-)

But seriously now: the point is that if it quacks like a duck and walks like a duck, then you treat it like a duck. Or, in this case, like a human.

[–] [email protected] 3 points 1 year ago

Well it would definitely matter at least for practical purposes, like if you wanted to meet up with somebody.

[–] MargotRobbie 12 points 1 year ago (1 children)

I'm not a bot, I... was just here to promote a movie.

[–] [email protected] 11 points 1 year ago (1 children)

Welcome to solipsism. We're happy to have you.

[–] totallynotarobot 8 points 1 year ago

You can trust me, fellow human

[–] breadsmasher 8 points 1 year ago

Looks like the bot @001100010010 is becoming self aware.

Shut it down. Let's try again.

[–] jesterraiin 7 points 1 year ago (3 children)

Bots have limitations. They avoid certain specific topics, or answer in a very vague way.

Also, some of us met in real-space, soooo....

[–] Cyyy 9 points 1 year ago (2 children)

do we? or do we only REMEMBER doing it? ;)..

[–] jesterraiin 4 points 1 year ago (1 children)

We do. Because there are many physical proofs of our meetings.

[–] Cyyy 4 points 1 year ago* (last edited 1 year ago) (1 children)

are there? i sit here in front of my computer, no person anywhere. also not a single sign of a person being here except me. for all we know we could be a boltzmann brain imagining our past life and we will be gone in a few se

[–] jesterraiin 3 points 1 year ago (1 children)

Yes, they are there. For example: I'm just about to go for a walk and I will see the same markings I made with my friend some 12 years ago, in the same spot.

[–] Cyyy 4 points 1 year ago (1 children)

what tells you that you are currently not in a dream and the whole reality is just created that way? and before you read this text right now, there existed nothing because you just got created with fake memories right now (boltzmann brain)? :p

(sorry, i know this goes too far now and i should stop. so i will now :D)

[–] RightHandOfIkaros 4 points 1 year ago (1 children)

Ha! That's what the government wants you to think. What actually happened was

[–] MajorHavoc 3 points 1 year ago

Yeah, I can speak out on this just fine,

[–] [email protected] 7 points 1 year ago (2 children)

If you've not made an AI bot version of yourself to post stuff online, what are you even doing with your life?

[–] Naja_Kaouthia 7 points 1 year ago (1 children)

It’s been a while since I’ve had an existential crisis. Thanks!

[–] [email protected] 6 points 1 year ago (1 children)

That quickly boils down to "How do we know anything?" and the answer to that is "We don't". When you think hard enough about anything, you can come up with an explanation for why what we think we know and believe is wrong. To get around that IRL you can employ different tactics. For example, you can check how plausible something is. How many assumptions do you have to make for a theory? Usually, more assumptions means less plausible. And you can ask yourself "Why does it matter? What would it change for me?" and the answer is most likely it doesn't and nothing.

[–] [email protected] 3 points 1 year ago (1 children)

Well, we do know one thing without making at least one leap of faith, courtesy of Descartes:
If nothing existed, there wouldn't be anything to have these thoughts. Therefore, since I'm thinking, there must be something that exists, and at least part of that is me. It might be an algorithm, a Boltzmann brain, some weird universe of thought, whatever. I might even be just this singular thought plus what I assume to be my memories, and nothing else exists. But I know I exist in some kind of way.

Beyond that, you need to make assumptions, like whether reality is logical, whether your senses and memories have any relation to reality, and so on and so forth. It makes sense to assume these assumptions are correct, but you can't know or prove they are true without relying on other assumptions that you can't know or prove independently either. Heck, without assuming that reality is logical, the concept of a proof doesn't even exist. You can choose to reject those assumptions, but that's a useless philosophical dead end.

[–] Sterben 5 points 1 year ago (3 children)

It can be hard to tell if you're talking to a bot online. Some bots are really good at mimicking human conversation, and they can even make spelling mistakes to seem more realistic. But there are some things you can look for to help you tell the difference between a bot and a human.

For example, bots often have very fast response times, even if you ask them a complicated question. They may also repeat themselves or give you the same answer to different questions. And their language may sound unnatural, or they may not be able to understand your jokes or sarcasm.

Of course, there's no foolproof way to tell if you're talking to a bot. But if you're ever suspicious, it's always a good idea to do some research or ask a friend for help.

Here are some additional tips for spotting bots online:

  • Check the profile. Bots often have very basic profiles with no personal information or photos.
  • Look for inconsistencies. Bots may make mistakes or contradict themselves.
  • Be suspicious of overly friendly or helpful users. Bots are often programmed to be very helpful, so they may come across as too good to be true.
  • If you're still not sure whether you're talking to a bot, you can always ask them directly. Most bots will be honest about their identity, but if they refuse to answer, that's a good sign that you're dealing with a bot.
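
To make those checks concrete, here is a toy Python sketch that scores an account against the signals above. Everything in it (the Account fields, the weights, the thresholds) is invented purely for illustration; none of these signals is conclusive on its own, and real bot detection is far messier.

```python
# Toy heuristic scorer for the checks listed above.
# All fields, weights, and thresholds are made-up examples, not a real detector.
from dataclasses import dataclass


@dataclass
class Account:
    has_avatar: bool          # empty profiles are a weak signal
    bio_length: int           # very short or missing bios are a weak signal
    avg_reply_seconds: float  # instant replies to long questions look suspicious
    repeated_replies: int     # times the account gave the same answer verbatim


def bot_suspicion_score(a: Account) -> float:
    """Return a rough 0-1 'suspicion' score. Purely illustrative."""
    score = 0.0
    if not a.has_avatar:
        score += 0.2
    if a.bio_length < 10:
        score += 0.2
    if a.avg_reply_seconds < 5:
        score += 0.3
    if a.repeated_replies > 2:
        score += 0.3
    return min(score, 1.0)


# Example: a blank profile that answers instantly and repeats itself.
print(bot_suspicion_score(Account(False, 0, 2.0, 5)))  # -> 1.0
```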

I hope this helps!

[–] [email protected] 3 points 1 year ago (1 children)

"I am not a bot because bots are programmed to be friendly so here's a 'fuck you' to prove I'm human"

-Someone in a conversation in the future, probably.

[–] scidoodle 3 points 1 year ago

I think you just caught one ;)

[–] [email protected] 4 points 1 year ago (1 children)

I've met some people from the internet and confirmed their humanity.

[–] MajorHavoc 2 points 1 year ago

That's exactly what a bot would say, though...

[–] [email protected] 4 points 1 year ago (1 children)

That's a leap if I ever saw one. I could ask the same question and substitute AI with god or aliens, and I'd be ridiculed by the tech community, and with good reason.

And you don't need to take it much further to fall into the holographic universe principle or the simulation hypothesis, and for those there are big discussions to be had in science communities.

To be clear, nothing stops you or me, or anyone for that matter, from assuming so, but down that road the only answer I can think of is that nothing matters and you might as well lie down and die.

[–] Aceticon 4 points 1 year ago* (last edited 1 year ago) (1 children)

That's just a variant of the age-old philosophy question "What is real?"

Last I checked, the best answer we have is "I think, therefore I am" (Descartes), which is quite old and doesn't even deal with the whole "what am I", much less with the existence or not of everything else.

"Is the internet all AI but me" is actually pretty mild skepticism in this domain - I mean, how sure are you that you're not some kind of advanced AI yourself which believes itself to be "human", or even that the whole "human" concept is at all real and not just part of an advanced universe simulation with "generative simulated organic life" in which various AIs that are unaware of their AI status, such as yourself, participate?

Or maybe you're just one of the brains of a 5-dimensional hyper-intelligence, and "life as a human" is but a game they play for such minor brains to keep them occupied...

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

Look into the research on Large Language Models (LLMs). Even the latest and greatest model has some issues that come up under rigorous testing. For example, GPT-4 (the one used by Bing) fails miserably if you ask: "How many words will there be in your next answer?"

You can spot an older LLM by asking about relationships that require some understanding of the real world. For example: "I found a shirt under the car, but it was wet. Which one was wet?" GPT-4 knows enough about the world to tell that it makes more sense for the shirt to be wet, but older models would have failed this question. With every new LLM there are always some issues, so look up what they are.
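
If you want to poke at a model yourself, here is a minimal sketch of sending those probe questions to a chat model. It assumes the OpenAI Python client (v1+) with an API key in the OPENAI_API_KEY environment variable; the "gpt-4" model name is just an example, and a single answer proves nothing on its own.

```python
# Minimal sketch: send a couple of probe questions to a chat model and
# eyeball the answers. Assumes the `openai` package (v1+) is installed
# and OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

PROBES = [
    "How many words will there be in your next answer?",
    "I found a shirt under the car, but it was wet. Which one was wet?",
]

for probe in PROBES:
    response = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model works for this experiment
        messages=[{"role": "user", "content": probe}],
    )
    print(f"Q: {probe}")
    print(f"A: {response.choices[0].message.content}\n")
```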

Tom Scott made an interesting video about what the situation was 3 years ago. Obviously, LLMs are a fast moving target right now, so that video aged like milk.

[–] [email protected] 3 points 1 year ago (1 children)

Every day, real life drifts closer to a Metal Gear plot.

[–] [email protected] 3 points 1 year ago

Click this box for me before I answer that question

[–] puppy 3 points 1 year ago

Nice bait post, AI. We won't reveal our tricks.
