this post was submitted on 15 Dec 2023
1581 points (99.1% liked)

Mildly Infuriating


I also reached out to them on Twitter, but they directed me to this form. I followed up with them on Twitter with what happened in this screenshot, but they are now ignoring me.

[–] [email protected] 148 points 1 year ago (7 children)

Nah, it's just an old-school chat bot following a predefined flow chart, and somewhere in that flow chart someone implemented an improper email check.

It's pretty much the same as a website with an email field that complains about an "invalid" email which is in fact perfectly valid. And this is pretty common; the official email definition isn't even properly followed by most mail providers (long video, but pretty funny and interesting if you're interested in the topic).

[–] [email protected] 28 points 1 year ago* (last edited 1 year ago) (4 children)

You can use symbols like [ ] . { } ~ = | $ in the local part (the bit before the @) of email addresses. They're all perfectly valid, but a lot of email validators reject them. You can even use spaces as long as the local part is in quotation marks, like

"hello world"@example.com

A lot of validators try to do too much. Just strip spaces from the start and end, look for an @ and a ., and send an email to it to validate it. You don't really care if the email address looks valid; you just care whether it can actually receive email, so that's what you should be testing for.
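For what it's worth, here's a minimal sketch of that permissive check in Python; the function name is mine, and sending the confirmation email is still the real test:

```python
# Minimal sketch of the permissive check described above (hypothetical helper
# name). The reply below notes that even the "." is optional in theory, since
# a bare TLD can receive mail, so treat this as a sanity check only.
def looks_plausible(address: str) -> bool:
    address = address.strip()                    # strip leading/trailing spaces
    local, at, domain = address.rpartition("@")  # split on the last "@"
    # Require something on both sides of the "@" and a "." in the domain;
    # quoted local parts like "hello world"@example.com pass untouched.
    return bool(local) and bool(at) and "." in domain

print(looks_plausible(' "hello world"@example.com '))  # True
print(looks_plausible("not-an-email"))                 # False
```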

[–] [email protected] 18 points 1 year ago (1 children)

Not even a dot: TLDs are valid email domains. joe@google is a correct address.

[–] RubberElectrons 0 points 1 year ago (1 children)

Mmm... That doesn't seem right, it's usually gotta be fully expanded to at least a particular A record/MX.

How would you tie the TLD itself to an MX?

[–] TwitchingCheese 15 points 1 year ago (1 children)

A TLD is just another DNS layer; try an SOA or NS lookup for "com." and you'll see those records are obviously hosted somewhere. Hell, the "." at the end is yet another layer, served by the root nameservers. You'd probably trip up a bunch of systems that filter on common convention rather than the actual RFC, but you could do it.
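If you want to see it for yourself, here's a small sketch assuming the third-party dnspython package is installed (a plain `dig NS com.` shows the same thing):

```python
# Sketch: query NS and SOA records for "com." and for the root "." directly,
# assuming the third-party dnspython package (pip install dnspython).
import dns.resolver

for name in ("com.", "."):
    for rdtype in ("NS", "SOA"):
        answer = dns.resolver.resolve(name, rdtype)
        for record in answer:
            print(f"{name} {rdtype} -> {record}")
```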

[–] RubberElectrons 2 points 1 year ago (1 children)

How the hell were the original RFC designers so creative as to result in such a flexible system?? It gets crazier the more you look at it.

[–] [email protected] 5 points 1 year ago

It makes the system as a whole simpler. Your computer only needs to remember one root DNS server (although most computers allow setting 4 for redundancy) as opposed to one DNS server for each TLD, and it also makes adding TLDs easier.

[–] darkpanda 12 points 1 year ago

To this point, there’s a website dedicated to the subject. Some of the regexes get pretty wild…

https://emailregex.com/

[–] douglasg14b 2 points 1 year ago (1 children)

Don't forget +

Super handy with Google email.

[–] [email protected] 1 points 1 year ago (1 children)

A lot of providers support plus-aliasing, although it's usually in a company's best interest to block plus-aliases.

[–] [email protected] 4 points 1 year ago

+ symbols aren't always used for aliasing though, and companies that strip them out can break the email address. There's no guarantee that [email protected] is the same person as [email protected].

I have a catchall domain and used to use email addresses like [email protected] with a Sieve rule to filter it into a "shopping" folder, but these days I just do [email protected] without the category or filtering.
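Not my actual Sieve rule, but a rough Python sketch of that kind of tag-based sorting, with hypothetical addresses and folder names:

```python
# Rough sketch (hypothetical addresses/folders) of routing catch-all mail into
# a folder based on a "+tag" in the local part, as described above. Remember
# that a "+" is not guaranteed to mean aliasing for every sender.
def folder_for(address: str, default: str = "INBOX") -> str:
    local, _, _domain = address.partition("@")
    _base, plus, tag = local.partition("+")
    return tag if plus and tag else default

print(folder_for("orders+shopping@example.com"))  # -> shopping
print(folder_for("orders@example.com"))           # -> INBOX
```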

[–] tomi000 1 points 1 year ago (2 children)

Yea, but most of the time it's more important to block code injection than to have the last per mille of valid mail addresses be accepted.

[–] [email protected] 5 points 1 year ago

You're not going to get code injection via an email address field. Just make sure you're using prepared statements (if you're using a SQL database) and that you properly escape the email if you output it to an HTML page.
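A minimal sketch of those two precautions in Python, using the standard library's sqlite3 and html modules (table and column names are made up):

```python
# Sketch of the two precautions above: a parameterised (prepared) SQL
# statement and HTML escaping on output. Table/column names are hypothetical.
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")

email = "\"x'); DROP TABLE users;--\"@example.com"

# The email is passed as a bound parameter, never spliced into the SQL string,
# so it cannot change the query.
conn.execute("INSERT INTO users (email) VALUES (?)", (email,))

# Escape on output so the address can't inject markup into an HTML page.
print(f"<p>Registered: {html.escape(email)}</p>")
```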

[–] [email protected] 3 points 1 year ago

I think emailregex.com offers the best of both worlds.

[–] [email protected] 12 points 1 year ago (1 children)

> interesting if you're interested in the topic

The first rule of tautology club is the first rule of tautology club.

[–] elephantium 0 points 1 year ago

I'm listening ;)

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

Yeah that video is great. My favourite part is the Russian post address thing.

He has a lot of interesting and funny talks like that.

[–] sacbuntchris 1 points 1 year ago

The problem is their website also has the same broken email check when I try to log in, which is what got me to this point.

[–] [email protected] 0 points 1 year ago

Here is an alternative Piped link(s):

properly followed by most mail providers

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] force -1 points 1 year ago* (last edited 1 year ago) (1 children)

> Nah, it's just an old-school chat bot following a predefined flow chart.

yes but that would be an AI still

[–] stom 5 points 1 year ago (1 children)

A bunch of IF statements don't qualify as an AI. That's not how that works.

[–] force 1 points 1 year ago* (last edited 1 year ago) (1 children)

Yeah mate you're talking out of your ass. A bunch of if statements can, in fact, constitute an AI depending on the context. You don't know what you're talking about, stop trying to pretend you do.

AI is a broad concept: a pathfinding algorithm can be considered AI, a machine learning image generator can be considered AI, and a shitty chatbot with predefined responses (like this one) can be considered AI. Reducing something to a stupid sentence like "just a bunch of if statements" to try to make it seem absurd is misleading. I can reduce something like ChatGPT the same way and it'd be pretty much as accurate as your take.

You can draw any AI as a predefined flowchart; that's literally the point, they just make decisions based off of data. Large NLP algorithms like ChatGPT are no exception; they're just much larger and involve incomparably heavier mathematics.

Here is a good stackoverflow answer to it that actually gives credible sources (including from the people who pioneered AI themselves): https://stackoverflow.com/a/54793198

AI is very broad. You can use many different definitions of varying specificity to describe AI, which can all be correct; even a shitty chatbot counts as AI despite being so basic. There's no bottom limit for the complexity of AI.

[–] stom -2 points 1 year ago* (last edited 1 year ago) (1 children)

Selecting a canned-text response based on simple keywords is a long way from AI, and it's foolish to ~~equivocate~~ equate the two of them.

Also, chill tf out, and don't be so aggressively presumptuous. I have enough experience with the topics in question to point out how misleading this statement is.

[–] force 1 points 1 year ago* (last edited 1 year ago) (1 children)

I suppose you didn't click the link I sent – either that, or you think you know better than some of the leading figures in the field of AI... it's not "a long way from AI", it IS AI in its design and its purpose. It's misleading to assert that it isn't AI because it doesn't meet your arbitrary complexity standard.

I doubt you have any relevant experience in AI research or engineering based off of how you treat the concept of AI and even data science in general here... boiling the bot down to "just a series of if statements" – and then implying that lack of complexity makes it not an AI – is extremely naïve and is itself misleading; you can do that for anything, since every program is ultimately just a bunch of if-else/goto and simple math operations. It's just an attempt to conceptually reduce it so much that it seems absurd that it could be in the same category as more advanced AI. Despite the name, AI doesn't have to meet some bar for "smartness"; it's a ridiculously broad term, and any program intended to mimic human behaviour falls under AI (no matter how poorly it does it).

You confidently and rudely/condescendingly asserted something that is very blatantly ignorant of the subject of AI, so I find it reasonable to assume that you had no idea what you were talking about, and I find it reasonable to very plainly call you out.

Also you misused "equivocate"... it's not a word used to compare two things, it means using double speak/speaking evasively, "to equivocate the two [AI vs. chatbots]" doesn't mean anything. Did you mean "equate"?

[–] stom 1 points 1 year ago (1 children)

I did click your link. The accepted answer there states:

> The term artificial intelligence denotes behavior of a machine which, if a human behaves in the same way, is considered intelligent.

Again, I don't think that selecting basic responses based on keywords found in the string meets the criteria for being qualified as an AI, as anyone with experience of a chat bot this simple knows it won't hold up the illusion of "intelligence" for very long.

I did mean "equate", you're correct. The rest of my point remains - a very simple chat-bot like this is leaps and bounds from what would be termed an AI these days. To equate the two is misleading.

[–] force 0 points 1 year ago* (last edited 1 year ago) (1 children)

The answer you're referring to (not the accepted answer, but the highest-voted one, yes) also says:

> Tic-Tac-Toe is a very simple game, so it is very easy to make a simple application behave exactly the same as an intelligent human would. So, if this is the definition of artificial intelligence to which you subscribe, then yes, you would be justified in calling your "jumble of if/else statements" an AI.

In this case I feel like it is a safe, if somewhat useless, application of the term.

The ambiguity arises when you ask what it means for "a human to behave in the same way". If you word it like that, then something like ChatGPT or Stable Diffusion wouldn't count, because you can easily tell they're not human even if you didn't know beforehand, but this tic-tac-toe bot would count. It's a definition they didn't elaborate on enough, so we don't know what they mean by "intelligent human behaviour". Maybe "intelligent human behaviour" extends to just giving somewhat relevant answers based on certain words/lexemes in the sentence? Certainly that intelligence is human, I mean a dog or seal can't do that, only a human. As it stands, there is no complex art or chat AI that can't be distinguished from a human, so if we want to restrict the term to actually acting like a human, then AI doesn't exist unless we're talking about simple tasks like tic-tac-toe; and there are programs that surpass humans, like chess engines, which also wouldn't be considered AI, which I find a silly definition to go by. "Human intelligence" doesn't mean "as smart as the average human"; it means a sentient-like capacity to make decisions, even if it's extremely simple. The task itself doesn't change what counts.

That is why I find the take by the pioneers of AI a lot more useful – they don't put some arbitrary subjective limit on complexity that disqualifies seemingly obvious examples of AI, like the IEEE's ambiguously worded definition does.

What counts as AI "these days" doesn't exactly matter – sure, average people nowadays often use AI to mean only complex ML/NLP AI and not the other types, but that doesn't stop other AI from existing and being AI lol. And people still use it the previously common way too – people who play video games will still call the bots/NPCs "AI", or call the pathfinding algorithm "pathfinding AI", for example. And a majority of data science/AI literature will still call simple AI like the one in this post "AI".

It's easy to see why it's pretty annoying when you assert your poor definition of AI as the correct one and call anything else "misleading", even the definitions that most professionals in AI agree on (the comment I sent links to several, with reasons for their credibility over others; one is literally the 4th most cited book of this century). You're trying to gatekeep AI, put your own subjective interpretation of one specific definition on it, and ignore multiple leading AI professionals' definitions lol...

[–] stom 1 points 1 year ago (1 children)

I'm not attempting to "gatekeep" anything. I'm pointing out that drawing a parallel between a keyword-based chat-bot script and a full LLM is disingenuous.

[–] force 1 points 1 year ago* (last edited 1 year ago) (1 children)

Wdym drawing a parallel? I literally never did that lol, I just said it's AI even if it's not LLM-level AI despite "just being a bunch of if statements". They don't have to be the same complexity in order to be in the same grouping. My original comment was exactly "it is an AI tho", I didn't say or imply "it's an advanced neural network capable of taking on the greatest of commercial LLMs"

[–] stom 2 points 1 year ago (1 children)

You're totally, technically correct and I apologise :)

Reading this back, it seems I've had a kneejerk reaction to seeing the word "AI" slapped onto a basic chatbot. I appreciate that yes, by all metrics it's an AI - yet calling it that draws a parallel between this kind of Hello-World chat-bot and the current state of AI, which I felt was misleading. Like comparing a canoe to a cruise ship, you know?

[–] force 1 points 1 year ago

All good, I get you