this post was submitted on 30 Jan 2025
194 points (93.3% liked)

Mildly Infuriating

I was watching the RFK Jr. questioning today, and when Bernie was talking about healthcare and wages I felt he was the only one who gave a real damn. I also thought, "Wow, he's kinda old," so I asked my phone how old he actually was. Gemini, however, wouldn't answer a simple, factual question about him. What the hell? (The answer is 83 years old, btw. Good luck, America.)

[–] Squorlple 50 points 6 hours ago (2 children)

Why would you use a chatbot to attempt to obtain factual information?

[–] [email protected] 22 points 6 hours ago* (last edited 6 hours ago) (3 children)

I mostly can't understand why people are so into "LLMs as a substitute for Web search", though there are a bunch of generative AI applications that I do think are great. I eventually realized that for people who want to use their cell phone via voice, LLM queries can be done without hands or eyes getting involved. Web searches cannot.

[–] [email protected] 2 points 2 hours ago (1 children)

Because web search was intentionally hobbled by Google so people are pushed to make more searches and see more ads.

[–] chonglibloodsport 2 points 1 hour ago (1 children)

I still use web search all the time, I just don’t use Google. There are great alternatives:

[–] [email protected] 1 points 1 hour ago (2 children)

Do any of them actually work? As in, you search something and it gives you results relevant to the whole thing you typed in?

[–] [email protected] 2 points 1 hour ago (1 children)

I’ve used DuckDuckGo for a long time, so I would say yes. But the best way to figure that out is just to try it for a while. There is literally nothing to lose.

[–] [email protected] 1 points 1 hour ago

I have been using it for about 7 years and it's just as shit.

[–] chonglibloodsport 1 points 1 hour ago

It’s always going to depend on what you’re searching for. I just tried searching for home coffee roasting on Swisscows and all of the results were legit, no crappy spam sites.

Marginalia is great for finding obscure sites but many normal sites don’t show up there. Million Short is a similar idea but with a different approach to achieving it.

The problem of search is actually extremely hard because there are millions of scam and spam sites out there that are full of ads and either AI slop or literally stolen content from other popular sites. Somehow these sites need to be blocked in order to give good results. It’s a never-ending, always-evolving battle, just like blocking spam in email (I still have to check my spam folder all the time because legit emails end up flagged as spam).

[–] JustAnotherKay 1 points 2 hours ago

Web searches cannot

Web search by voice was a solved problem in my recent memory. Then it got shitty again.

[–] Squorlple 10 points 6 hours ago

Would saying “Gemini, open the Wikipedia page for Bernie Sanders and read me the age it says he is”, for example, suffice as a voice input that both bypasses subject limitations and evades AI bullshitting?
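
For example, the "just read the page yourself" step can even be scripted with no LLM in the loop. Here's a rough sketch in Python (assuming the requests library and Wikipedia's public REST summary endpoint; the regex parsing is purely illustrative, not robust):

```python
import re
from datetime import date

import requests


def age_from_wikipedia(title: str) -> int | None:
    """Rough sketch: work out someone's age from their Wikipedia summary."""
    # The REST summary endpoint returns a short plain-text extract of the article.
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    extract = requests.get(url, timeout=10).json().get("extract", "")

    # Look for a "born Month Day, Year" pattern in the extract (illustrative only).
    match = re.search(r"born (\w+) (\d{1,2}), (\d{4})", extract)
    if not match:
        return None
    months = ["January", "February", "March", "April", "May", "June",
              "July", "August", "September", "October", "November", "December"]
    month = months.index(match.group(1)) + 1
    day, year = int(match.group(2)), int(match.group(3))
    birthday = date(year, month, day)
    today = date.today()
    # Subtract one if the birthday hasn't happened yet this year.
    return today.year - birthday.year - ((today.month, today.day) < (birthday.month, birthday.day))


print(age_from_wikipedia("Bernie_Sanders"))  # 83, as of early 2025
```

The point being that the factual lookup and the voice part are separable; the model never has to generate the number itself.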

[–] halcyoncmdr 2 points 5 hours ago (2 children)

To be honest, that seems like it should be the one thing they are reliably good at. It requires just looking up info on their database, with no manipulation.

Obviously that's not the case, but that's just because currently LLMs are a grift to milk billions from corporations by using the buzzwords that corporate middle management relies on to make it seem like they are doing any work. The whole thing relies on modern corporate FOMO to get companies to buy a terrible product they absolutely don't need, at exorbitant contract prices, just so they can say they're using the "latest and greatest" technology.

[–] SmoothLiquidation 3 points 2 hours ago

To be honest, that seems like it should be the one thing they are reliably good at. It requires just looking up info on their database, with no manipulation.

That’s not how they are designed at all. LLMs are just text predictors. If the user inputs something like “A B C D E F” then the next most likely word would be “G”.

Companies like OpenAI will try to add context to make things seem smarter, like priming the model with the current date so it won’t just respond with some date it was trained on, or looking up info on specific people or whatnot, but at their core they are just really big autofill text predictors.
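
If you want to see what "just a text predictor" means concretely, here's a minimal sketch using a small open model (GPT-2 via the Hugging Face transformers library; the model and prompt are just for illustration):

```python
# Minimal sketch of "LLMs are text predictors": ask a small causal language
# model for the single most likely next token after a prompt.
# Assumes the `transformers` and `torch` packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A B C D E F"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, sequence_length, vocab_size)

next_token_id = logits[0, -1].argmax()   # highest-probability next token
print(tokenizer.decode(next_token_id))   # likely something like " G"

# "Adding context" is just prepending more text to the prompt, e.g. a system
# message like "Today's date is 30 January 2025." The model still only ever
# predicts the next token given everything that came before it.
```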

[–] DrFistington 5 points 4 hours ago (1 children)

Yeah, I still struggle to see the appeal of chatbot LLMs. So it's like a search engine, but you can't see its sources, and sometimes it 'hallucinates' and gives straight-up incorrect information. My favorite was a few months ago, when I was searching Google for why my cat was chewing on plastic. About halfway through the AI response at the top of the results, it started going on a tangent about how your cat may be bored and enjoy watching you shop, lol

So basically it makes it easier to get a quick result if you're not able to quickly and correctly parse through Google results... but the answer you get may be anywhere from zero to a hundred percent correct. And you don't really get to double-check the sources without further questioning the chatbot. Oh, and LLM AI models have been shown to intentionally lie and mislead when confronted with inaccuracies they've given.

[–] [email protected] 1 points 2 hours ago

Yeah, I still struggle to see the appeal of Chatbot LLMs.

I think that one major application is avoiding having humans staff support sites. Some people aren't willing or able to search a site for information themselves, but they can ask human-language questions. I've seen a ton of companies with AI-driven support chatbots.

There are sexy chatbots. What I've seen of them hasn't really impressed me, but you don't always need an amazing performance to keep an aroused human happy. I do remember, back when I was much younger, trying to gently tell a friend who had spent multiple days chatting with "the sysadmin's sister" on a BBS that he'd been talking to a chatbot -- and that's a lot simpler than current systems. There's probably real demand, though I think this is going to become commodified pretty quickly.

There's the "works well with voice control" aspect that I mentioned above. That's a real thing today, especially when, say, driving a car.

It's just not -- certainly not in 2025 -- a general replacement for Web search for me.

I can also imagine some ways to improve it down the line. Like, okay, one obvious point that you raise is that if a human can judge the reliability of information on a website, that human having access to the website is useful. I feel like I've got pretty good heuristics for that. Not perfect -- I certainly can get stuff wrong -- but probably better than current LLMs do.

But...a number of people must be really appallingly awful at this. People would not be watching conspiracy theory material on wacky websites if they had a great ability to evaluate it. It might be possible to have a bot that has solid-enough heuristics that it filters out or deprioritizes sources based on reliability.

A big part of what Web search does today is to do that -- it wants to get a relevant result to you in the top few results, and filter out the dreck. I bet that there's a lot of room to improve on that. Like, say I'm viewing a page of forum text. Google's PageRank or similar can't treat different content on the page as having different reliability, because it can only choose to send you to the page or not at some priority. But an AI training system can, say, profile individual users for reliability on a forum, and get a finer-grained response to a user. Maybe a Reddit thread has material from User A who the ranking algorithm doesn't consider reliable, and User B who it does.
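
To make that last idea concrete, here's a toy sketch of ranking forum comments with a per-author reliability weight instead of one score for the whole page. Every name, score, and weight in it is made up purely for illustration; a real system would have to learn reliability from something like moderation history or fact-check outcomes, which is the hard part:

```python
# Toy sketch: blend topical relevance with a per-author reliability profile,
# so a reliable author's answer can outrank a louder but less trustworthy one.
# All data here is invented for illustration.
from dataclasses import dataclass


@dataclass
class Comment:
    author: str
    text: str
    relevance: float  # hypothetical "how well this matches the query" score


# Hypothetical per-user reliability profile (would be learned, not hard-coded).
author_reliability = {"user_a": 0.2, "user_b": 0.9}


def rank(comments: list[Comment]) -> list[Comment]:
    # Unknown authors get a neutral 0.5 reliability in this sketch.
    return sorted(
        comments,
        key=lambda c: c.relevance * author_reliability.get(c.author, 0.5),
        reverse=True,
    )


thread = [
    Comment("user_a", "He's 70-something, probably.", relevance=0.9),
    Comment("user_b", "Born September 8, 1941, so he's 83.", relevance=0.8),
]
for c in rank(thread):
    print(f"{c.author}: {c.text}")  # user_b's answer ranks first despite lower relevance
```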