Fuck AI

1374 readers
8 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 8 months ago
MODERATORS

I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI has been, I'm not that frightened by it anymore. It's awful that it hallucinates and spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll keep posting articles on AI hype, because they're quite funny, and they give me a sense of ease: while blatant lies are easy to tell, actual evidence is much harder to fake.

I also want to factor in people who think that there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I'll call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or about the diminishing returns of Deep Learning. You might even become a mod!

Boosters, or people who heavily use AI and see it as a source of good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists on Reddit and Twitter, and they constantly cheer when artists lose their jobs. They go against the very purpose of this community. If I see a comment here saying that AI is "making things good", or cheering on putting anyone out of a job, and the commenter does not retract their statement, they will be permanently banned. FA&FO.

For Starters (self.fuck_ai)
submitted 8 months ago by VerbFlow to c/fuck_ai

Alright, I just want to clarify that I've never modded a Lemmy community before. I just live by the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by u/spez's decision to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to attack AI development, and if you have evidence of AIBros being cruel and remorseless, save it for people "on the fence". Remember, we don't know that AI is unstoppable. It needs enormous amounts of energy and circuitry to run. There may very well be an end to this cruelty, and it's up to us to begin that end.


What a wonderful piece of Linux propaganda 😁. Look at this piece of shit spying on me at work, doing who knows what, that it needs more than one process.


Meta has historically restricted its LLMs from uses that could cause harm – but that has apparently changed. The Facebook giant has announced it will allow the US government to use its Llama model family for, among other things, defense and national security applications.


The CEO of AI search company Perplexity, Aravind Srinivas, has offered to cross picket lines and provide services to mitigate the effect of a strike by New York Times tech workers.


Cross-posted from "AI-driven bot network trying to help Trump win US election" by @[email protected] in [email protected]


Summary

Researcher Elise Thomas uncovered a network of AI-driven bots on X (formerly Twitter) promoting Donald Trump ahead of the U.S. presidential election.

The bots, which sometimes inadvertently reveal their AI origins, were identified through telltale signs like outdated hashtags and accidental “refusals.” The accounts, many of which were blue check-verified, act as amplifiers for central “originator” accounts.

Though suspended by X after Thomas reported them, the network highlights the potential of AI to automate disinformation, making it challenging to attribute and detect such operations in future elections.


If anyone didn't see this coming, I would like to know what the property taxes are like on the rock you're living under.


Meta is one of several tech companies vying for a nuclear boost.


Meta has faced a setback in its plan to build data centers run on nuclear power. The FT reports that CEO Mark Zuckerberg told staff last week that the land it was planning to build a new data center on was discovered to be the home of a rare bee species, which would have complicated the building process.


OpenAI’s Whisper tool may add fake text to medical transcripts, investigation finds.

AI search could break the web. (www.technologyreview.com)
submitted 1 week ago by [email protected] to c/fuck_ai
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/fuck_ai

This article is about phishing websites made by scammers, with obvious signs that they were made by LLMs.

I thought it might be interesting here.


Meta is “working with the public sector to adopt Llama across the US government,” according to CEO Mark Zuckerberg.

The comment, made during his opening remarks for Meta’s Q3 earnings call on Wednesday, raises a lot of important questions: Exactly which parts of the government will use Meta’s AI models? What will the AI be used for? Will there be any kind of military-specific applications of Llama? Is Meta getting paid for any of this?

When I asked Meta to elaborate, spokesperson Faith Eischen told me via email that “we’ve partnered with the US State Department to see how Llama could help address different challenges — from expanding access to safe water and reliable electricity, to helping support small businesses.” She also said the company has “been in touch with the Department of Education to learn how Llama could help make the financial aid process more user friendly for students and are in discussions with others about how Llama could be utilized to benefit the government.”

She added that there was “no payment involved” in these partnerships.

Yeah, fck them, for now at least, until the government comes to rely on their AI.

  • A new OpenAI study using their SimpleQA benchmark shows that even the most advanced AI language models fail more often than they succeed when answering factual questions, with OpenAI's best model achieving only a 42.7% success rate.
  • The SimpleQA test contains 4,326 questions across science, politics, and art, with each question designed to have one clear correct answer. Anthropic's Claude models performed worse than OpenAI's, but smaller Claude models more often declined to answer when uncertain (which is good!).
  • The study also shows that AI models significantly overestimate their capabilities, consistently giving inflated confidence scores. OpenAI has made SimpleQA publicly available to support the development of more reliable language models.

These are better than those weird videos.


Somehow it missed the massive forest fire this summer that destroyed much of the park and the town ... until it was reminded.

Remind me why anybody takes this tech seriously?


    (archived link)


Google could preview its own take on Rabbit’s large action model concept as soon as December, reports The Information. “Project Jarvis,” as it’s reportedly codenamed, would carry tasks out for users, including “gathering research, purchasing a product, or booking a flight,” according to three people the outlet spoke with who have direct knowledge of the project.

If a robot ever buys something on my behalf, I'm lawyering the fuck up.


cross-posted from: https://lemmy.world/post/21301373

Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”

But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.

Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”
